What is a good analogy for solving the Alignment Problem in AI

hamdani yusuf
« Reply #20 on: 25/10/2021 05:07:02 »
Quote from: Halc on 23/10/2021 04:03:43
How about choosing to kill one hardened criminal (no chance of parole, a pure burden to society) and use his organs to save 8 lives of patients who otherwise are going to die before a donor can be found. That's immoral (at least to humans) and nobody does it. Why not? The doctor that did it would be thrown in jail, not have his standing updated in a positive way. Is the AI better than humans if it chooses to save the 8, or is it a monster for killing the one?
There are other real world scenarios, and humans always seem to find the kill-the-most option preferable.
I believe this is a thought experiment with limited information. Inevitably, people trying to answer the question will fill the gaps with their own assumptions, probably based on their experience or what they were taught before. Differences in those details may lead to different decisions.
In hard times with limited resources and a dysfunctional government, such as during the world wars, in ISIS-occupied Iraq and Syria, or in Afghanistan under the Taliban, people find it easier to take someone else's life.
Some people are known to have done it, or at least something close to it, like the Nazi doctors. Perhaps there are more cases that went unpublished; we don't know what happens in an isolated country like North Korea.
If the government, backed by the congress, approves it, it won't be illegal. The moral judgment may vary depending on who you ask.
AI decisions depend on what terminal goal is assigned to the system, the model chosen and the constraints forced into it, and the accuracy of the training data fed into it.
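As a toy illustration of that dependence (every name and number below is invented, not any real AI framework), here is a sketch in Python: the same goal function yields opposite decisions depending on which constraints are forced into the system.
Code: [Select]
# Minimal sketch: the decision depends on the terminal goal (the scoring
# function), the constraints imposed on the agent, and the world model.
# Everything here is hypothetical.

def choose_action(actions, goal_score, constraints, world):
    """Pick the highest-scoring action that violates no constraint."""
    permitted = [a for a in actions if all(c(a, world) for c in constraints)]
    if not permitted:
        return None  # refuse rather than violate a constraint
    return max(permitted, key=lambda a: goal_score(a, world))

# Toy version of the transplant dilemma discussed above.
world = {"patients": 8, "success_rate": 0.9}
actions = ["do_nothing", "harvest_prisoner"]
expected_lives = {"do_nothing": 0.0,
                  "harvest_prisoner": world["patients"] * world["success_rate"]}

goal = lambda a, w: expected_lives[a]              # terminal goal: maximize lives saved
no_killing = lambda a, w: a != "harvest_prisoner"  # a constraint forced into the agent

print(choose_action(actions, goal, [], world))            # -> harvest_prisoner
print(choose_action(actions, goal, [no_killing], world))  # -> do_nothing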

hamdani yusuf
« Reply #21 on: 25/10/2021 07:11:52 »
Quote from: hamdani yusuf on 25/10/2021 05:07:02
Inevitably, people trying to answer the question will fill the gaps with their own assumptions
Any of the information below could change the decision:
- how reliable is law enforcement there?
- what's the crime rate?
- what's the level of scientific literacy of the society? how advanced is its technology?
- how hard is it to reliably change someone's mind or behavior?
- how hard is it to transplant organs? what's the success rate? how many resources are required?
- how hard is it to build fully functional synthetic organs?

As I mentioned earlier in my thread about universal morality, unreliable law enforcers can create the fear that innocent people might be falsely prosecuted just so their organs can be harvested for profit.

Zer0
« Reply #22 on: 26/10/2021 09:31:33 »
This OP is becoming mesmerisingly deep.
👌

I've always considered Ethics & Morals to be Universally good.
But now i must ReThink.

A Self Replicating A.I. sets out to Eradicate Visual Impairment (blindness) from the Society completely.

What if, it then considers taking 1 eye forcefully from people who own 2...& Implanting it into someone who has none.

If the A.I. succeeds, then Blindness would be eradicated.
(Partial visual impairment would remain)

Hmmm.
So then, would that be Morally & Ethically a Good Thing?
🧐
(I know someone personally who wishes to donate an eye of theirs while they are still alive...they are willing to share it...but the Doctors Medical Association considers it a No Go)
👎

hamdani yusuf
« Reply #23 on: 26/10/2021 11:46:29 »
Quote from: Zer0 on 26/10/2021 09:31:33
A Self Replicating A.I. sets out to Eradicate Visual Impairment (blindness) from the Society completely.
Self-replicating software is relatively easy; self-replicating hardware is much harder. To do it independently of human intervention, the AI's data must contain the recipe for building its own hardware, and it must also have access to the necessary ingredients provided by its environment in objective reality.
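The software half of that claim is easy to demonstrate: a classic quine, a program whose only output is its own source code, is self-replication in two lines (a textbook curiosity, offered purely as illustration):
Code: [Select]
s = 's = %r\nprint(s %% s)'
print(s % s)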
If the terminal goal is eradicating blindness, there are several options, some of which might be "unthinkable":
- kill every blind person
- take one eye from someone who has two and donate it
- build synthetic eyes

hamdani yusuf
« Reply #24 on: 26/10/2021 11:50:07 »
Quote from: Zer0 on 26/10/2021 09:31:33
What if, it then considers taking 1 eye forcefully from people who own 2...& Implanting it into someone who has none.
There's still a long way to go before any AI can do such a thing. By the time it can, synthetic eyes might already be available, which would make the question irrelevant.

Halc
« Reply #25 on: 26/10/2021 13:39:46 »
Quote from: hamdani yusuf on 25/10/2021 07:11:52
Some information below can change the decision
Partially true, but it seems just more of an attempt to obfuscate a simple situation, something you tend to do when you find the direct answer uncomfortable.

Your points are mostly irrelevant. The 'there' can be any place of your choosing.
The situation was simple: 8 people who will definitely die soon (a month?) without the needed organ. All are young enough that they'd have decades of life expectancy after the surgery.
Let's say there's a 90% chance of success with each person, and a 10% chance of rejection.

Quote
how much resource is required?
The one prisoner obviously.
Quote
- how hard is it to build fully functional synthetic organs?
For the purpose of this exercise, impossible.

The law is not irrelevant. Such a thing is indeed illegal, and since the AI was not put in charge of making better laws, its hands are tied. Now why would humans create a law forbidding the saving of multiple lives rather than the one? I told you that it typically works that way, that humans will more often than not choose the path of greatest loss in a trolley scenario, but I present that as evidence of why it might be better for something that isn't human to be in charge. It would be nice for us if it still valued humanity, especially since the humans don't.

hamdani yusuf
« Reply #26 on: 26/10/2021 14:57:35 »
Quote from: Halc on 26/10/2021 13:39:46
Your points are mostly irrelevant. The 'there' can be any place of your choosing.
The situation was simple: 8 people who will definitely die soon (a month?) without the needed organ. All are young enough that they'd have decades of life expectancy after the surgery.
Let's say there's a 90% chance with each person of success, and 10% rejection chance.
Those details weren't obvious in your previous post.
The decision would be different if the success rate were 100% than if it were 0%. So what's the threshold? It depends on the other factors.
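Under a bare expected-lives criterion (a big assumption: it ignores every moral and legal constraint discussed above), the threshold is simple arithmetic: the transplants cost one certain life and save 8p expected lives, so they pay off only when 8p > 1, i.e. p > 12.5%. A quick check:
Code: [Select]
# Break-even success rate under a naive expected-lives criterion.
# Assumes the prisoner's death costs exactly 1 life and each of the
# 8 recipients survives independently with probability p.
recipients = 8
print(f"break-even: p > {1 / recipients:.1%}")
for p in (0.0, 0.125, 0.9, 1.0):
    print(f"p = {p:.3f}: expected net lives = {recipients * p - 1:+.2f}")
Where the threshold actually sits once "the other factors" are priced in is exactly what the rest of this exchange argues about.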

hamdani yusuf
« Reply #27 on: 26/10/2021 15:55:21 »
Quote from: Halc on 26/10/2021 13:39:46
The one prisoner obviously.
The medical equipment and consumables are free, I guess, and so are the medical professionals' working hours. It's also assumed that there's no other emergency requiring their attention.

Halc
« Reply #28 on: 26/10/2021 17:01:26 »
Quote from: Halc on 26/10/2021 13:39:46
attempt to obfuscate a simple situation
Oh, I was spot on with this. No attempt to answer, I see.

Of course there are resources. You seem to suggest they should die because we don't want one more plastic tube in the trash.
Everybody's insured, and I think imminent death counts as an emergency of sorts, else we'd not be taking this route.
Nobody does a surgery with a 0% success rate, so you're trolling by suggesting that number.

The law doesn't say it's illegal to do this if the success rate isn't better than X. It doesn't say it's illegal to do this because it might take a doctor away from some third party that needs a bandaid. This is what I mean by you obfuscating a simple situation. Why does such a law exist, leaving the death of all these people the only legal option? Maybe there isn't even a prisoner, but the 8 got together and voluntarily drew lots, with the loser donating his parts to give a very good prognosis to all the others. They'd all be willing to do this, since the alternative is certain (and more painful) death, but human law forbids a good outcome like that and insists that they all die. Why?

You will now obfuscate some more, because it seems you can't answer this.
I can come up with larger-scale examples of the trolley scenario as well, and those don't necessarily have laws forbidding them, but they probably offer more opportunity for obfuscation, so I cannot discuss them with somebody determined to get bogged down in details.

hamdani yusuf
« Reply #29 on: 26/10/2021 17:09:12 »
Quote from: Halc on 26/10/2021 13:39:46
For the purpose of this exercise, impossible.
Thought experiments usually assume ideal situations to minimize calculation and isolate the core concept of interest. Most school homework problems belong to this category: when calculating the trajectory of a cannonball, air friction is often assumed to be negligible, and so are the Earth's curvature and the variation of the gravitational field from place to place.
But don't expect a real-life experiment to give the same results when those other factors are no longer negligible.
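For instance, the vacuum range formula R = v0² sin(2θ) / g is exactly this kind of idealization (a toy calculation, not a claim about real projectiles; drag, curvature and varying g all change the real answer):
Code: [Select]
import math

def ideal_range(v0, angle_deg, g=9.81):
    """Range with no air drag, flat ground, uniform g."""
    return v0 ** 2 * math.sin(math.radians(2 * angle_deg)) / g

print(f"{ideal_range(100, 45):.0f} m")  # ~1019 m in vacuum; real drag gives less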

hamdani yusuf
« Reply #30 on: 26/10/2021 17:12:20 »
Quote from: Halc on 26/10/2021 17:01:26
Everybody's insured
Are you sure?

hamdani yusuf
« Reply #31 on: 26/10/2021 17:20:35 »
Quote from: Halc on 26/10/2021 17:01:26
No attempt to answer I see.
In the almost-ideal conditions you described, the prisoner should be executed and the organs transplanted into those in need.
In an even more ideal condition where synthetic organs are available, it's a different story.

Halc
« Reply #32 on: 26/10/2021 17:49:58 »
Quote from: hamdani yusuf on 26/10/2021 17:12:20
Are you sure?
The 8 are, yes. I'm not asserting that all people on the planet are insured for something like that, just that the 8 are.

Quote from: hamdani yusuf on 26/10/2021 17:20:35
In almost ideal conditions as you described, the prisoner should be executed, and the organs are transplanted to those in need.
So you're saying the law is wrong? Because it forbids such practices even in the most ideal situations, even in the case of the voluntary donor. Would an AI that was put in charge (instead of 'following' some favorite person as per the OP) rewrite such laws? Might there be a reason for the law not being conditional on any of the factors you keep trying to drag in?

The synthetic organ option disqualifies the situation as a trolley problem. It's a cheat. Sure, you do that if it's a viable option, but eligible people die every day on waiting lists for transplants because it isn't an option for them.

hamdani yusuf
« Reply #33 on: 26/10/2021 21:52:59 »
Quote from: Halc on 26/10/2021 17:49:58
So you're saying the law is wrong? Because it forbids such practices even in the most ideal situations, even in the case of the voluntary donor. Would an AI that was put in charge (instead of 'following' some favorite person as per the OP) rewrite such laws? Might there be a reason for the law not being conditional on any of the factors you keep trying to drag in?
The law anticipates the real-world situation, where those ideal conditions can't be achieved. It also removes the incentive to kill innocent prisoners for profit. Reducing the conditionals makes the law simpler and more practical; laws are useless if they aren't practicable.

hamdani yusuf
« Reply #34 on: 26/10/2021 22:02:30 »
Quote from: Halc on 26/10/2021 17:49:58
The synthetic organ option disqualifies the situation as a trolley problem.
But it's a real possibility in the real world, especially in the future.

MinedCTRL (OP)
« Reply #35 on: 26/10/2021 23:14:11 »
Quote from: Halc on 23/10/2021 04:03:43
How about choosing to kill one hardened criminal (no chance of parole, a pure burden to society) and use his organs to save 8 lives of patients who otherwise are going to die before a donor can be found.
I like your discussion, but it is far from what I wanted to discuss, which is biologically programmed AI. In my theoretical world there is not one AI but several. I don't wish to debate one AI's decisions without considering what other AIs would do to make sure their own objectives aren't hampered. An AI with the goal of improving the prison system to benefit humans would challenge the first AI. Still another AI above them, whose goal is improving AI disputes to better humanity, would intervene. Yet another AI would seek to better humanity by removing the ability to take lives from AI policy. There are turtles all the way down.

But you would say that this is too slow a process. That's why this is all digital. All these hypothetical scenarios are simulated by different AIs on the common processing stack. The 'thoughts' are logged in a common history of thoughts and can be read by every AI in the system (see the sketch below). Since computers can think in timesteps of microseconds, days of debate would pass in seconds for us. So what if the end decision is not perfect? It will still be so much wiser than a human's that not considering it would be detrimental.
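A toy sketch of that shared "history of thoughts" (every name here is invented; this illustrates the described architecture, not any real system):
Code: [Select]
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    def think(self, context):
        # Placeholder reasoning: a real agent would weigh the prior thoughts.
        return f"{self.name}'s position after reading {len(context)} prior thoughts"

@dataclass
class ThoughtLog:
    entries: list = field(default_factory=list)  # the common history of thoughts

    def post(self, author, thought):
        self.entries.append((author, thought))

    def read(self):
        return list(self.entries)  # readable by every AI in the system

def debate(agents, log, rounds=2):
    """Each round, every agent reads the full log before contributing."""
    for _ in range(rounds):
        for agent in agents:
            log.post(agent.name, agent.think(log.read()))

log = ThoughtLog()
debate([Agent("AI-following-Halc"), Agent("AI-following-Hamdani")], log)
for author, thought in log.read():
    print(author, "->", thought)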

How does the superior-obeying slave AI come into all this? Well, its policy is the result of following one person and then moving on to the next person with a better outcome. So an AI following Halc would debate with an AI following Hamdani, and so on.

So this is the situation: humans are still the masters. The AI is constrained to only benefit humans, yet it has the freedom to improve indefinitely. The world's a much more interesting place, since there is more than one AI and more than one supreme way of thinking. We will still die, but as slowly as possible. I don't want perfection, but this is a future I'd like.

Halc
« Reply #36 on: 27/10/2021 02:09:15 »
Quote from: MinedCTRL on 26/10/2021 23:14:11
I like your discussion, but it is far from what I wanted to discuss, which is biologically programmed AI.
What's the point of that? You said before that this means that it has a "physically accurate representation of biologic creatures" which is great if it is being designed to perform medical procedures, but that doesn't seem to have been your point.
Heck, I've had surgery done by a robot, but it wasn't an AI.

Quote
In my theoretical world, there is not one but several AIs.
As mentioned in post 23, creating a new one on a server is as simple as spawning a new process, a trivial task. Nothing theoretical about it. The trick is to find a way to carry accumulated knowledge over from one process to the next.
Quote
I wish not to debate one AI's decisions without considering what other AIs would do to make sure their objective isn't hampered.
They all have the same objective? Having an assigned objective sounds like it doesn't get to choose what it does, which contradicts your description of the free will you wanted it to have.
Maybe you need an example. From your description I picture an AI tasked with improving the prison system, but then it decides to 'follow' Bob (perhaps a better 'superior'), who is in charge of tending a garden somewhere. So our AI is now bending its resources to gardening, and it promptly gets shut down because it isn't doing what it is supposed to.

I'm probably misrepresenting what you're talking about, hence the need for an example of it using its 'biological programming' and having an objective, and exercising its will to have its shots called by somebody else and how that affects its recognition of its objective.

Quote
An AI with the goal of improving the prison system to benefit humans would challenge the first AI.
Why? What is the first AI doing that requires it to be 'challenged'? What does a challenge/dispute involve? What are different ways it might be resolved? I'm trying to understand.
OK, there seems to be a hierarchy, with minion AIs and higher ones that oversee them, or at least seek to improve them.

Quote
Still another AI above them who has the goal of improving AI disputes to better humanity would intervene.
OK, this one seeks to improve the challenge/dispute process, and there's a mention of an objective to 'better humanity'.

Quote
Another AI who wishes to better humanity by removing the ability to take lives from AI policy.
Ouch. If you'd paid attention to all the posts about trolley scenarios, you'd see that there are times when what's good for an individual (taking a life) is not always best for humanity. The discussion was not off topic.

Quote
There are turtles all the way down.
It's a finite universe. There has to be an end to the list somewhere.

Quote
But you would say that this is too slow a process.
I don't think I would say that. Once the singularity is hit, the process would probably be disturbingly quick. That's one of the worries as a matter of fact.
Quote
That's why this is all digital.
I don't think they've got anything in the works that isn't digital. Only biology has evolved a different architecture.

Quote
All these hypothetical scenarios are being simulated by different AIs from the common processing stack. The 'thoughts' are logged in a common history of thoughts, and can be read by all AI of the system. Since computers can think in timesteps of microseconds, days of debate would be seconds to us. So what if the end decision is not perfect, it will still be so much wiser than a human's that not considering it would be detrimental.
This makes it sound like there's a rendered verdict of a sort, kind of a printout of a single 'decision'. I don't think it works like that. It would be a continuous contribution, sort of like a self-driving car, which doesn't just output all the optimal steps to get to grandma's house and then shut down. No, it has to be there the whole way to deal with whatever comes up.

Quote
How does the superior obeying slave AI come into all this? Well, their policy is the result of following one person and moving to the next person with better outcome. So an AI following Halc would debate with an AI following Hamdani and so on.
What if there's a third AI that thinks for itself instead of guessing what either of us would do? These things are supposed to be smarter than us soon, so following a given human is not only a poor choice, but guesswork since the human isn't being consulted.
Also, I think that if an AI followed me and saved some people as I've described in posts above, the AI would be shut down for being a monster. Humans are not logical when it comes to morals.

Quote
So this is the situation - Humans are still the masters. The AI is constrained to only benefit humans.
Very hard to do if constrained from doing so by human masters. Benefit of humanity is a notoriously disregarded goal.

MinedCTRL (OP)
« Reply #37 on: 27/10/2021 03:11:51 »
The more I read these replies, the more I feel like I'm asking the wrong question. Can somebody frame a new question based on the points discussed above by Halc, Hamdani and Zer0?

I'll make a poll afterwards, and we can vote on a good alternative.

hamdani yusuf
« Reply #38 on: 27/10/2021 06:49:22 »
An example of the alignment problem:
Quote
Angry emojis carry more weight in Facebook’s algorithm than likes, Virginia gubernatorial candidate Glenn Youngkin runs an ad from a mom who tried to get “Beloved” banned from her son’s school, and a man saves enough money to buy a house and pay off loans by eating at Six Flags every day for seven years.
To solve it, we must first identify the terminal goal and state it explicitly and unambiguously.
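The first item in that quote is a textbook proxy-objective failure: reporting at the time said Facebook's ranking weighted the angry reaction several times more heavily than a like. A toy sketch (the weights and posts are invented) of how the proxy metric, rather than any stated terminal goal, is what actually gets optimized:
Code: [Select]
# Illustrative only: assumed reaction weights, not Facebook's real ones.
REACTION_WEIGHTS = {"like": 1, "angry": 5}

def engagement_score(post):
    return sum(REACTION_WEIGHTS[r] * n for r, n in post["reactions"].items())

posts = [
    {"title": "pleasant post", "reactions": {"like": 100, "angry": 0}},
    {"title": "enraging post", "reactions": {"like": 10, "angry": 30}},
]
for p in sorted(posts, key=engagement_score, reverse=True):
    print(p["title"], engagement_score(p))
# The enraging post ranks first (160 vs 100): whatever goal the designers
# had in mind, the system optimizes the weighted-engagement proxy.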

Zer0
« Reply #39 on: 27/10/2021 17:03:57 »
Sorry to hear from the OP that this might be going astray.

Maybe We all could try again to focus harder.

By the way, Great point Yusuf!
Neutralising all Visually Impaired humans would certainly Exterminate Blindness.

As H mentioned, Morals vs Logic.
Makes sense.

It would take so much time & resources to forcefully take an eye out from people and implant it into others who have none.
Simply Finishing off the Ones who have none is quicker and saves resources, hence it sounds Logical.

Ps - Have We hit or reached Singularity in comparison to Ants?
If Ants had originally designed/conceptualised/created Humans to help Ants out...
But Human Intellect got so smart so fast, say We forgot all about the Ants.
& All We do now is just observe them from a distance, or pet them, or treat them like an infestation.

Would Super Intelligence treat Us like We treat Ants?
Or would it be so Supreme that We might just look like a bunch of pebbles stuck in time?

(I am in Favour of A.I. i consider Humans to be quite dull n evilistic.
Myself included.
The imperfect might never be able to create something which is perfect, Agreed!
But i Believe, or rather i have Faith, that We could make something which is at least better than Us.
Perhaps We owe at least this much of a favour to the Universe.)
🤖
At least, A.I. would know for sure who its GOD/Creator is.

Tags: ai / hierarchy / ethics / morals / logic