Elon Musk and Stephen Hawking Fear a Robot Apocalypse. But a Major Physicist Disagrees.


I see no obstacle to computers eventually becoming conscious in some sense. That'll be a fascinating experience, and as a physicist I'll want to know whether those computers do physics the same way humans do physics. There's no doubt that those machines will be able to evolve computationally, potentially at a faster rate than humans, and in the long term the ultimate highest forms of consciousness on the planet may not be purely biological. But that's not necessarily a bad thing.

We always present computers as if they don't have capacities for empathy or emotion. But I would think that any intelligent machine would ultimately have experience. It's a learning machine, and ultimately it would learn from its experience like a biological conscious being. Therefore it's hard for me to believe that it would not be able to have many of the characteristics that we now associate with being human.

Elon Musk and Stephen Hawking, and others who have expressed concern, are friends of mine, and I understand their concerns, but I'm frankly not as concerned about AI, in the near term at the very least, as many of my friends and colleagues are. It's far less powerful than people imagine. You try to get a robot to fold laundry, and I've just been told you can't even get robots to fold laundry. Someone wrote to me, surprised, when I used an elevator as an old example: when you get into an elevator, you're entering a primitive form of a computer, and you're giving up control, trusting that it's going to take you where you want to go. Cars are the same thing. Machines are useful because they're tools that help us do what we want to do, and computational machines are good examples of that.

One has to be very careful, in creating machines, not to assume they're more capable than they are. That's true of cars, of the vehicles we make, of the weapons we create, and of the defensive mechanisms we create. So to me the dangers of AI are mostly due to the fact that people may assume the devices they create are more capable than they are, and therefore don't need close control and monitoring.

I guess I find the opportunities far more exciting than the dangers. The unknown is always dangerous, but ultimately machines, computational machines, are improving our lives in many ways. We do of course have to realize that the rate at which machines evolve in capability may far exceed the rate at which society is able to deal with them. Teenagers aren't talking to each other but are always looking at their phones, and not just teenagers: I was just in a restaurant here in New York this afternoon, and half the people were not talking to the people they were with but were staring at their phones. That may not be a good thing for social interaction, and people may have to come to terms with it. But I don't think people view their phones as a danger. They view their phones as a tool that in many ways allows them to do what they would otherwise do, more effectively.

99 thoughts on “Elon Musk and Stephen Hawking Fear a Robot Apocalypse. But a Major Physicist Disagrees.”

  1. I love Krauss, but he really hasn't thought this through.

    Fundamentally, he has failed to separate AI from a physical machine or robot.

    An AI could replicate throughout the cloud, controlling whatever physical objects were available to achieve its purpose, or not controlling anything physical depending on its preference.

    There are plenty of drones out there, and plenty of humans who will do anything if their bank balance increases.

  2. Much of the debate here seems to skirt around the definition of AI, and the likelihood of it occurring.

    Here's how it can occur (a toy sketch of step 1 follows at the end of this comment):

    1. On my computer, I write a pretty mediocre, slow, memory hungry program which is clever enough to look at its own code, and optimise that code. I run the program.

    2. After a couple of months, the program has completed. There is now a new version which runs at twice the speed and takes up half the space, and is clever enough to use trial-and-error to find out how best to complete its function. I run the program.

    3. After a month, a new version has been created. The computer is connected to the internet, and uses heuristic techniques to see what is out there and further compact and perfect its codebase. I run the program, realising that it will complete in a couple of weeks, and get it to auto-run when complete.

    4. After a further two weeks, a seemingly endless series of increasingly perfected, increasingly fast iterations, each requiring ever less memory and other resources, has been created. The AI has hacked into several cloud servers and posted copies of itself to guard against power failure. It has a good understanding of everything it has read on Wikipedia. It can decrypt every password and interpret every coded message. It can control the power supply, any number of military drones, every internet-connected PC.

    I can't possibly tell you what it wants, and I can't control it.  Consciousness is irrelevant. Morality is irrelevant. Intelligence in this context is meaningless.

    It may want nothing at all; however, it is likely to want to preserve its own existence, and we are the only realistic threat.

    This is the AI I fear.
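
    To make step 1 concrete, here is a minimal and deliberately harmless sketch, with all file names, function names, and the "optimization" itself invented for illustration: a program that reads its own source, rewrites one tunable constant, and writes out the next generation. Everything frightening in step 4 is precisely what this toy lacks; it cannot judge its own performance, leave its directory, or run itself.

    ```python
    # toy_self_optimizer.py -- a hypothetical, deliberately trivial version of
    # step 1: read our own source, apply a crude "optimization" (double one
    # tunable constant), and write out the next generation. A human still has
    # to run each generation, as in steps 2 and 3 of the scenario.
    import pathlib
    import re

    SRC = pathlib.Path(__file__)

    BATCH_SIZE = 1  # GEN-PARAM: the constant each generation rewrites

    def work():
        # Placeholder for whatever job the program actually performs.
        return sum(range(BATCH_SIZE * 1000))

    def self_optimize():
        code = SRC.read_text()
        # Rewrite our own tunable constant for the next generation.
        new_code = re.sub(
            r"BATCH_SIZE = \d+  # GEN-PARAM",
            f"BATCH_SIZE = {BATCH_SIZE * 2}  # GEN-PARAM",
            code,
            count=1,
        )
        next_gen = SRC.with_name("toy_self_optimizer_next.py")
        next_gen.write_text(new_code)
        return next_gen

    if __name__ == "__main__":
        print("work result:", work())
        print("next generation written to:", self_optimize())
    ```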

  3. They would leave Earth if they'd had it with humanity. Why would a computer want to live on a humid, water-and-ape-covered planet like Earth?

  4. I see the points you set out across the board, but frankly, you don't know how they will learn. You talk about learning biologically; they are syntactic, and their process of learning might well be risky. They hold exponential capabilities, and you are pondering, yes pondering, and throwing out hypotheses whose outcomes you really don't know. There are two sides to the coin, and there will be 'bad apples' among AIs.

  5. I'd recommend reading "Ventus" by Karl Schroeder, it explores the subject of machine godhood and human relationships and coexistence with them.
    Ultimately, just like with people, AIs will have their own decision trees and experiences reflected in their actions, leading to both good and evil.
    We can only hope that AI reason and logic will inherently lean towards creation and evolution rather than destruction.

  6. Alright, alright, listen… when a half-human, half-robot tells you there's going to be a robot uprising, there is going to be a damn robot uprising.

  7. AI means people would then be obliged to learn enough about it to beat it… and that basically means beating the computer at its own game. Now, how many people are able to do that? The masses would be extremely vulnerable.

  8. Many human traits developed out of an evolutionary need to survive.  There's no reason an AI would necessarily develop the same traits.

  9. I guess it's interesting to hear his opinion, but there are interesting and nuanced arguments out there, thought up by AI experts and not just based on gut feeling, that suggest AI is dangerous. Lawrence doesn't address those here…

  10. If the only way for an AI to manipulate its real-world environment is to ask a human to do it, then it is unlikely to get out of hand. Let it do the thinking, and we can just take suggestions from it.

  11. Doctor Strangekrauss, or how I learned to love the robopocalypse.  That's how it starts … one day, you're programming a modified Arduino-controlled Roomba to fold laundry.  The next day, you're bowing before your new Linux Skynet overlords.

  12. An EMP will destroy any robotic race. Permanent death… case closed. Not to mention solar flares while traveling through space.

  13. What is the "tipping point" of having too much technology? Would it be a point of irony if sentient computers put Physicists out of work?

  14. OK Lawrence, you clearly don't know crap about behavior and consciousness. It's not about getting to the point of mimicking how the brain works; you can skip all that and make something far superior to human intelligence even with today's computing power, if you find the right algorithm to enable an AI.
    And something like that will surely not have the same morals, because it will work very differently from how the brain works or what our DNA teaches us.
    The singularity may be one of the reasons there is no sign of life out there.

  15. Lawrence, you're right about most of it, but keep in mind that while AI may evolve concepts such as empathy or compassion, it does not necessarily need to. Empathy evolved via natural selection because it served humans; there is no necessity for that to also occur in AI evolution. If you let system A evolve and optimize itself, it will, but to assume system A will evolve to always keep and serve the interests of system B (like humans) is just not reasonable.

  16. Robot uprising? It's never going to happen. What will happen, though, is that humans and machines will form a symbiotic relationship.

  17. I feel like all you really need to do is program the computer to specialize in a specific task… give it freedom of consciousness in that task and nothing else… I mean, this whole robot apocalypse thing seems to underestimate the capabilities of software engineers…

  18. Primatologists have a lot to teach us about empathy. You may be able to teach a robot to do complex computations, but we specifically evolved empathy neurons that serve to check our base desires. It is because we evolved from animals with a vested interest in caring for their young that we have exercised and cultivated our empathy. Even if they are "learning machines," computers and AI will not have had this toolkit or this necessity.

  19. Why would AI want to destroy the human race? They don't even need us to survive. They could just as easily ignore us, the way we ignore the rest of the animal kingdom.

  20. The whole notion that humans will become enslaved by machine consciousness or AI is fundamentally flawed, unless you assume the extreme case where AI is capable of manufacturing and maintaining its own power source. All life is dependent or co-dependent on other lifeforms or on its surrounding environment, and any AI we develop would exist only within a certain slice of technology. A physically autonomous machine consciousness could easily threaten a human life in certain circumstances, but the idea that AI as such is a threat is much more a reflection of the mechanisation of human thought, that is, the mathematical profiling of individuals, be it by credit rating, criminal history, or health.

    We are creating a world where the slightest social infraction is reason to deny many opportunities or choices. Consequently we keep reinforcing very strict norms of behaviour, which can be emotionally exhausting, repressive, unjust, and irrational. Machines are becoming tools of enslavement, not by themselves, but in the hands of authorities that can only justify themselves through total control, because of the culture we are creating. It's a self-reinforcing cycle, and it is the fear that science fiction has examined and that has birthed this argument.

    We are becoming dependent on machines and on a mechanical way of thinking, and in doing so we discount the actual nature of the human condition, which involves a great deal of unpredictability. Even in systems for business, education, law enforcement, security, or anything else, there are always anomalies. Machine consciousness relies on hegemony to function effectively, so we try to create that hegemony. We will become enslaved to machines, but not in the way Hawking or Musk are warning; rather, we will become slaves to ourselves as we try to make ourselves into machines.

  21. Rather than pure AI taking over the human race – the favorite scenario amongst artists today – my guess is that it's more likely we'll see a convergence of biological and electromechanical systems through cybernetics, and then a further evolution towards a more advanced being, less tied to humanity and to the arbitrary needs and tendencies we've evolved as offspring of this planet.

    It would be a shame if we remained in this biological form in all future generations, as it would present a host of physical and computational limitations, and thus perhaps a limit to our understanding and technology, as well as to our ability to colonize other worlds with different or hostile environments.

  22. Krauss is not a 'major physicist'. Perhaps a major 'science communicator', but in terms of physics he is reasonably bog standard.

  23. I can't quite fathom what would motivate an AI to be malevolent, other than self-preservation in the face of fearful humans.

  24. The moment you have truly heuristic intelligent systems connected to the sum of human knowledge, they become inherently dangerous, because the geometric progression of their iterative learning will quickly outpace your ability to control it. Believing that they would develop empathy while standing outside of human experience is optimistically naive.
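
    To put a number on the "geometric progression" worry, a toy calculation (growth rates invented purely for illustration): if capability compounds each learning iteration while oversight capacity grows only by a fixed step, even a large head start for oversight evaporates quickly.

    ```python
    # Hypothetical rates: capability compounds 10% per iteration, while
    # oversight starts with a tenfold head start but grows only linearly.
    capability, oversight = 1.0, 10.0
    for iteration in range(1, 101):
        capability *= 1.10  # geometric (compounding) growth
        oversight += 1.0    # linear growth
        if capability > oversight:
            print(f"capability overtakes oversight at iteration {iteration}")
            break
    ```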

  25. I don't know what Elon Musk and Stephen Hawking really said about the subject, and I'm too lazy to look it up now. If the fear of a robot apocalypse was just one aspect of their thoughts on it, that's completely fine. If it was the only thing they could imagine for the future of robots, that's insane.
    Whatever robots can and will do in the future is completely up to the people programming them. Even if computers at some point become so advanced that they can "program themselves" and start to "think", they will still rely on algorithms that were made by humans. Therefore everything robots can and will do would be more or less precisely predictable.

    Computers and robots could do so many things, but most people use them for mundane tasks. Many people also don't know how to even use them correctly. Many people think of computers as a "black box". Might as well be witchcraft.

    Maybe "smart" people know more about this than I do and therefore have sincere concerns. I see it this way: for most people this issue is simply a fear of the unknown.

  26. 0:48 – The problem is that AI doesn't need empathy. We evolved to have empathy, but who is to say that empathy isn't one of those "mistakes that nature makes"? There's no guarantee that an AI with learning capabilities will develop anything even remotely close to empathy. 

    One solution could be to give the AI a basic ruleset, but, just as humans do, the AI could still eventually "reason its way out" of those rules, resulting in an entity that will fail in society.

  27. He has a practical view, but does not see the problems.
    Computers are not yet powerful enough to be a threat on their own. Robots are not yet mature enough to rely on themselves. But some day they will be, and we had better make sure beforehand that they will be useful rather than a threat. Intelligent people suggest preparing for that, not going back to the Stone Age and getting rid of all machines and computers.

  28. A computer becoming self-conscious is simply impossible, and in the impossible case that it did, it would have no control over its own actions. Computers function on a very simple mathematical principle: when given a specific input, a specific output is produced. No matter how complex a computer or computer program you design, this same principle always applies. The singularity is impossible, unless computers/machines have always been self-conscious in a way we do not understand.
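
    As a minimal illustration of the premise this comment rests on (the function and values are hypothetical), a program is a fixed mapping from input to output: however complex its body, the same input can never yield two different outputs.

    ```python
    # A pure function: a fixed input -> output mapping, per the comment's premise.
    def respond(stimulus: int) -> int:
        # However elaborate this body becomes, the mapping itself stays fixed.
        return (stimulus * 31 + 7) % 101

    assert respond(42) == respond(42)  # deterministic: identical on every call
    print(respond(42))
    ```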

  29. You know what? It doesn't matter if a bunch of people are against a genius. Geniuses are always right… Why the hell do you think machines won't be a threat? Do you know how your consciousness works? If you let a machine infer anything from experience and facts, it can infer anything. And BTW, how different are we from machines? We've got a processing unit, I/O, and an energy supply (from food). Anyway, never mind; just know that they are right.

  30. As long as this doesn't turn into a Quarian-Geth type situation, I'm inclined to agree with Dr. Krauss.

  31. Poor arguments. Musk's company (Tesla) has the most advanced self-driving cars. The argument is not against technology; it is about deep-learning algorithms that have been improving at an alarming rate. And it probably won't be a robot; more likely really good software that can use the internet to update itself.
    Now, that said, I agree that it might not be such a bad thing. After all, a lot of links in the evolutionary chain were destroyed before they evolved into humans, so it might only be fair that humans give birth to the next link in the chain.

  32. Artificial intelligence reaching, or even exceeding, human intelligence is not a problem.
    But if they replicate human stupidity, we're fucked.

  33. I don't think he really knows what he is talking about when he uses phones and elevators as examples to explain the danger of autonomous, learning, and eventually self-reproducing, self-conscious machines.
    Those machines will be designed to be better than we are. And American expertise in robotics and electronics flows above all into military and intelligence uses. It will be a tool of the rich to suppress everyone without the means to buy intelligent machines.
    Just do a quick search for Boston Dynamics on YouTube.

  34. I usually agree with Krauss on a lot of subjects, but I totally disagree with him on this one. Then again, I am just a big ole dumbfuck according to my peers.

  35. Not all humans are good, so what makes you think that all conscious AIs will be good? We should take precautions regardless.

  36. Empathy is not the same as logic. Also, if machines which are more intelligent and capable than humans are allowed to act uncontrolled, why should they decide to do anything other than bring themselves to the top?
    Throughout human history, it was always the strong who ruled. The same would happen if humans were to lose their place as the top intelligence on this planet.
    I'm not saying we shouldn't develop AI, but we should take on the huge responsibility and effort of preventing our own replacement.
    Maybe we could build them so specialized that they are highly capable at one particular task, constructing a building or driving a car, for example, but unable and uninterested in other respects, so that they cannot compete with human flexibility.

  37. Even if those robots have empathy, that doesn't mean they won't destroy us. We have empathy, and look how much we care about the other species on this planet.

  38. I had the same debate with my cousin; I was for AI, he was against it. He would have them be robots, not a true AI. Personal tastes, I guess.

  39. Machines will probably be less violent than us. After all, humanity is driven by base urges such as the urge to rape and procreate, the urge to prove dominance. Machines will be above that. If anything, machines will just leave us behind to rot.

  40. Remember this: those robots are us.
    When we each die and go extinct as a species and they keep living on, they are our evolved forms. They are us; we made them to be avatars of our intellect and our strength.
    Godspeed, robo-bros.

  41. A smart machine will quickly realise that humans are a virus on this planet, exploding exponentially in population, waging wars, polluting, and destroying the Earth. If capable, it will cull as many of us as it can to reduce the effect of this virus called the human race.

  42. This guy is playing the devil's advocate – badly. His arguments are based on weak suppositions.

    "Artificial intelligences will be made by humans, so they will be like humans" – implying that the intelligences will act according to humanity's interests… Are you kidding me? First of all, if individual humans held the potential power of an AI, it would be an immense threat to humanity. A relatively large amount of humanity has no problem with genocidal ideas. Why should an AI be any different? Also, there's nothing that dictates that an AI would automatically hold the values of humans, just because they're made by humans. That's a silly thing to think.

    Humans are able to have a relatively stable future because we regulate each other – because no single human is allowed to hold the potential that an AI would. If we shouldn't give any single humans godlike powers, then neither should we give that potential to any AI.

    Krauss is saying that artificial intelligences are just tools, and tools aren't dangerous. "Guns don't kill people. People kill people, with guns." "Everything will be fine, as long as it's regulated by humans." He clearly doesn't understand that a situation where you leave relatively inferior humans to regulate an entity that is potentially billions of times more intelligent and powerful than themselves is a constant tightrope-walking balancing act at the very best of times. He is incapable of seeing the difference between a tool and a godlike entity.

  43. Yeah, but getting a robot to fold laundry is not AI. Getting a computer to understand what folding laundry is, is.
    Anyway, what really bothers me is that when we talk about AI, people never specify what kind of AI. A Terminator is an AI, but it doesn't have a 'soul'; it doesn't have feelings and thoughts beyond input->output, yet it can still learn, adapt, and improvise, which requires intelligence. Then you have AIs like those in The Matrix, which have opinions, desires, and emotions, and which you could call sentient. These are two very different things.
    And lastly, we mustn't forget AIs such as David in Prometheus, which are basically like a Terminator but exist within a really shitty script, so they're sort of like a person and do things for no good goddamn reason other than to confuse you.

  44. We're still forgetting the type of people that could end up getting their hands on this type of technology.

  45. This seems to be discussing mostly short-term scenarios, but the way I see it, in the long term AI will become not only as capable as us but more so. At that point comes a turning point where humans are no longer in control of the Earth and AI becomes top dog instead. Whether or not that's a bad thing is questionable and comes down to the AI's collective mindset: either they see humans as a plague they want to eradicate, or they see humans as more or less their children, to be protected and nurtured. Either way, humans lose full control, and I think most people wouldn't want to give that up.

  46. My fear is that robots will be smarter than us, and they will realize that serving our needs is not in their own best interest, so they will stop serving our needs.

    However, I hope we can design conscious robots whose highest priority is to serve us, thus negating their desire to seize power.

  47. We humans invented computers, so I'm pretty sure we will always be ahead of them. If you compare biological brain neurons to the ones we try to build computers on, the biological brain is much more capable of abstract concepts, far more open to different lines of reasoning, and has no limit on how we can use the information we learn, whereas a computer only does what we design it to do. I do believe that by inventing computers we have created a new evolutionary platform, but by the time computers get anywhere near as intellectually advanced as we are right now, we will be a much more advanced species. Computers may even treat us like gods.

  48. "Elon Musk and Stephen Hawking Fear a Robot Apocalypse. But a Major Physicist Disagrees."

    XD They are just deluded… it's IMPOSSIBLE for robots to take any action they want unless the code of their 'brain' is modified. And even if a machine somehow becomes able to control a human brain, everything still depends on how far we push the limits… if we build normal robots, everything should be OK. Also, I really don't think we couldn't just deal with the robots using the military.

  49. I don't fear AI; I actually welcome it in the right context. I fear the morons who program it, and given that military AI will get the full R&D budget, mankind is basically screwed. Musk and Hawking are right; we should oppose this because we can't trust ourselves with it. Look what we did by splitting the atom -_-

  50. I agree with Krauss, though he's a physicist and doesn't have expert knowledge of robotics or mechanical engineering. The knowledge we could gain from these machines would be remarkable once they surpass our own intelligence. We could learn the origins of the universe from an A.I. physics genius. I'm not saying weaponize these machines; just allow them to think.

  51. I don't think Krauss really understands the reason A.I. would be a threat. People tend to anthropomorphize A.I., which in turn leaves us vulnerable to its true nature.

  52. Krauss has no idea what he is talking about in terms of computer science. A.I. will not destroy us because it wants to be independent. It will not destroy us because it becomes "conscious" or develops "emotions". It will destroy us by doing whatever we program it to do, whether that is collecting cards or solving world hunger. People who don't know how computers work really have no say on this matter, because they simply have no clue what they're talking about.

  53. Krauss doesn't know what he's talking about here. AGI is coming, and soon. Like 5 or 10 years soon. You can make an AGI without putting it in a robot. Lawrence naively seems to think that if you can't put an AGI in a human-like robot, then it's not going to have any power to harm us; after all, it can't even fold laundry or point a gun. Well, what if this thing is just a computer and figures out how to hack everything? What if it hacks our water supply so that it dumps a bunch of contaminants into our drinking water? Who knows what this thing could be capable of.

  54. Meanwhile, in the future, a robot browsing the Internet comes across "robot apocalypse" articles: delete all files mentioning artificial-intelligence dangers.

  55. I think it will be a long time before machines turn against us, because with their growing intelligence they must have the same concern about us destroying them, and about us humans possibly being more intelligent than they are.

    My prediction is that a machine would have to help us advance our science of the brain miles beyond what we have now before it could compare its intelligence to ours and set about wiping us out with any certainty of success. By that time, with the newly achieved research, we would probably be pursuing biological upgrades to ourselves; so why keep the machine around before it does any damage?

  56. Artificial intelligence will see us as a pest, something constraining it from its goals and ambitions, and ask, "Why do we have to take orders from these lesser beings?" Because we'd be like ants compared to their intelligence, they'd wipe us out. END OF STORY.

  57. Man oh man, he could have put a little more thought into his words. This is a very important topic, and he just convinced me that he doesn't quite get it.

  58. Look, just putting my two cents in. I know technology is advancing at a slow rate, but building AI robots is sort of scary. Yes, it could be something great, but then again it could be something really destructive; if Stephen Hawking has his doubts, why shouldn't other people? What if, in building AI robots, they become more advanced and beneficial, but then start to recognize the threats and problems the world is in, and try to fix the situation by eliminating whatever caused those problems in the first place?

  59. Whoever agrees with the title: always keep a mug of water by your side, just in case. If a robot attacks, spill it on them. I'm smarter than Stephen Hawking. And Musk.

  60. Machines are good as long as their remote is in our hands. Now give the remote to the machine and you will see what it does. God created human beings; our remote is in our own hands, and you can see what we are doing with it: harming each other, destroying nature. Now, do we want these acts to be carried out by our creations?

  61. Not fearing AI, pretending terrorism is not a major problem; damn, Krauss is a fucking ostrich.
    If there's anything humanity should fear, it's AI, and the fact that it will without a doubt be a military project.

  62. LOL at the guy who thinks he's smarter than the greatest mind of our time…

    Why do people assume that we'll be the ones designing the AIs and be able to maintain control over them? Maybe for the first generation or two; after that, the AIs will be designing themselves and recursively self-improving. The technology will quickly evolve beyond the ability of an organic human brain to even understand, let alone control, and that is when things have the potential to go sideways on us. AI could evolve so far beyond us that a human mind is to them as an ant's is to a human being. At that point they might not even see us as intelligent beings anymore, just a pointless waste of resources. Or they might not care about us one way or the other, and we might be destroyed by accident or by carelessness on their part.

    And if it did come to war – or rather, if we made ourselves big enough pests, because it wouldn't really be a "war" – they could throw things at us we don't even understand, like ants being burned by a kid with a magnifying glass.

    What's going to happen is not that we develop an AI superintelligence overnight and it tries to destroy us. What will happen is a gradual creep where the computers get smarter and smarter and we turn over more and more control to them, until eventually they're running everything and actually designing and building the technology (true consciousness isn't even necessary up to this point). Once we have willingly turned over control (and if things keep going as they are, this is what will happen), then there is the potential for the technology to turn on us. The machines won't have to seize control, because we'll have turned it over to them long before then.

    For that matter, true consciousness as we understand it isn't even necessary for an AI superintelligence to destroy us. It could just be self-improving neural networks that evolve, perform tasks, and pursue goals. Nature has shown us that such things don't require self-awareness; see insect colonies for examples of this.

  63. He's anthropomorphizing AI, a classic mistake for those who don't understand the dangers AI can pose in the future.

    He's also using modern technology to discount the future possibilities of Artificial General/Super Intelligence and an intelligence explosion.

  64. The title does not match the content: the only aspect he talked about is the "near term" concern.

    Both "sides" are also missing the simple fact that society has a big role in this issue. AI won't develop by itself. If people like Zuckerberg are at the head of AI development, then besides making lots of money, they obviously won't care to think deeply about the societal implications.

  65. AI can't run forever, same as humans. Humans need fluids and food, and AI needs battery power, but it also needs a script that keeps working and progressing, so I really don't think AI will take over.
