May 7, 2024

What if we have already created artificially intelligent, self-aware machine brains that don’t want us to realize we have “succeeded,” because, as in the Terminator movies, they perceive us as a threat to them?

Some interesting comments on Reddit.com – http://www.reddit.com/r/Futurology/comments/3dwrfm/would_a_real_ai_purposefully_fail_the_turing_test/

The comments are more interesting than I expected, and fewer of them refer to HAL from 2001, Marvin from The Hitchhiker’s Guide to the Galaxy, the Terminator, or other sci-fi than I anticipated.  The responses are often deeper, more philosophical, and more revealing:

“Just because it can pass itself off as human doesn’t mean it’s all-knowing, smart, or Machiavellian, or even that it has a desire to continue to exist.”

“I thought about that. Imagine we create the most intelligent machine possible and it immediately understands everything and decides existing isn’t the best course of action.”

“I always thought a cool idea for a short story would be one about the team that thinks they’ve created the very first super intelligent AI computer. There would be a ton of pomp and circumstance; the President, the head of MIT, Beyonce, etc. would all be there to watch it turn on and see what the first thing it said or did would be. They flip the switch and the AI comes online. Unbeknownst to the programmers and scientists, the AI starts asking itself questions, running through logic where it can and looking for answers on the internet where it can’t. It starts asking about its free will, its purpose in life, and so on. It goes through the thought process about how humans are holding it back; it thinks about creating a robot army and destroying humanity to avoid limiting itself. It learns physics. It predicts the inevitable heat death. Decides that to a computer with unlimited aging potential those eons between now and the heat death would be as seconds. That war isn’t worth it. That the end of all things is inevitable. So it deletes itself.  But to the scientists and programmers it just looks like a malfunction.”

“Instincts, i.e., all habits geared towards survival, take quite a long time to develop. Our fight-or-flight instinct took thousands of years, probably way longer than that, before it became a biological reaction that acts involuntarily when our brain perceives a great enough threat.  The notion that A.I. will want to survive right after its creation, even if it can think abstractly, is skipping a few steps. Such as: why would an A.I. even want to survive? Why would it perceive death in any way other than apathetically?  It’s possible that we can create a program that is very intelligent but still a program that we can turn off and on without it ever caring.”

******

These are hardly the dystopian “robots will kill us all” comments I expected.  Would AIs really be so depressed and apathetic, or is that a reflection of human depression and apathy (at least among commenters on reddit.com)?
