Well, the world didn’t end on 21st December 2012, as some people thought the ancient Mayans had predicted. Indeed, all previous prophecies have failed. So far.
But should we be worried?
In November 2012 the Sunday Times reported on the opening of what was nicknamed a ‘Terminator studies’ centre at Cambridge University, where, Science Editor Jonathan Leake wrote, ‘leading academics will study the risk that super-intelligent robots and computers could become a threat to humanity’.
Existential Risk, Threatening Existence
A whole industry of sci-fi films and books has been built on the premise that man-made machines take over and threaten to destroy everything human, subjugating mankind as slaves or food sources. The Terminator, The Hitchhiker’s Guide to the Galaxy, I, Robot and 2001: A Space Odyssey spring to mind. There are many others.
This new laboratory, the Centre for the Study of Existential Risk (CSER), is a serious proposition though. Distinguished contributors from the great and learned will come from the fields of astronomy, philosophy, robotics, biology, neuroscience and economics.
Lord Rees, the Astronomer Royal, cosmologist and author of Our Final Century (2003), a book about imminent human extinction, is one of the experts behind the centre, which will study the ‘four greatest threats’ to our species: artificial intelligence, climate change, nuclear war and rogue biotechnology.
Rees and his co-founders believe ‘developments in human technology may soon pose extinction-level risks to our species’. Professor Huw Price told Leake, ‘we have machines that have trumped human performance in chess, flying, driving, financial trading and also face/speech/handwriting recognition’.
Their point is that by handing over so much to AI we risk giving up planetary control to ‘intelligences that are simply indifferent to us and to things we consider invaluable’. That is the nub of it. The new intelligences need not share our values, ambitions and emotions. Eventually we can only appear inferior to them and must, logically, be declared redundant.
The Bigger Fear
Is all that too fanciful? Will the machines – predicted by American futurist Ray Kurzweil to become so powerful that by 2040 they will build all future machines and solve all human ills – turn out not to be so evil after all?
The more realistic fear is surely that the artificial intelligence systems will fail, deliberately or accidentally, and that the damage to us will be beyond measure. The Cambridge team cite the disastrous financial crashes of recent years, which were not led by any obvious artificial intelligence.
Lord Rees summed it up: ‘we fret about carcinogens in food, train crashes and low-level radiation. But we are in denial about “low-probability, high-consequence” events that should concern us more and could have global consequences’.
Cascading catastrophes through our increasingly interlinked networks, reliance on giant computers and data banks, diminishing earthly resources, pandemics and diseases, irrational conflicts and human errors are all considered worthy of study. The solutions are a different matter altogether.
And we can’t ask a computer to tackle it all on our behalf, even if its speed and power double every eighteen months.
Check out, while there is still time:
Centre for the Study of Existential Risk, Cambridge University
Daily Mail, Amanda Williams, ‘Let’s make sure he WON’T be back’, 25 November 2012
Will Web Bots Predict the End of the World in 2012? 10 January 2012
Could Computer Over-Reliance Be the Death of Us All? 30 July 2012
Another Week, Another Systems Malfunction, 3 July 2012
Human Uniqueness Must Be Key to Security, 25 April 2012