Killer drones, cyber attacks and targeted propaganda will undermine national security as 'malicious AI' grows increasingly powerful, warn experts

  • The security implications of AI have been outlined by 26 experts in the field
  • They forecast rapid cybercrime growth, drone misuse and other types of crime
  • They expect a rise in the use of bots to manipulate elections and social media
  • The experts suggest that policy-makers and technical researchers work together to understand and prepare for the malicious use of AI

Terrorists, rogue states and criminals could soon use artificial intelligence to undermine national security, warns a new report.

Superhuman hacking, surveillance and persuasion are just some of the terrifying ways 'malicious' AI could threaten our freedom.

In a 100-page report, 26 AI experts have outlined the security implications of 'emerging technologies'.

They predict that 'bots' will be used to interfere with news gathering and penetrate social media, among a host of plausible scenarios over the next five to ten years.


The security implications of 'emerging technologies' were announced by 26 experts in the AI field, who forecasted rapid cybercrime growth, drone misuse and the unprecedented rise in the use of 'bots' to manipulate everything from elections to the news agenda and social media


'Artificial intelligence is a game changer and this report has imagined what the world could look like in the next five to ten years,' said Dr Seán Ó hÉigeartaigh, co-author and Executive Director of Cambridge University's Centre for the Study of Existential Risk.

'We live in a world that could become fraught with day-to-day hazards from the misuse of AI and we need to take ownership of the problems - because the risks are real. 

'There are choices that we need to make now, and our report is a call-to-action for governments, institutions and individuals across the globe. 

'It is troubling, but necessary, to consider the implications of superhuman hacking, surveillance, persuasion, and physical target identification.'

The report, 'The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation', also recommends interventions to mitigate the threats posed by the malicious use of AI: the experts suggest that policy-makers and technical researchers need to work together now to understand and prepare for such misuse.

AI was identified as having many positive applications, but it is known as a 'dual-use technology', meaning engineers should be mindful of and proactive about the potential for its misuse. 

While the dangers of AI systems have been highlighted in high-profile settings such as government, the effect on the general population has not been fully analysed - until now.

The co-authors of the study come from a wide range of organisations and disciplines, including Oxford University's Future of Humanity Institute; Cambridge University's Centre for the Study of Existential Risk; the Center for a New American Security, a US-based bipartisan national security think-tank; and other organisations. 

WHY ARE PEOPLE SO WORRIED ABOUT AI?

It is an issue troubling some of the greatest minds in the world at the moment, from Bill Gates to Elon Musk.

SpaceX and Tesla CEO Elon Musk described AI as our 'biggest existential threat' and likened its development to 'summoning the demon'.

He believes super intelligent machines could use humans as pets.

Professor Stephen Hawking said it is a 'near certainty' that a major technological disaster will threaten humanity in the next 1,000 to 10,000 years.

They could steal jobs 

More than 60 percent of people fear that robots will lead to fewer jobs within the next ten years, according to a 2016 YouGov survey.

And 27 percent predict that it will decrease the number of jobs 'a lot', with previous research suggesting admin and service sector workers will be the hardest hit.

As well as posing a threat to our jobs, other experts believe AI could 'go rogue' and become too complex for scientists to understand.

A quarter of the respondents predicted robots will become part of everyday life in just 11 to 20 years, with 18 percent predicting this will happen within the next decade. 

They could 'go rogue' 

Computer scientist Professor Michael Wooldridge said AI machines could become so intricate that engineers don't fully understand how they work.

If experts don't understand how AI algorithms function, they won't be able to predict when they fail.

This means driverless cars or intelligent robots could make unpredictable 'out of character' decisions during critical moments, which could put people in danger.

For instance, the AI behind a driverless car could choose to swerve into pedestrians or crash into barriers instead of deciding to drive sensibly.

They could wipe out humanity 

Some people believe AI will wipe out humans completely.

'Eventually, I think human extinction will probably occur, and technology will likely play a part in this,' DeepMind's Shane Legg said in a recent interview.

He singled out artificial intelligence, or AI, as the 'number one risk for this century'.

Musk warned that AI poses more of a threat to humanity than North Korea.

'If you're not concerned about AI safety, you should be. Vastly more risk than North Korea,' the 46-year-old wrote on Twitter.

'Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that's a danger to the public is regulated. AI should be too.'

Musk has consistently advocated for governments and private institutions to apply regulations on AI technology.

He has argued that controls are necessary in order to prevent machines from advancing beyond human control.

AI was identified as having many positive applications, but it is known as a 'dual-use technology', meaning engineers should be mindful of and proactive about the potential for its misuse. For example, drones could be used as missiles with fleets deliberately crashed


The 100-page report identified three security domains as particularly relevant to the malicious use of AI: digital, physical and political security.

It suggests that, left unchecked, the misuse of AI could allow for large-scale, finely-targeted and highly-efficient attacks.

The authors expect 'novel' cyber-attacks, such as automated hacking, impersonation of targets, or finely-targeted spam emails using information scraped from social media. 

Drones could be used as missiles, with fleets of the airborne vehicles deliberately crashed.

Further physical risks include the rise of autonomous weapons systems on the battlefield, which risk the loss of meaningful human control. 

In politics, detailed analytics, targeted propaganda, and cheap, highly-believable fake videos present powerful tools for manipulating public opinion on previously 'unimaginable scales.' 

The report also detailed the threat to civil liberties, including surveillance, invasion of privacy and the potential of radically shifting the power between individuals, corporations and states. 

The authors explored several interventions to reduce threats associated with AI misuse, including 'institutional and technological solutions' to tip the balance in favour of 'those defending against attacks.' 

The report also 'games' several scenarios where AI might be maliciously used as examples of the potential threats in the coming decade. 

'For many decades hype outstripped fact in terms of AI and machine learning. No longer,' said Miles Brundage, Research Fellow at Oxford University's Future of Humanity Institute. 

'AI will alter the landscape of risk for citizens, organisations and states - whether it's criminals training machines to hack or 'phish' at human levels of performance or privacy-eliminating surveillance, profiling and repression - the full range of impacts on security is vast. 

'It is often the case that AI systems don't merely reach human levels of performance but significantly surpass it. 

'It is troubling, but necessary, to consider the implications of superhuman hacking, surveillance, persuasion, and physical target identification, as well as AI capabilities that are subhuman but nevertheless much more scalable than human labour.' 

HOW ARTIFICIAL INTELLIGENCES LEARN USING NEURAL NETWORKS

AI systems rely on artificial neural networks (ANNs), which try to simulate the way the brain works in order to learn.

ANNs can be trained to recognise patterns in information - including speech, text data, or visual images - and are the basis for a large number of the developments in AI over recent years.

Conventional AI uses input to 'teach' an algorithm about a particular subject by feeding it massive amounts of information.   

AI systems rely on artificial neural networks (ANNs), which try to simulate the way the brain works in order to learn. ANNs can be trained to recognise patterns in information - including speech, text data, or visual images


Practical applications include Google's language translation services, Facebook's facial recognition software and Snapchat's image altering live filters.

The process of inputting this data can be extremely time consuming, and is limited to one type of knowledge. 
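In code terms, that 'teaching' boils down to showing a network labelled examples over and over and nudging its internal weights to reduce its mistakes. The report itself contains no code; the snippet below is only a minimal illustrative sketch in Python (using numpy) of a tiny two-layer network learning a toy pattern, with the data, layer sizes and learning rate chosen purely for the example.

```python
import numpy as np

# Toy training data: four input patterns and their labels (the classic XOR
# problem), standing in for the large labelled datasets real systems learn from.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 8))   # weights: input layer -> hidden layer
W2 = rng.normal(size=(8, 1))   # weights: hidden layer -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training loop: show the network the data repeatedly and adjust its weights
# in the direction that reduces its prediction error (gradient descent).
for step in range(5000):
    hidden = sigmoid(X @ W1)       # forward pass
    output = sigmoid(hidden @ W2)
    error = output - y             # how wrong the network currently is

    # Backward pass: work out how each weight contributed to the error.
    grad_out = error * output * (1 - output)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    W1 -= 0.5 * X.T @ grad_hidden

# After training, the outputs should sit close to the target labels 0, 1, 1, 0.
print(np.round(sigmoid(sigmoid(X @ W1) @ W2).ravel(), 2))
```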

A newer breed of ANNs, known as adversarial networks, pits the wits of two AI bots against each other, allowing each to learn from the other.

This approach is designed to speed up the process of learning, as well as refining the output created by AI systems. 
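Again purely as an illustration (nothing like this appears in the report), the adversarial idea can be shown in miniature: one toy model, the 'generator', learns to produce numbers that a second toy model, the 'discriminator', can no longer tell apart from real examples. The target distribution, learning rate and number of steps below are arbitrary choices made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 'Real' data the generator must learn to imitate: numbers centred on 3.0.
def real_samples(n):
    return rng.normal(loc=3.0, scale=0.5, size=n)

# Generator: turns random noise z into a fake sample a*z + b (two learnable numbers).
a, b = 1.0, 0.0
# Discriminator: a logistic classifier D(x) = sigmoid(w*x + c) guessing real vs fake.
w, c = 0.1, 0.0
lr = 0.05

for step in range(3000):
    z = rng.normal(size=64)
    fake = a * z + b
    real = real_samples(64)

    # Train the discriminator: push D(real) towards 1 and D(fake) towards 0.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w -= lr * np.mean(-(1 - d_real) * real + d_fake * fake)
    c -= lr * np.mean(-(1 - d_real) + d_fake)

    # Train the generator: adjust a and b so the discriminator mistakes fakes for real.
    d_fake = sigmoid(w * fake + c)
    grad_fake = -(1 - d_fake) * w      # gradient of -log D(fake) for each sample
    a -= lr * np.mean(grad_fake * z)
    b -= lr * np.mean(grad_fake)

# The generated samples should have drifted towards the real data's mean of 3.0.
print(round(float(np.mean(a * rng.normal(size=1000) + b)), 2))
```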
