
The seven ways terrorists could use AI to wreak havoc – and how we can fight back

Weaponised drones used as deadly targeted missiles, sophisticated CGI videos pumping out personalised fake news propaganda and all-seeing surveillance systems destroying personal privacy sound like props in a terrifying dystopia, but science fiction is becoming science fact.

Security experts warn we need to take action now to prevent artificial intelligence (AI) from becoming a threat to humanity.

A new 100-page report highlights some of the threats new technology poses, and the ways terrorists, cybercriminals and even our own governments could use them to make our lives a misery.

The authors include experts from Cambridge’s Centre for the Study of Existential Risk, a university team that focuses on the “catastrophic pitfalls” threatening future civilisations.

The report identifies three broad areas in which AI could become a threat – our digital, physical and political security – and sets out seven ways the technology could be misused.

They include:

1. Turning commercial drones into face-targeting missiles

Could our own drones be used against us? (Image: Getty)

Groups like the so-called Islamic State are already using drones loaded with explosives, and with aerial delivery drones already being tested here in Cambridge, similar commercial devices could be put to devastating effect.

As well as conventional drones used as weapons, the report warns of “physical systems that it would be infeasible to direct remotely (e.g. a swarm of thousands of micro-drones).”


2. Automated hacking

Could we develop computers capable of doing their own hacking? (Image: Getty)

Advanced AI systems could be used to hack into secure online systems, putting our bank details, personal information and private correspondence in the hands of criminals.

There is already evidence AI is being used by hackers. According to the report: “AI is already being used for offense by sophisticated and motivated adversaries.

“Expert opinion seems to agree that if this hasn’t happened yet, it will soon.”


3. Speech synthesis to impersonate targets

Sadly, not all online impersonations are as easy to detect (Image: Getty)

Even the best impressionists struggle to mimic a voice convincingly, but there are fears speech synthesis software could soon ‘learn’ to imitate individuals’ voices perfectly.

The report says: “There is no obvious reason why the outputs of these systems could not become indistinguishable from genuine recordings, in the absence of specially designed authentication measures.

“Such systems would in turn open up new methods of spreading disinformation and impersonating others.”

4. Crashing autonomous vehicles

Autonomous cars promise many benefits to drivers – but could they also have a darker side? (Image: Getty)

With self-driving cars and lorries already being tested on our roads, the prospect of these vehicles being hijacked by terrorists is terrifying.

Hackers have already managed to remotely commandeer a Jeep and bring it to a standstill on a busy highway in the US.

One of the threats highlighted in the report is that “Commercial systems are used in harmful and unintended ways, such as using drones or autonomous vehicles to deliver explosives and cause crashes.”


5. The creation of targeted propaganda

Can you be sure the politicians on your online feeds are really saying what you think they are? (Image: Getty)

With fake news already a growing concern around the globe, what would be the effect of mass-produced false video footage flooding our newsfeeds and social media accounts?

The report warns of malicious groups using targeted videos to “mislead the public”, suggesting AI systems could “simplify the production of high-quality fake video footage of, for example, politicians saying appalling (fake) things.”

6. The rise of ‘bots’

Is a robot reading this right now? (Image: Getty)

Online, can you tell who is real and who isn’t?

Thousands of automated ‘bot’ accounts are already flooding Facebook and Twitter in an attempt to sway opinion and influence elections – and the situation could be about to get a whole lot worse.

The report says: “It is unclear to what extent political bots succeed in shaping public opinion, especially as people become more aware of their existence, but there is evidence they contribute significantly to the propagation of fake news.”


7. Autonomous weapons systems

While military technology hasn't quite caught up with sci-fi imaginings, the dangers are still there (Image: Getty)

AI systems are increasingly used by the military, with computerised systems deployed on the battlefield in most modern conflict zones.

But what happens when this deadly technology is used against us?

As the report states: “Someone who uses an autonomous weapons system to carry out an assassination, rather than using a handgun, avoids both the need to be present at the scene and the need to look at their victim.

“A worst-case scenario in this category might be an attack on a server used to direct autonomous weapon systems, which could lead to large-scale friendly fire or civilian targeting.”

What can be done?

Dr Seán Ó hÉigeartaigh is executive director of the Centre for the Study of Existential Risk, and was one of the report’s co-authors.

He said: “Artificial intelligence is a game-changer and this report has imagined what the world could look like in the next five to ten years.

“We live in a world that could become fraught with day-to-day hazards from the misuse of AI and we need to take ownership of the problems – because the risks are real. There are choices that we need to make now, and our report is a call-to-action for governments, institutions and individuals across the globe.

“For many decades hype outstripped fact in terms of AI and machine learning. No longer.

Artificial intelligence: it's hard to visualise in a stock photo, but its risks are very real… (Image: Getty)

“This report looks at the practices that just don’t work anymore – and suggests broad approaches that might help: for example, how to design software and hardware to make it less hackable – and what type of laws and international regulations might work in tandem with this.”


As well as pinpointing some of the risks, the report recommends actions that could be taken to combat the threats posed by AI. They include:

  • Policy-makers and technical researchers need to work together now to understand and prepare for the malicious use of AI.
  • AI has many positive applications, but it is a dual-use technology, and AI researchers and engineers should be mindful of – and proactive about – the potential for its misuse.
  • Best practices can and should be learned from disciplines with a longer history of handling dual use risks, such as computer security.
  • The range of stakeholders engaging with preventing and mitigating the risks of malicious use of AI should be actively expanded.

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation is available to download and read at www.maliciousaireport.com.
