In the near future, criminals will turn AIs against us

You check your voicemail – it’s your daughter, panicking, garbled. She’s travelling, away on her gap year. You’ve had a series of emails from her – she’s lost her phone and money, and she needs help. Now, you’re tech-savvy; you’re not easy to con. But this is her – her voice. You wire the money.

However, across the world a thousand other parents have received their own personalised emails and voicemails. A criminal’s algorithm has scraped social media for holiday videos and photos and created finely targeted, tailor-made messages. The voice on your phone was synthesised. All of this was done at low cost, with minimal human labour, using artificial intelligence.

AI has recently made significant progress, which unfortunately also makes scenarios like the one above increasingly plausible. Much of this progress has come from impressive research advances: Google DeepMind’s AlphaZero system took just eight hours to master the ancient Chinese board game of Go by learning from experience, and the underlying approaches may in time be applied to scientific and economic challenges, promising great benefits. Beyond such technical feats, AI is revolutionising our daily lives, from voice assistants such as Alexa and Siri to the design of new medicines.

But as our new report, released today, makes clear, AI technology is ‘dual-use.’ While it will be used to benefit society in many ways, it can and will also be used maliciously. Criminals, terrorists and rogue states will exploit these powerful tools to harm people. We must explore ways to more systematically forecast, prevent, and mitigate these malicious uses.

The malicious use of AI doesn’t just threaten people’s property and privacy – in some of the more worrying scenarios, it also threatens their lives. The proliferation of drones and other cyber-physical systems, such as autonomous vehicles and connected medical devices, presents tempting targets and tools for terrorists, hackers, and criminals. Possible scenarios include crashing autonomous vehicles or turning cheap commercial drones into face-hunting missiles.

AI could also be weaponised in a way that threatens our ability to sustain truthful public debate. The recent indictment from US Special Counsel Robert Mueller alleges that Russia had a professional team of more than 80 people working full-time to disrupt the 2016 US presidential election.

What will happen when professional trolls can release cheap, highly believable fake videos? Tools already exist that can generate fake video footage from raw audio files, and others can synthesise fake audio that sounds like a specific speaker. Combine the two, and it will soon be possible to create entirely fabricated news videos. What will happen when trolls can control an army of semi-autonomous bots using techniques based on ‘reinforcement learning’ and other AI approaches? What will happen when they can precisely target people with cheap, individualised propaganda? In the future, a team of 80 fully leveraging AI could have the impact of a team of 8,000 today.

We’re not helpless in the face of these emerging risks, but we need to acknowledge their severity and act accordingly. Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.

AI researchers and companies have been thoughtful and responsible when it comes to the ethical implications of the technologies they are developing. Thousands have signed an open letter calling for robust and beneficial AI. AI companies are working together through the Partnership on AI. Ethical guidelines are emerging, such as the Asilomar AI Principles and the IEEE’s Ethically Aligned Design. This culture of responsibility needs to be continued and deepened when it comes to security, extending beyond the issues of unintentional harm (such as safety accidents and bias) that have occupied much of the debate until now.

AI researchers and the organisations that employ them are in a unique position to shape the emerging security landscape. This calls for exploring a range of solutions, some of which may be uncomfortable in today’s academic culture, such as delaying a small number of select publications on certain techniques so that defenses can be developed, as is more common in the cybersecurity community. 

We need to ask the tricky questions: what sorts of AI research are most amenable to malicious use? What new technologies need to be developed to better defend against plausible attacks? What institutions or mechanisms can help us strike the right balance between maximising the benefits of AI and minimising its security risks? The sooner we grapple with these questions, the less likely it is that the spoofed-voicemail scenario above will become a reality.

