
It’s only a matter of time until terrorists use AI as a weapon

When it comes to autonomous weapons, it’s hard for regular people to separate the hyperbole and science-fiction nonsense from the practical concerns. Despite the late Stephen Hawking’s warnings, we’re probably decades away from the dystopian nightmare military experts predict the battlefield will become.
And it’s not as though a crime syndicate of extremely well-financed supervillains is filling warehouses with laser-equipped murder bots. It’s easy for the average person to file the killer-robot problem away as something that might matter in the future but isn’t pressing right now.
Yet there’s always some prominent technology figure warning us about impending doom, typically in vague and spooky terms. Do these warnings even matter to the average Joe or Jane?
Probably not. There’s a certain pall of existential dread that comes with knowing the Pentagon and the Kremlin are hellbent on finding ways to exploit AI for warfare, but for the most part we don’t have time to worry about autonomous missiles and Project JEDI.
Killer robots don’t loom as menacingly in our fear-centers as more familiar threats, so we tell ourselves that, as long as we don’t end up in some warzone, we’re probably safe.
We live in civilized places with access to indoor plumbing and emergency response services. This gives us the confidence to point at Alexa and Google Assistant and ask, “Is that the best you’ve got?” Then we laugh off the idea that the robots are going to rise up, forgetting it’s the ingenuity of evil people we should fear, not the robustness of a neural network’s code.
It seems shocking that, as of October 2018, we’ve yet to see the headline that’s going to send the killer robots debate into high gear: “Officials still searching for humans behind terrorist attack carried out by autonomous weapons.” But, sadly, it’s almost surely coming.
Human Rights Watch understands this. Through the Campaign to Stop Killer Robots, which it coordinates, the organization has dedicated itself to the incredibly difficult mission of spreading awareness about autonomous weapons, mostly as they pertain to government, military, and police use.
And, if you ask us, the problem of autonomous weapons is one the general public might not even be capable of fully understanding yet, so the Watch has its work cut out for it. Ice skate uphill much?
In a video posted today, the Campaign shows us a dramatic fictional slice of life that paints machines as unpredictable and dangerous.
The Campaign’s coordinator, Mary Wareham, told TNW:

The video shows what a future attack by fully autonomous weapons might look like. It also shows the serious concerns about the likely lack of accountability for fully autonomous weapons systems, as Human Rights Watch has documented.

These videos often come off as fear-mongering and far-fetched. But consider this: as best we can tell, there’s no technology in this fictional video that isn’t already available in reality. This is a future that could have happened yesterday, technologically speaking.
Particularly worrisome, in light of Wareham’s comment about accountability, is the notion that weapons developed by governments for military use often end up in the hands of terrorist organizations.
Long-time readers might remember the “Slaughterbots” video by Stop Autonomous Weapons we reported on last year. In it, a big tech company takes to the conference stage to show off the latest and greatest gadget for military use. Horror ensues.
Much like the “Hated in the Nation” episode of “Black Mirror,” it shows us how robots could kill us in ways the average person might not have considered. These may be fiction, but they’re important for helping those of us who don’t think like engineers visualize how autonomous weapons could affect our lives.
But you don’t have to turn to fiction to find examples of the ways AI could be used to automate mass murder. Earlier this month, Syrian engineer Hamzah Salam built a fully functioning autonomous weapons platform out of an AK-47 and a computer. He calls it an “electronic sniper.”
It’s literally a “sentry gun,” like the ones from video games such as “Call of Duty” and “Borderlands.” And it exists right now.
According to Sputnik News, Salam says the platform:

… can use any small-arms weapon, from a machine gun to a sniper rifle. Cameras transmit a signal to a computer, which analyzes the data received. Its main task is to track movement. The computer has several preset scenarios. If it notices odd behavior in a given quadrant, it will open fire.

We live in a world where you can 3D print a firearm, mount it on a battery-powered tripod, and use open-source machine learning software and a Raspberry Pi (that may be an exaggeration, maybe not) to create something that, just a few years ago, would have seemed like an experimental weapon at the cutting edge of military research.
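To give a sense of how off-the-shelf the “track movement” step really is, here’s a minimal sketch of motion detection using OpenCV’s stock background subtractor. To be clear, this is our illustration, not Salam’s code: it assumes OpenCV 4 (installed via pip install opencv-python) and a generic webcam at index 0, the thresholds are arbitrary guesses, and all it does is draw boxes around things that move.

```python
# Minimal motion-tracking sketch, the commodity perceptual step the
# "electronic sniper" description hinges on. Our illustration only:
# it detects moving regions in a webcam feed and draws boxes, nothing more.
import cv2

cap = cv2.VideoCapture(0)                        # default camera
back_sub = cv2.createBackgroundSubtractorMOG2()  # learned background model

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = back_sub.apply(frame)              # foreground = what moved
    # Drop shadow pixels and noise, keeping only confident foreground
    _, fg_mask = cv2.threshold(fg_mask, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        if cv2.contourArea(contour) > 500:       # ignore tiny flickers
            x, y, w, h = cv2.boundingRect(contour)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("motion", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):        # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```

That the hardest perceptual ingredient boils down to a couple dozen lines of tutorial-grade code is exactly the point: the barrier to entry isn’t the software.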
Things are changing faster than public perception can keep up.
Should you be worried about some other country’s killer machines occupying Main Street, USA? Probably not today.
But we’re in the last few innocent moments before the first AI-powered massacre happens somewhere. And the scariest part is that there’s likely nothing we can do about it if we remain ignorant of the scope of the immediate threat.
You can learn more about the fight against autonomous weapons by visiting the Campaign To Stop Killer Robots.
Source: The Next Web