Artificial intelligence and killer drones… What could go wrong?
The AI is aggressive and becomes deadlier as it learns from its human opponents
Peace activists are zeroing in on very troubling, and downright scary, developments in high-tech weaponry: artificial intelligence and autonomous weaponized drones.
The issue of autonomous weaponized drones, programmed to kill without a human finger pulling the trigger, is getting much more attention in the wake of revelations coming out of Libya.
As Popular Mechanics describes the scene:
“The world’s first recorded case of an autonomous drone attacking humans took place in March 2020, according to a United Nations (UN) security report detailing the ongoing Second Libyan Civil War. Libyan forces used the Turkish-made drones to ‘hunt down’ and jam retreating enemy forces, preventing them from using their own drones.”
The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true “fire, forget and find” capability, according to an official report.
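What “fire, forget and find” means in practice is that the munition carries its own target-recognition logic and, once launched, needs no link back to an operator. The sketch below is a minimal, entirely hypothetical illustration of that onboard control flow; every class and function name is invented, and it reflects the concept rather than any real system’s software.

```python
import random
from dataclasses import dataclass

# Hypothetical sketch of a "fire, forget and find" control loop.
# Every name here is invented for illustration; nothing reflects
# any real weapon's software. The point is the control flow: after
# launch, the loop makes its decisions with no operator uplink.

@dataclass
class Detection:
    label: str         # what the onboard classifier thinks it sees
    confidence: float  # classifier confidence, 0.0 to 1.0
    bearing: float     # direction to the detection, in degrees

def sense() -> list:
    """Stand-in for an onboard camera and classifier pipeline."""
    return [Detection("vehicle", random.random(), random.uniform(0, 360))]

def autonomous_loop(target_label: str, threshold: float = 0.9) -> None:
    """Loiter, classify, and decide entirely onboard."""
    for _ in range(1000):  # loiter for a fixed number of sensor cycles
        for det in sense():
            # The engage/ignore decision is made by comparing the
            # classifier's output against a preset target profile;
            # no data connectivity to a human operator is involved.
            if det.label == target_label and det.confidence >= threshold:
                print(f"match at bearing {det.bearing:.0f} degrees")
                return
    print("no match found; loiter ended")

if __name__ == "__main__":
    autonomous_loop("vehicle")
```

It is precisely this absence of a human decision point inside the loop that activists object to.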
The Kargu-2 (“Hawk”), from Turkish defense contractor STM, is a quadcopter drone designed to carry a weapons payload. Here is a promotional video from STM.
Even more astonishing is the use of artificial intelligence for weapons. Last year, all eyes were on a highly anticipated Pentagon-sponsored competition between a human “Top Gun” pilot and an artificial intelligence pilot, both flying F-16s in a computer simulator.
Arms industry publication Breaking Defense wrote that the AI was extremely aggressive in the games, consistently able to turn and score killing hits on the simulated F-16 flown by an unnamed Air Force pilot. The AI exhibited “superhuman aiming ability” during the simulation.
In five rounds of air-to-air combat against one of the US Air Force’s best pilots… the AI computer won every time.
It’s not just the United States military developing AI weapons. News stories this week covered a showdown between a Chinese pilot and an AI in a similar dogfight.
At first, according to Forbes, the human pilot won easily, “shooting down” the AI-piloted plane. But then the tables turned:
“As the exercise continued the AI learned from each encounter and steadily improved. By the end it was able to defeat [the pilot] using tactics it had learned from him, coupled with inhuman speed and precision,” reported Forbes.
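Forbes does not say how the Chinese system was trained, but “learned from each encounter” is the core idea of reinforcement learning. As a purely illustrative toy, the sketch below pits a simple value-updating agent against a scripted opponent with a predictable habit, in an invented three-move game; over repeated rounds the agent discovers and exploits that habit.

```python
import random
from collections import defaultdict

# Toy illustration of "learning from each encounter." The game, the
# moves, and the opponent are all invented; this is a bandit-style
# reinforcement-learning update, not any real combat system.

MOVES = ["climb", "turn", "dive"]
COUNTER = {"climb": "dive", "turn": "climb", "dive": "turn"}  # COUNTER[m] beats m

def scripted_opponent() -> str:
    """A fixed habit the learner can exploit: heavily favors 'turn'."""
    return random.choices(MOVES, weights=[1, 8, 1])[0]

q = defaultdict(float)  # learned value estimate for each of our moves
ALPHA = 0.1             # learning rate: how fast results shift the estimates

block_wins = 0
for episode in range(1, 5001):
    # Explore a lot at first, then trust the learned values more and more.
    epsilon = max(0.05, 1.0 - episode / 2500)
    if random.random() < epsilon:
        move = random.choice(MOVES)
    else:
        move = max(MOVES, key=lambda m: q[m])

    # Win if our move is the counter to whatever the opponent did.
    reward = 1.0 if move == COUNTER[scripted_opponent()] else -1.0

    # The "learning from each encounter": nudge the chosen move's
    # value estimate toward the reward it just produced.
    q[move] += ALPHA * (reward - q[move])

    block_wins += reward > 0
    if episode % 1000 == 0:
        print(f"encounters {episode - 999}-{episode}: win rate {block_wins / 1000:.0%}")
        block_wins = 0
```

The printed win rate climbs block by block as the agent’s value estimates converge on the opponent’s habit, the same qualitative arc Forbes describes: early losses are the data from which later wins are built.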
Writing about last year’s dogfight (which was won by Team Heron, a small, women-owned tech firm), Project Ploughshares researcher Branka Marijan notes that the laws of war have not kept pace:
“With regulation rapidly falling behind developments in technology, the [dogfight] simulation reveals the clear need for the immediate regulation and institution of global norms on the application of AI in warfare. And this is action that is firmly in human hands.”