An Autonomous Weaponized Drone "Hunted Down" Humans Without Command For First Time

Oxman

Skynet?

An autonomous drone may have hunted down and attacked humans without input from human commanders, a recent UN report has revealed. The incident, which took place in Libya in March 2020, may be the first time an artificial intelligence (AI) system has attacked humans on its own initiative; it remains unclear whether anyone was killed.

The report to the UN Security Council states that on March 27, 2020, Libyan Prime Minister Fayez al-Sarraj ordered "Operation PEACE STORM", which saw unmanned combat aerial vehicles (UCAVs) used against Haftar Affiliated Forces (HAF). Drones have been used in combat for years, but what made this attack different is that, after the initial assault with other supporting forces, the drones operated without human input.

"Logistics convoys and retreating HAF were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2 (see annex 30) and other loitering munitions," according to the report.

"The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true 'fire, forget and find' capability."

The KARGU is a rotary-wing attack drone designed for asymmetric warfare and anti-terrorist operations. According to its manufacturer, STM, it "can be effectively used against static or moving targets through its indigenous and real-time image processing capabilities and machine learning algorithms embedded on the platform." A video showcasing the drone shows it targeting mannequins in a field before diving at them and detonating an explosive charge.
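
Neither the UN report nor STM's marketing explains how the Kargu-2's software actually works, but loitering munitions of this class are generally described as running an object detector over an onboard camera feed and steering toward whatever it flags. The Python sketch below is a purely hypothetical illustration of such a "detect and pursue" loop: the frame format, thresholds, and function names are all invented, and the "targets" are simple bright blobs rather than the output of any real classifier.

```python
# Hypothetical "detect and pursue" loop for a loitering munition.
# Everything here is invented for illustration; nothing reflects the
# Kargu-2's actual (undocumented) software.
import numpy as np

FRAME_SHAPE = (120, 160)      # rows, cols of the simulated camera frame
DETECTION_THRESHOLD = 200     # pixel intensity treated as a "target"

def detect(frame):
    """Return the centroid (row, col) of bright pixels, or None.
    Stands in for an onboard ML object detector."""
    ys, xs = np.nonzero(frame >= DETECTION_THRESHOLD)
    if ys.size == 0:
        return None
    return float(ys.mean()), float(xs.mean())

def steering_command(centroid, shape):
    """Map a target centroid to (pitch, yaw) offsets from the frame
    centre, normalised to [-1, 1]."""
    cy, cx = shape[0] / 2, shape[1] / 2
    return (centroid[0] - cy) / cy, (centroid[1] - cx) / cx

def autonomy_loop(frames):
    """Process frames with no operator in the loop: search until a
    detection appears, then steer toward it ('fire, forget and find')."""
    for i, frame in enumerate(frames):
        target = detect(frame)
        if target is None:
            print(f"frame {i}: searching")
            continue
        pitch, yaw = steering_command(target, frame.shape)
        print(f"frame {i}: target at {target}, pitch={pitch:+.2f} yaw={yaw:+.2f}")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = []
    for t in range(5):
        frame = rng.integers(0, 100, FRAME_SHAPE).astype(np.uint8)
        if t >= 2:  # a bright "target" enters the scene and drifts
            frame[60 + t:64 + t, 80 + t:84 + t] = 255
        frames.append(frame)
    autonomy_loop(frames)
```

The point of the report's "fire, forget and find" phrasing is that nothing in a loop like this requires a data link: once launched, the munition keeps searching and engaging entirely on its own.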


Against human targets, the drones proved effective.

"Units were neither trained nor motivated to defend against the effective use of this new technology and usually retreated in disarray," the report reads. "Once in retreat, they were subject to continual harassment from the unmanned combat aerial vehicles and lethal autonomous weapons systems, which were proving to be a highly effective combination."

The report does not go into specifics about casualties or deaths connected with the attack, although it notes that the drones were "highly effective" in helping to inflict "significant casualties" on enemy Pantsir S-1 surface-to-air missile systems. It is therefore entirely possible that, for the first time, a human has been attacked or killed by a drone guided by a machine learning algorithm.

The attack, whether it produced casualties or not, will not be welcomed by campaigners against the use of "killer robots".

"There are serious doubts that fully autonomous weapons would be capable of meeting international humanitarian law standards, including the rules of distinction, proportionality, and military necessity, while they would threaten the fundamental right to life and principle of human dignity," says the Human Rights Watch. "Human Rights Watch calls for a preemptive ban on the development, production, and use of fully autonomous weapons."

Among other concerns is that the AI algorithms used by such robots may not be robust enough, or may be trained on flawed datasets. Beyond being open to outright errors (such as a Tesla being tricked into swerving off the road), there are countless examples of bias in machine learning systems: facial recognition that fails to recognize non-white skin tones, cameras that tell Asian people to stop blinking, soap dispensers that won't dispense for Black users, and self-driving cars that are more likely to run you over if you are not white. The toy example below illustrates one way such bias arises.
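
To see how underrepresentation in training data turns into unequal error rates, here is a deliberately extreme toy example in Python with scikit-learn. Two synthetic groups follow opposite labeling rules, group B contributes only 5% of the training data, and the resulting single model is near-perfect on group A while failing badly on group B. The groups, data, and numbers are entirely made up for illustration.

```python
# Toy demonstration of dataset bias: a model trained on imbalanced data
# optimises average accuracy, which the majority group dominates.
# All data is synthetic; no real system is modelled here.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_group(n, flip):
    """One feature; the label rule is inverted between the two groups."""
    x = rng.normal(0.0, 1.0, size=(n, 1))
    y = (x[:, 0] > 0).astype(int)
    return x, (1 - y) if flip else y

# Training set: 950 examples from group A, only 50 from group B.
xa, ya = make_group(950, flip=False)
xb, yb = make_group(50, flip=True)
model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# Evaluate on equal-sized fresh samples from each group.
for name, flip in [("A (majority)", False), ("B (minority)", True)]:
    xt, yt = make_group(2000, flip)
    print(f"accuracy on group {name}: {model.score(xt, yt):.2f}")
# Typical output: ~0.95+ for group A, ~0.05 for group B -- the model has
# simply learned the majority group's rule and applies it to everyone.
```

Real-world bias is rarely this stark, but the mechanism is the same: the model minimises average error, and the average is dominated by whoever is best represented in the data.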

Now, it appears, we could soon be trusting life-and-death decisions to technology that may be open to similar problems.