Inside the messy ethics of making war with machines


This is why a human hand must squeeze the trigger, why a human hand must click “Approve.” If a computer sets its sights upon the wrong target, and the soldier squeezes the trigger anyway, that’s on the soldier. “If a human does something that leads to an accident with the machine—say, dropping a weapon where it shouldn’t have—that’s still a human’s decision that was made,” Shanahan says.

But accidents happen. And this is where things get tricky. Modern militaries have spent hundreds of years figuring out how to differentiate the unavoidable, blameless tragedies of warfare from acts of malign intent, misdirected fury, or gross negligence. Even now, this remains a difficult task. Outsourcing a part of human agency and judgment to algorithms built, in many cases, around the mathematical principle of optimization will challenge all this law and doctrine in a fundamentally new way, says Courtney Bowman, global director of privacy and civil liberties engineering at Palantir, a US-headquartered firm that builds data management software for militaries, governments, and large companies. 

“It’s a rupture. It’s disruptive,” Bowman says. “It requires a new ethical construct to be able to make sound decisions.”

This year, in a move that was inevitable in the age of ChatGPT, Palantir announced that it is developing software called the Artificial Intelligence Platform (AIP), which allows large language models to be integrated into the company’s military products. In a demo of AIP posted to YouTube this spring, the platform alerts the user to a potentially threatening enemy movement. It then suggests that a drone be sent for a closer look, proposes three possible plans to intercept the offending force, and maps out an optimal route for the selected attack team to reach them.

And yet even with a machine capable of such apparent cleverness, militaries won’t want the user to blindly trust its every suggestion. If the human presses only one button in a kill chain, it probably should not be the “I believe” button, as a concerned but anonymous Army operative put it during a 2019 DoD war game.

In a program called Urban Reconnaissance through Supervised Autonomy (URSA), DARPA built a system that enabled robots and drones to act as forward observers for platoons in urban operations. After input from the project’s advisory group on ethical and legal issues, it was decided that the software would only ever designate people as “persons of interest.” Even though the purpose of the technology was to help root out ambushes, it would never go so far as to label anyone as a “threat.”

This, it was hoped, would stop a soldier from jumping to the wrong conclusion. It also had a legal rationale, according to Brian Williams, an adjunct research staff member at the Institute for Defense Analyses who led the advisory group. No court had positively asserted that a machine could legally designate a person a threat, he says. (Then again, he adds, no court had specifically found that it would be illegal, either, and he acknowledges that not all military operators would necessarily share his group’s cautious reading of the law.) According to Williams, DARPA initially wanted URSA to be able to autonomously discern a person’s intent; this feature too was scrapped at the group’s urging.
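One way to enforce a rule like that in software is to leave the forbidden word out of the system’s vocabulary entirely, so there is simply no way for it to call anyone a threat. The Python sketch below is purely illustrative of that idea; the names (`Designation`, `designate`) and the confidence threshold are invented for this example, not drawn from URSA’s actual design.

```python
from enum import Enum


class Designation(Enum):
    # Deliberately the only value: the system has no label for "threat."
    PERSON_OF_INTEREST = "person of interest"


def designate(detection_confidence: float, threshold: float = 0.8) -> Designation | None:
    """Flag a detected person for human review, or stay silent.

    Whether someone is a threat is a judgment reserved for the human
    operator; this function's return type cannot express it.
    """
    if detection_confidence >= threshold:
        return Designation.PERSON_OF_INTEREST
    return None
```

The constraint here is structural rather than procedural: it is not that the software is told not to label people as threats, but that the code has no means of doing so.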

Bowman says Palantir’s approach is to work “engineered inefficiencies” into “points in the decision-making process where you actually do want to slow things down.” For example, a computer’s output that points to an enemy troop movement, he says, might require a user to seek out a second corroborating source of intelligence before proceeding with an action (in the video, AIP does not appear to do this).
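In code, such an “engineered inefficiency” could be as simple as a gate that refuses to move a workflow forward until a second, independent source backs the same claim. Here is a minimal sketch of that pattern, assuming hypothetical names (`IntelReport`, `request_approval`) rather than anything Palantir has published:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class IntelReport:
    source: str  # e.g. "drone_feed" or "signals_intercept" (invented examples)
    claim: str   # e.g. "enemy troop movement near the ridge"


def is_corroborated(reports: list[IntelReport], claim: str, min_sources: int = 2) -> bool:
    """True only if at least `min_sources` independent sources back the claim."""
    sources = {r.source for r in reports if r.claim == claim}
    return len(sources) >= min_sources


def request_approval(reports: list[IntelReport], claim: str) -> str:
    # The deliberate slowdown: block until a second source corroborates.
    if not is_corroborated(reports, claim):
        return "BLOCKED: seek a corroborating source before proceeding."
    # Even a corroborated claim only reaches the human; it never self-executes.
    return "READY: forward to the human operator for approval."
```

The point of the sketch is the refusal path: the friction is designed in on purpose, and corroboration only unlocks the next step toward a human decision, never the action itself.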
