The Pentagon has long been one of the major sources of funding for basic scientific research in the US, as we saw in last year’s Oppenheimer film.
That means it’s always worth keeping in mind that media hype about the Next New Thing in tech may concern something the Pentagon and arms manufacturers are very interested in developing. One of the issues for AI tech is the challenge of having AI machines and devices behave in a moral way. Self-driving cars are one well-known example of where that consideration comes in, i.e., does the car know to brake for humans walking in front of it?
The “Lavender” story cited above presents an even more significant case. Human-directed armies operate under military discipline and rules and also under international law - formally, at least. Obviously, there are far too many cases where the human versions of those provisions fail.
Amy Goodman interviewed the author on Democracy Now! (2)
For someone like Bibi Netanyahu, who is at the moment waging a ruthless war and starvation campaign against Palestinian civilians in Gaza, AI offers a new alibi. Gee, it wasn’t us that programmed the machines to kill anything that moves in their designated target area! It was the robot that decided everything!!
Or, in the tech-hype version in which the AI killers are being promoted, as Yuval Abraham writes:
In 2021, a book titled “The Human-Machine Team: How to Create Synergy Between Human and Artificial Intelligence That Will Revolutionize Our World” was released in English under the pen name “Brigadier General Y.S.” In it, the author — a man who we confirmed to be the current commander of the elite Israeli intelligence unit 8200 — makes the case for designing a special machine that could rapidly process massive amounts of data to generate thousands of potential “targets” for military strikes in the heat of a war. Such technology, he writes, would resolve what he described as a “human bottleneck for both locating the new targets and decision-making to approve the targets.” [my emphasis]
But they pass the Turing Test of being indistinguishable from human decision-makers, apparently:
A new investigation by +972 Magazine and Local Call reveals that the Israeli army has developed an artificial intelligence-based program known as “Lavender,” unveiled here for the first time. According to six Israeli intelligence officers, who have all served in the army during the current war on the Gaza Strip and had first-hand involvement with the use of AI to generate targets for assassination, Lavender has played a central role in the unprecedented bombing of Palestinians, especially during the early stages of the war. In fact, according to the sources, its influence on the military’s operations was such that they essentially treated the outputs of the AI machine “as if it were a human decision.” [my emphasis]
Also, auditing and checking killer robots is apparently sooo-ooo boring that nobody in the Israel Defense Forces (IDF) wants to bother with it: “During the early stages of the war, the army gave sweeping approval for officers to adopt Lavender’s kill lists, with no requirement to thoroughly check why the machine made those choices or to examine the raw intelligence data on which they were based.” [my emphasis]
But let’s give the killer robots a bit of a break, because apparently they were programmed with the depraved brand of morality for which the IDF have made themselves so famous these last six months:
Moreover, the Israeli army systematically attacked the targeted individuals while they were in their homes — usually at night while their whole families were present — rather than during the course of military activity. According to the sources, this was because, from what they regarded as an intelligence standpoint, it was easier to locate the individuals in their private houses. Additional automated systems, including one called “Where’s Daddy?” also revealed here for the first time, were used specifically to track the targeted individuals and carry out bombings when they had entered their family’s residences. [my emphasis]
It’s also worth noting here that the AI system was programmed to select targets, but the actual decisions involved in killing them were still made by humans in the IDF.
In other words, the AI devices were used as an alibi. Program them to identify targets based on what apparently were broad criteria. (“When that sample found that Lavender’s results had reached 90 percent accuracy in identifying an individual’s affiliation with Hamas, the army authorized the sweeping use of the system.”) Then the military can say: it was our AI super-machines that dropped the dime on the targets, and we killed them based on the machine’s direction.
When Alfred Nobel’s company developed dynamite, antiwar advocates of the time - including Nobel himself, his friend and fellow peace advocate Bertha von Suttner, and many others - argued that dynamite was such a destructive technology that it would put an end to wars. People would see, they thought, that war was too destructive and deadly to contemplate waging.
Today’s AI killer robots have different ideas. And their morality programming definitely does not yet surpass the sadly limited capabilities of human beings. As Oliver Bendel wrote in 2018, “Most of the moral machines are like human fundamentalists. They act rigidly according to rules that someone has drummed into them.” (3)
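To illustrate Bendel’s point with the self-driving car example from above, here is a toy sketch of what “acting rigidly according to rules” looks like in practice. The rule and the threshold are invented purely for illustration and aren’t drawn from any real system:

```python
# A toy "moral machine" in Bendel's sense: it applies exactly the rule that
# was drummed into it and nothing more. The rule and threshold below are
# invented for illustration; no real driving system is being described.

BRAKE_DISTANCE_METERS = 10.0  # hypothetical hard-coded threshold

def should_brake(human_detected: bool, distance_meters: float) -> bool:
    """Brake only if a human is detected inside the hard-coded distance."""
    return human_detected and distance_meters < BRAKE_DISTANCE_METERS

# Rigid rule-following: 9.9 meters means brake, 10.1 meters means carry on,
# regardless of speed, road conditions, or anything else a human driver
# would weigh in the moment.
print(should_brake(human_detected=True, distance_meters=9.9))   # True
print(should_brake(human_detected=True, distance_meters=10.1))  # False
```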
But the banal reality is that, however “intelligent” these robots are, it’s still the human decision-makers who are giving them their assignments.
B., a senior officer who used Lavender, echoed to +972 and Local Call that in the current war, officers were not required to independently review the AI system’s assessments, in order to save time and enable the mass production of human targets without hindrances.
“Everything was statistical, everything was neat — it was very dry,” B. said. He noted that this lack of supervision was permitted despite internal checks showing that Lavender’s calculations were considered accurate only 90 percent of the time; in other words, it was known in advance that 10 percent of the human targets slated for assassination were not members of the Hamas military wing at all. (4)
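Just to make concrete what that known error rate means once such a system is allowed to generate targets at scale, here is a minimal back-of-the-envelope sketch. The target count below is a purely hypothetical figure chosen for illustration, not a number taken from the report:

```python
# Back-of-the-envelope arithmetic on a 90 percent accuracy rate.
# NOTE: the target count is a hypothetical figure for illustration only,
# not a number reported by +972/Local Call.

accuracy = 0.90                # per the report, Lavender's calculations were
                               # considered accurate only 90 percent of the time
hypothetical_targets = 30_000  # illustrative only

misidentified = round(hypothetical_targets * (1 - accuracy))
print(f"Of {hypothetical_targets:,} machine-generated targets, roughly "
      f"{misidentified:,} would be people the system was known in advance "
      f"to have misidentified.")
```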
The Australian journalist Antony Loewenstein talks about the +972/Local Call report in this interview (5):
As he notes, high-tech military technology like Lavender is a very important export market for Israel.
Notes:
(1) Abraham, Yuval (2024): ‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza. +972 Magazine 04/03/2024. <https://www.972mag.com/lavender-ai-israeli-army-gaza/> (Accessed: 2024-04-03).
(2) Lavender & Where's Daddy: How Israel Used AI to Form Kill Lists & Bomb Palestinians in Their Homes. Democracy Now! YouTube channel 04/05/2024. <https://youtu.be/4RmNJH4UN3s?si=PuG4NFn7_NP7H9KZ> (Accessed: 2024-05-04).
(3) Bendel, Oliver (2018): Überlegungen zur Disziplin der Maschinenethik. Aus Politik und Zeitgeschichte (APuZ): 6-8, 35. My translation from the German.
(4) Abraham, op. cit.
(5) Israel’s AI tactics, resulting in high civilian casualties, being exported abroad: Analysis. Al Jazeera English YouTube channel 04/04/2024. <https://youtu.be/hBWlZJx5cpE?si=BA0T_69KaVsTc41B> (Accessed: 2024-04-04).