Saturday, July 6, 2024

AI risks: real, exaggerated, and imaginary

One of the striking features of the current AI “wave” is that it's often the same people spinning the techno-utopian dreams who also hype the AI doomerism. Tech-bro billionaires like Sam Altman or Peter Thiel, who each have their own techno-utopian fantasies, are also calling for more regulation to head off the supposedly dystopian consequences that could result from AI developments.

Alex Hanna and Emily Bender have a helpful take on this: the doomsaying is a strategy by the corporate AI boosters and their hangers-on to divert public and regulatory attention from the real negative consequences that are actually visible today. (1)

Artificial intelligence – the term itself was coined in 1956 (2) – definitely has positive uses in medicine and all kinds of research applications. But, for better or worse, we're still a long way from Mister Data androids or Skynet.

Biological research on animal communication, for instance, is an intriguing application in which AI is already being used:
Whether animals communicate with one another in terms we might be able to understand is a question of enduring fascination. Although people in many Indigenous cultures have long believed that animals can intentionally communicate, Western scientists traditionally have shied away from research that blurs the lines between humans and other animals for fear of being accused of anthropomorphism. But with recent breakthroughs in AI, “people realize that we are on the brink of fairly major advances in regard to understanding animals' communicative behavior,” Rutz says.

Beyond creating chatbots that woo people and producing art that wins fine-arts competitions, machine learning may soon make it possible to decipher things like crow calls, says Aza Raskin, one of the founders of the nonprofit Earth Species Project. Its team of artificial-intelligence scientists, biologists and conservation experts is collecting a wide range of data from a variety of species and building machine-learning models to analyze them. Other groups such as the Project Cetacean Translation Initiative (CETI) are focusing on trying to understand a particular species, in this case the sperm whale.

Decoding animal vocalizations could aid conservation and welfare efforts. (3)
Aside from the real and much-discussed possibilities for deepfakes and cheating on school and college work, there are two other big dark sides to the current AI wave that we can see in the real world. One is that, since even before the term AI was invented, the technologies that are part of it have been largely driven by defense needs, with much of the basic research done with public funds, especially by DARPA, the Pentagon's tech-lab agency. There will always be a market for new whiz-bang technology that can be sold as weapons that kill more efficiently.

AI and the military

Defense One reports:
The U.S. Army plans to ask contractors for help integrating industry-generated artificial-intelligence algorithms into its operations, part of the service's 100-day push to lay the groundwork for sweeping adoption of AI.

“One of the things that we want to do is we want to adopt third-party-generated algorithms as fast as y'all are building,” Young Bang, the principal deputy assistant Army secretary for acquisition, logistics and technology, said at the Amazon Web Services Washington D.C. Summit on Wednesday. “We realized, while we had tons of data…we're not gonna develop our algorithms better than y’all.”

Bang said that as one of the largest consumers of AI and algorithm technologies, the Army is anxious to generate partnerships with industry and incorporate newer proprietary technology.

“We want to have a partnership. We're going to break down obstacles for us to adopt third-party-generated algorithms,” he said. [my emphasis] (4)
The Israel Defense Forces provided the world a couple of dramatic and grim examples with the “Lavender” and “Where’s Daddy?” AI military technologies. (5) Neither of these is autonomous of its human users, of course. Military AI isn’t run by Skynet or some android. Human beings build these systems, employ them, and bear responsibility for their misuse.

The dark side of science and technology is nothing new. They can be used for good or bad and have always been used for both, just as a hammer can be used to drive nails or to hit someone in the head. On the one hand, that’s a fairly banal observation. But it’s also important to keep constantly in mind, maybe especially when looking at tech fads like cryptocurrency or AI.

[I]t can be said that young people derive too much pleasure online from the kind of narcissism that the lifestyles of some celebrities promote. Such is a form of objectification of the human potential. People have to make mature choices. But online technology has exposed many young people to a self-centered way of life with very negative consequences. For instance, some may neglect the value and presence of people in their lives, including family and one’s commitment to society.

However, it must also be noted that modern technology as a social process is not something that is pre-determined. In contrast to the above, the internet-savvy generation today is drawn to the online world since virtual reality offers a vigorous way of life. Young people are enamored to the online world because social media creates many vibrant relationships and common ties that bind the young into groups. As such, being hooked online does not necessarily mean that an individual has become less responsible in terms of his commitments. (6)
Here-and-now risks vs. sci-fi movie ones

Hanna and Bender write:
Nevertheless, [in 2023] the nonprofit Center for AI Safety released a statement - co-signed by hundreds of industry leaders - warning of “the risk of extinction from AI,” which it asserted was akin to the threats of nuclear war and pandemics. Sam Altman, embattled CEO of Open AI, the company behind the popular language-learning model [LLM] Chat-GPT, had previously alluded to such a risk in a congressional hearing, suggesting that generative AI tools could go “quite wrong.” Last summer executives from AI companies met with President Joe Biden and made several toothless voluntary commitments to curtail “the most significant sources of AI risks,” hinting at theoretical apocalyptic threats instead of emphasizing real ones. Corporate AI labs justify this kind of posturing with pseudoscientific research reports that misdirect regulatory attention to imaginary scenarios and use fearmongering terminology such as “existential risk.” [my emphasis]
But Hanna and Bender do warn that the ability to use AI to create fake texts makes the misinformation problem worse:
Unfortunately, that output can seem so plausible that without a clear indication of its synthetic origins, it becomes a noxious and insidious pollutant of our information ecosystem. Not only do we risk mistaking synthetic text for reliable information, but that noninformation reflects and amplifies the biases encoded in AI training data—in the case of large language models, every kind of bigotry found on the Internet. Moreover, the synthetic text sounds authoritative despite its lack of citation of real sources. The longer this synthetic text spill continues, the worse off we are because it gets harder to find trustworthy sources and harder to trust them when we do.
There is also the risk of stolen intellectual property and old-fashioned exploitation of workers:
[AI] systems rely on enormous amounts of training data that are stolen without compensation from the artists and authors who created them. In addition, the task of labeling data to create “guardrails” intended to prevent an AI system’s most toxic output from being released is repetitive and often traumatic labor carried out by gig workers and contractors, people locked in a global race to the bottom in terms of their pay and working conditions. What is more, employers are looking to cut costs by leveraging automation, laying off people from previously stable jobs and then hiring them back as lower-paid workers to correct the output of the automated systems.
It was apparent early in the Industrial Revolution that technological advances could have large disruptive effects on society. In fact, that was a central concern of Adam Smith’s The Wealth of Nations. So that is not new. But that also doesn’t mean it isn’t a real problem that needs to be addressed by public economic policies. The alleged magic of the market won’t do it, unless we make the anarcho-capitalist circular argument that whatever conditions The Market produces are always the best of all possible worlds.

Hanna and Bender note that the long 2023 actors’ and writers’ strike in Hollywood was an instance of the labor movement addressing those very real problems. Market anarchy may be an ideal outcome for oligarchs, but certainly not for everyone.

They also note that much of the information on AI is put out by industry sources, so fans, users, and investors all need to take account of that limitation. And since much of the data is kept under wraps by the companies and agencies working on AI development, it is often not possible to obtain independent verification of the claims made, such as those in one paper they cite “which claims to find ‘intelligence’ in the output of GPT-4, one of OpenAI’s text-synthesis machines.”

I ran into a similar problem recently in an exercise that gave me a glimpse of this issue, when I used a couple of LLMs (large language models) to try to find out whether there is actual evidence that large financial companies have been able to use AI to achieve real improvements in productivity.

I asked the chatbots for five specific examples, and they gave me five companies: Intuit, PwC (PricewaterhouseCoopers), Xero, KPMG, and EY (Ernst & Young). Checking their websites, I found that they all do make glowing claims for AI, and they all market AI products. But the website marketing pitches I saw were awfully short on examples of what kinds of AI tools they were offering and exactly how those tools would improve productivity.

When I asked my LLMs for specific independent studies on productivity improvements from AI at finance firms, I ran into a common quirk of LLMs right now. I got a list of specific studies with specific titles and the authors or institutions that supposedly produced them. But when I did Google searches for those studies, I couldn’t find any that seem to actually exist. If you give an LLM a specific inquiry for sources like that, it is trained to produce results that fit the parameters of the request, i.e., a list of reports with plausible sources and even dates. But it is often unable to confirm whether the sources it names actually exist. LLMs are trained on large, specific datasets, but at least the ones I’ve used don’t seem to do a reality check against the Internet on whether the sources are even real. The designs so far have tended to emphasize producing coherent and grammatically correct responses to inquiries, so the fact-checking is still up to the user.
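
For scholarly citations specifically, one partial reality check is to look the title up in a public bibliographic database. The sketch below is my own illustration, not anything from the sources cited in this post: it queries the Crossref REST API for works roughly matching a title an LLM supplied. The example title is hypothetical, and a miss only means no DOI-registered match was found, which is suggestive rather than proof of fabrication.

# Minimal sketch: sanity-check an LLM-supplied citation against Crossref.
# Crossref only indexes works with registered DOIs, so this is a partial
# check at best. The example title below is hypothetical.
import requests

def crossref_lookup(title, rows=3):
    """Return (title, DOI) pairs of works roughly matching the given title."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": rows},
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return [((item.get("title") or ["<untitled>"])[0], item.get("DOI")) for item in items]

# Hypothetical title of the kind a chatbot might invent:
for found_title, doi in crossref_lookup("AI-driven productivity gains in financial services"):
    print(found_title, "->", doi)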

The energy issue

As with cryptocurrency, the amount of energy required to run "large language models" like ChatGPT is enormous.

One obvious difference between AI and mammal brains is that mammal brains are staggeringly more energy-efficient than any AI currently on the horizon. Unless we are soon able to build power plants with sustainable fusion reactors, energy requirements will be a major brake on AI possibilities. And fusion-based power plants have been a constant fantasy of what we now call tech-bros since roughly the day of the Trinity a-bomb test.
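
To put rough numbers on that claim: the figures below are commonly cited ballpark values, plus a hypothetical cluster size and training length that I am assuming purely for illustration; none of them come from the sources cited in this post.

# Rough comparison using approximate, commonly cited figures (assumptions):
# a human brain runs on roughly 20 W, while a single high-end AI accelerator
# can draw around 700 W, and large training runs use thousands of them.
BRAIN_WATTS = 20       # approximate power draw of a human brain
GPU_WATTS = 700        # approximate peak draw of one high-end AI GPU
GPU_COUNT = 10_000     # hypothetical size of a large training cluster
TRAINING_DAYS = 30     # hypothetical length of one training run

cluster_kwh = GPU_WATTS * GPU_COUNT * 24 * TRAINING_DAYS / 1000
brain_kwh = BRAIN_WATTS * 24 * TRAINING_DAYS / 1000
print(f"One training run: ~{cluster_kwh:,.0f} kWh")
print(f"A brain over the same month: ~{brain_kwh:,.1f} kWh")
print(f"Ratio: ~{cluster_kwh / brain_kwh:,.0f}x")

Whatever the exact assumptions, the gap stays at several orders of magnitude.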

The value of cryptocurrency over other means of exchanging money electronically has always been highly dubious. We can’t make the same generalization about all artificial intelligence applications. Things like the translation function in Microsoft Word are examples of AI that have been around for years. In fact, the current AI “wave” (or “bubble”?) is actually the third of its kind.

Lois Parshley warns in The American Prospect that the huge amount of energy being used by current AI applications is something that deserves immediate attention from legislators and the public.
In early May, Google announced it would be adding artificial intelligence to its search engine. When the new feature rolled out, AI Overviews began offering summaries to the top of queries, whether you wanted them or not—and they came at an invisible cost.

Each time you search for something like “how many rocks should I eat” and Google’s AI “snapshot” tells you “at least one small rock per day,” you’re consuming approximately three watt-hours of electricity, according to Alex de Vries, the founder of Digiconomist, a research company exploring the unintended consequences of digital trends. That’s ten times the power consumption of a traditional Google search, and roughly equivalent to the amount of power used when talking for an hour on a home phone. (Remember those?) (7)
The energy issue with AI is a huge one.
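
Some back-of-the-envelope arithmetic with the per-search figures quoted above shows the scale. The daily query volume below is my own assumption for illustration, not a number from Parshley's article.

# Per-search figures from the quote above: ~3 Wh for an AI-assisted search,
# roughly ten times a traditional search (~0.3 Wh).
AI_SEARCH_WH = 3.0             # watt-hours per AI-assisted search
PLAIN_SEARCH_WH = 0.3          # watt-hours per traditional search
ASSUMED_QUERIES_PER_DAY = 9e9  # hypothetical daily search volume (assumption)

extra_wh_per_day = (AI_SEARCH_WH - PLAIN_SEARCH_WH) * ASSUMED_QUERIES_PER_DAY
extra_twh_per_year = extra_wh_per_day * 365 / 1e12  # Wh -> TWh

print(f"Extra energy per day: {extra_wh_per_day / 1e9:,.1f} GWh")
print(f"Extra energy per year: {extra_twh_per_year:,.1f} TWh")

At that assumed volume, the extra load works out to several terawatt-hours a year, on the scale of a small country's annual electricity consumption.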

Critical thinking

The presence of large, established companies like Microsoft and Meta may put some limits on the amount of dot-com-style bubble fluff and scams that come out of this wave. But don't count on it.

Critical thinking about AI claims is always called for.

And since tools can often be used for a variety of purposes, AI can also be used to combat disinformation and unpack conspiracist thinking. Benjamin Radford reports:
Conspiracy theories—a perpetual and pernicious bane of skepticism and critical thinking—have traditionally been examined through several prisms, including folklore, psychology, and social psychology. More recently, the field of computational analysis has emerged to help identify, address, and mitigate rumors and misinformation. Among the field’s pioneers is Timothy Tangherlini, a professor of folklore at the University of California at Los Angeles and a fellow of the American Folklore Society. “I study narrative, how stories emerge and circulate on and across social networks; rumors, legends, conspiracy theories, and so on,” he explained in [a 2022] interview. …

Tangherlini researches narrative structures of conspiracy theories. He and colleagues note that “Despite the attention that conspiracy theories have drawn, little attention has been paid to their narrative structure, although numerous studies recognize that conspiracy theories rest on a strong narrative foundation or that there may be methods useful for classifying them according to certain narrative features such as topics or motifs” (Bandari et al. 2017). The team “developed a pipeline of interlocking computational methods to determine the generative narrative framework undergirding a knowledge domain or connecting several knowledge domains.”

Using research on public health concerns (specifically, anti-vaccination blog posts) and combining it with folklore legend research, the model consists of three primary components that populate the narrative structure. Just as every story has certain consistent elements (such as a setting, protagonist, conflict, and resolution), the model studies three main components: actants (people, places, and things), relationships between those actants, and a sequencing of those relationships. (8)
In a 2020 article, Tangherlini and his coauthors provide a chart of conspiracist narrative frameworks that bears some resemblance to diagrams illustrating the functioning of AI neural networks. (9)



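As a purely illustrative sketch of that three-part model (not Tangherlini's actual pipeline, and with hand-invented example data), the narrative framework can be pictured as a directed graph whose nodes are actants and whose edges are the ordered relationships between them:

# Illustrative sketch only: actants, relationships, and their sequencing
# stored as a directed multigraph. The example relationships are invented.
import networkx as nx

# (source actant, target actant, relationship, order in the narrative)
relationships = [
    ("parents", "vaccine", "fear", 1),
    ("vaccine", "child", "allegedly harms", 2),
    ("doctors", "parents", "dismiss concerns of", 3),
    ("parents", "online forum", "turn to", 4),
]

G = nx.MultiDiGraph()
for source, target, relation, order in relationships:
    G.add_edge(source, target, relation=relation, order=order)

# Which actants are most central to this (toy) narrative framework?
print(sorted(G.degree, key=lambda pair: pair[1], reverse=True))
# Replay the relationships in narrative order
for s, t, data in sorted(G.edges(data=True), key=lambda e: e[2]["order"]):
    print(f"{data['order']}. {s} --{data['relation']}--> {t}")

In the actual research, of course, the actants and relationships are extracted automatically from large collections of posts rather than typed in by hand.
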
Notes:

(1) Hanna, Alex & Bender, Emily (2024): Theoretical AI Harms Are a Distraction. Scientific American 330:2 (Feb 2024), 69. <https://www.scientificamerican.com/article/we-need-to-focus-on-ais-real-harms-not-imaginary-existential-risks/> (Accessed: 2024-06-07).

(2) Artificial Intelligence Coined at Dartmouth: 1956. Dartmouth College website, n/d. <https://home.dartmouth.edu/about/artificial-intelligence-ai-coined-dartmouth> (Accessed: 2024-06-07).

(3) Parshley, Lois (2023): Artificial Intelligence Could Finally Let Us Talk with Animals. Scientific American 329:3 (2023), 46. <https://www.scientificamerican.com/article/artificial-intelligence-could-finally-let-us-talk-with-animals/> (Accessed: 2024-06-07).

(4) Kelley, Alexandra (2024): Army to seek industry help on AI. Defense One 06/27/2024. <https://www.defenseone.com/technology/2024/06/army-plans-multiple-ai-industry-partnerships/397705/> (Accessed: 2024-06-07).

(5) Abraham, Yuval (2024): ‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza. +972 Magazine 04/03/2024. <https://www.972mag.com/lavender-ai-israeli-army-gaza/> (Accessed: 2024-06-07).

(6) Maboloc, Christopher Ryan (2017): Social Transformation and Online Technology: Situating Herbert Marcuse in the Internet Age. Techné: Research in Philosophy and Technology 21:1, 56.

(7) Parshley, Lois (2024): The Unknown Toll of the AI Takeover. The American Prospect 07/01/2024. <https://prospect.org/environment/2024-07-01-unknown-toll-of-ai-takeover/> (Accessed: 2024-06-07).

(8) Radford, enjamin (2024): Analyzing Conspiracies through Folklore, Epidemiology, and Artificial Intelligence. Skeptical Inquirer 48:3 (May-June 2024), 47-52. <https://skepticalinquirer.org/2024/04/analyzing-conspiracies-through-folklore-epidemiology-and-artificial-intelligence/> (Accessed: 2024-06-07).

(9) Tangherlini, Timothy et al. (2020): An automated pipeline for the discovery of conspiracy and conspiracy theory narrative frameworks: Bridgegate, Pizzagate and storytelling on the web. PLOS ONE 06/16/2020. <https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0233879> (Accessed: 2024-06-07).
