Friday, June 14, 2024

Skeptics and doomers and hype-sters around AI, oh my!

Here’s another reality check on AI’s prospects of becoming our hyper-intelligent overlords:
AI predominantly refers to an ensemble of technologies and applications that digitally compute very large and heterogeneous data sets in a way that seems to mimic human intelligence, although it actually works very differently. … Algorithms are associated with intelligence when they are complex enough to learn, i.e. to modify their own programming in reaction to data. Such machine-learning algorithms encompass a number of classes, among them artificial neural networks. Although the concept for this class of algorithms was inspired by the structure of neural networks in the brain, they do not actually model a brain or even a part of it. With regard to specific, clearly demarcated tasks related to the identification of patterns in large amounts of data, they perform much better than the human brain thanks to their implementation of advanced statistics to find patterns in data. (Isabel Kusche) (1)
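Kusche’s description can be made concrete with a toy example. The Python sketch below is purely illustrative and not from her chapter: a single artificial “neuron” that modifies its own parameters — its “programming” — in reaction to data until it picks up a simple statistical pattern. All the data and settings are invented for the example.

```python
# A minimal sketch of what "learning" means here: an algorithm that
# adjusts its own parameters in reaction to data. This toy single-neuron
# model learns to separate two clusters of points; every number below
# is made up for illustration.

import random

random.seed(0)

# Toy data: points above the line y = x are labeled 1, below it 0.
points = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(200)]
data = [((x, y), 1 if y > x else 0) for x, y in points]

# The "programming" here is just three numbers: two weights and a bias.
w1, w2, b = 0.0, 0.0, 0.0
lr = 0.1  # learning rate

# Training loop: nudge the parameters whenever a prediction is wrong.
for _ in range(20):
    for (x, y), label in data:
        pred = 1 if w1 * x + w2 * y + b > 0 else 0
        error = label - pred
        w1 += lr * error * x
        w2 += lr * error * y
        b += lr * error

correct = sum(1 for (x, y), label in data
              if (1 if w1 * x + w2 * y + b > 0 else 0) == label)
print(f"accuracy on the toy data: {correct / len(data):.0%}")
```

Even this toy makes the point: the “learning” is error-driven parameter adjustment and statistics, not anything resembling a mind.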
The current AI investment wave brings a lot of hype with it, and a lot of “Skynet”-style doomerism out of the Terminator movies. New technological developments definitely carry bad implications, and not for the first time in history. That’s not an argument for Luddite thinking or for setting up rural communes with 19th-century technology. Though the latter idea has a certain amount of charm, at least in imagination.

The fantasies about cybernetic overlords leapfrog over the current state of the technology. Imagination works like that. For one thing, as Kusche says, the actual AI neural networks “do not actually model a brain or even a part of it.” One notable difference between human brains and AI is that the brain operates orders of magnitude more energy-efficiently.
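The scale of that difference is easy to see with back-of-envelope arithmetic. The only widely cited figure below is the brain’s roughly 20-watt power draw; the GPU wattage, cluster size, and run length are hypothetical round numbers chosen purely for illustration.

```python
# Back-of-envelope comparison; round numbers assumed for illustration only.
brain_watts = 20      # typical estimate of the human brain's power draw
gpu_watts = 700       # peak draw of one modern data-center GPU (assumed)
num_gpus = 10_000     # a hypothetical large training cluster
days = 30             # a hypothetical month-long training run

cluster_kwh = gpu_watts * num_gpus * 24 * days / 1000
brain_kwh = brain_watts * 24 * days / 1000

print(f"cluster: {cluster_kwh:,.0f} kWh, brain: {brain_kwh:.1f} kWh")
print(f"ratio: roughly {cluster_kwh / brain_kwh:,.0f}x")
```

On those assumptions, a single month-long training run uses on the order of a few hundred thousand times the energy a brain consumes over the same month — which is the context for the energy worries that follow.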

For people paying attention, this factor came to light during the cryptocurrency bubble of the late 2010s. The supposedly revolutionary “blockchain” technology, touted as efficient and secure, actually required enormous amounts of energy. “Bitcoin mining is an enormously energy-intensive process: the network now consumes more electricity than many countries.” (2)

Crypto has other problems as well that leave it generally far less useful than “legacy” electronic money-transfer systems. But as long as the world is so heavily reliant on fossil fuels, the energy factor will continue to be a major issue for AI technology.

The military factor

A great deal of the technological development on AI has been and still is driven by military considerations. This is nothing new. A great deal of what is called basic science or pure science research in the US has been driven by the Pentagon and its DARPA (Defense Advanced Research Projects Agency).

A recent Quincy Institute video discusses AI and the military: (3)


We’re a long way from Skynet. But we already have Peter Thiel and various other tech bros, drone technology, deepfakes, etc.

Paul Scharre in Foreign Policy indulges in a bit of Skynet doomerism:
Some researchers worry about ... risks ... such as an AI model demonstrating power-seeking behavior, including acquiring resources, replicating itself, or hiding its intentions from humans. Current models have not demonstrated this behavior, but AI capability improvements are often surprising. No one can say for certain what AI capabilities will be possible in 12 months, much less a few years from now. (4)
Garrison Lovely on AI worries and risks

But, as Garrison Lovely noted earlier this year, the “Skynet” fear is pretty common:
Public opinion on AI has soured, particularly in the year since ChatGPT was released [in late 2022]. In all but one 2023 survey, more Americans than not have thought that AI could pose an existential threat to humanity. In the rare instances when pollsters asked people if they wanted human-level or beyond AI, strong majorities in the United States and the UK said they didn’t. (5)
Like with any techie Next Big Thing, it’s important to understand what the technology really is, to sort out the investor hype and the inevitable scams, and not to get distracted from the real potentials and problems.

The doomsday scenarios are based on “x-risk” possibilities, i.e., “extinction risks” for humans. Lovely describes AI researchers particularly concerned about this, the x-risk community: “Sometimes referred to as AI safety advocates or doomers, this loose-knit group worries that AI poses an existential risk to humanity.”

Lovely’s piece is a good summary of the major serious currents in discussion of AI:
There are abundant examples of AI systems exhibiting surprising and unwanted behaviors. A program meant to eliminate sorting errors in a list deleted the list entirely. One researcher was surprised to find an AI model “playing dead” to avoid being identified on safety tests.

Yet others see a Big Tech conspiracy looming behind these concerns. Some people focused on immediate harms from AI argue that the industry is actively promoting the idea that their products might end the world, like Myers West of the AI Now Institute, who says she “see[s] the narratives around so-called existential risk as really a play to take all the air out of the room, in order to ensure that there’s not meaningful movement in the present moment.” Strangely enough, Yann LeCun and Baidu AI chief scientist Andrew Ng purport to agree.
Notes:

(1) Kusche, Isabel (2023): Artificial Intelligence and/as Risk. In: Peter Klimczak, Christer Petersen, eds. AI – Limits and Prospects of Artificial Intelligence, 143. Bielefeld: Transcript Verlag.

(2) Siripurapu, Anshu & Berman, Noah (2024): The Crypto Question: Bitcoin, Digital Dollars, and the Future of Money. Council on Foreign Relations 01/17/2024. <https://www.cfr.org/backgrounder/crypto-question-bitcoin-digital-dollars-and-future-money> (Accessed: 2024-10-06).

(3) The Dystopian Future of AI Warfare. Quincy Institute for Responsible Statecraft YouTube channel 07/07/2024. <https://youtu.be/YjZm4uxqGvY?si=FNV6-Ep3DZF9pTCG> (Accessed: 2024-07-06).

(4) Scharre, Paul (2023): What AI Means for Global Power. Foreign Policy (Summer 2023), 34-40.

(5) Lovely, Garrison (2024): Can Humanity Survive AI? Jacobin 52:2024, 66-79.
See also: Lovely, Garrison (2024): My Cover Story in Jacobin: Can Humanity Survive AI? Garrison’s Substack 02/12/2024. <https://garrisonlovely.substack.com/p/my-cover-story-in-jacobin-can-humanity> (Accessed: 2024-07-06).
Podcast: <https://garrisonlovely.substack.com/p/35-yoshua-bengio-on-why-ai-labs-are3>
