Monday, March 25, 2024

Artificial Intelligence - what is it? (Besides investor hype, of course!): The Turing Test and neural networks

The main booster-hype around cryptocurrency seems blessedly to have died down - though who knows when it may pop up again? As Nathan Robinson put it back in 2021:

To the extent that Bitcoin is the “future of money,” then, it is only the future of money in situations of extreme crisis or deprivation—I suspect a lot of the pro-crypto people who understand its present-day uselessness are betting on a future collapse of the global economic system, although I think they overestimate the chances that Bitcoin itself could keep functioning effectively in such a nightmare scenario (someone has to maintain the actual wires). (1)

This is not to say that cryptocurrency technology has no uses alongside its various problems - though creating an anarcho-libertarian market paradise is not one of them. Large corporations could use it to create a set of competing private currencies. The European Central Bank (ECB) is working on setting up an official digital version of the euro. (2) That could be useful in blocking a corporate-oligarchical system of parallel currencies. It could also give the central bank an additional tool to limit the private banking system's ability to cause or exacerbate financial crises.

Of course, an official currency issued by a central bank for the purpose of stabilizing the economy and preventing damage to the system by wealthy private actors is about as far from the anarcho-libertarian ideal hyped by the crypto-bros as we could get!

Now artificial intelligence (AI) is the topic of a new round of investment hype for companies looking to become the leader of The Next Big Thing.

So it’s worth taking some time and effort to understand what AI really is while also watching who may become the Google or Apple of AI. Which could be Google or Apple, of course.

Matteo Pasquinelli offers this perspective on what it means to call a machine "intelligent":
To what extent could the performance of a machine be judged as ‘intelligent’ – that is, commensurable (measurable in the same terms) with human intelligence? Since the Turing test, machines have been judged as ‘intelligent’ by comparing their behavior with social conventions. Cybernetics investigated this question in a different way, that is, by postulating a common ‘mechanism’ (whether logical or physiological) between humans and machines. But in the decades prior to cybernetics and computer science, psychometrics had already turned human intelligence into a quantifiable (and potentially computable) object. In the early twentieth century, [Charles] Spearman, for instance, proposed the statistical measurement of ‘general intelligence’ (or g factor) as the correlation between unrelated tasks in a skill test. For Spearman, these correlations mathematically demonstrated the existence of an underlying cognitive faculty that common sense would refer to as ‘intelligence’. (3) [my emphasis]
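Spearman's move is easier to see with a toy calculation. Here is a minimal Python sketch using made-up test scores: all the pairwise correlations between the tasks come out positive, and a single dominant component accounts for much of the variance - the pattern Spearman read as evidence of a general factor. The data are invented, and the principal-component shortcut is a modern stand-in for illustration, not Spearman's original method.

import numpy as np

# Hypothetical scores of 200 test-takers on four nominally unrelated tasks
# (columns: vocabulary, arithmetic, spatial puzzles, memory span).
# The latent "general ability" g used to generate them is, of course, made up.
rng = np.random.default_rng(0)
g = rng.normal(size=200)
scores = np.column_stack([
    0.7 * g + rng.normal(scale=0.7, size=200),
    0.6 * g + rng.normal(scale=0.8, size=200),
    0.5 * g + rng.normal(scale=0.9, size=200),
    0.6 * g + rng.normal(scale=0.8, size=200),
])

# Spearman's observation: all pairwise correlations are positive.
corr = np.corrcoef(scores, rowvar=False)
print(np.round(corr, 2))

# Modern shortcut: the first principal component of the correlation matrix
# plays the role of a single "general factor".
eigvals, eigvecs = np.linalg.eigh(corr)
explained = eigvals[-1] / eigvals.sum()
print("share of variance along the first component:", round(float(explained), 2))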

Two important concepts in the development of AI are the Turing Test and neural networks.

The Turing Test

Part of the idea of “artificial intelligence” is that it is more advanced than simple mathematical calculation. Interactivity between humans and the AI device is one of the ideas associated with it. The “Turing test” was a concept developed by the British mathematician Alan Turing (1912-1954) that was meant to show whether a device met that threshold.
In a Turing Test … a person poses questions to other people and to a computer. The person and the computer answer in chat format (without visual or audio contact). The Turing Test is passed [by the device] when the person posing the questions cannot say which of his “conversation partners” is a machine. (4)
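Stripped to its bare bones, the setup is easy to sketch in code. The following toy Python sketch only shows the protocol - the respondent functions are placeholders of my own invention, not a real chatbot or a real test harness. The questioner sees two anonymous text channels and has to guess which one is the machine.

import random

def human_respondent(question: str) -> str:
    # Placeholder: in a real test a person at another terminal types the answer.
    return input(f"(human respondent) {question}\n> ")

def machine_respondent(question: str) -> str:
    # Placeholder for the system under test (a canned reply, not a real chatbot).
    return "That is an interesting question. Could you rephrase it?"

def run_imitation_game(questions):
    # Hide which text channel is the human and which is the machine.
    respondents = [("A", human_respondent), ("B", machine_respondent)]
    random.shuffle(respondents)

    for q in questions:
        print(f"\nInterrogator asks: {q}")
        for label, respond in respondents:
            print(f"  {label}: {respond(q)}")

    guess = input("\nWhich channel is the machine, A or B? ").strip().upper()
    machine_label = next(label for label, r in respondents if r is machine_respondent)
    # Over many rounds, the machine "passes" if the interrogator's guesses
    # are no better than chance.
    return guess == machine_label

if __name__ == "__main__":
    print(run_imitation_game(["What did you have for breakfast?",
                              "Write a short poem about rain."]))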

Neural networks

The use of neural networks by machines was also part of the concept of AI from early on.
For decades, neuroscientists’ theories about how brains learn were guided primarily by a rule introduced in 1949 by the Canadian psychologist Donald Hebb, which is often paraphrased as “Neurons that fire together, wire together.” That is, the more correlated the activity of adjacent neurons, the stronger the synaptic connections between them. This principle, with some modifications, was successful at explaining certain limited types of learning and visual classification tasks.

But it worked far less well for large networks of neurons that had to learn from mistakes; there was no directly targeted way for neurons deep within the network to learn about discovered errors, update themselves and make fewer mistakes. “The Hebbian rule is a very narrow, particular and not very sensitive way of using error information,” said Daniel Yamins, a computational neuroscientist and computer scientist at Stanford University. (5)
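The Hebbian rule referred to here can be written as a one-line weight update: when a presynaptic and a postsynaptic neuron are active at the same time, the connection between them gets stronger. Here is a minimal Python sketch of that update; the learning rate, the toy "wiring", and the random activity are my own assumptions for illustration. Note that nothing in it uses a target output or an error signal - which is exactly the limitation the quote describes.

import numpy as np

rng = np.random.default_rng(1)

n_pre, n_post = 4, 3
weights = np.zeros((n_pre, n_post))
learning_rate = 0.01

# Fixed "true" wiring used only to generate toy activity: postsynaptic
# neurons respond more strongly to some presynaptic neurons than to others.
true_coupling = rng.random((n_pre, n_post))

for _ in range(1000):
    pre = rng.random(n_pre)                                  # presynaptic firing rates
    post = pre @ true_coupling + 0.1 * rng.random(n_post)    # correlated postsynaptic activity

    # Hebb's rule: delta_w = eta * pre * post (an outer product).
    # Connections between units that tend to be active together grow the most;
    # there is no target output and no error to propagate back.
    weights += learning_rate * np.outer(pre, post)

# The learned weights roughly mirror the correlations in the toy activity.
print(np.round(weights / weights.max(), 2))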

Improvements in AI computation continue to draw on knowledge of how human intelligence functions. That may sound banal, since AI is built to work with humans. But it is important to keep in mind which frameworks are actually being used - and also the fact that, in its current state of development, AI remains far from the kind of intelligence the human brain is capable of. Mr. Data and Skynet may be in our future. But they aren't here yet.

TensorFlow.org has an interactive “playground” (6) where you can build a small neural network in the browser, train it on simple two-dimensional datasets, and watch how the layers learn to separate the classes.
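For anyone who prefers code to clicking around in the browser, here is a rough analogue of what the playground does, as a minimal Keras sketch. The generated “circle”-style data, the layer sizes, and the training settings are my own assumptions for illustration; they are not taken from the playground itself.

import numpy as np
import tensorflow as tf

# Toy data in the spirit of the playground's "circle" dataset: points near
# the origin are one class, points in an outer ring are the other.
rng = np.random.default_rng(0)
angles = rng.uniform(0, 2 * np.pi, 500)
radii = np.concatenate([rng.uniform(0.0, 1.0, 250), rng.uniform(2.0, 3.0, 250)])
x = np.column_stack([radii * np.cos(angles), radii * np.sin(angles)])
y = (radii < 1.5).astype(np.float32)

# A small fully connected network, roughly like the playground's defaults:
# two hidden layers of a handful of neurons each, trained by backpropagation.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(8, activation="tanh"),
    tf.keras.layers.Dense(4, activation="tanh"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=50, verbose=0)

loss, accuracy = model.evaluate(x, y, verbose=0)
print("training accuracy:", round(float(accuracy), 2))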


Notes:

(1) Robinson, Nathan J. (2021): Why Cryptocurrency Is A Giant Fraud. Current Affairs 04/22/2021. <https://www.currentaffairs.org/2021/04/why-cryptocurrency-is-a-giant-fraud> (Accessed: 2024-03-24).

(2) Schneider, Franz (2024): Digitaler Euro – Die Geister, die man ruft. Makroskop 15.03.2024. <https://makroskop.eu/09-2024/digitaler-euro-die-geister-die-man-ruft/> (Accessed: 2024-03-24).

(3) Pasquinelli, Matteo (2023): The Eye of the Master: A Social History of Artificial Intelligence (ebook). London & New York: Verso.

(4) Ramge, Thomas (2018): Mensch fragt, Maschine antwortet. Wie Künstliche Intelligenz Wirtschaft, Arbeit und unser Leben verändert. Aus Politik und Zeitgeschichte 68:6-8, 16. Translation from the German is mine.

(5) Ananthaswamy, Anil (2024): Programm mit Köpfchen. Spektrum Spezial BMH 1.24, 52. English quote is from the original article: Artificial Neural Nets Finally Yield Clues to How Brains Learn. Quanta Magazine 02/18/2021. <https://www.quantamagazine.org/artificial-neural-nets-finally-yield-clues-to-how-brains-learn-20210218#> (Accessed: 2024-03-24).

(6) https://playground.tensorflow.org/ (Accessed: 2024-03-24).
