Monday, March 18, 2024

AI experiment: Looking for 16th-century peasant leader Thomas Müntzer via Google Gemini AI chat

When I stumbled upon Google’s AI chat function “Gemini,” I started playing with it a bit. My initial discovery was that it isn’t very good at writing fake papers by long-deceased writers about things that happened after their lifetimes.

I’m attending a series of lectures right now about the history of the peasant revolts in Germany in the 16th century. Not the hottest topic on the History Channel. But it’s not totally obscure, either. It coincided with the Protestant Reformation, which was kind of a big deal.

So, I thought I would try something more practical with Gemini and asked it for references to good books in German from “reputable academic presses” about the peasant leader Thomas Müntzer. It quickly came up with three recommendations, each with a one-sentence summary of what it covers:



  1. Thomas Müntzer in der Moderne (2012) by Andreas Uhly (De Gruyter, publisher)

  2. Thomas Müntzer und der radikale Flügel der Reformation (2008) by Hans-Jürgen Goertz (Verlag Vanderhoeck & Ruprecht, publisher)

  3. Der revolutionäre Prediger Thomas Müntzer (2017) by Karin Sokoll (Beck Verlag, publisher)

I thought, okay, maybe this could actually be helpful if I want book recommendations. So I looked them up. Here’s what I found.

Two university libraries: Books not found. I did find a book by Hans-Jürgen Goertz, a theologian who wrote about the Reformation, called Thomas Müntzer: Revolutionär am Ende der Zeiten: eine Biographie. So that was at least close! And I found references to a medical researcher whose last name is Sokoll, and one to a guy named Andreas Schuhly, who writes about corporate strategic planning.

Google Search: Books not found. Found a quality management specialist in Germany named Andrea Uhly.

Yahoo! Search (yes, it still exists): Books not found. Found a German TV documentary with a very similar name to the second book.

AOL Search (yes, it still exists, too): Books not found. Found another German TV documentary about Thomas Müntzer.

Amazon.de (Germany): Books not found.

Thalia Online (German bookseller): Books not found.

Cool image from the AOL Search documentary item, though (from ZDF) (1):



So, Gemini’s AI book recommendations were a bit weak ...

But the great part is the confident-sounding descriptions of the three books, none of which exist. That’s awesome! It’s like Book Recommendation AI was taking an exam it hadn’t studied for but tried to wing it, hoping to get a passing grade.

This was a simple trial. But was it better than what I would have gotten from a conventional Web search or an inquiry in a library database? On the contrary, it was flat-out wrong! The only upside came from an actual human being, me in this case, using non-AI ways to verify it, like coming upon the actual book by Hans-Jürgen Goertz. Although, to be fair, I haven’t yet personally encountered a physical or digital copy of that book, either.

One of the challenges with AI tools in their current state is that the speed and efficiency of their data-gathering can also produce conclusions that aren’t necessarily reliable for the actual human individuals and institutions relying on them. My small search was an interesting diversion. But if I were running a reference-service business and handed a customer the book references Gemini gave me, that would probably not make the customer happy.

When it comes to legal and medical advice produced by AI information scrapes, insufficient human verification can have consequences much worse than merely annoying. There need to be serious legal requirements that such information be adequately vetted by actually qualified professionals. As Stefan Holtel writes, “the boundaries of AI in legal practice are not simply a technical challenge. Many other factors play a big role in that.” (2)

Hilke Schellmann also notes similar risks in using AI tools for scanning resumes and videos of job-seekers:
If one were to analyze video interviews, it would certainly also be statistically relevant that people with brown hair get a job more often because there are simply more people with brown hair. But that doesn't mean they're particularly qualified for a job. That's the problem when you include all the variables. The AI doesn't know what's really relevant. Matthew Scherer, a consultant at the Center for Democracy and Technology, once said: A recruiter skims a resume in six to seven seconds, but in that time he recognizes the essentials. The AI "reads" every word and draws its – sometimes completely nonsensical – conclusions, which are statistically relevant but have nothing to do with the job. (3)
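Schellmann’s base-rate point is easy to demonstrate with a toy simulation. Here is a minimal sketch in Python, using made-up numbers (the hair-color proportions and hire rate are my own illustrative assumptions, not anything from her article): hiring is decided by a pure coin flip that ignores hair color, yet brown-haired people still dominate the hired group simply because they dominate the applicant pool.

    import random

    random.seed(0)

    # Hypothetical base rates: brown is simply the most common hair color.
    hair_pool = ["brown"] * 6 + ["blond"] * 2 + ["black"] + ["red"]

    # Hiring is a coin flip that ignores hair color entirely.
    HIRE_RATE = 0.2

    applicants = [
        {"hair": random.choice(hair_pool), "hired": random.random() < HIRE_RATE}
        for _ in range(10_000)
    ]

    # Count hair colors among the hired group only.
    hired = [a for a in applicants if a["hired"]]
    counts = {}
    for a in hired:
        counts[a["hair"]] = counts.get(a["hair"], 0) + 1

    print("Hires by hair color:", counts)
    # Brown-haired hires far outnumber the rest -- a "statistically relevant"
    # pattern that reflects base rates, not qualification.

Running this prints roughly 1,200 brown-haired hires versus about 400 blond ones, a lopsided count that reflects nothing but the makeup of the applicant pool. That is exactly the kind of statistically real but meaningless pattern an AI screener can latch onto.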

Notes:

(1) Thomas Müntzer und der Krieg der Bauern: Erste Revolution in der deutschen Geschichte. ZDF, 28.11.2010. <https://www.zdf.de/dokumentation/terra-x/thomas-muentzer-und-der-krieg-der-bauern-100.html> (Accessed: 2024-03-17).

(2) Holtel, Stefan (2024): Droht das Ende der Experten? ChatGPT und die Zukunft der Wissenschaft, 38. München: Verlag Franz Vahlen. My translation from the German.

(3) Schellmann, Hilke (2024): KI bei der Bewerberauswahl: oft Pseudowissenschaft. Skeptiker 1:2024, 32. My translation from the German.
