People who act like large language models
Geoff Mulgan on AI and bullshitters:
Since the arrival of ChatGPT there has been much debate about how AI can replace humans or work with them. Here I discuss a slightly different phenomenon which I increasingly notice, and probably wouldn’t have without the presence of generative AI: people who act rather like large language models. The phenomenon isn’t new. It’s just that we now have a new way of understanding it. As I show, it’s quite malign, particularly in academia.
The strength and weakness of ChatGPT is that it can quickly assemble a plausible answer to a question, drawing on materials out in the world. It sucks them in, synthesises them, and mimics, sometimes quite convincingly, someone knowledgeable talking about a subject.
Sound like postmodernists much? This is where I came in. B&W started as a jaundiced look at that kind of empty, pretentious blather that was a mimicry of thought rather than the real thing.
Lots of people now use ChatGPT to help them with first drafts of articles or talks. But I’m more interested in the people who act like an LLM even if they don’t actually use one. These are the smart people who absorb ways of talking and framing things and become adept at sounding convincing.
And, more than convincing, deep. Over the heads of plebeians like us. See: Judith Butler, passim.
The classic example in academia was Alan Sokal’s piece ‘Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity’, submitted to and accepted by the journal Social Text. The piece was deliberately written to sound plausible, at least to the academic community served by the journal. Yet it was in fact wholly meaningless. It was a perfect example of vapid mimicry and was bitterly resented by the academic community it mocked.
Sokal’s stunt was an extreme example. But what he was mocking is not so exceptional. Many people in many fields, including quite a few in academia, also act rather like a ChatGPT, particularly in academic disciplines that don’t do much empirical work or deal in facts and testable hypotheses – the more a field consists of commenting on texts (as a surprising proportion of the social sciences and humanities does), the more such foggy talk is a risk.
I don’t object to commenting on texts as such. There’s plenty of brilliant commenting on texts, which is enlightening and depth-excavating. I’m a humanities type and I do see value in thinking and talking about literature. What I detest is the attempt to make it sound artificially “difficult” and not for the mere plebs.
But one advantage of age and experience is that I now realise that me not understanding what someone is saying is sometimes a sign that they don’t know what they’re talking about, and that they are essentially acting like an LLM. This would become apparent if they were ever interviewed in the way that politicians are sometimes interviewed in the media, with forensic questioning: ‘what do you actually mean by x? What’s an example of what you just said? What would your best critic say about your comments?’.
That kind of sums up what I do every day, especially the “what do you actually mean” bit. The more people do that the better, in my view.
I once attended a talk about the ups and downs of environmentalists working together with forestry workers. I was very interested in what the young man at the podium was going to say. (He was a grad student from somewhere.) For the entire 15 minutes I had no idea what he was talking about. I was completely baffled.
There was a Q and A and someone asked a question about one of the failures of the environmentalists’ strategies that the young man had supposedly talked about, and he answered: “Well, they sold the forestry communities on ‘eco-tourism’ as a substitute for logging. But that didn’t work out.”
So, he was capable of speaking English when talking on the spot, but his prepared talk was just gibberish.
I don’t know about anybody else, but I once found myself in possession of the writings of the young Karl Marx and it was the same way for me. I had no idea what he was talking about most of the time, and when I did know what he was talking about I couldn’t for the life of me understand its relevance to the essay’s supposed topic.
But maybe that’s just me.
Richard Dawkins wrote a wonderful essay for Nature way back in 1998, titled ‘Postmodernism Disrobed’.
It’s available on his website and makes for very satisfying reading.
So I went burrowing into the archives and found a 2010 link to that Dawkins essay… but it’s a dead link, so I’ll replace it.
For me, as an environmental scientist, that has been one of my bugbears: the inability of environmentalists to talk about the issues with clarity and coherence. Much of what they say is wrong, but it’s difficult to break through the bafflegab to expose the wrongness. Much of the environmental community outside of science is actually anti-science, but they adopt what they believe is the language of science to score points.
There was a man in my recent hometown who was a great example of this. He would show up at the City Council and talk at length about global warming… his comments made it clear to me that he understood only superficially what he was saying. He’d done some internet searches, made some emotional and god-based appeals, and then cloaked it all in gobbledygook. I actually wrote a poem about him once. It won’t ever be published, because it’s too local and obscure, but it was fun.
I’m a very late adopter of AI (as in, I haven’t even had a go with ChatGPT yet), and this essay isn’t convincing me that it would be a good use of my time:
https://www.theguardian.com/books/2023/sep/05/proust-chatgpt-and-the-case-of-the-forgotten-quote-elif-batuman
I’m not in academia any more, but I still read and review a lot of reports, and my most frequent comment is ‘what does this mean?’. I’ll speak with the author, read the sentence in question aloud to them, and ask them what it means; when they tell me, I delete the original sentence and type in what they just said. ISTG though sometimes they actually say ‘um, I don’t know.’
ChatGPT is an idiot savant regurgitating what it has been fed. My opinion, to be sure.
Big Tech has hired a legion of workers (clickworkers, raters) to assess the responses of large language models (LLMs) and improve them. There is precedent for these companies to lean on experts for data work — whether that be clinicians annotating medical images, or former military personnel working on defense-related AI products.
https://slate.com/technology/2023/08/chat-gpt-artificial-intelligence-jobs-economy-employment-labor.html
https://www.wired.com/story/prisoners-training-ai-finland
The Globe and Mail, Sept. 16, 2023, B6
https://restofworld.org/2023/ai-revolution-outsourced-workers
https://restofworld.org/2023/ai-developers-fiction-poetry-scale-ai-appen
A well-trained neural net (NN) can be helpful in different contexts, but it is limited by its training data: there should be a human in the loop.
It is difficult to devise convincing tests of AI intelligence (Michael Eisenstein, “A test of artificial intelligence”, Nature, 14 September 2023).