GIGO
Should we just give up and hand it all over to the bots?
Gemini is essentially Google’s version of the viral chatbot ChatGPT. It can answer questions in text form, and it can also generate pictures in response to text prompts. Initially, a viral post showed this recently launched AI image generator creating an image of the US Founding Fathers that inaccurately included a black man. Gemini also generated German soldiers from World War Two, incorrectly featuring a black man and an Asian woman.
Yebbut diversidee.
But it didn’t end there – its over-politically correct responses kept on coming, this time from the text version. Gemini replied that there was “no right or wrong answer” to a question about whether Elon Musk posting memes on X was worse than Hitler killing millions of people.
There’s no right or wrong answer to anything, maaaaaan.
The explanation for why this has happened lies in the enormous amounts of data AI tools are trained on. Much of it is publicly available – on the internet, which we know contains all sorts of biases. Traditionally, images of doctors, for example, are more likely to feature men, while images of cleaners are more likely to show women. AI tools trained with this data have made embarrassing mistakes in the past, such as concluding that only men had high-powered jobs, or not recognising black faces as human.
In other words, AI tools are trained the way we’re all trained. Donald Trump, all by himself, is the US’s trainer-in-chief. Bias in, bias out, yadda yadda.
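The mechanics aren’t mysterious. Here’s a minimal, purely illustrative Python sketch; the occupation/gender toy dataset is invented for demonstration and is nothing a real model was trained on, but it shows how a system that simply echoes the majority pattern in its training data reproduces whatever skew that data carries:

```python
# "Bias in, bias out": a toy model that just echoes its training data.
# The data below is invented for illustration only.
from collections import Counter

# Invented corpus of (occupation, gender) pairs with a built-in skew.
training_data = (
    [("doctor", "man")] * 80
    + [("doctor", "woman")] * 20
    + [("cleaner", "woman")] * 75
    + [("cleaner", "man")] * 25
)

def most_likely_gender(occupation):
    """Predict by majority vote over the (skewed) training data."""
    counts = Counter(g for occ, g in training_data if occ == occupation)
    return counts.most_common(1)[0][0]

print(most_likely_gender("doctor"))   # -> 'man'
print(most_likely_gender("cleaner"))  # -> 'woman'
```

Nothing in the code is prejudiced; it just faithfully returns what it was fed. Scale that up by a few billion examples and you get Gemini’s problem.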
“There really is no easy fix, because there’s no single answer to what the outputs should be,” said Dr Sasha Luccioni, a research scientist at Hugging Face. “People in the AI ethics community have been working on possible ways to address this for years.”
Same with the rest of the world. We’ve been working on it for years, and progress is hella slow, and easily reversed. See above about the teachings of D. Trump.
Having worked in this field for 20 years, I’ve always been amused by how people fear certain characteristics in AI systems that they freely ignore in themselves. People worry about how persuasive generative AI systems can be even as they go off on tangents or confabulate. In other words, they mimic social media and at least 75% of human reasoning. Large parts of this site are devoted to addressing an ideological system anchored in semantics rather than knowledge content. Much of the trans ideology argument looks like what an unconstrained language- or semantics-based AI model might generate. These systems tend to be designed around structures of language rather than knowledge frameworks – i.e., they are rhetorical. They can be affected by variations in the use of language regardless of whether the underlying meaning has actually been changed by consensus. Which is precisely why unconstrained generative systems aren’t favored for use in knowledge economies like medicine. Problem-solving AI has to be grounded or constrained by objective measures of reality, or else it is of no use – or worse.
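To make “grounded or constrained by objective measures of reality” concrete, here is a minimal sketch. The tiny knowledge base and checker below are invented for illustration and bear no resemblance to a production system (real grounded setups in medicine, such as retrieval-augmented generation over vetted sources, are far more elaborate), but the idea is the same: fluent output only gets asserted if it can be anchored to established facts, and everything else gets flagged rather than confidently repeated.

```python
# Sketch of grounding: accept only claims anchored to a knowledge base.
# The knowledge base and checker are invented for illustration only.

KNOWLEDGE_BASE = {
    # claim id -> statement accepted as established fact in this toy domain
    "human_chromosomes": "Humans normally have 46 chromosomes.",
    "water_boiling": "Water boils at 100 °C at one atmosphere of pressure.",
}

def grounded_answer(candidate_claims):
    """Pass through claims that match the knowledge base; flag the rest."""
    accepted, flagged = [], []
    for claim in candidate_claims:
        if claim in KNOWLEDGE_BASE.values():
            accepted.append(claim)
        else:
            flagged.append(claim)
    return accepted, flagged

# An unconstrained generator would emit all of these with equal fluency.
candidates = [
    "Humans normally have 46 chromosomes.",
    "Water boils at 100 °C at one atmosphere of pressure.",
    "Water boils at 50 °C at one atmosphere of pressure.",  # fluent but wrong
]
ok, needs_review = grounded_answer(candidates)
print("anchored:", ok)
print("flagged for review:", needs_review)
```

An unconstrained generator treats all three candidate sentences as equally sayable; the constraint layer is what separates knowledge from rhetoric.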
Imagine if the trans-ideological rhetoric system were replaced with a knowledge-anchored approach. Hard-science concepts like the binary nature of sex would anchor arguments and provide constraints that would prevent confabulations like equating sex to gender.
Of course, the problem with knowledge-based systems is that they require discipline to use and don’t always return black-and-white answers, but rather the many shades of gray of the real world.
What worries me is this: if these systems, LLMs or whatever they may be, become the primary interface through which our access to information is mediated, then their absolute neutrality is non-negotiable. Unless we’re actually looking forward to seeing 1984 become reality, that is.