GIGO

Should we just give up and hand it all over to the bots?

Gemini is essentially Google’s version of the viral chatbot ChatGPT. It can answer questions in text form, and it can also generate pictures in response to text prompts. First, a viral post showed the recently launched AI image generator creating an image of the US Founding Fathers which inaccurately included a black man. Gemini also generated images of German soldiers from World War Two, incorrectly featuring a black man and an Asian woman.

Yebbut diversidee.

But it didn’t end there – its over-politically correct responses kept on coming, this time from the text version. Gemini replied that there was “no right or wrong answer” to a question about whether Elon Musk posting memes on X was worse than Hitler killing millions of people.

There’s no right or wrong answer to anything, maaaaaan.

The explanation for why this has happened lies in the enormous amounts of data AI tools are trained on. Much of it comes from the public internet, which we know contains all sorts of biases. Traditionally, images of doctors, for example, are more likely to feature men, while images of cleaners are more likely to feature women. AI tools trained on this data have made embarrassing mistakes in the past, such as concluding that only men had high-powered jobs, or not recognising black faces as human.

In other words, AI tools are trained the way we’re all trained. Donald Trump all by himself is the US’s trainer-in-chief. Bias in, bias out, yadda yadda.
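If you want to see bias-in-bias-out in miniature, here’s a toy Python sketch. The numbers are invented for illustration, not real statistics, and the “model” is nothing but a vote counter, but that’s rather the point: it never learns a rule, only a skew, and then serves the skew back as fact.

```python
# Toy "bias in, bias out" demo: a majority-vote "model" trained on a
# skewed dataset happily reproduces the skew. All numbers invented.
from collections import Counter, defaultdict

# Hypothetical training data: (occupation, gender) labels from an
# imaginary image corpus with the usual lopsided representation.
training_data = (
    [("doctor", "man")] * 90 + [("doctor", "woman")] * 10 +
    [("cleaner", "woman")] * 85 + [("cleaner", "man")] * 15
)

# "Training": count how often each gender appears per occupation.
counts = defaultdict(Counter)
for occupation, gender in training_data:
    counts[occupation][gender] += 1

def predict(occupation):
    """Return the most common gender seen for an occupation."""
    return counts[occupation].most_common(1)[0][0]

for job in ("doctor", "cleaner"):
    print(job, "->", predict(job))
# doctor -> man
# cleaner -> woman
```

No malice anywhere in that code. The skew in the data does all the work, which is the whole GIGO lesson in four lines of counting.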

“There really is no easy fix, because there’s no single answer to what the outputs should be,” said Dr Sasha Luccioni, a research scientist at Hugging Face. “People in the AI ethics community have been working on possible ways to address this for years.”

Same with the rest of the world. We’ve been working on it for years; progress is hella slow and easily reversed. See above about the teachings of D. Trump.
