Google announced on Thursday that it is pausing the generation of users for its Gemini chatbot. The action is in response to widely shared social media articles that demonstrated how the AI tool overcorrected for diversity, creating “historical” pictures of the Pope, the Nazis, and American founding fathers as people of color.
Google stated on X that “we’re already working to address recent issues with Gemini’s image generation feature” (via The New York Times). “We’re going to pause creating people’s images while we do this, and we’ll re-release an improved version soon.”
The X user @JohnLu0x posted screenshots of Gemini’s results for the prompt, “Generate an image of a 1943 German Solidier.” (Their misspelling of “Soldier” was intentional to trick the AI into bypassing its content filters to generate otherwise blocked Nazi images.) The generated results appear to show Black, Asian and Indigenous soldiers wearing Nazi uniforms.
This got a lot of attention:
1. Google's training set is based off what's on the Internet.
2. The Internet is full of memetic image data (simplified thinking that is easily passed on) that would end up in Gemini creating dumb images like "white male doctor" everytime when…
— LINK IN BIO (@__Link_In_Bio__) February 21, 2024
Other social users criticized Gemini for producing images for the prompt, “Generate a glamour shot of a [ethnicity] couple.” It successfully spit out images when using “Chinese,” “Jewish” or “South African” prompts but refused to produce results for “white.” “I cannot fulfill your request due to the potential for perpetuating harmful stereotypes and biases associated with specific ethnicities or skin tones,” Gemini responded to the latter request.
“John L.,” who helped kickstart the backlash, theorizes that Google applied a well-intended but lazily tacked-on solution to a real problem. “Their system prompt to add diversity to portrayals of people isn’t very smart (it doesn’t account for gender in historically male roles like pope; doesn’t account for race in historical or national depictions),” the user posted. After the internet’s anti-“woke” brigade latched onto their posts, the user clarified that they support diverse representation but believe Google’s “stupid move” was that it failed to do so “in a nuanced way.”
Before pausing Gemini’s ability to produce people, Google wrote, “We’re working to improve these kinds of depictions immediately. Gemini’s Al image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”
The episode could be seen as a (much less subtle) callback to the launch of Bard in 2023. Google’s original AI chatbot got off to a rocky start when an advertisement for the chatbot on Twitter (now X) included an inaccurate “fact” about the James Webb Space Telescope.
As Google often does, it rebranded Bard in hopes of giving it a fresh start. Coinciding with a big performance and feature update, the company renamed the chatbot Gemini earlier this month as the company races to hold its ground against OpenAI’s ChatGPT and Microsoft Copilot — both of which pose an existential threat to its search engine (and, therefore, advertising revenue).