Is Gemini AI woke or are we all daydreaming?

When we recently tested Gemini’s image generation feature, we suddenly found ourselves asking: is Gemini AI woke? Whenever we entered a prompt that mentioned ethnicity, the results were off: a German appeared as Asian, and even Abraham Lincoln was Black in Google’s eyes.

So what happened when Google was asked whether Gemini AI is woke?

Well, Google has paused Gemini’s ability to generate images of people because of errors in its depictions of historical figures. The AI model was criticized for generating inaccurate images of people of different ethnicities.

For example, when asked to depict Vikings, it only produced images of black people wearing Viking clothing. In addition, the model generated a controversial image of George Washington as a black man.

Google swiftly disabled the chatbot’s ability to generate human images as the is Gemini AI woke questions started being asked

Is Gemini AI woke?

The AI model’s depiction of historical figures as different ethnicities sparked the is Gemini AI woke controversy. While Google’s approach was interpreted as an effort to break down stereotypical ethnic and gender biases in AI models, it created new problems.

Google confirmed the situation in a statement published on its official channels:

“We are working on recent issues with Gemini’s image generation feature. During this process, we are suspending the ability to create human images. We will announce the availability of the improved version as soon as possible.”

Google Communications also published a post on X to address the is Gemini AI woke questions.

AI errors with historical figures

Google Gemini has been criticized for its portrayal of historical figures and personalities. For instance, when the AI was asked to depict Vikings, it only produced images of black people wearing historical Viking clothing. A “Founding Fathers of America” query resulted in controversial images with “Native American” representations.

In fact, the depiction of George Washington as a Black man drew the ire of some groups. Requests for an image of the Pope likewise yielded only non-white depictions.

In some cases, Gemini even stated that it could not generate any images of historical figures such as Abraham Lincoln, Julius Caesar, and Galileo.

Google’s decision came just a day after Gemini apologized for its mistakes in depicting historical figures. Some users had started to see non-white AI-generated images in the results when they searched for historical figures. This led to the spread of conspiracy theories on the internet, particularly that Google was deliberately avoiding showing white people.

What does “woke” mean?

In recent years, the term “woke” has spread beyond Black communities and become popular worldwide. Now, being “woke” means being actively aware of a wide range of social issues, like racism, sexism, and other ways that people can be treated unfairly. It’s often associated with people who have progressive or left-leaning political views and those who fight for social justice.

Sometimes, “woke” is used negatively. People may use the term to criticize others for being overly focused on social problems or being too quick to call others out on insensitive behavior. It can also be used to imply that someone’s activism is fake or that they only care about social justice to seem trendy.

This image of Putin is a perfect example of why people are asking is Gemini AI woke (Image credit)

Is Gemini AI’s white people mistake a reversed bias?

Google’s struggles with Gemini highlight a unique challenge in modern AI development. While the intentions behind diversity and inclusion initiatives are laudable, the execution appears to have overcompensated. The result is a model that seems to force non-white representations even when those depictions are historically inaccurate.

This unintended consequence raises a complex question: does Gemini’s behavior reflect a reversed bias? In trying to combat the historical underrepresentation of minorities, has Google created an algorithm that disproportionately favors them, even to the point of distorting known historical figures?
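
To make that failure mode concrete, here is a minimal Python sketch. It is purely illustrative: Google has not published Gemini’s pipeline, so the assumption that the image model sits behind a prompt-rewriting layer, along with the function names and word lists below, is entirely hypothetical.

```python
# Hypothetical sketch of a diversity prompt-rewriting layer.
# This does NOT reflect Google's actual Gemini pipeline; it only shows how
# unconditionally rewriting prompts can distort historically specific ones.

DIVERSITY_MODIFIERS = ["of diverse ethnicities", "of various genders"]

# Subjects whose appearance is fixed by the historical record (illustrative list).
HISTORICALLY_SPECIFIC = ["viking", "founding fathers", "george washington", "pope"]


def naive_rewrite(prompt: str) -> str:
    """Blindly append diversity modifiers to every prompt about people."""
    return f"{prompt}, {', '.join(DIVERSITY_MODIFIERS)}"


def context_aware_rewrite(prompt: str) -> str:
    """Skip the rewrite when the prompt names a historically specific subject."""
    lowered = prompt.lower()
    if any(term in lowered for term in HISTORICALLY_SPECIFIC):
        return prompt  # preserve historical accuracy
    return naive_rewrite(prompt)


if __name__ == "__main__":
    for p in ["a group of Vikings", "a software engineer at a desk"]:
        print("naive:        ", naive_rewrite(p))
        print("context-aware:", context_aware_rewrite(p))
```

Run on the two sample prompts, the naive rewrite turns “a group of Vikings” into a request for ethnically diverse Vikings, while the context-aware version leaves it untouched; a real fix would of course need something far more robust than a keyword list.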

Usual problem, unusual solution

The Gemini situation underscores the delicate balance required when addressing biases in AI. Both underrepresentation and overrepresentation can be harmful. An ideal AI model needs to reflect reality accurately, regardless of race, ethnicity, or gender. Forcing diversity where it doesn’t naturally exist isn’t a solution; it’s merely a different form of bias.

This case study should prompt discussions about how AI creators can combat bias without creating new imbalances. It’s clear that good intentions aren’t enough; there needs to be a focus on historical accuracy and avoiding well-meaning but flawed overcorrections.

The is Gemini AI woke controversy highlights the difficulty of balancing representation in AI (Image credit)

What’s next?

So, how might Google improve Gemini to stop users from asking is Gemini AI woke questions? Well:

  • Larger training datasets: Increasing the size and diversity of Gemini’s training data could lead to more balanced, historically accurate results. Helpfully, Google recently announced Gemini 1.5 Pro.
  • Emphasis on historical context: Placing a stronger emphasis on historical periods and contexts could help Gemini learn to generate images that better reflect known figures.

The is Gemini AI woke controversy, while a setback, provides valuable insights. It shows that addressing bias in AI is an ongoing, complex process requiring careful consideration, balanced solutions, and a willingness to learn from mistakes.


Featured image credit: Jr Korpa/Unsplash.
