Google has issued a statement on its recent AI blunder, in which an image-generating model inserted diversity into pictures with no regard for historical context. The company attributes the problem to the model having become overly sensitive.
The system in question is Gemini, Google’s flagship conversational AI platform, which calls on the Imagen 2 model to create images on request.
Users recently discovered that asking for images of specific historical events or figures produced results that were inaccurate, often comically so. The founding fathers, for example, were depicted as a diverse group that included people of color, contrary to the historical record.
The incident was quickly mocked online and fed into the broader debate over diversity, equity, and inclusion in the tech industry.
Critics cast the episode as diversity initiatives taken to an extreme and blamed it on ideological overreach. Google, however, said the behavior grew out of an attempt to counteract systemic bias in the training data.
When Gemini is asked to generate images of people without any specified characteristics, the model falls back on whatever its training data made most likely, and because that data skews toward certain demographics, the default outputs are often biased.
Google’s stated aim is for image outputs to reflect its global user base rather than perpetuate the biases present in the training data.
The challenge lies in getting the model to produce a varied range of outputs without simply reproducing the skew of its training data. Companies routinely address this by quietly adding instructions to the requests users submit.
These implicit instructions play a significant role in steering such models toward inclusive and appropriate outputs. Google’s mistake was failing to include instructions for situations where historical context matters, and that omission led directly to the recent mishap.
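To make the mechanism concrete, here is a minimal, purely hypothetical sketch of how implicit instructions might be prepended to a user’s request before it reaches an image model. Google has not disclosed Gemini’s actual instructions, so every function name, keyword, and prompt string below is an illustrative assumption, not the real pipeline.

```python
# Hypothetical sketch of prompt augmentation in an image-generation pipeline.
# The instruction text, keyword list, and function names are assumptions for
# illustration only; Gemini's real system instructions are not public.

HISTORICAL_KEYWORDS = {
    "founding fathers", "medieval", "ancient rome", "1800s", "world war",
}

DIVERSITY_INSTRUCTION = (
    "Depict people with a range of ethnicities and genders "
    "unless the request specifies otherwise."
)

HISTORICAL_GUARD = (
    "If the request concerns a specific historical period, event, or person, "
    "prioritize historical accuracy over demographic variety."
)


def augment_prompt(user_prompt: str) -> str:
    """Prepend implicit instructions to the user's request before sending it to the model."""
    instructions = [DIVERSITY_INSTRUCTION]
    if any(keyword in user_prompt.lower() for keyword in HISTORICAL_KEYWORDS):
        # The reported failure mode amounts to a missing guard like this one:
        # without it, the diversity instruction is applied unconditionally.
        instructions.append(HISTORICAL_GUARD)
    return " ".join(instructions) + "\n\nUser request: " + user_prompt


if __name__ == "__main__":
    print(augment_prompt("a portrait of the founding fathers signing the Declaration"))
    print(augment_prompt("a person walking a dog in a park"))
```

In a sketch like this, a general instruction broadens the defaults for generic requests, while a guard clause yields to historical accuracy when the prompt clearly calls for it; the reported problem was the absence of that second step.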
Errors are inevitable in AI systems, but accountability rests with the people who build them, not with the models themselves, and it falls to companies to own and fix failures like this one.