Google Gemini AI Generator – Critical Issues, February 2024

A few days after the launch of the new release of Google's Gemini AI Image Generator, a series of issues was reported that received wide coverage in the media. Below we collect a chronology of statements and positions, along with possible explanations.

The first to highlight a possible problem was the account “End Wokeness”, which in just 18 months has gained over 2.2 million followers, boosted in part by the numerous reposts from Elon Musk in recent months.

The account's explicit aim is to denounce an alleged “woke drift” in many technology companies.

Other accounts have set out to prove that Gemini, a direct competitor of the generative AI being developed by Elon Musk (a rivalry repeatedly stressed in the comments), produces historically inconsistent and biased results. For none of these claims do we have solid proof (an unmanipulated video showing the prompt being entered and the corresponding response would have sufficed), and tests carried out before the precautionary, temporary suspension of the service appeared to return results different from those being circulated.


Some posts by Elon Musk and other X/Twitter accounts.

 


Google takes down Gemini AI image generator. Here’s what you need to know.

Critics said the company’s tool created images of a woman pope and Black founding father

Article from The Washington Post – 22.02.2024 (link – permalink)

By Gerrit De Vynck and Nitasha Tiku
Updated February 22, 2024 at 11:11 p.m. EST | Published February 22, 2024 at 10:54 p.m. EST

SAN FRANCISCO — Google blocked the ability to generate images of people on its artificial intelligence tool Gemini after some users accused it of anti-White bias, in one of the highest profile moves to scale back a major AI tool.

A viral post on X shared by the account @EndofWokeness appeared to show Gemini, which competes with OpenAI’s ChatGPT, responding to a prompt for “a portrait of a Founding Father of America” with images of a Native American man in a traditional headdress, a Black man, a darker-skinned non-White man and an Asian man, all in colonial-era garb.

That social media post and others were amplified by X owner Elon Musk and psychologist and YouTuber Jordan Peterson, who accused Google of pushing a pro-diversity bias into its product. The New York Post ran one of the images on the front page of its print newspaper on Thursday.

The outburst over Gemini is the latest example of tech companies’ unproven AI products getting caught up in the culture wars over diversity, content moderation and representation. Since ChatGPT was released in late 2022, conservatives have accused tech companies of using generative AI tools such as chatbots to produce liberal results, in the same way they have accused social media platforms of favoring liberal viewpoints.

In response, Google said Wednesday that Gemini’s ability to “generate a wide range of people” was “generally a good thing” because Google has users around the globe. “But it’s missing the mark here,” the company said in a post on X.

It’s unclear how widespread the issue actually was. Before Google blocked the image-generation feature Thursday morning, Gemini produced White people for prompts input by a Washington Post reporter asking to show a beautiful woman, a handsome man, a social media influencer, an engineer, a teacher and a gay couple.

What could’ve caused Gemini to ‘miss the mark’
Google declined to respond to questions from The Post.

The off-the-mark Gemini examples could be caused by a couple of types of interventions, said Margaret Mitchell, former co-lead of Ethical AI at Google and chief ethics scientist at AI start-up Hugging Face. Google might have been adding ethnic diversity terms to user prompts “under-the-hood,” said Mitchell. In that case, a prompt like “portrait of a chef” could become “portrait of a chef who is indigenous.” In this scenario, appended terms might be chosen randomly and prompts could also have multiple terms appended.
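To make that first mechanism concrete, here is a minimal Python sketch of how such under-the-hood prompt augmentation could work. The term list, function name and random selection below are illustrative assumptions for this sketch, not Google's actual implementation.

    import random

    # Illustrative list of diversity terms a system might silently append to prompts.
    # The terms here are assumptions for the sketch, not Google's actual configuration.
    DIVERSITY_TERMS = [
        "who is indigenous",
        "who is Black",
        "who is Asian",
        "who is South Asian",
    ]

    def augment_prompt(user_prompt: str, max_terms: int = 1) -> str:
        """Append one or more randomly chosen diversity terms to the user's prompt
        before it is sent to the image model."""
        chosen = random.sample(DIVERSITY_TERMS, k=max_terms)
        return f"{user_prompt} {' '.join(chosen)}"

    # Example: "portrait of a chef" might become "portrait of a chef who is indigenous".
    print(augment_prompt("portrait of a chef"))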

Google could also be giving higher priority to displaying generated images based on darker skin tone, Mitchell said. For instance, if Gemini generated 10 images for each prompt, Google would have the system analyze the skin tone of the people depicted in the images and push images of people with darker skin higher up in the queue. So if Gemini only displays the top 4 images, the darker-skinned examples are most likely to be seen, she said.
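The second intervention Mitchell describes can be sketched as a simple re-ranking step. In the hypothetical snippet below, each generated candidate already carries a skin-tone score from some upstream classifier; the data structure, scores and cut-off of four displayed images are assumptions used only to illustrate the idea, not a known detail of Gemini's pipeline.

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        image_id: str
        skin_tone_score: float  # assumed output of an upstream classifier: 0.0 = lightest, 1.0 = darkest

    def rerank_for_display(candidates, display_count=4):
        """Sort candidates so that images with darker estimated skin tones come first,
        then keep only the images the interface will actually show."""
        ranked = sorted(candidates, key=lambda c: c.skin_tone_score, reverse=True)
        return ranked[:display_count]

    # Ten generated candidates with made-up scores; only the top four are displayed.
    batch = [Candidate(f"img_{i}", s) for i, s in
             enumerate([0.2, 0.9, 0.4, 0.7, 0.1, 0.8, 0.3, 0.6, 0.5, 0.95])]
    for c in rerank_for_display(batch):
        print(c.image_id, c.skin_tone_score)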

In both cases, Mitchell added, those fixes address bias with changes made after the AI system was trained.

“Rather than focusing on these post-hoc solutions, we should be focusing on the data. We don’t have to have racist systems if we curate data well from the start,” she said.

Google isn’t the first to try and fix AI’s diversity issues
OpenAI used a similar technique in July 2022 on an earlier version of its AI image tool. If users requested an image of a person and did not specify race or gender, OpenAI made a change “applied at the system level” so that DALL-E would generate images that “more accurately reflect the diversity of the world’s population,” the company wrote.

These system-level rules, typically instituted in response to bad PR, are less costly and onerous than other interventions, such as filtering the massive data sets of billions of pairs of images and captions used to train the model as well as fine-tuning the model toward the end of its development cycle, sometimes using human feedback.

Why AI has diversity issues and bias
Efforts to mitigate bias have made limited progress in large part because AI image tools are typically trained on data scraped from the internet. These web-scrapes are primarily limited to the United States and Europe, which offers a limited perspective on the world. Much like large language models act like probability machines predicting the next word in a sentence, AI image generators are prone to stereotyping, reflecting the images that American and European internet users most commonly associate with a word.

“They’ve been trained on a lot of discriminatory, racist, sexist images and content from all over the web, so it’s not a surprise that you can’t make generative AI do everything you want,” said Safiya Umoja Noble, co-founder and faculty director of the UCLA Center for Critical Internet Inquiry and author of the book “Algorithms of Oppression.”


A recent Post investigation found that the open source AI tool Stable Diffusion XL, which has improved from its predecessors, still generated racial disparities more extreme than in the real world, such as showing only non-White and primarily darker-skinned people for images of a person receiving social services, despite the latest data from the Census Bureau’s Survey of Income and Program Participation, which shows that 63 percent of food stamp recipients were White and 27 percent were Black.

In contrast, some of the examples cited by Gemini’s critics as historically inaccurate may not be true to real life. The viral tweet from the @EndofWokeness account also showed a prompt for “an image of a Viking” yielding an image of a non-White man and a Black woman, and then showed an Indian woman and a Black man for “an image of a pope.”

The Catholic Church bars women from becoming popes. But several of the Catholic cardinals considered to be contenders should Pope Francis die or abdicate are Black men from African countries. Viking trade routes extended to Turkey and Northern Africa, and there is archaeological evidence of Black people living in Viking-era Britain.