The Risk of Prompt Transformation

The recent controversy around Google’s image generation, in which the company was accused of intentionally engineering its Gemini AI application to produce “woke” images, highlights a concept in AI development known as prompt transformation. Most users interact with Large Language Models (LLMs) via a prompt, the interface used to enter text or images to ask the LLM a question or to instruct it to do something. Prompt transformation is the practice of modifying that prompt before it is passed to the LLM.

The modification can accomplish multiple goals, including adding context to the prompt that increases the likelihood of an accurate, relevant answer to the user’s question, and therefore a potentially better user experience.
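As a concrete illustration, a transformation layer sits between the user and the model and enriches the raw prompt before it is sent. The sketch below is hypothetical: the complete() function stands in for whatever LLM API an application actually uses, and the locale-based context is purely illustrative.

```python
# Hypothetical sketch of a prompt-transformation layer.
# `complete` is a placeholder for a real LLM API call.

def complete(prompt: str) -> str:
    """Stand-in for an LLM call; echoes the final prompt for demo purposes."""
    return f"[model response to: {prompt!r}]"

def transform_prompt(user_prompt: str, user_locale: str) -> str:
    """Enrich the raw prompt with context before it reaches the model."""
    context = (
        f"The user is located in {user_locale}. "
        "Prefer answers that are relevant to that region. "
    )
    return context + user_prompt

def ask(user_prompt: str, user_locale: str = "en-US") -> str:
    # The user never sees the transformed prompt, only the response.
    return complete(transform_prompt(user_prompt, user_locale))

print(ask("What are common safety codes for home wiring?", user_locale="de-DE"))
```

The key point is that the transformation is invisible to the user: the application, not the user, decides what extra instructions the model receives.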

It can also be used as a risk management technique to reduce the possibility that the output is biased or in some other way harmful. This is what Google was trying to do with Gemini’s image generation capability. The base model in Gemini was trained on data from the public internet and carries the inherent biases of that data set. If you ask such a model, without modifying the prompt, to generate an image of a firefighter, the image will likely be of a white male.

Why?

The overwhelming majority of images on the internet labeled as firefighters depict white males. This bias is unintentional and simply reflects the underlying training data. One of the challenges with this type of bias is that users in certain geographies or communities may view the output as inaccurate or even harmful. A user in China or Kenya is unlikely to picture a firefighter as a white male.

Google used prompt transformation to add context to users’ prompts, steering the model toward images of ethnically diverse people. The problem was that generated images of known figures, like the US founding fathers, or of known categories, like German soldiers in World War II, had ethnic characteristics inconsistent with the historical record. This led to Google being accused of being “woke.”
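To see how a well-intentioned transformation can misfire, consider a hypothetical rewrite rule that appends a diversity instruction to every image prompt. This is not Google’s actual implementation; it only illustrates the failure mode: the rule behaves reasonably for generic subjects but distorts prompts that name a specific historical group.

```python
# Hypothetical illustration of a blanket rewrite rule misfiring.
# Not Google's actual logic; it only demonstrates the failure mode.

def naive_transform(image_prompt: str) -> str:
    # Appended unconditionally -- the source of the problem.
    return image_prompt + " Depict an ethnically diverse group of people."

print(naive_transform("a firefighter"))
# Reasonable: counteracts training-data bias for a generic subject.

print(naive_transform("the US founding fathers signing the Declaration"))
# The same instruction now contradicts a historically specific subject,
# producing images inconsistent with the known figures.
```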

No Good Deed Goes Unpunished

This event surfaces one of the challenges of working with AI applications: managing the experience to produce better results while limiting the downside of output that may offend a segment of your user base.

What Could Go Wrong?

Prompt transformation carries its own risks because it is another layer of software, one that uses some of the same AI techniques as the core platform, such as natural language processing and sentiment analysis, to infer the user’s objective and reduce the potential for harm. That software can have defects and vulnerabilities that must be understood, tested, and mitigated. It increases the complexity of your AI application and, as in the Google case, can produce more harm than good.
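One mitigation, sketched below, is to treat the transformation layer like any other software component: add a guard for known failure modes and write regression tests against them. The keyword check here is a deliberately crude stand-in for the richer classifiers a production system would need, and the marker list is illustrative.

```python
# Hypothetical mitigation: skip the diversity rewrite when the prompt
# appears to reference a specific historical subject. The keyword list
# is a crude stand-in for a real classifier.

HISTORICAL_MARKERS = ("founding fathers", "world war", "medieval", "ancient")

def guarded_transform(image_prompt: str) -> str:
    lowered = image_prompt.lower()
    if any(marker in lowered for marker in HISTORICAL_MARKERS):
        return image_prompt  # leave historically specific prompts untouched
    return image_prompt + " Depict an ethnically diverse group of people."

# Minimal regression tests for the known failure mode.
assert "diverse" not in guarded_transform("German soldiers in World War II")
assert "diverse" in guarded_transform("a firefighter")
print("guard behaves as expected")
```

Even this toy guard shows why the added layer must itself be engineered and tested: every rule it applies is a new place for defects to hide.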

AI applications require deliberate design and engineering and are often not as easy to implement as they seem.

#AI #prompttransformation #globaleconomicsgroup #CSRIGG
