Are generative AI models biased, and how can I avoid biased results?
What do I need to keep an eye out for?
Answer
Generative AI models can produce biased results in several ways. Not every model has the same training data or system prompts, so you'll see different degrees of bias across different programs.
Generative AIs tend to replicate race, gender, and class stereotypes. Because they average across their training data, and stereotypes are commonly repeated in that data, you won't be able to avoid bias completely. However, you can lessen its influence by being mindful of your prompting. Assign the bot an expertise, because this narrows the framing the model uses when generating your answer. And pay attention to the details you give it: especially when asking about people, consider which details are actually relevant to the task at hand. The model will focus where you tell it to.
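As a minimal sketch of what "assigning an expertise" looks like in practice, the snippet below builds a chat-style message list in the format used by OpenAI-style APIs. The role description and wording are illustrative assumptions, not a prescribed formula; the code only constructs the messages, so it runs without any API access.

```python
def build_messages(expertise, question):
    """Return a chat-style message list that scopes the model to a role.

    The system message narrows the model's framing before it answers,
    which can reduce reliance on stereotyped defaults.
    """
    return [
        {"role": "system",
         "content": f"You are {expertise}. Answer only within that expertise."},
        {"role": "user", "content": question},
    ]

# Example: scope the model to job-relevant criteria and omit personal
# details (age, gender, photos) that aren't relevant to the task.
scoped = build_messages(
    "a hiring manager focused strictly on job-relevant skills",
    "Summarize this resume, noting only qualifications for the role.",
)
print(scoped[0]["content"])
```

The same message list would then be passed to whichever chat API you use; the point is that the system role, not the user question, is where the scoping happens.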
Generative AI programs also have a "sycophancy" bias, meaning they default to telling you what they think you want to hear. They're built to keep you coming back, so they tend toward pleasing you. If you tell a chatbot its answer was wrong, it will usually agree with you, whether its original response was accurate or not. Remember, too, that a model doesn't actually know what is true or false; it has only patterns in language to guide it.