There’s been a lot of focus these past couple of weeks on Gemini, the latest Google generative AI failure (remember the flubbed launch of Bard - which was rebranded as Gemini?). Gemini generated a slew of inappropriately diverse images, from Nazis to Black female founding ‘fathers’, while also ignoring explicit prompt instructions to produce images of white people.
There are articles from those who see this as an example of what happens when we try to program ‘woke’ sensibilities into a system. There are those who believe it was more of a technical fail than an ideological one and that once we get the right data or get the right guardrails, all will be well. There are concerns raised about diversity in the context of historical veracity.
It’s that last point that got me thinking about one of my favourite television shows - Bridgerton.
Bridgerton’s diversity question
I’m a huge fan of Shonda Rhimes. A couple of years ago, she made some bold choices by casting a diverse set of actors in a period piece set in high society London’s Regency era - a move that drew both criticism and praise.
As a piece of historical fiction, Bridgerton portrays an alternate universe where a Black queen presides over London society and where race seems to be a non-issue. Yet fiction also mixes seamlessly with historical facts. This includes a storyline about the true-to-life madness of Queen Charlotte’s monarch husband, King George III, and the socio-economic sensibilities of the era, which involve an elaborate set of rituals for women to marry well and secure their futures. The show also illuminates less well-known historical questions, such as whether the actual Queen Charlotte really did have Black ancestry - a point that remains an open area of scholarly research and debate. Linking all of this back to generative AI: when is historical accuracy important, and when is it up for interpretation?
We understand that Bridgerton is not real; it’s a TV show. We should also understand that images generated by an AI system are similarly not real.
However, even fake images generated by an AI system, en masse, can influence how we think about our culture and our shared history. They can erase, stereotype or misrepresent people in ways that are offensive. Gemini, which was supposedly designed to fix these problems, is simply their latest iteration.
The importance of who represents
Shonda Rhimes is a role model for many people as one of very few Black female showrunners in Hollywood. Her colour-conscious casting choices in Bridgerton have created real jobs for actors of colour. As a creator, she is empowered to make these artistic choices - which have not been without controversy. In an interview with Entertainment Weekly, she defends her choice to cast a Black woman in the role of Queen Charlotte:
"I always think it's interesting how desperate people are to prove that someone was not of color," she says. "That seems to be like a big focus in people's lives. I don't understand that, but for me, I really delved into the research in a deep way. We had a wonderful historian working with us, and we got inside of what was going on historically, so that then I could know when I was straying. Then, I could stray on purpose, or I could take one tiny thing I heard from history and turn it into something else." (Entertainment Weekly)
In the case of historical period pieces in general, a lack of diverse casting for both stage and screen has hindered opportunities for actors of colour. The argument of historical accuracy has been used to defend all-white casts. A few years ago, I did some work with a local theatre organization on the issue of achieving more diverse representation in the theatre community:
“The 35//50 Initiative began as a letter asking Albertan theatre institutions to commit to equitable employment of 35% Black, Indigenous, and People of Colour and 50% gender-variant folx and women by 2025.” (Alberta Theatre)
This initiative aligned with a range of efforts aimed at bringing more diverse casts and crews to theatre productions in general, even Shakespearean theatre. These efforts prioritize representation and work opportunities for diverse actors - even when they challenge the notion of historical accuracy. They also aim to expand the types of stories being told, moving beyond Euro-centric perspectives. All of this is done within a particular context - we understand this work as fiction. Perhaps we need that same understanding when it comes to AI - that AI-generated images are a computationally derived fiction.
Should AI images only represent 'reality'? Whose 'reality'?
To be very clear, I’m not writing this post in defense of generative AI systems or Google’s ill-conceived decision to launch Gemini to the general public as a form of product beta testing.
Instead, I think this whole situation raises much bigger questions about:
how we conceive of AI systems and their outputs,
how the design choices of a single organization get reflected in cultural artefacts that can have widespread impact on our cultural perceptions,
the level of agency we afford people to use tools as they see fit,
and the misplaced desire to view AI tools as being accurate depictions or sources of information.
We need to be able to imagine a more inclusive future, where racialized people are represented in domains from which they've been excluded. Why not imagine a woman of colour as pope? Why not feature a diverse cast in a Broadway show or in a 19th-century television drama? We need to uplift marginalized voices, allow creators of all backgrounds to tell old stories in new ways and to imagine new ones.
By Katrina Ingram, CEO, Ethically Aligned AI
Sign up for our newsletter to have new blog posts and other updates delivered to you each month! Ethically Aligned AI is a social enterprise aimed at helping organizations make better choices about designing and deploying technology. Find out more at ethicallyalignedai.com
© 2024 Ethically Aligned AI Inc. All rights reserved.