AI, as a mirror of bias, can become a tool for self-reflection and self-empowerment.
The necessary confrontation with stereotypes and prejudices in artificial intelligence is often conducted under entirely false assumptions. By anthropomorphizing AI, we falsely portray data models as independent beings that are difficult to control, instead of what they are: a mirror of our collective ideas, memories, and reflections.
This misleading personification of AI is passionately promoted by the big tech companies, as it supports the call for strictly controlled systems and thus guarantees those same companies sole control over humanity's collective memory.
However, the ominous personification of AI also resonates with the general public. The narrative of a dark, evil intelligence is particularly seductive because it offers us a way out of the dilemma of having to confront our own prejudices.
The big data models that form the basis of generative image and text creation systems are a reflection of all our texts, images, and ideas, but also of all our prejudices, simplifications, and black-and-white thinking. The desire to change these systems for the better through bans and censorship is just as doomed to failure as trying to get a mirror to portray us as better people.
As right as it is to point out the dangers of using such data models, for example in police work, it is just as important to recognize their potential.
These literal clouds of data offer insight into collective human consciousness at a previously unimagined density. Just as individuals confront their inner demons in order to free themselves, we as a society can make our cognitive biases visible and thus take away their power.
Prejudices, contrary to the self-perception of those who hold them, are never based on facts, but on emotions. Making them visible, tangible, and ultimately refutable therefore requires equally emotional, creative approaches. The iconography of evil, the other, the foreign cannot be countered with facts, but with counter-images.
The critical, artistic questioning of data models and the deconstruction of the clichés inherent in them require the greatest possible transparency of the data sets used and the unhindered ability to examine and question them.
The call for stricter control of AI systems, however, supports the creation of closed systems under the control of a few companies.
But if the unbiased, critical view of these images of our being is blocked by countless filters and barriers (for our own good), then these potential mirrors of our society become mere screens, screens that use sophisticated emotional messages to spur us on to ever more consumption.
We have more than enough screens; what humanity needs far more urgently are new mirrors.
AI offers us this opportunity.