AI IS EVERYWHERE… But is it Playing Fair?
CHRIS L.
AI has slipped into our lives almost without us noticing. It’s finishing our sentences, suggesting what we might watch next and quietly curating the posts that show up in our feeds. It’s presented as efficient, neutral and even a little magical, like a super-smart assistant that’s always on call. But here’s a question worth asking: if it’s learning from us, could it also be inheriting our flaws?
The truth is, AI doesn’t start with a blank slate. It learns from massive collections of human-created data such as books, articles, photos, videos and online conversations. That means it learns our creativity, our ideas and our progress, but it also learns our prejudices, blind spots and stereotypes. If a certain group of people rarely appears in that data, AI starts to treat them as if they barely exist. If history has tied certain roles to certain genders or ethnicities, AI may start pairing those things together automatically. Even tiny skews in the data can grow louder as the system “learns” from them.
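To make that amplification concrete, here is a deliberately simplified sketch, not how any real production model works. The “model” just predicts the pronoun it saw most often for an occupation, and the training pairs and numbers are invented for illustration. A 65/35 tilt in the data becomes a 100/0 rule in the output.

```python
# Toy illustration (not a real system) of how a modest skew in training
# data can harden into an absolute rule in a model's output.
from collections import Counter, defaultdict

# Hypothetical training pairs: 65% of "doctor" examples say "he",
# 35% say "she" -- a tilt, not a rule.
training_pairs = [("doctor", "he")] * 65 + [("doctor", "she")] * 35

counts = defaultdict(Counter)
for occupation, pronoun in training_pairs:
    counts[occupation][pronoun] += 1

def predict(occupation):
    # Always return the single most frequent pronoun seen in training.
    return counts[occupation].most_common(1)[0][0]

# The data was 65/35, but the model answers "he" every single time.
predictions = Counter(predict("doctor") for _ in range(100))
print(predictions)  # Counter({'he': 100})
```

Real systems are far more sophisticated than this, but the underlying pressure is similar: when a system optimises for the most likely answer, a tilt in the data can quietly become a default assumption.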
We’ve already seen this play out in real life. AI image generators have been caught producing pictures of “CEOs” that are almost always White men in suits, while “nurses” appear almost exclusively as women. Search engines sometimes return homogenous images for jobs that are far more diverse in reality. Chatbots have been known to complete sentences with outdated or harmful assumptions, and moderation tools have flagged certain cultural slang as offensive simply because it doesn’t fit a narrow idea of what “normal” looks like.
This matters because AI is shaping what we see and hear every day, often invisibly. If its outputs lean toward certain narratives while leaving others out, it can reinforce stereotypes, erase voices, and quietly influence how we understand the world. The question isn’t just whether AI is useful; it’s whether it’s fair, and whether we’re paying enough attention to find out.
And that attention becomes even more critical when AI moves into high-stakes areas like health, crime and the law. In healthcare, biased data could mean misdiagnoses for certain groups whose symptoms are underrepresented in medical records. If an algorithm has seen more data from White male patients, it may be less accurate for women, people of colour, or those with rare conditions, which can literally put lives at risk. In policing, predictive algorithms have sometimes directed more patrols toward neighbourhoods that were already over-policed in the past, creating a feedback loop where certain communities are constantly under scrutiny. And in the legal system, AI tools used to assess “risk” in bail or sentencing decisions have been shown to produce harsher outcomes for some racial groups, even when the underlying crime was the same.
These aren’t small glitches - they’re structural problems that can magnify inequality if left unchecked. They remind us that AI isn’t just a piece of tech; it’s a decision-maker, and those decisions can have real, lasting consequences.
Fixing this isn’t about rejecting AI but about shaping it intentionally. It means feeding it training data that reflects the full diversity of human experience, not just the loudest voices. It means checking its outputs for imbalance, especially when decisions affect someone’s health, safety, or freedom. It means giving it regular “course corrections” when it gets things wrong and building systems with transparency, so people understand the limits before those limits hurt someone.
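What does “checking outputs for imbalance” look like in practice? Here is a minimal sketch of one of the simplest checks: compare how often a hypothetical system returns a favourable outcome for different groups and flag any gap above a chosen threshold. All names, numbers and the threshold here are invented; real fairness audits use more nuanced metrics and real decision logs.

```python
# Minimal sketch of a basic output audit: compare favourable-outcome
# rates across groups and flag gaps above a chosen threshold.
from collections import defaultdict

def audit_outcomes(records, threshold=0.1):
    """records: iterable of (group_label, got_favourable_outcome) pairs."""
    totals = defaultdict(int)
    favourable = defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        favourable[group] += int(outcome)

    rates = {g: favourable[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap > threshold

# Hypothetical decisions tagged by group (invented for illustration).
sample = [("group_a", True)] * 80 + [("group_a", False)] * 20 \
       + [("group_b", True)] * 55 + [("group_b", False)] * 45

rates, gap, flagged = audit_outcomes(sample)
print(rates, f"gap={gap:.2f}", "needs review" if flagged else "ok")
```

A check like this doesn’t explain why a gap exists or what to do about it, but it shows that spotting imbalance isn’t mysterious: it starts with counting who gets what.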
AI is powerful and it’s only going to become more woven into daily life. But that power comes with responsibility, not just for the engineers who design it, but for all of us who use it. We can choose to let it reflect a narrow slice of humanity, or we can teach it to see the world in its full, complicated, brilliant diversity. The choice, at least for now, is still ours.