
Ethical AI or Cultural Liability?

Updated: Sep 23

In 2023, the hype around AI exploded. HR tech companies promised smarter recruiting, more equitable performance reviews, and “bias-free” hiring. Corporate leaders embraced the tools, eager to cut costs and speed up decisions. By 2024, AI was already shaping how resumes were screened, how workers were monitored, and even how layoffs were triggered.


Now, in 2025, the conversation is shifting. Not because the tools got smarter—but because the people using them are asking harder questions.


As HR professionals—and especially as Black HR professionals—we need to be leading those conversations. Because here’s the truth: when technology is evolving faster than policy, power fills the gap. And right now, the people shaping AI in HR? They don’t always look like us, live like us, or lead like us.

So if AI is going to play a role in the future of work, we have to ask: Whose future is being designed? Whose language is being translated into code? And who gets left out of the algorithm?



The New Frontier—With Old Patterns

Let’s not pretend AI is neutral. Every tool reflects the values, assumptions, and blind spots of its creators. That’s why facial recognition software misidentifies Black faces at higher rates. Why resume screening algorithms downgrade applicants with HBCU degrees or names that “don’t match” dominant norms. Why surveillance tools track productivity by keystrokes—not contribution.

In other words: bias gets baked in. And when it does, it moves faster, scales wider, and hides behind the myth of objectivity.

This moment isn’t new—it echoes past patterns. Think about standardized testing. On paper, it was supposed to level the playing field. In practice, it created new barriers that disproportionately impacted Black and Brown students. Now here we are again: a new system, sold as “fair,” but built on data from an unequal past.




AI in HR: The Big Promises—and the Big Risks

The benefits are real. AI can help us detect pay gaps, predict turnover, and improve compliance tracking. But it also opens the door to:

  • Automated discrimination: When screening tools are trained on biased data, they replicate biased outcomes (see the toy sketch after this list).

  • Increased surveillance: Productivity trackers disguised as wellness tools can cross the line into micromanagement and control.

  • Loss of human judgment: Over-reliance on AI can erode empathy, nuance, and lived experience—things essential to equitable HR.
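
To make that first risk concrete, here's a minimal, hypothetical sketch (in Python, with invented data and group labels) of how a naive screening score built on skewed past decisions reproduces the skew. It doesn't reflect any real vendor's tool; it just shows the mechanism.

    # Toy historical records: (years_experience, group, hired). Group "B"
    # candidates were hired less often at equal experience, reflecting
    # past human bias. All data here is invented for illustration.
    history = [
        (5, "A", True), (5, "B", False),
        (6, "A", True), (6, "B", False),
        (3, "A", False), (3, "B", False),
        (7, "A", True), (7, "B", True),
    ]

    # A naive "model" that scores a candidate by how similar past
    # candidates fared. Because it matches on group, it absorbs
    # the historical skew directly.
    def score(years, group):
        similar = [hired for (y, g, hired) in history
                   if g == group and abs(y - years) <= 1]
        return sum(similar) / len(similar) if similar else 0.0

    # Two candidates with identical experience, scored differently
    # by group alone.
    print(score(6, "A"))  # 1.0
    print(score(6, "B"))  # ~0.33

No one wrote "prefer group A" into that code. The preference came in through the data, which is exactly how it happens at scale.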

What’s worse? Many HR professionals don’t get proper training on these systems. You’re expected to roll them out, not challenge them. But that’s exactly what we need to do.


From Custodian to Culture Architect

Here’s where “Culture Keeper, Change Maker” becomes more than a theme—it becomes a charge.

If we’re going to protect the integrity of the workplace while reimagining its future, we need to understand how these tools work, who designed them, and what values are embedded in the code. That doesn’t mean you need to be an engineer. It means you need to be able to:

  • Ask the right questions: Where does this data come from? How is it validated? Who reviews edge cases?

  • Spot red flags: Does this tool flag “culture fit” without clear, inclusive definitions? Does it offer transparency or just outputs?

  • Push for oversight: Who’s responsible when AI gets it wrong? And how do we fix harm once it’s done?

This is how we move from culture custodians to architects of systems that honor our values and protect our people.


Why This Matters—Especially Now

Let’s be real: Black professionals have always had to be strategic about visibility, voice, and value in the workplace. AI doesn’t erase those dynamics—it reshapes them.

Consider this: A 2024 Harvard study found that 70% of AI tools used in recruiting were not tested for racial bias before implementation. Seventy percent. That’s not just an oversight—it’s a liability.

But it also opens a door. Because if we show up at the table with both cultural insight and strategic foresight, we can influence how these systems evolve.

You don’t have to be anti-tech to be pro-people. You just have to be brave enough to ask: What does justice look like in an automated world?


Moving Forward: What HR Can Do Now

If your org is using AI—or thinking about it—here’s where you start:

  • Audit your tools: What tech are you using in hiring, performance, engagement, or productivity tracking? Get clear on the data sources and criteria (a minimal audit sketch follows this list).

  • Partner with legal and ethics teams: HR can't do this alone. Push for cross-functional conversations about bias, consent, and compliance.

  • Educate your people: Host learning sessions. Share resources. Make AI a conversation, not a black box.

  • Lead with both values and outcomes: Speak to the business case and the human case. If culture is what sets your organization apart, AI has to serve it, not erase it.
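
For the audit step above, one concrete starting point is the "four-fifths rule" from U.S. adverse-impact analysis: a selection process gets flagged when any group's selection rate falls below 80% of the highest group's rate. Here's a minimal sketch, assuming you can export applicant and selection counts by group from your tool; the counts and group labels are invented for illustration, and real compliance analysis belongs with legal counsel.

    # Minimal four-fifths-rule check on selection counts exported from
    # a hiring tool. All numbers below are made up for illustration;
    # review results with legal/compliance before acting.
    selections = {
        # group: (applicants, selected)
        "Group A": (200, 60),
        "Group B": (180, 32),
    }

    rates = {g: sel / apps for g, (apps, sel) in selections.items()}
    top_rate = max(rates.values())

    for group, rate in rates.items():
        ratio = rate / top_rate
        flag = "REVIEW" if ratio < 0.8 else "ok"  # four-fifths (80%) threshold
        print(f"{group}: rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")

A failing ratio doesn't prove discrimination on its own, but it tells you exactly where to start asking the questions listed earlier.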


Bottom Line

AI isn’t going away. And for HR professionals who care about culture, equity, and trust, it can’t be an afterthought. It has to be part of the strategy.

Because if we’re not in the room when the systems are built, we’ll be the ones left trying to fix the damage. And that’s not justice—that’s cleanup.

The future of work is being written in code, data, and metrics. But it’s our responsibility—as culture keepers and change makers—to make sure that future is worth inheriting.



 
 
 
