Our February 2026 webinar tackled one of the most pressing questions in the future of work: can AI actually be ethical? Here’s what we covered.
Last week, our fearless Co-Founder Felicia Jadczak facilitated a live webinar exploring what it means to use AI ethically inside an organization and whether “ethical AI” is even achievable. With attendees joining from across the globe and the chat moving a mile a minute, it was a rich conversation that raised as many questions as it answered, which isn’t surprising given the novelty of this space.
Watch the full webinar recording here
If you missed it or want a quick rundown of what we covered, here are the key takeaways.
AI, in its current form, is a tool: not a person, not a search engine, and not neutral
Before we can talk about ethics, we need a shared understanding of what AI actually is. We spent time distinguishing between narrow AI (spam filters, fraud detection, recommendation algorithms) and generative AI (ChatGPT, Claude, image generators) and making the case that all of it should be understood as a tool built by humans, shaped by human decisions, and carrying human biases.
Is ethical AI even possible?
We polled our attendees and got a full spectrum of responses, from enthusiastic yeses to firm nos. Our position: AI can be designed to align with human values like fairness, transparency, and safety, but that design work is entirely on us. Ethics can’t be an afterthought. If it’s coming later, it’s already too late.
Key takeaways
On algorithmic bias:
- AI reflects and often amplifies the biases in its training data, with documented harms in hiring, healthcare, lending, criminal justice, and more.
- A 2025 Nature study found that ChatGPT-generated resumes for women were rated lower than equivalent resumes for men. The AI wasn’t just reflecting bias; it was reinforcing it.
- Before adopting any AI tool, ask: What was it trained on? Has it been tested across demographic groups? Can you audit its decisions?
On data privacy and consent:
- Data entered into public AI tools may be used to train future models, stored indefinitely, and collected beyond just what you typed. Metadata like location and device type is captured too.
- Treat any input into a public AI tool like a post on a public forum.
- Use enterprise-grade tools with clear data privacy policies, anonymize sensitive inputs, and always verify outputs. AI can and does hallucinate.
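To make the "anonymize sensitive inputs" advice concrete, here is a minimal sketch of what pre-submission redaction can look like. The patterns, labels, and the `anonymize` function are illustrative assumptions, not an exhaustive or production-ready solution; real PII redaction should use a vetted library or service.

```python
import re

# Illustrative patterns only -- real-world PII takes many more forms.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def anonymize(text: str) -> str:
    """Replace matched identifiers with bracketed placeholders
    before the text is pasted into a public AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Follow up with jane.doe@example.com at 555-867-5309."
print(anonymize(prompt))
# Follow up with [EMAIL] at [PHONE].
```

Even with a filter like this in place, the safest habit is the one above: assume anything you type into a public tool may be retained, and keep genuinely sensitive material out entirely.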
On environmental and human labor impact:
- AI systems require significant energy and water to run, and the burden isn’t distributed equally. Marginalized communities often bear a disproportionate share of the environmental costs.
- There is significant “invisible labor” behind AI: data labelers, content reviewers, and remote monitors, often doing difficult work in low-wage conditions.
- Use AI judiciously. Not every task warrants it.
On workforce impact:
- Watch out for “AI washing”: companies framing layoffs as inevitable AI-driven change when only ~2% have implemented AI at a scale that actually justifies large headcount reductions.
- Apply an equity lens. AI displacement is affecting workers across income levels, from customer service to software engineering. Understand who in your organization is most exposed and find ways to support them.
- Center employee voice before selecting or deploying tools.
Where to start: a quick action plan
- Audit what’s already in use. You may find more (or less) than you expected.
- Draft a values statement to guide AI decisions in your organization.
- Create space for dialogue. Consider surveys, listening sessions, and honest conversations with your team.
- Develop vendor evaluation criteria. Include ethics, not just features and cost.
- Start small, pilot, iterate. Keep humans in the loop at every stage.
These questions don’t have easy answers, and the landscape is shifting fast. But the organizations that will navigate this well are the ones asking hard questions now.
Want to keep the conversation going? Reach out to us. We’d love to talk about what ethical AI strategy looks like in your specific context.