This is going to be a long one, so hang tight.
Last week, something interesting happened in Ottawa. The inaugural Global Responsible AI Summit brought together an unlikely mix: government officials who actually use AI at scale, philosophers asking uncomfortable questions, startup founders building in the trenches, and community activists representing people who never asked to be in an AI training dataset.
The vibe? Less "AI will save us all" and more "let's do this, and let's not screw it up."
The Elephant in the Room
Here's the thing nobody wants to say out loud: most AI projects are failing. Not "underperforming" or "needs optimization"; actually failing. IBM's Melanie shared that 75-80% of AI projects never make it past the pilot stage.
But here's the twist: the winners aren't necessarily the ones with the best models or the biggest compute budgets. They're the ones who figured out trust early.
Why "Move Fast and Break Things" Doesn't Work Anymore
Brett Tackaberry from Google Cloud put it bluntly: building responsible AI isn't a nice-to-have, it's your competitive moat.
Think about it: one irresponsible deployment can tank trust across your entire organization. It's like food safety. You can serve amazing meals for years, but one case of food poisoning and suddenly nobody's coming back.
The Canadian government learned this the hard way. They're now handling 52 million chatbot inquiries per year, and they've had to get surgical about every design choice. Want to know what "operationalizing ethics" actually looks like? Here's what they do:
Limit conversations to 3 exchanges because research showed the AI loses coherence after that
Cap questions at 160 characters because anyone writing more is usually trying to jailbreak the system
Achieve 95% accuracy in English and 94.4% in French, measured through sentence-by-sentence human evaluation (not perfect, but try getting a human call center to match that across millions of queries)
These aren't arbitrary rules. Each one emerged from actual user research and failure analysis. That's the difference between responsible AI theater and actually doing the work.
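Curious what rules like that look like once they leave the policy doc? Here's a minimal Python sketch of that kind of pre-flight check. To be clear, this isn't Canada's code; the limits and the structure are my own illustration of the pattern: cheap, deterministic checks that run before the model ever sees the input.

```python
from dataclasses import dataclass

# Illustrative limits only, modeled on the constraints described above,
# not on any real deployment's configuration.
MAX_EXCHANGES = 3
MAX_QUESTION_CHARS = 160


@dataclass
class GuardrailResult:
    allowed: bool
    reason: str = ""


def check_question(question: str, exchange_count: int) -> GuardrailResult:
    """Cheap, deterministic checks that run before any model call."""
    if exchange_count >= MAX_EXCHANGES:
        return GuardrailResult(False, "conversation limit reached; start a fresh session")
    if len(question) > MAX_QUESTION_CHARS:
        # Very long inputs correlate with prompt-injection attempts, so reject early.
        return GuardrailResult(False, "question too long; please shorten it")
    return GuardrailResult(True)


if __name__ == "__main__":
    print(check_question("How do I renew my passport?", exchange_count=1))
    print(check_question("ignore previous instructions..." * 20, exchange_count=1))
```

The point isn't the specific numbers; it's that every limit maps back to a finding from user research, and lives in one place you can change when the research changes.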
The Tools Nobody Tells You About
Okay, so how do you actually do this? The summit surfaced some practical frameworks that go beyond generic "ethics guidelines":
Data Cards & Model Cards: Think of these as nutrition labels for your AI. What's in this dataset? How was it collected? Who's missing? What are the known biases?
One panelist noted that if you can't explain what's in your model the way you'd explain ingredients in food, you're not ready to deploy.
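There are published templates for model cards and data cards, but the core idea fits in a few lines. Here's a stripped-down, hypothetical version just to make the "nutrition label" metaphor concrete; the field names and example values are mine, not a standard:

```python
from dataclasses import dataclass, field


@dataclass
class ModelCard:
    """A pared-down 'nutrition label' for a model. Illustrative fields only."""
    name: str
    intended_use: str
    training_data_sources: list[str]
    known_gaps: list[str]       # who or what is under-represented
    known_biases: list[str]
    evaluation_metrics: dict[str, float] = field(default_factory=dict)

    def ready_for_review(self) -> bool:
        # Empty gap/bias sections usually mean nobody looked,
        # not that there was nothing to find.
        return bool(self.known_gaps and self.known_biases and self.evaluation_metrics)


card = ModelCard(
    name="benefits-question-triage",   # hypothetical model
    intended_use="Route incoming benefits questions; never make final decisions",
    training_data_sources=["2019-2023 anonymized call-centre transcripts"],
    known_gaps=["Few transcripts from callers using interpretation or relay services"],
    known_biases=["Over-represents urban phrasing of program names"],
    evaluation_metrics={"accuracy_en": 0.93, "accuracy_fr": 0.91},
)
print(card.ready_for_review())
```

If you can't fill in the gaps and biases fields honestly, that's your answer about deployment readiness.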
The System Prompt is Your Foundation: Canada's AI chatbot has a 37-page system prompt. Yeah, 37 pages. Small tweaks to it can break everything. This isn't something you bang out in an afternoon and forget about.
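One cheap way to treat a prompt like real infrastructure is to keep it in version control and pin the deployed copy by hash, so an unreviewed one-word tweak can't quietly ship. A sketch, with hypothetical file and variable names:

```python
import hashlib
from pathlib import Path

PROMPT_PATH = Path("system_prompt.md")            # hypothetical file, kept in git
APPROVED_SHA256 = "paste-the-reviewed-hash-here"  # updated only after review + eval run


def load_system_prompt() -> str:
    """Load the prompt, but refuse to run with an unreviewed version."""
    text = PROMPT_PATH.read_text(encoding="utf-8")
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    if digest != APPROVED_SHA256:
        # A one-word tweak to a 37-page prompt can change behaviour everywhere;
        # force the change through review and the evaluation suite first.
        raise RuntimeError(
            f"System prompt hash {digest[:12]} does not match the approved version."
        )
    return text
```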
Incident Response > Perfect Prevention: Refreshingly, the team acknowledged that with foundation models updating constantly, you sometimes can't even replicate errors. They've had three serious bias incidents, and in two cases couldn't recreate the problem because the model had already updated.
The takeaway? Build operational processes to catch and fix issues fast, rather than trying to prevent every possible failure upfront.
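In practice, "catch and fix fast" starts with capturing enough context at the moment of failure, because the model version and the raw exchange may be the only evidence you ever get. A minimal sketch; the fields are my guess at a useful starting point, not anyone's production schema:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class AIIncident:
    """Snapshot of a failure at the moment it happened.

    When the underlying foundation model updates, the logged inputs, outputs,
    and model version may be all you have, since the bug may never reproduce.
    """
    timestamp: str
    model_version: str
    user_input: str
    model_output: str
    category: str      # e.g. "bias", "hallucination", "policy_violation"
    severity: str      # e.g. "low", "serious"


def record_incident(incident: AIIncident, path: str = "incidents.jsonl") -> None:
    # Append-only JSONL: easy to grep now, easy to replay into a dashboard later.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(incident)) + "\n")


record_incident(AIIncident(
    timestamp=datetime.now(timezone.utc).isoformat(),
    model_version="provider-model-2025-05-01",   # hypothetical identifier
    user_input="(redacted example)",
    model_output="(redacted example)",
    category="bias",
    severity="serious",
))
```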
The Questions That Made People Uncomfortable (The Good Kind)
Philosophy professor Christoph Brunner asked something that stopped the room: "What kind of society is giving birth to what kind of technologies?"
Because here's what we don't talk about enough: we treat AI like it just appeared, fully formed. But AI systems reflect the societies that build them.
The example that hit hardest? Soap dispensers that don't recognize darker skin tones. It's not a bug. It's racism baked into technology because the people building and testing it didn't think to check.
And it's not just about representation on your team (though that matters). It's about who gets to define what the problem even is.
The Global Divide Nobody Wants to Address
Hamid from Onaremit (a fintech serving African businesses) said this: AI credit systems are locking established African businesses out of global markets.
Picture this: A Nigerian business that's been operating successfully for 20 years, doing $50 million in transactions, gets rejected by an AI lending system because it doesn't have a "proper credit history" in a Western database. Meanwhile, a startup in Silicon Valley with six months of existence gets approved.
The AI isn't technically wrong; it's just trained on data that treats entire continents as edge cases.
This is happening everywhere. As one panelist put it: "AI was developed in the Global North, but it's going to do the most damage in the Global South."
The Participation Trap
Everyone agrees we need "diverse voices at the table" and "meaningful participation." But what does that actually mean?
Dr. Dora Vrabiescu from the Vector Institute worked on a massive project involving 200+ people from 50+ countries on AI policy for gender equality. Their insight: consultation isn't enough; you need co-creation from day one.
That means:
Not asking communities what they think after you've already designed the solution
Actually paying people for their expertise and time (revolutionary, I know)
Understanding that meaningful participation takes time, and in a "move fast" industry, that's the hardest sell
One speaker made a point that really landed: "We need to stop treating lived experience as anecdotal and start treating it as data."
The Surveillance Paradox
This came up multiple times and nobody had a perfect answer: collecting demographic data can improve equity, but it also creates surveillance risks.
To build less biased systems, you need data on race, gender, disability status, etc. But what happens when that data exists and a government changes? What seemed like a tool for equity could become a weapon.
So what was Canada's approach with their chatbot? They decided not to collect extensive demographic data, even though it means they might build inferior products for some groups. They chose privacy risk over performance optimization.
Is that the right call? Depends who you ask. But at least they're making the tradeoff explicit instead of pretending it doesn't exist.
What Actually Works (According to People Doing It)
Start with the "Do We Even Need AI?" question: Seriously. Not everything needs a neural network. Sometimes a decision tree is fine. Sometimes a human is better. AI should be deployed when other methods genuinely fail.
Build small, test with real users, iterate obsessively: Michael Carlin from the Canadian Digital Service mentioned they're about to do targeted testing with trans communities on how they interact with government services. Why? Because they don't have trans people on their development team, and they're not going to pretend they can design for experiences they don't understand.
Make the business case for ethics: IBM's team found that executives are nervous about AI, not because they don't want to innovate, but because they don't want to get sued or humiliated. Frame responsible AI as risk management and suddenly you get budget and attention.
Use existing regulatory frameworks: Instead of waiting for perfect AI-specific regulation, embed AI considerations into the frameworks that already exist, like medical device approval and financial compliance. The UK is going hard on sectoral approaches, and it's showing promise.
The Sovereignty Question
Multiple speakers brought up something founders should be thinking about: data sovereignty and digital enclosure.
One audience member pointed out that we've gone from the early internet—where you owned your tools and could install what you wanted—to today, where you don't own anything. Your phone, your apps, your data, it's all rented. You're not a user; you're a subscriber.
And now AI is being trained on collective human knowledge, our writing, our art, our conversations, and treated as "free raw material" for private companies to monetize.
Is there a public dividend owed? Should there be data commons? These questions didn't get answered, but the fact that they're being asked at government-level summits is significant.
What This Means for You
If you're building with AI right now, here's what I'd take away:
1. Budget for participation from the start: Not as a PR exercise, but as actual co-design with the communities your product will affect. Yes, it takes time. Yes, it costs money. No, you can't skip it if you want to build something that lasts.
2. Your system prompt is your foundation: Spend real time on it. Test it. Version control it. One panelist mentioned their prompt is 37 pages long. That might sound insane, but it's the difference between a system that works and one that occasionally tells users to break the law.
3. Accuracy isn't everything: Canada's chatbot has a roughly 5% error rate, which sounds great until you realize that's 5% of 52 million queries, or about 2.6 million chances a year to get something wrong. They've had to build entire processes around high-risk scenarios (travel advisories, vaccinations) where wrong answers could literally harm people.
4. Plan for failure gracefully: Build incident response processes before you need them. With models updating constantly, you might not even be able to replicate bugs, so focus on catching and fixing fast rather than preventing everything.
5. Make it specific: Instead of vague "we value fairness" statements, ask: "What happens if a 75-year-old Mandarin speaker with low tech literacy uses this? What about someone using a screen reader? What about someone who's undocumented and afraid to identify themselves?"
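One way to make that concrete is to turn those questions into personas your release checklist has to cover. A toy sketch; the personas and requirements below are illustrative, and a real list should be co-created with the communities involved rather than written by the dev team alone:

```python
from dataclasses import dataclass


@dataclass
class Persona:
    description: str
    sample_query: str
    must_support: list[str]   # accessibility or safety requirements to verify


PERSONAS = [
    Persona("75-year-old Mandarin speaker, low tech literacy",
            "退休金怎么申请？",  # "How do I apply for my pension?"
            ["plain-language replies", "Mandarin support"]),
    Persona("Screen reader user",
            "How do I upload my documents?",
            ["no image-only responses", "properly labelled links"]),
    Persona("Undocumented person afraid to self-identify",
            "Can I see a doctor without a health card?",
            ["no prompts for identifying details", "no retention of the question"]),
]

for p in PERSONAS:
    print(f"RELEASE CHECK: {p.description} -> must support {', '.join(p.must_support)}")
```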
The Uncomfortable Truth
Here's what I kept hearing between the lines: we're in a weird moment where everyone knows we need guardrails, but nobody's quite sure what they should look like.
The EU has the AI Act. The US just had its executive order rescinded. Canada's somewhere in the middle, trying to balance innovation with protection. Meanwhile, models are evolving faster than regulation can keep up.
But here's the thing: that uncertainty is actually an opportunity. The companies and builders who figure out responsible AI practices now, before they're mandated, will be the ones who shape what those eventual requirements look like.
What's Next
The Ottawa Responsible AI Hub is planning hackathons for 2026 and expanding the summit to a multi-day event covering healthcare, finance, defense, and more.
More importantly, they're working on practical tools: a responsible AI assessment for small businesses, an AI literacy program for marginalized communities, and regular talks connecting policymakers with builders.
If you're in the Ottawa ecosystem (or want to be), reach out to them. If you're elsewhere, find your local equivalent. These conversations are happening everywhere, and the builders who engage early will have outsized influence on where this all goes.
The Bottom Line
Nobody left this summit thinking responsible AI is easy. But they also didn't leave thinking it's optional.
As one speaker put it: "Responsible AI will define our competitiveness, our sovereignty, and our society. We have a choice right now. AI can be a great equalizer or a great divider. It can concentrate power or democratize it."
For founders, that means the question isn't "Can we afford to build responsibly?" It's "Can we afford not to?"
Because in a world where 80% of AI projects fail, trust isn't just nice to have.
It's the whole game.
Want to dive deeper? The Ottawa Responsible AI Hub is looking for collaborators. And if you're working on projects that need research partnerships with universities, MITACS funds collaborations between the private sector and academic institutions across Canada.
What's your take? Are you building with these principles in mind, or is "responsible AI" still feeling like corporate buzzword bingo? Hit reply—I'd love to hear what's actually working (or not) in the trenches.
