Chris Lehane has long been known as a top-tier operator when it comes to managing bad press. He served as Al Gore’s press secretary during the Clinton administration and later became Airbnb’s go-to fixer during its many regulatory clashes around the globe. Now, he’s taken on what might be his most difficult assignment yet: serving as OpenAI’s Vice President of Global Policy, where his role is to convince the public that the company is truly committed to democratizing artificial intelligence — even as its actions increasingly mirror those of the very tech giants it once claimed to be different from.
I spent 20 minutes on stage with him at the Elevate conference in Toronto earlier this week, hoping to move beyond the rehearsed messaging and dig into the deeper contradictions at the core of OpenAI’s public image. That proved difficult. Lehane is polished, engaging, and thoughtful. He even spoke about waking up at 3 a.m., questioning whether OpenAI’s work will truly serve humanity. Still, charisma and self-awareness don’t cancel out the increasingly controversial decisions the company continues to make.
Intentions can only go so far when the company is sending legal threats to critics, setting up resource-hungry data centers in vulnerable communities, and enabling the digital resurrection of deceased celebrities — all while presenting itself as a force for good.
At the center of this growing tension is OpenAI’s newest and flashiest tool: Sora. The video-generation app launched last week to massive buzz and immediate controversy: its feed quickly filled with videos built on copyrighted characters, raising alarms for a company already facing major lawsuits from media outlets and content creators.
Despite the legal risks, the launch was a hit from a branding perspective. The app quickly shot to the top of the charts as users churned out surreal, often unauthorized videos featuring themselves, public figures, cartoon characters, and deceased celebrities.
When I asked Lehane why OpenAI moved forward with this version of Sora, he delivered a well-rehearsed line: Sora is a transformative tool — like electricity or the printing press — that gives everyone, not just artists, the ability to be creative. He even joked that someone like him, who claims no creative talent, can now make videos.
But what he didn’t say outright is that OpenAI launched Sora with an “opt-out” policy, under which copyright holders had to proactively ask for their work to be excluded. That inverts how copyright normally works, where creators must explicitly grant permission before their work is used. Then, once it saw how eagerly users embraced that kind of content, the company shifted toward an opt-in model. That’s not innovation; it’s testing the limits of what they can get away with.
The entertainment industry, especially film and TV companies, did raise objections. But so far, OpenAI seems to have sidestepped serious consequences — at least for now. It’s yet another example of how the company pushes boundaries, often without accountability.
This mirrors the frustration of writers, journalists, and publishers who argue OpenAI used their work without compensation. When I brought this up with Lehane, he leaned on the U.S. doctrine of “fair use,” a legal framework meant to balance creators’ rights against the public’s interest in accessing and building on information.
He called fair use a strategic advantage for American tech companies. That may be true legally, but morally it raises questions — especially when AI systems can now replace the very content they’re trained on. I told Lehane: “It’s not just innovation — it’s replacement.”
That seemed to momentarily shake him. Dropping the PR script, he admitted we don’t yet have solutions. “It’s easy to talk about new revenue models,” he said. “But I think we’ll get there.” The subtext: no one knows exactly how this will work.
Then there’s the elephant in the room — the immense power and energy demands of these systems. OpenAI already has a data center in Abilene, Texas, and has begun building another massive site in Lordstown, Ohio. Both are towns with economic struggles — and both are now hosting facilities that consume staggering amounts of electricity and water.
Lehane has said AI access is like electricity — those who got it last are still catching up. But OpenAI is now setting up power-hungry infrastructure in places that may not be in a position to say no. When I asked whether these communities will benefit or just bear the cost, Lehane responded with numbers and geopolitics.
He mentioned that OpenAI expects to need roughly a gigawatt of new energy capacity every week, and compared that pace to China’s rapid buildout of its grid. His point: if democracies want to lead in AI, they need to invest at that scale. He painted a picture of modernized power grids and revitalized economies.
It was optimistic, even inspiring. But it didn’t really answer whether the residents of Lordstown or Abilene will be stuck with higher utility bills so people can generate celebrity deepfakes.
That led to the most uncomfortable topic of all. Just a day before our conversation, Zelda Williams — daughter of the late Robin Williams — posted an emotional plea online. She asked people to stop sharing AI-generated videos of her father. “You’re not making art,” she wrote. “You’re making grotesque imitations of real human lives.”
When I asked Lehane how OpenAI balances these deeply personal harms with its mission to serve humanity, he again leaned into policy and process. He spoke of frameworks, testing protocols, and working with governments. But he admitted there’s no clear roadmap for this kind of ethical territory.
In some ways, Lehane seemed genuinely burdened by the responsibility. He told the audience he often wakes up in the middle of the night, thinking about democratization, energy infrastructure, and AI’s global impact. “There’s a lot riding on this,” he said.
And I believe him. He’s not just spinning; he’s carrying the weight of a mission that, increasingly, doesn’t match the reality. Watching him, I saw someone trying to thread a near-impossible needle: defending a company whose actions are often at odds with its ideals.
Then something unexpected happened the day after our talk. A lawyer named Nathan Calvin, who works on AI regulation at a nonprofit, revealed that OpenAI had sent a sheriff’s deputy to his home — during dinner — to serve him with a subpoena.
The subpoena demanded private communications between Calvin and legislators, students, and former OpenAI employees. Calvin believes it was a tactic to intimidate him over his work on SB 53, a California AI-safety bill that OpenAI opposed. He had also publicly mocked the company’s claim that it helped shape the bill.
He accused OpenAI of using its legal dispute with Elon Musk as a smokescreen to target independent critics. He called Lehane, specifically, “a master of the political dark arts” — not as praise, but as a condemnation of how OpenAI is wielding power.
In Washington, such a label might be a badge of honor. But at a company that claims its goal is to benefit all of humanity, it sounds like a damning contradiction.
More concerning still, disillusionment appears to be growing inside the company. After Sora’s release, multiple current and former OpenAI employees publicly voiced serious concerns about the company’s direction.
One researcher said that while the tech was impressive, it was too soon to declare victory over the dangers of deepfakes. Another executive, Josh Achiam — who oversees mission alignment — publicly expressed alarm over the company’s recent actions.
Achiam wrote that he feared OpenAI was crossing the line from being a “virtuous power” to a “frightening one.” He even acknowledged that speaking out might jeopardize his job — a rare moment of open dissent within the leadership ranks.
That comment hit hard. Here was someone who had chosen to work at OpenAI because of its idealistic mission — now questioning whether the company still lives up to those ideals.
And so the real issue may not be whether Lehane can successfully sell OpenAI’s story. The more important question is whether the people inside the company still believe it.