When Brett Levenson left Apple in 2019 to join Facebook, the social network was already facing intense scrutiny after the Cambridge Analytica scandal. Content moderation was under the spotlight, and like many others, Levenson believed better tech could help fix it.
But once he stepped inside, the reality was different.
Moderators were working under tough conditions. They had to rely on long policy documents that weren’t always clearly written, especially after translation. And they had very little time—roughly 30 seconds—to review each flagged post and decide what action to take.
It wasn’t just about spotting a violation. They had to quickly choose whether to remove the content, limit its reach, or ban the user altogether.
Even with all that effort, the results weren’t great. Decisions were often inconsistent, and in many cases, action came after the harm had already spread.
That experience stuck with Levenson.
As online platforms grew and bad actors became more sophisticated, it became clear that the old way of handling moderation wasn’t enough. The situation has only become more complicated with AI, where chatbots and image tools have sometimes produced harmful or unsafe content.
Instead of patching the system, Levenson started thinking about a different approach.
What if policies weren’t just documents people had to read—but systems that could actually run and enforce rules in real time?
That idea led to Moonbounce.
The startup recently raised $12 million in funding from Amplify Partners and StepStone Group. Its focus is simple: add a real-time safety layer wherever content is being created, whether by users or AI.
Moonbounce’s system reads a company’s policies, checks content instantly, and responds in a fraction of a second. Depending on how it’s set up, it can either pause content for review or block it right away if it sees a risk.
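Moonbounce's actual implementation isn't public, but the decision flow described above can be sketched in miniature. Everything here is hypothetical: the rule format, the `check_content` function, and the keyword matching are illustrative stand-ins for whatever policy engine the company actually runs.

```python
from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    ALLOW = "allow"
    REVIEW = "review"  # pause the content for human review
    BLOCK = "block"    # reject it outright


@dataclass
class PolicyRule:
    name: str
    keywords: tuple     # simplistic trigger; a real system would use classifiers
    action: Action


def check_content(text: str, rules: list) -> Action:
    """Return the most severe action triggered by any matching rule."""
    severity = {Action.ALLOW: 0, Action.REVIEW: 1, Action.BLOCK: 2}
    result = Action.ALLOW
    lowered = text.lower()
    for rule in rules:
        if any(k in lowered for k in rule.keywords):
            if severity[rule.action] > severity[result]:
                result = rule.action
    return result


# Hypothetical rules derived from a platform's written policy
rules = [
    PolicyRule("self-harm", ("hurt myself",), Action.REVIEW),
    PolicyRule("doxxing", ("home address",), Action.BLOCK),
]

print(check_content("here is her home address", rules))  # Action.BLOCK
```

The key design point mirrored here is that the configuration, not the code, decides whether a risky match is paused or blocked.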
The company is already working with platforms across different areas—dating apps, AI companions, and image generation tools. It processes millions of moderation checks daily and supports platforms with large user bases.
Some of its customers include Channel AI, Civitai, Dippy AI, and Moescape.
Levenson sees safety differently from how it’s traditionally handled. For him, it’s not just something that happens behind the scenes—it can actually improve the product itself.
That idea is becoming more relevant as AI companies face growing pressure. Concerns about chatbot behaviour and misuse of AI-generated content are increasing, and companies are being pushed to take responsibility.
Moonbounce steps in as an independent layer between users and AI systems, focusing purely on enforcing rules in real time without getting caught up in the full context of conversations.
Levenson is building the company alongside Ash Bhardwaj, a former Apple colleague with experience in large-scale AI systems. Their next focus is something they call “iterative steering.”
Instead of simply blocking harmful interactions, the system aims to guide conversations in a safer direction—adjusting prompts on the fly so the AI responds in a more helpful and supportive way.
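One way to picture "iterative steering" is a safety layer that rewrites the prompt before it reaches the model, rather than refusing the request. The sketch below is purely illustrative; the `steer_prompt` function and its guidance text are assumptions, not Moonbounce's API.

```python
def steer_prompt(user_message: str, flags: list) -> str:
    """If a safety check raised flags, prepend guidance so the model
    responds supportively instead of the request being blocked.
    Hypothetical sketch; a real system would adjust per conversation turn."""
    if not flags:
        return user_message
    guidance = (
        "System note: the user may be discussing " + ", ".join(flags)
        + ". Respond in a supportive tone, avoid harmful detail, "
        + "and point to help resources where appropriate.\n\n"
    )
    return guidance + user_message


steered = steer_prompt("I've been feeling really low lately", ["self-harm"])
```

Run on every turn, this kind of adjustment nudges the conversation toward a safer direction without interrupting it, which is the behaviour the founders describe.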
Looking ahead, Moonbounce could easily become an acquisition target for larger tech companies. But Levenson is cautious about that path.
He doesn’t want the technology to be locked away or limited to one platform. In his view, tools that improve online safety should be widely available—because the challenges they address affect the entire industry.