A former Facebook executive has turned his frustration with the platform’s broken content moderation system into a new AI-era startup, and investors are taking notice.
Brett Levenson left Apple in 2019 to lead business integrity at Facebook, arriving in the middle of the Cambridge Analytica fallout. He quickly discovered the problem ran far deeper than technology.
Human reviewers were expected to memorize a 40-page policy document, often machine-translated into their language, and then had roughly 30 seconds per piece of flagged content to decide whether it violated the rules and what action to take.
The results were alarming. Those rapid-fire decisions were only “slightly better than 50% accurate,” Levenson said, and that judgment often came days after the harm had already occurred.
That experience pushed Levenson toward a new concept, “policy as code”: a way to transform static policy documents into executable, updatable logic tied directly to enforcement.
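To make the idea concrete: Moonbounce has not published how its system works, but a minimal sketch of “policy as code” might look like the following, where a written rule becomes an executable check that can be versioned, updated, and applied the moment content appears rather than days later. All names and rules here are hypothetical.

```python
# Illustrative sketch only — Moonbounce's actual implementation is not public.
# "Policy as code": a policy document's rules become executable functions,
# so enforcement happens at generation time instead of after the fact.

from dataclasses import dataclass

@dataclass
class Decision:
    allowed: bool
    reason: str

# Hypothetical rule set standing in for a static policy document.
BANNED_TOPICS = {"self-harm instructions", "nonconsensual imagery"}

def evaluate(content: str) -> Decision:
    """Apply the coded policy to one piece of content and return a decision."""
    lowered = content.lower()
    for topic in BANNED_TOPICS:
        if topic in lowered:
            return Decision(allowed=False, reason=f"violates rule: {topic}")
    return Decision(allowed=True, reason="no rule matched")

print(evaluate("a recipe for banana bread"))
print(evaluate("a request for nonconsensual imagery"))
```

Because the rules live in code rather than a PDF, updating a policy means shipping a new rule, not retraining thousands of reviewers on a revised document.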
That insight led to the founding of Moonbounce, which announced Friday it has raised $12 million in seed funding, co-led by Amplify Partners and StepStone Group. The startup works with companies to provide an additional safety layer wherever content is generated, whether by a human user or by AI.
The timing is critical. AI companies are facing mounting legal and reputational pressure after chatbots have been accused of pushing vulnerable users toward self-harm and image generators have been exploited to create nonconsensual imagery. Internal safety guardrails are failing, and it’s becoming a liability question.
Amplify Partners’ Lenny Pruss said the firm envisions “objective, real-time guardrails” becoming the backbone of every AI-mediated application, a vision Moonbounce is now racing to build.




