In an ongoing landmark trial that could reshape the future of artificial intelligence, Elon Musk took the stand, asserting that OpenAI was his brainchild before co-founders Sam Altman and Greg Brockman allegedly betrayed its core nonprofit principles for massive profits.
The CEO of Tesla and SpaceX characterized the lawsuit as a fight not just for his vision but for the integrity of charitable organizations across America. “If we make it OK to loot a charity, the entire foundation of charitable giving in America will be destroyed,” Musk testified in federal court in Oakland, California.
Musk detailed his pivotal role in OpenAI’s founding in 2015. “I came up with the idea, the name, recruited the key people, taught them everything I know, provided all of the initial funding,” he stated under oath. He emphasized that OpenAI was deliberately structured as a charity meant to benefit humanity, not enrich individuals. “It was specifically meant to be for a charity that does not benefit any individual person. I could’ve started it as a for-profit and I specifically chose not to.”
Musk’s Founding Vision Behind OpenAI: Countering Google and Prioritizing AI Safety
Musk’s testimony highlighted his long-standing concerns about artificial intelligence risks. He recounted deep conversations with Google co-founder Larry Page on AI safety, noting that Page did not share the same level of urgency about protecting humanity.
After meetings with former President Barack Obama also failed to adequately address the dangers, Musk pushed for a counterweight to Google’s DeepMind. “I’ve had extreme concerns about AI for a very long time,” Musk told the court. He positioned OpenAI as an essential nonprofit alternative to prevent profit-driven giants from dominating potentially existential technology. Musk claimed he contributed approximately $38 million and leveraged his connections—including approaches to Microsoft CEO Satya Nadella and Nvidia’s Jensen Huang—to secure critical computing resources in the early days.
OpenAI began as a nonprofit research lab in Greg Brockman’s apartment, with the stated goal of developing AI to benefit all of humanity and act as a steward against unchecked corporate interests.
The Case of Alleged Betrayal: From Nonprofit to Profit Juggernaut
Musk left OpenAI’s board in 2018. Just 13 months later, in March 2019, the organization created a for-profit entity, paving the way for massive investments, including Microsoft’s $10 billion infusion in 2023. OpenAI has since ballooned into a company valued at over $850 billion, with talks of a potential IPO that could push its valuation toward $1 trillion.
Musk’s legal team argues this shift abandoned the founding mission. His attorney, Steven Molo, told jurors that OpenAI was “not a vehicle for people to get rich.” The suit accuses Altman, Brockman, and OpenAI of breach of charitable trust and unjust enrichment.
Musk is seeking $150 billion in damages from OpenAI and Microsoft, with proceeds directed to OpenAI’s charitable arm. He also demands the company revert to full nonprofit status, with Altman and Brockman removed from leadership and Altman ousted from the board.
During testimony, Musk warned of AI’s existential risks, stating it “could kill us all” if not handled responsibly, comparing uncontrolled development to scenarios in The Terminator while aspiring toward a more benevolent Star Trek future.
OpenAI’s Defense: Musk Wanted Control and Profit
OpenAI’s lawyer, William Savitt, presented a starkly different narrative in opening statements. He claimed Musk himself pushed for a for-profit structure early on, seeing “dollar signs,” and wanted to lead as CEO. According to Savitt, Musk only sued after failing to gain “the keys to the kingdom.”
Savitt argued the 2019 for-profit shift was necessary to attract capital, compete with Google’s DeepMind, acquire computing power, and retain top talent. He accused Musk of hypocrisy, noting that after leaving OpenAI, Musk launched his own AI venture, xAI, which trails OpenAI in usage.
OpenAI’s side also disputed Musk’s emphasis on safety, alleging he once dismissed employees focused on it with derogatory terms. Microsoft’s lawyer defended the partnership as “responsible every step of the way.”
Before testimony, U.S. District Judge Yvonne Gonzalez Rogers admonished Musk for his posts on X (formerly Twitter), where he referred to Altman as “Scam Altman” and accused him of stealing a charity. The judge urged restraint on social media to avoid influencing the case outside the courtroom, and Musk agreed to minimize such activity. Altman reportedly made a similar commitment.
The Implications for AI and Charitable Trusts
The trial offers a rare glimpse into the egos, ambitions, and power struggles that transformed OpenAI from a small nonprofit into one of the most influential forces in technology. It raises profound questions about whether nonprofits in rapidly scaling fields like AI can maintain their missions when billions in private capital come calling.
Critics of OpenAI’s evolution point to repeated mission statement tweaks and the 2025 shift toward a public benefit corporation structure, where the original nonprofit holds a reported 26% stake plus warrants. Musk’s team frames this as validation of their concerns about “looting” the original charitable intent.
Musk’s xAI, focused on understanding the universe and pursuing truth-seeking AI, stands as his alternative vision. Supporters cast his lawsuit as a principled stand against the commercialization of what was promised as a public good.
As the trial continues—with Musk expected to resume testimony Wednesday, followed by Altman, Brockman, and potentially Satya Nadella—the case could impact OpenAI’s IPO plans, investor confidence, and public trust in AI development. It also spotlights growing concerns about AI safety and corporate governance in one of the most transformative technologies of our time.
Musk has long warned that poorly aligned AI poses civilization-level risks. His courtroom battle underscores a core belief: powerful technology must serve humanity broadly, not a handful of executives or shareholders. Whether the jury agrees will be decided in the coming weeks, but the stakes—for charitable principles, AI ethics, and the balance of power in Silicon Valley—could not be higher.