📖 5 min read
On Thursday, April 30, 2026, Elon Musk sat in a federal courtroom in California and admitted – under oath – that his AI company xAI used OpenAI’s own models to help build Grok. The admission came during cross-examination in the ongoing Musk vs. Altman lawsuit, and it’s one of the most striking moments of the entire trial.
The exchange, as captured by WIRED, went like this:
OpenAI Lawyer William Savitt: Do you know what distillation is?
Musk: It means to use one AI model to train another AI model.
Savitt: Has xAI done that with OpenAI?
Musk: Generally all the AI companies [do that].
Savitt: So that’s a yes.
Musk: Partly.
When pressed further on whether OpenAI’s technology had been used to develop xAI’s models, Musk responded: “It is standard practice to use other AIs to validate your AI.”
What Is Model Distillation – and Why Does It Matter?
Model distillation is a technique where a smaller AI model is trained to mimic the outputs of a larger, more capable “teacher” model. The result: the smaller model becomes cheaper and faster to run while preserving much of the bigger model’s intelligence. Think of it as a student copying homework from the smartest kid in class – then submitting it as their own work.
Used legitimately, distillation helps companies optimize their own models. Used aggressively, it lets a competitor shortcut years of expensive research by learning from a rival’s outputs. The line between the two is legally murky, which is precisely why OpenAI’s lawyer put Musk on the spot about it.
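To make the teacher/student idea concrete, here is a minimal sketch of distillation using NumPy. Everything in it is invented for illustration: the "teacher" is just a fixed linear classifier, and the "student" is trained by gradient descent to match the teacher's temperature-softened output probabilities. Real distillation works on neural networks at vastly larger scale, but the mechanic is the same.

```python
import numpy as np

def softmax(z, T=1.0):
    """Softmax with temperature T; higher T produces softer targets."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)

# Hypothetical "teacher": a fixed linear classifier with made-up weights.
W_teacher = rng.normal(size=(4, 3))
X = rng.normal(size=(256, 4))
teacher_probs = softmax(X @ W_teacher, T=2.0)  # softened teacher outputs

# "Student": starts from scratch and learns only from the teacher's outputs,
# never from ground-truth labels -- that is the essence of distillation.
W_student = np.zeros((4, 3))
lr = 0.5
for _ in range(300):
    student_probs = softmax(X @ W_student, T=2.0)
    # Gradient of the cross-entropy between student and teacher distributions.
    grad = X.T @ (student_probs - teacher_probs) / len(X)
    W_student -= lr * grad

# After training, the student's hard predictions largely agree with the teacher's.
agreement = np.mean(
    np.argmax(X @ W_student, axis=1) == np.argmax(X @ W_teacher, axis=1)
)
print(f"student/teacher agreement: {agreement:.2%}")
```

The student never sees the original training data or labels, only the teacher's responses, which is why the legal status of doing this against a competitor's API is so contested.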
The Irony Is Hard to Ignore
Here is where this story gets genuinely remarkable. Musk is suing OpenAI, in part, over what he claims is OpenAI’s betrayal of its original nonprofit mission. His lawsuit argues OpenAI has acted improperly and in ways that harm public interest. Yet on the stand, he just confirmed that his own AI startup – a direct OpenAI competitor – used OpenAI’s technology to get ahead.
The parallel to recent AI geopolitics is striking. In a February 2026 memo to a House committee, OpenAI wrote that it has “taken steps to protect and harden our models against distillation,” specifically citing DeepSeek as the threat. OpenAI framed the issue as a national security matter: “China can’t advance autocratic AI by appropriating and repackaging American innovation.”
In April 2026, Michael Kratsios, the White House’s director of the Office of Science and Technology Policy, issued a separate memo on preventing Chinese companies from distilling American AI models. The Trump administration has made model distillation a front in the US-China tech war.
And yet, sitting in that California courtroom, Musk – the man who donated roughly $250 million to Donald Trump’s 2024 presidential campaign – admitted his company was doing the same thing to OpenAI that the US government is trying to prevent China from doing to everyone else.
How Does Grok Compare to What It Was Supposedly Copying?
| Model | Company | Training Scale | Key Benchmarks (as of early 2026) |
|---|---|---|---|
| GPT-4o | OpenAI | Undisclosed | Strong MMLU, coding, reasoning |
| Grok 3 | xAI | ~200,000 GPUs (Colossus), 10x Grok 2 | Claims competitive with GPT-4o class |
| Claude 3.7 Sonnet | Anthropic | Undisclosed | Top-tier coding, instruction-following |
Grok 3 was trained with roughly 10 times more computing power than its predecessor, Grok 2, using the Colossus data center and its roughly 200,000 GPUs. That is a massive infrastructure investment. Whether distillation from OpenAI models was a key part of how Grok achieved its performance – or merely a minor validation step – is something the court will have to dig into further.
The “Everyone Does It” Defense
Musk’s defense – that “generally all the AI companies” use distillation – is not entirely wrong. It is a known practice. OpenAI itself has used distillation internally, as have Google, Anthropic, and Meta. The difference is whether a company is distilling its own models (legal and standard) or using a competitor’s model outputs without permission (legally gray at best, and outright banned under most terms of service).
OpenAI’s terms of service explicitly prohibit using its outputs to train competing AI models. If xAI accessed GPT-4 or ChatGPT’s outputs and used them to train Grok, that would be a direct violation – and potentially the most damaging evidence to emerge from this trial, not for Musk’s lawsuit against OpenAI, but for OpenAI’s potential counterclaims against him.
Musk’s “partly” answer and his dodge into “standard practice” language suggest his legal team is walking a tightrope: acknowledging enough to seem forthcoming while not conceding anything that could be used as an admission of wrongdoing.
What Happens Next
The trial is ongoing. OpenAI’s legal team has more questions to ask, and the distillation admission is almost certainly not the last bombshell. The outcome of this case could reshape how the entire AI industry thinks about model outputs – whether they count as intellectual property, whether distillation from API access violates ToS in a legally enforceable way, and whether AI companies can sue each other for training on each other’s data.
For everyday users, the practical implication is simpler: every AI chatbot you use today was almost certainly influenced, in some way, by every other AI chatbot. The question courts now have to answer is whether that is innovation or theft – and who gets to decide.
BetOnAI Verdict
Story significance: 9/10. This is the kind of courtroom moment that reshapes an entire industry’s legal landscape. Musk’s admission – even hedged as “partly” – hands OpenAI’s lawyers a powerful data point. The irony is almost too clean: the man suing OpenAI for betraying its mission, who simultaneously pushed for cracking down on China’s AI model-copying, just admitted his own company did the same thing.
The distillation question is not going away. If courts rule that using a competitor’s model outputs to train your own AI is legally actionable, it will affect every lab on the planet. If they rule it is standard practice and therefore acceptable, it will accelerate the race to copy-and-improve at a pace that makes today’s AI competition look slow.
Watch this trial closely. The Musk vs. Altman lawsuit started as a dispute about nonprofit governance. It is turning into a landmark case about who owns AI intelligence itself.
Sources
- WIRED – Elon Musk Seemingly Admits xAI Has Used OpenAI’s Models to Train Its Own
- The Verge – Elon Musk confirms xAI used OpenAI’s models to train Grok
- TechCrunch – Elon Musk testifies that xAI trained Grok on OpenAI models