On Wednesday, Meta dropped Muse Spark, the first AI model from Meta Superintelligence Labs, the division they built around Alexandr Wang after poaching him from Scale AI last June in a deal that valued Scale at $14.3 billion. The stock popped 6.5%. The headlines were generous. And if you read past the press release, the picture is a lot more complicated.
Here's the thing: Meta just told investors they're spending between $115 billion and $135 billion on AI-related capital expenditures in 2026 alone. That's nearly double what they spent last year. And the first concrete product of all that spending is a model they describe as "small and fast by design" that, by their own admission and by third-party benchmarks, still can't code as well as the competition.
That's not nothing. But it's not what $135 billion should buy you either.
What Muse Spark Actually Is
Let's start with what they built. Muse Spark is a natively multimodal reasoning model. It handles text, images, and tool use. It supports what Meta calls "visual chain of thought" and multi-agent orchestration. The headline feature is something called Contemplating Mode, where the model spawns multiple agents that reason in parallel to solve harder problems.
That Contemplating Mode is the interesting part. It's Meta's answer to Google's Gemini Deep Think and OpenAI's GPT Pro extended reasoning modes, except that instead of one model thinking longer, it uses a squad of agents working simultaneously. In theory, that's a clever architectural bet. Parallel reasoning could scale differently than serial chain-of-thought. In practice, we'll see.
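To make the distinction concrete, here's a minimal sketch of the fan-out-and-aggregate pattern that parallel reasoning implies. Meta hasn't published an API or implementation details for Contemplating Mode, so the `query_model` call, the prompts, and the majority-vote step below are placeholders of my own; this is the general shape of the idea, not Meta's code.

```python
import asyncio
from collections import Counter

# Hypothetical stand-in for a model call; Meta has not published an API
# for Muse Spark, so this whole interface is invented for illustration.
async def query_model(prompt: str, approach: str) -> str:
    # A real system would hit an inference endpoint here and return the
    # agent's final answer for its assigned reasoning angle.
    await asyncio.sleep(0)  # placeholder for network latency
    return f"answer-via-{approach}"

async def contemplate(problem: str, approaches: list[str]) -> str:
    """Fan the same problem out to several agents in parallel, then
    aggregate their answers (here: a simple majority vote)."""
    tasks = [query_model(f"{problem}\nReason via: {a}", a) for a in approaches]
    answers = await asyncio.gather(*tasks)  # all agents run concurrently
    # Serial chain-of-thought would instead be one long call to one agent;
    # the bet is that diverse parallel attempts plus aggregation does better.
    return Counter(answers).most_common(1)[0][0]

if __name__ == "__main__":
    print(asyncio.run(
        contemplate("Prove or refute X.", ["algebraic", "geometric", "numeric"])
    ))
```

The hard part hiding in that sketch is the aggregation step: a majority vote is trivial for short answers, but reconciling several long, divergent reasoning traces is where the real engineering lives.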
The model itself was built from scratch. Meta's blog said the team "rebuilt our AI stack from the ground up" over nine months, and that this is just the foundation; the next generation is already in development. They're positioning Muse Spark as the beginning, not the product.
But the beginning of what, exactly?
The Gap Nobody's Ignoring
The New York Times put it bluntly: Muse Spark "performed better than Meta's previous AI models but lags rivals on coding ability." That's a diplomatic way of saying it can't do one of the things enterprises care about most.
Coding is where the money is right now. Every major AI company (OpenAI, Anthropic, Google) is racing to build the best coding assistant because that's what enterprises are actually paying for. Developers are the first large-scale AI user base that reliably generates revenue. When your model can't compete on coding, you're missing the market that's paying the bills.
Meta's pitch is that Muse Spark excels at "science, math, and health" reasoning. That's fine. Those are real capabilities. But they're also capabilities that are harder to monetize directly, and Meta doesn't have the distribution infrastructure for enterprise AI the way OpenAI (via Microsoft) or Google (via Cloud) does.
So the question becomes: who is this for? Meta AI users on Facebook and Instagram? Developers building on Meta's platform? Enterprises evaluating their AI stack? The answer seems to be "everyone, eventually," which usually means "nobody in particular, right now."
The Alexandr Wang Bet
The backstory matters here. Meta hired Alexandr Wang, the founder and CEO of Scale AI, in June 2025 as part of a deal that valued Scale at $14.3 billion. Wang now runs Meta Superintelligence Labs, the unit that produced Muse Spark. This was Zuckerberg's biggest bet in the AI race: bring in the guy who built the data infrastructure that half the AI industry relies on and let him build something from scratch.
Nine months later, the first output is a model that Meta itself describes as "small and fast." That's not an insult; there's real strategy in shipping something lean and iterating. But it's worth comparing timelines. When OpenAI brought in Mira Murati (and later shuffled her out), they shipped GPT-4 within a comparable window. When Google reorganized around Gemini, they had a competitive frontier model within months. Meta's first shot is deliberately sub-frontier, and they're telling you the real stuff is coming later.
That takes confidence. Or it takes not having a choice.
"Over the last nine months, Meta Superintelligence Labs rebuilt our AI stack from the ground up, moving faster than any development cycle we have run before." โ Meta, clearly aware they needed to frame the speed, not the output.
The Money Problem
Let's talk about the number that should make everyone uncomfortable: $115 billion to $135 billion in AI capex for 2026. That's not total spending; that's capital expenditures alone, mostly on data centers and compute hardware. Nearly double what they spent in 2025.
For comparison, OpenAI is approaching $25 billion in annualized revenue. Anthropic is at about $19 billion. Those are revenues, not costs. Meta is spending more on infrastructure in a single year than either of those companies generates. The implicit bet is that AI capabilities will eventually translate into Meta's existing revenue streams: better ad targeting, more engaging feeds, new product surfaces. But "eventually" is doing a lot of heavy lifting when you're writing checks that large.
The open-source strategy also shifted, and that matters. Meta's previous models, the Llama family, were open source, which earned the company enormous developer goodwill and ecosystem adoption. Muse Spark is proprietary, and Meta says it's unclear whether the model will be available outside its own products. That's a significant strategic pivot. They went from "we'll democratize AI" to "actually, we need to keep this one."
That pivot tells you something about what they think the competitive landscape looks like. Open-sourcing your models is a great strategy when you're behind: you get ecosystem adoption, developer contributions, and goodwill. It's a harder strategy to maintain when you're trying to build something that actually differentiates your products. Meta seems to have decided they need proprietary advantages more than they need community love.
What Contemplating Mode Tells Us About Where AI Is Going
Strip away the corporate strategy and the Contemplating Mode feature is genuinely interesting as a technical direction. The idea that you can improve reasoning not by making one model think harder but by having multiple agents think in parallel is a real architectural insight.
It's similar to what Perplexity is doing with their multi-model orchestration, but at a different level. Perplexity routes different tasks to different specialized models. Meta is having multiple instances of the same model attack the same problem from different angles. Both are bets against the "one model to rule them all" thesis that dominated 2024 and early 2025.
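If it helps, the contrast between the two bets fits in a few lines. Everything below is a toy of my own construction, not either company's actual system: the router, the model stand-ins, and the reconciliation step are made up.

```python
def classify(task: str) -> str:
    # Toy router: a real system would use a learned classifier or heuristics.
    return "coding" if "code" in task.lower() else "reasoning"

def route_then_answer(task: str, specialists: dict) -> str:
    """Perplexity-style orchestration: pick the single specialist model
    best suited to the task, then ask it once."""
    return specialists[classify(task)](task)

def fan_out_then_aggregate(task: str, model, n: int = 4) -> str:
    """Contemplating-Mode-style (as described): ask the same model n times
    from different angles, then reconcile the attempts."""
    attempts = [model(f"{task} [angle {i}]") for i in range(n)]
    return max(set(attempts), key=attempts.count)  # crude majority vote

if __name__ == "__main__":
    specialists = {"coding": lambda t: "a patch", "reasoning": lambda t: "a proof"}
    print(route_then_answer("Write code for a parser", specialists))
    print(fan_out_then_aggregate("Prove the lemma", lambda prompt: "a proof"))
```

Routing spends your compute on picking the right model; fan-out spends it on redundancy and diversity of attempts. They're answers to different failure modes, which is why both can be true bets against a single monolithic model.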
If this works, if parallel multi-agent reasoning turns out to scale better than serial chain-of-thought, it could change how everyone builds frontier models. And Meta, for all its stumbles in AI product, has genuine research talent and more compute than almost anyone. The question isn't whether they can build interesting things. It's whether they can ship them as products people want before the market passes them by.
The Actual State of Play
Here's where I land on this. Meta Superintelligence Labs produced a model in nine months that's competitive but not leading in most categories, strong in science and math reasoning, weak in coding, and architecturally novel in its approach to parallel reasoning. The stock went up because Wall Street wanted to see anything from the $14.3 billion Wang acquisition, and Muse Spark is at least something.
But if you're an enterprise evaluating AI providers, nothing about Muse Spark changes your calculus today. Anthropic's Claude is still the enterprise reasoning leader. OpenAI's GPT-5.x line still dominates coding and general-purpose use. Google's Gemini still has the best multimodal integration with existing productivity tools. Meta just joined a race that the other three have been running for two years.
The $135 billion question, literally, is whether Meta's infrastructure advantage eventually translates into model superiority. They have more compute than anyone. They have Alexandr Wang. They have three billion users to deploy to. Those are real advantages. They're just not advantages that show up in the product today.
Check back next quarter. If Muse Spark 2 closes the coding gap and Contemplating Mode delivers on the parallel reasoning promise, this story changes entirely. But right now? Right now, $135 billion bought a foundation. Whether it becomes a building or an expensive hole in the ground is still an open question.