DeepSeek V4 is live on InstantClaw right now. If you've been using deepseek-chat on the API, you were quietly upgraded to DeepSeek-V4-Flash. No action needed. DeepSeek confirmed the old model IDs (deepseek-chat and deepseek-reasoner) now route to V4-Flash. The formal retirement is July 24, 2026, but for all practical purposes, you're on V4 today.
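If you call the API directly, nothing in your code needs to change. A minimal sketch, assuming DeepSeek keeps its OpenAI-compatible endpoint at https://api.deepseek.com (the routing to V4-Flash happens server-side):

```python
# Minimal sketch: the old model ID still works; DeepSeek routes it to
# V4-Flash server-side. Assumes the OpenAI-compatible endpoint at
# https://api.deepseek.com carries over unchanged from V3.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",  # placeholder
    base_url="https://api.deepseek.com",
)

response = client.chat.completions.create(
    model="deepseek-chat",  # legacy ID, now served by V4-Flash
    messages=[{"role": "user", "content": "Summarize this changelog."}],
)
print(response.choices[0].message.content)
```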
DeepSeek made long-context inference practical
DeepSeek released V4 in two flavors. The one that matters most for InstantClaw users is V4-Flash: 284B total parameters, 13B active per token, built on a Mixture-of-Experts architecture. It's not a trimmed-down version of the big model — it was trained separately, and for most everyday tasks, the gap between Flash and the full Pro variant is surprisingly narrow.
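If you're wondering what "13B active per token" means in practice: a Mixture-of-Experts layer routes each token to a small subset of expert networks, so only a fraction of the total weights does any work on a given token. Here's a toy sketch of top-k routing, purely illustrative and not DeepSeek's actual implementation:

```python
# Toy illustration of Mixture-of-Experts routing: only k of E experts
# run per token, so "active" parameters << total parameters.
# A conceptual sketch, not DeepSeek's real architecture.
import numpy as np

rng = np.random.default_rng(0)
d, E, k = 8, 16, 2                      # hidden size, experts, experts per token

router_w = rng.normal(size=(d, E))      # router projection
experts = rng.normal(size=(E, d, d))    # one weight matrix per expert

def moe_layer(x):                       # x: (d,) one token's hidden state
    logits = x @ router_w               # score every expert
    top = np.argsort(logits)[-k:]       # keep only the top-k experts
    gates = np.exp(logits[top])
    gates /= gates.sum()                # normalize gate weights
    # Only k expert matrices are multiplied; the other E-k stay idle.
    return sum(g * (x @ experts[e]) for g, e in zip(gates, top))

y = moe_layer(rng.normal(size=d))
print(y.shape)  # (8,)
```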
The headline improvements:
- 1M token context window — That's roughly 15-20 novels' worth of text. Upload an entire codebase, a year of legal documents, or months of chat history in a single session.
- 384K max output — For generating long-form content, analysis, or code refactors.
- Thinking mode with three effort levels — Choose between fast responses and deep reasoning, depending on the task (see the sketch after this list).
- A meaningful jump in capability — V4-Flash hits 86.2 on MMLU-Pro and 91.6 on LiveCodeBench. For most tasks, it's competitive with models that cost 50x more.
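How you select the effort level will depend on how DeepSeek exposes it in the API. The sketch below assumes a hypothetical reasoning_effort field passed through the OpenAI SDK's extra_body escape hatch; check DeepSeek's API docs for the real parameter name and the exact output cap:

```python
# Hypothetical sketch: the effort-level parameter name and values are
# assumptions; consult DeepSeek's API docs for the actual interface.
from openai import OpenAI

client = OpenAI(api_key="YOUR_DEEPSEEK_API_KEY",
                base_url="https://api.deepseek.com")

response = client.chat.completions.create(
    model="deepseek-reasoner",            # thinking-mode model ID
    messages=[{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
    max_tokens=16_000,                    # output can go far higher on V4
    extra_body={"reasoning_effort": "high"},  # hypothetical: low | medium | high
)
print(response.choices[0].message.content)
```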
In human terms: Every time you added more context to your prompt before, the computational cost climbed fast. V4 flattens that curve. You can feed your assistant an entire codebase without it slowing to a crawl or eating through your budget.
The model was pre-trained on 32 trillion tokens using mixed FP4 and FP8 precision. The weights are on Hugging Face under Apache 2.0; unlike V3's MIT license, it includes an explicit patent grant, which matters for commercial deployments.
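Pulling the weights is a one-liner with the huggingface_hub client. The repo ID below is a guess, so check the deepseek-ai org page on Hugging Face for the exact name:

```python
# Sketch: download the open weights from Hugging Face.
# The repo ID is a placeholder guess; check https://huggingface.co/deepseek-ai
# for the actual V4-Flash repository name.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="deepseek-ai/DeepSeek-V4-Flash",  # hypothetical repo ID
)
print("weights downloaded to", local_dir)
```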
The numbers that actually matter
DeepSeek published a full benchmark grid against Kimi K2.6, Claude Opus 4.6, GPT-5.4, and Gemini 3.1 Pro. The honest reading: V4-Pro wins some, loses some.
Where V4 wins:
- Codeforces: 3206 rating — beats GPT-5.4 (3168) and establishes V4 as the best open-weight model for competitive programming
- LiveCodeBench: 93.5 vs K2.6's 89.6 — short-form code generation is a clear strength
- Chinese-SimpleQA: 84.4 vs the next best at 76.8 — for Chinese-language products, this is the first open-weight model at parity with the best closed options
Where V4 trails:
- SWE-Pro: 55.4 vs K2.6's 58.6 — real GitHub issue fixing still favors Kimi by a small margin
- MRCR 1M (long-context retrieval): 83.5 vs Opus 4.6's 92.9 — Claude still holds the crown for finding needles in haystacks
- HLE (Humanity's Last Exam) with tools: 48.2 vs K2.6's 54.0
The Arena Code Elo jump from V3.2 to V4-Pro is 88 points — roughly the gap between #3 and #13 on the current leaderboard. That's a genuine generational step, not a refresh.
What this means for InstantClaw users
For Easy tier subscribers ($59.90/month), V4-Flash is included at no extra cost. The model upgrade happened automatically — your assistant just got smarter while you were using it. No config changes, no API key rotation, nothing to approve.
For Premium tier users ($79.90/month), V4-Pro (1.6T total parameters, 49B active) is available by bringing your own DeepSeek API key. The API pricing ($1.74/M input, $3.48/M output) is about 21x cheaper than Claude Opus 4.7, which matters if you're running high-volume workloads, and it buys the benchmark-leading coding performance covered above, including that 3206 Codeforces rating.
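To make the pricing concrete, here's a back-of-the-envelope estimate at the published rates. The monthly token volumes are invented for illustration:

```python
# Back-of-the-envelope cost estimate at the published V4-Pro rates.
# The monthly token volumes below are illustrative, not real usage data.
INPUT_PER_M = 1.74    # USD per million input tokens
OUTPUT_PER_M = 3.48   # USD per million output tokens

input_tokens = 500_000_000    # hypothetical monthly input volume
output_tokens = 50_000_000    # hypothetical monthly output volume

cost = (input_tokens / 1e6) * INPUT_PER_M + (output_tokens / 1e6) * OUTPUT_PER_M
print(f"V4-Pro:  ${cost:,.2f}/month")          # $1,044.00
print(f"At ~21x Opus-class pricing: ${cost * 21:,.2f}/month")
```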
The deepseek-chat deprecation on July 24, 2026 is a non-event for InstantClaw users. The transition has already happened transparently. When DeepSeek officially sunsets the old model IDs, your assistant won't even notice.
Why understanding this matters
Knowing your assistant runs a better model helps you use it differently. That 1M context window means you can stop chunking your documents. The improved reasoning means you can trust its analysis on complex tasks. When you understand the upgrade, you naturally get more value from it.
The bottom line
DeepSeek V4 is the most capable open-weight model available right now, and it's already running on InstantClaw. Easy tier users get V4-Flash automatically. Premium tier users can access V4-Pro with their own key. The best open-weight model just got better. And you didn't have to do a thing.
Want the technical details?
Read the full DeepSeek V4 announcement and benchmark results on the official DeepSeek blog.
Want a smarter AI assistant without the maintenance?
Deploy in under a minute. No servers. No updates. No model migrations.
InstantClaw