
DeepSeek V4 API: Model IDs, Base URL, Thinking, and Tools
DeepSeek V4 is exposed through the DeepSeek OpenAI-compatible API. The current pricing page lists two V4 model IDs:
- deepseek-v4-pro
- deepseek-v4-flash
The base URL is:
https://api.deepseek.com
Source: DeepSeek API pricing.

API integration is mostly about choosing the right model ID, keeping the request shape compatible, and deciding when tools or Thinking should be enabled.
Minimal request shape
Use the chat completions API with one of the V4 model IDs:
```json
{
  "model": "deepseek-v4-flash",
  "messages": [
    {
      "role": "user",
      "content": "Explain DeepSeek V4 Flash pricing."
    }
  ]
}
```
Thinking mode
DeepSeek documents Thinking as a request option with an enabled or disabled mode, plus a reasoning-effort setting. Enable Thinking when you want the model to spend a larger reasoning budget on difficult tasks.
In product terms:
- Disable Thinking for fast answers and low-cost paths.
- Enable Thinking for code repair, planning, math, and long analysis.
- Use Pro when the answer quality ceiling matters more than cost.
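The routing rules above can be sketched as a small request builder using the OpenAI Python SDK conventions. The base URL and model IDs come from the article; the exact shape of the Thinking extra-body field (`thinking`) is an assumption based on DeepSeek's description of an enabled/disabled mode, so verify it against the current API docs before relying on it.

```python
# Sketch: build DeepSeek V4 chat-completion kwargs, toggling Thinking per call.
# The "thinking" extra_body field name is an assumption, not a confirmed API shape.

def build_request(prompt: str, hard_task: bool) -> dict:
    """Return kwargs for client.chat.completions.create()."""
    kwargs = {
        # Pro for quality-ceiling tasks, Flash for fast low-cost paths.
        "model": "deepseek-v4-pro" if hard_task else "deepseek-v4-flash",
        "messages": [{"role": "user", "content": prompt}],
    }
    if hard_task:
        # Assumed field: enable Thinking only for difficult tasks.
        kwargs["extra_body"] = {"thinking": {"type": "enabled"}}
    return kwargs

# Usage (no network call made here):
# from openai import OpenAI
# client = OpenAI(base_url="https://api.deepseek.com", api_key="...")
# resp = client.chat.completions.create(**build_request("Repair this test.", hard_task=True))

fast = build_request("Summarize this diff.", hard_task=False)
slow = build_request("Repair this failing test.", hard_task=True)
print(fast["model"], slow["model"])
```

Keeping the toggle in one helper means the fast path never pays for Thinking tokens by accident.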
Tools and web search
DeepSeek V4 can sit behind a tool-enabled chat route. On this site, web search is implemented as a server-side search_web tool whose results are fed back into the model's context. That means web search quality depends on the site's search-provider configuration, not only on DeepSeek itself.
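A minimal sketch of that wiring, following the OpenAI-compatible function-calling format: the tool name `search_web` matches the article, but the parameter schema and the stub search provider are illustrative assumptions standing in for the site's actual configuration.

```python
# Sketch: a server-side search_web tool definition plus the handler that turns
# the model's tool call into a tool-role message. The provider is a stand-in
# for whatever search backend the site has configured.
import json

SEARCH_WEB_TOOL = {
    "type": "function",
    "function": {
        "name": "search_web",
        "description": "Search the web and return result snippets.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}

def run_tool_call(tool_call: dict, search_provider) -> dict:
    """Execute the model's search_web call and format the tool result message."""
    args = json.loads(tool_call["function"]["arguments"])
    results = search_provider(args["query"])  # site-configured search backend
    return {
        "role": "tool",
        "tool_call_id": tool_call["id"],
        "content": json.dumps(results),
    }

# Fake provider standing in for the site's search configuration.
fake_provider = lambda q: [{"title": "DeepSeek API pricing", "url": "https://api.deepseek.com"}]
msg = run_tool_call(
    {"id": "call_1", "function": {"name": "search_web", "arguments": '{"query": "deepseek v4"}'}},
    fake_provider,
)
print(msg["role"], msg["tool_call_id"])
```

Because the tool runs server-side, swapping search providers changes answer quality without any change to the model request itself.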
Image input
DeepSeek V4 chat on D-Chat is text-only. The current V4 API documentation describes text, Thinking, tools, JSON, and FIM surfaces, so do not promise multimodal image understanding for DeepSeek V4.
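Since the route is text-only, it is worth rejecting image content before the request leaves the server rather than surfacing an API error. A small validation sketch, assuming the OpenAI-compatible multi-part message format; the helper name is ours.

```python
# Sketch: guard a text-only DeepSeek V4 route against multimodal content parts.

def is_text_only(messages: list) -> bool:
    """Return False if any message carries non-text content parts."""
    for msg in messages:
        content = msg.get("content")
        if isinstance(content, list):  # multi-part content (OpenAI-compatible shape)
            if any(part.get("type") != "text" for part in content):
                return False
    return True

# Plain string content passes; an image_url part is rejected.
print(is_text_only([{"role": "user", "content": "hi"}]))
print(is_text_only([{"role": "user", "content": [{"type": "image_url"}]}]))
```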

