
DeepSeek-R1

DeepSeek-R1-Zero is a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step. It has 671B parameters, of which 37B are active per inference pass.

Available on Wisdom Gate
Provider
deepseek
Parameters
671B
Context Length
131K
Release Date
2025-01-20
License
MIT
Country
CN 🇨🇳
Model Introduction

DeepSeek-R1-Zero demonstrates remarkable reasoning performance. Through RL alone, it naturally developed numerous powerful and interesting reasoning behaviors.

However, DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing. See DeepSeek-R1 for the variant that adds SFT-based refinement.


Boost your DeepSeek-R1 with Wisdom Gate LLM API

Supporting GPT-5, Claude-4, DeepSeek v3, Gemini, and more.

Enjoy a free trial and save 20%+ compared to official pricing.
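As a minimal sketch of how a call to DeepSeek-R1 might look through an OpenAI-compatible chat-completions API: the base URL, model identifier, and auth scheme below are assumptions for illustration, not Wisdom Gate's confirmed values — consult the provider's own API documentation for the real endpoint and model name.

```python
import json

# Placeholder endpoint and key -- replace with the provider's actual values.
BASE_URL = "https://api.example.com/v1/chat/completions"  # assumed endpoint
API_KEY = "YOUR_API_KEY"


def build_request(prompt: str, model: str = "deepseek-r1") -> dict:
    """Build an OpenAI-style chat-completions payload (model id assumed)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        # R1 is a reasoning model; its replies can include long chains of
        # thought, so leave generous room in max_tokens.
        "max_tokens": 4096,
    }


payload = build_request("Prove that the square root of 2 is irrational.")
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}

print(json.dumps(payload, indent=2))
# To actually send the request (requires the `requests` package and a real key):
#   requests.post(BASE_URL, headers=headers, json=payload, timeout=60)
```

Because the payload follows the widely used OpenAI chat-completions shape, most existing OpenAI-compatible client libraries should work by pointing their base URL at the provider's endpoint.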