Liang Wenfeng
Founder of DeepSeek, the Chinese AI lab whose R1 and V3 models reset expectations on training cost and open-weight quality.
Who is Liang Wenfeng?
Liang Wenfeng is the founder of DeepSeek, the Chinese AI lab behind the R1 and V3 models that reset expectations around training cost and open-weight quality. He is best known for moving from quantitative investing into AI and building DeepSeek into a serious research organization. (apnews.com)
Background and career
Public reporting describes Liang as a low-profile founder who built his early career in quantitative finance before moving into AI. AP reports that he founded DeepSeek in 2023 after setting up High-Flyer Quantitative Investment Management in 2015, while TechCrunch notes that he started an earlier firm, Jacobi, that used AI algorithms for stock selection. (apnews.com)
His rise matters because it shows that frontier AI work can come from outside the usual Silicon Valley founder path. Reporting from AP and SCMP traces his roots to Guangdong province and notes that he studied at Zhejiang University, while DeepSeek’s public model pages show how the company has since become known for releasing major models to the community. (apnews.com)
Key facts about Liang Wenfeng include:
- Founder: He founded DeepSeek in 2023.
- Earlier work: He built quantitative trading businesses before DeepSeek.
- Technical orientation: He is reported to spend time reading papers, writing code, and joining research discussions.
- Public profile: He has kept a notably low profile compared with many AI executives.
- Research stance: He has argued that Chinese AI should contribute original ideas, not just follow others. (apnews.com)
Notable contributions
- Founded DeepSeek: He created the lab that pushed open-weight reasoning models into the mainstream. (apnews.com)
- Backed efficient model training: DeepSeek’s public releases emphasize strong performance with lower reported compute and cost. (api-docs.deepseek.com)
- Helped popularize open releases: DeepSeek has published model cards, technical reports, and open weights for major releases. (deepseek.com)
- Led a reasoning-first research direction: R1 highlighted reinforcement-learning-driven reasoning as a practical path for LLMs. (github.com)
- Changed the market narrative: DeepSeek’s breakout made many teams rethink the cost of competitive frontier models. (fortune.com)
Why they matter in AI today
- Efficiency: Liang’s work pushed the idea that strong models do not always require the biggest budgets.
- Open weights: DeepSeek showed how open releases can accelerate adoption, evaluation, and fine-tuning (see the loading sketch after this list).
- Reasoning focus: R1 made reasoning-centric training a practical reference point for builders.
- Stack design: Teams can study DeepSeek as an example of tight iteration between research, infrastructure, and product.
- Global competition: His success changed how many teams think about where frontier AI innovation can come from. (api-docs.deepseek.com)
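To make the open-weights point concrete, the sketch below shows one way a team might pull a DeepSeek open release into a local evaluation loop with Hugging Face transformers. The specific model ID (a small R1 distillation) and the dtype/device settings are assumptions for illustration; check DeepSeek's model pages and Hugging Face organization for the current identifiers and hardware requirements.

```python
# Minimal sketch: load an open-weight DeepSeek release and run one prompt locally.
# The model ID below is an assumption for illustration; verify it against
# DeepSeek's model pages before use. Requires transformers, torch, and accelerate.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed ID, verify before use

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Explain why 0.1 + 0.2 != 0.3 in floating point."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```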
Where to follow their work
The most useful primary source is DeepSeek’s own site, especially its model and transparency pages, where the team publishes releases and technical reports. AP and SCMP also provide readable background on Liang’s public remarks and career trajectory. (deepseek.com)
For builders, the best way to track his work is to watch DeepSeek’s model releases, docs, and GitHub repositories. That is where the research direction becomes visible in practice. (github.com)
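As a starting point, here is a minimal sketch of calling a DeepSeek model through its OpenAI-compatible chat completions API using the official openai Python client. The base URL and model name reflect DeepSeek's public API docs at the time of writing; treat them as assumptions and confirm against api-docs.deepseek.com.

```python
# Minimal sketch: query a DeepSeek model via its OpenAI-compatible API.
# Base URL and model name are assumptions; confirm against api-docs.deepseek.com.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",      # issued from DeepSeek's platform
    base_url="https://api.deepseek.com",  # assumed OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-reasoner",  # assumed name for the R1-line reasoning model
    messages=[
        {"role": "user", "content": "Walk through a proof that sqrt(2) is irrational."},
    ],
)
print(response.choices[0].message.content)
```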
How PromptLayer connects with Liang Wenfeng's work
Liang Wenfeng’s approach highlights the value of rapid iteration, strong evaluations, and disciplined model experimentation. PromptLayer helps teams manage prompts, compare outputs, and keep an audit trail as they build on top of models like DeepSeek’s R1 and V3, so research speed does not come at the expense of visibility.
Ready to try it yourself? Sign up for PromptLayer and start managing your prompts in minutes.