Jared Kaplan
Co-founder and Chief Science Officer at Anthropic. First author of the original LLM scaling laws paper.
Who is Jared Kaplan?
Jared Kaplan is the co-founder and Chief Science Officer at Anthropic. He is best known for co-authoring the original large language model scaling laws paper, which helped shape how teams think about model size, data, and compute. (anthropic.com)
Background and career
Kaplan’s career spans theoretical physics and frontier AI research. Public biographies describe him as a physicist-turned-AI researcher who worked on foundational research before helping launch Anthropic, where he now leads scientific direction across safety, reliability, and model development. (anthropic.com)
He is widely associated with the scaling laws line of research, especially the 2020 paper Scaling Laws for Neural Language Models. That work showed that language model loss follows predictable power laws as model size, dataset size, and training compute increase, and it became one of the most cited references for planning training runs and forecasting model performance. (arxiv.org)
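The power-law relationship described above can be sketched in a few lines. This is an illustrative example, not code from the paper: the constants below are of the same order as values reported in Scaling Laws for Neural Language Models, and the point is simply that a power law becomes a straight line in log-log space, so the exponent can be recovered with a linear fit.

```python
import numpy as np

# Illustrative power-law loss curve: L(N) ~ (N_c / N) ** alpha,
# where N is the number of model parameters. Constants are of the
# rough magnitude reported in the 2020 paper, used here only for demo.
alpha_true = 0.076
N_c = 8.8e13

N = np.logspace(6, 10, 20)          # model sizes from 1M to 10B params
loss = (N_c / N) ** alpha_true      # synthetic, exactly power-law data

# A power law is linear in log-log space, so a simple least-squares
# line fit recovers the scaling exponent.
slope, intercept = np.polyfit(np.log(N), np.log(loss), 1)
alpha_fit = -slope

print(f"fitted exponent: {alpha_fit:.3f}")
```

On real training curves the data is noisy and the fit only holds over a limited range of scales, but this log-log regression is the basic tool teams use to extrapolate loss from small pilot runs.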
Key facts about Jared Kaplan include:
- Current role: Co-founder and Chief Science Officer at Anthropic. (anthropic.com)
- Known for: Co-authoring the original LLM scaling laws paper. (arxiv.org)
- Research focus: Large-scale model behavior, training efficiency, and AI safety. (anthropic.com)
- Anthropic leadership: He has served in roles tied to responsible scaling and safety policy. (anthropic.com)
- Public profile: Frequently appears in Anthropic research and company updates. (anthropic.com)
Notable contributions
- Scaling laws for neural language models: The 2020 paper established a practical framework for predicting language model loss from scale, data, and compute. (arxiv.org)
- Anthropic co-founding: He helped launch Anthropic as an AI safety and research company. (anthropic.com)
- Responsible scaling leadership: Anthropic named him Responsible Scaling Officer as part of its policy work. (anthropic.com)
- Constitutional AI era: His name appears among the contributors behind Anthropic’s Claude constitution work. (anthropic.com)
- Ongoing Anthropic research: He continues to appear on recent Anthropic papers and research notes tied to reliability and reasoning. (www-cdn.anthropic.com)
Why he matters in AI today
- He made scaling measurable: Kaplan’s work gave teams a way to reason about whether more compute or more data would likely help. (arxiv.org)
- He influenced training strategy: The scaling laws literature still informs how builders size experiments and budgets. (huggingface.co)
- He connects capability and safety: At Anthropic, his work sits close to responsible scaling and frontier risk management. (anthropic.com)
- He reflects how research becomes product: His career shows how basic model research can shape real deployment decisions. (anthropic.com)
Where to follow his work
The main place to follow Jared Kaplan’s work is Anthropic, where he appears in company announcements, research posts, and policy updates. Anthropic’s public site is the clearest source for his current role and recent appearances. (anthropic.com)
For research context, his most cited work is the scaling laws paper, along with follow-on papers on neural scaling and transfer. Those papers remain the best public record of his technical influence. (arxiv.org)
How PromptLayer connects with Jared Kaplan's work
Kaplan’s career is a good reminder that strong AI systems depend on disciplined experimentation, not just larger models. PromptLayer helps teams capture prompts, compare outputs, run evaluations, and keep shipping workflows organized as models and usage patterns change.
Ready to try it yourself? Sign up for PromptLayer and start managing your prompts in minutes.