GPT-3.5 Turbo
OpenAI
GPT-3.5 Turbo is OpenAI's fastest model. It can understand and generate natural language or code, and is optimized for chat and traditional completion tasks.
What is GPT-3.5 Turbo?
GPT-3.5 Turbo is OpenAI's fastest GPT-3.5 model. It understands and generates both natural language and code, and is optimized for chat as well as traditional completion tasks.
Training data up to Sep 2021.
Specifications
- Developer: OpenAI
- Context window: 16,385 tokens (16.4K)
- Max output: 4,096 tokens (4.1K)
- Input modalities: text
- Output modalities: text
- Input price: $0.50 per 1M tokens
- Output price: $1.50 per 1M tokens
- Knowledge cutoff: 2021-09-30
- Supported parameters: frequency_penalty, logit_bias, logprobs, max_tokens, presence_penalty, response_format, seed, stop, structured_outputs, temperature, tool_choice, tools, top_logprobs, top_p
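The listed prices and parameters can be put to work directly. Below is a minimal Python sketch that estimates per-request cost from the per-token prices above and assembles a Chat Completions request payload exercising several of the supported parameters. The model identifier `gpt-3.5-turbo` is the commonly used API name and is an assumption here, as are the illustrative parameter values.

```python
# Cost estimation uses the prices from the specifications above.
INPUT_PRICE_PER_M = 0.50    # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 1.50   # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed prices."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# A request payload using several of the supported parameters.
payload = {
    "model": "gpt-3.5-turbo",  # assumed API model name
    "messages": [{"role": "user", "content": "Summarize this in one line."}],
    "max_tokens": 256,          # must stay within the 4,096-token output cap
    "temperature": 0.7,
    "top_p": 1.0,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "seed": 42,                 # best-effort deterministic sampling
    "stop": ["\n\n"],
    "response_format": {"type": "json_object"},  # request JSON output
}

# Example: 10K input tokens + 1K output tokens.
print(round(estimate_cost(10_000, 1_000), 6))  # → 0.0065
```

At these rates, a request with 10,000 input tokens and 1,000 output tokens costs roughly two-thirds of a cent, which is why this model is often used for high-volume tasks.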
Use GPT-3.5 Turbo with PromptLayer
PromptLayer lets teams manage, evaluate, and observe prompts that run on GPT-3.5 Turbo alongside every other model in their stack. Version prompts, run evals across models, and ship safe rollouts from the same dashboard.
Ready to try it yourself? Sign up for PromptLayer and start managing your prompts in minutes.