GPT-3.5 Turbo 16k
OpenAI
What is GPT-3.5 Turbo 16k?
This model offers four times the context length of gpt-3.5-turbo, allowing it to support approximately 20 pages of text in a single request at a higher cost. Training data: up to September 2021.
Specifications
- Developer: OpenAI
- Context window: 16.4K tokens
- Max output: 4.1K tokens
- Input modalities: text
- Output modalities: text
- Input price: $3.00 per 1M tokens
- Output price: $4.00 per 1M tokens
- Knowledge cutoff: 2021-09-30
- Supported parameters: frequency_penalty, logit_bias, logprobs, max_completion_tokens, max_tokens, presence_penalty, response_format, seed, stop, structured_outputs, temperature, tool_choice, tools, top_logprobs, top_p
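The pricing and context figures above translate directly into a per-request cost estimate. The sketch below is illustrative only: it assumes the commonly cited 16,385-token window behind the "16.4K" figure, and the `estimate_cost` helper is a hypothetical name, not part of any SDK.

```python
# Cost estimate for a GPT-3.5 Turbo 16k request, based on the
# published prices in the specifications above.
INPUT_PRICE_PER_M = 3.00   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 4.00  # USD per 1M output tokens
CONTEXT_WINDOW = 16_385    # assumed exact value behind "16.4K tokens"

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    if input_tokens + output_tokens > CONTEXT_WINDOW:
        raise ValueError("request exceeds the 16.4K-token context window")
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Roughly 20 pages of input (~10,000 tokens) plus a 1,000-token reply:
print(round(estimate_cost(10_000, 1_000), 4))  # 0.034
```

At roughly 500 tokens per page, a request that fills most of the window costs a few cents, which is the practical meaning of "at a higher cost" relative to the 4K-context gpt-3.5-turbo.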
Use GPT-3.5 Turbo 16k with PromptLayer
PromptLayer lets teams manage, evaluate, and observe prompts that run on GPT-3.5 Turbo 16k alongside every other model in their stack. Version prompts, run evals across models, and ship safe rollouts from the same dashboard.
Ready to try it yourself? Sign up for PromptLayer and start managing your prompts in minutes.