Llama 3.1 70B Hanami x1
Sao10K
What is Llama 3.1 70B Hanami x1?
This is [Sao10K](/sao10k)'s experiment over [Euryale v2.2](/sao10k/l3.1-euryale-70b).
Specifications
- Developer: Sao10K
- Context window: 16K tokens
- Max output: — tokens
- Input modalities: text
- Output modalities: text
- Input price: $3.00 per 1M tokens
- Output price: $3.00 per 1M tokens
- Knowledge cutoff: 2023-12-31
- Supported parameters: frequency_penalty, logit_bias, max_tokens, min_p, presence_penalty, repetition_penalty, seed, stop, temperature, top_k, top_p
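Given the listed prices ($3.00 per 1M tokens for both input and output), per-request cost is simple to estimate. The sketch below shows the arithmetic alongside a request payload built only from the supported parameters above; the model slug and token counts are illustrative assumptions, not values from this page.

```python
# Estimate cost for one request to Llama 3.1 70B Hanami x1,
# using the listed prices: $3.00 per 1M input tokens, $3.00 per 1M output tokens.
PRICE_PER_M_INPUT = 3.00
PRICE_PER_M_OUTPUT = 3.00

def estimate_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    return (input_tokens * PRICE_PER_M_INPUT
            + output_tokens * PRICE_PER_M_OUTPUT) / 1_000_000

# A request body using only parameters from the supported list above.
# The model slug is a hypothetical placeholder.
payload = {
    "model": "sao10k/hanami-x1",   # assumed slug, check your provider
    "max_tokens": 500,
    "temperature": 0.8,
    "top_p": 0.95,
    "top_k": 40,
    "min_p": 0.05,
    "repetition_penalty": 1.05,
    "seed": 42,
    "stop": ["\n\n"],
}

# Illustrative token counts: 2,000 in, 500 out.
cost = estimate_cost_usd(2_000, 500)
print(f"${cost:.4f}")  # $0.0075
```

Note that prompts near the 16K-token context limit cost proportionally more: a full 16,000-token input alone is about $0.048 at these rates.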
Use Llama 3.1 70B Hanami x1 with PromptLayer
PromptLayer lets teams manage, evaluate, and observe prompts that run on Llama 3.1 70B Hanami x1 alongside every other model in their stack. Version prompts, run evals across models, and ship safe rollouts from the same dashboard.
Ready to try it yourself? Sign up for PromptLayer and start managing your prompts in minutes.