Llama-3.2-3B-F1-Instruct

by twinkle-ai

A 3B parameter instruction-tuned LLaMA model optimized for Traditional Chinese & English, developed by Twinkle AI & APMIC with strong performance on Taiwan-specific tasks and legal domains.

Property    Value
Model Type  LlamaForCausalLM
Languages   Traditional Chinese & English
License     llama3.2
Authors     Huang Liang Hsun, Min Yi Chen, Wen Bin Lin, Chao Chun Chuang & Dave Sung

What is Llama-3.2-3B-F1-Instruct?

Llama-3.2-3B-F1-Instruct, also known as Formosa-1 or F1, is a specialized language model developed through collaboration between Twinkle AI and APMIC, with technical guidance from the National Center for High-Performance Computing in Taiwan. This model represents a significant advancement in Traditional Chinese language processing, specifically tailored for Taiwan-specific contexts and requirements.

Implementation Details

The model builds upon the Llama architecture and has been fine-tuned to excel in various domain-specific tasks. Evaluation results show significant improvements over the base Llama-3.2-3B-Instruct model, particularly on TMMLU+ (42.18%) and Taiwan legal domain tasks (31.26%). The model also demonstrates strong performance in mathematical reasoning (MATH-500: 51.40%) and general question answering (GPQA Diamond: 33.84%).

  • Enhanced instruction-following capabilities
  • Optimized for Traditional Chinese language understanding
  • Specialized in legal, educational, and daily-life applications
  • Improved performance on Taiwan-specific tasks
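Because F1 is fine-tuned from Llama-3.2-3B-Instruct, it presumably inherits the standard Llama 3.2 chat template. The sketch below hand-renders that format purely for illustration (the role names and special tokens follow the stock Llama 3.2 template, which is an assumption here; in practice the tokenizer's `apply_chat_template` method should be preferred, since the exact template ships with the model):

```python
def format_llama32_prompt(messages):
    """Render a list of {"role", "content"} dicts into the Llama 3.2
    special-token chat format (assumed to match the base model's template)."""
    parts = ["<|begin_of_text|>"]
    for msg in messages:
        # Each turn is wrapped in role headers and terminated with <|eot_id|>.
        parts.append(f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n")
        parts.append(msg["content"] + "<|eot_id|>")
    # Open an assistant header so the model generates the next turn.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

messages = [
    # System prompt: "You are a Taiwanese legal assistant that answers
    # in Traditional Chinese."
    {"role": "system", "content": "你是一個使用繁體中文回答的台灣法律助理。"},
    # User: "Briefly explain the fair-use principle in copyright law."
    {"role": "user", "content": "請簡述著作權的合理使用原則。"},
]
prompt = format_llama32_prompt(messages)
```

The resulting string can be tokenized and passed to the model directly; the trailing assistant header cues generation of the reply.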

Core Capabilities

  • Strong performance in Traditional Chinese language processing
  • Enhanced mathematical reasoning abilities
  • Specialized legal domain knowledge for Taiwan context
  • Robust instruction-following capabilities

Frequently Asked Questions

Q: What makes this model unique?

The model's specialization in Traditional Chinese and Taiwan-specific contexts, combined with its strong performance in legal and educational domains, makes it particularly valuable for applications in Taiwan. Its instruction-tuning optimization sets it apart from general-purpose language models.

Q: What are the recommended use cases?

The model is well-suited for legal document processing, educational applications, and general-purpose Traditional Chinese language tasks. It shows particular strength in mathematical reasoning and can be effectively used in both academic and professional contexts within Taiwan.
