Frigg-v1.35-8b-HIGH-FANTASY-1024k-i1-GGUF

mradermacher

An 8B-parameter, GGUF-quantized language model optimized for high-fantasy content, offered as multiple imatrix-quantized variants ranging from 2.1GB to 6.7GB.

Parameter Count: 8.03B
Model Type: GGUF Quantized
Architecture: Transformer-based
Context Length: 1024k tokens

What is Frigg-v1.35-8b-HIGH-FANTASY-1024k-i1-GGUF?

This is a quantized version of the Frigg high-fantasy language model, distributed in several compression options produced with imatrix quantization. The quantizations make the 8B model practical to run on consumer hardware while preserving output quality for fantasy-oriented writing.

Implementation Details

The model comes in multiple quantization variants, ranging from 2.1GB to 6.7GB, each offering a different trade-off between size and quality. It uses imatrix (importance matrix) quantization, which calibrates the quantization against sample data to preserve the weights that matter most while significantly reducing file size.

  • Multiple quantization options (IQ1_S through Q6_K)
  • Context window of 1024k tokens
  • Optimized for both ARM and x86 architectures
  • Includes specialized variants for different hardware capabilities
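The 1024k context figure is worth quantifying: at very long contexts, the KV cache, not the model weights, dominates memory use. The sketch below estimates KV-cache size assuming Llama-3-8B-like geometry (32 layers, 8 KV heads, head dimension 128) — typical for 8B models, but not confirmed by this card:

```python
# Rough KV-cache memory estimate for long contexts.
# Assumed (not from the model card): Llama-3-8B-like geometry.
N_LAYERS = 32       # transformer layers
N_KV_HEADS = 8      # grouped-query attention KV heads
HEAD_DIM = 128      # per-head dimension
BYTES_PER_ELEM = 2  # fp16 KV cache

def kv_cache_bytes(context_tokens: int) -> int:
    # 2x for keys and values, one entry per layer per token
    return 2 * N_LAYERS * context_tokens * N_KV_HEADS * HEAD_DIM * BYTES_PER_ELEM

for ctx in (8_192, 131_072, 1_048_576):
    gib = kv_cache_bytes(ctx) / 2**30
    print(f"{ctx:>9} tokens -> ~{gib:.1f} GiB KV cache")
# Under these assumptions, a full 1024k context needs ~128 GiB of fp16 KV cache.
```

The takeaway: even the smallest weight quantization cannot make a full 1024k-token context fit in consumer RAM; long-context use in practice relies on KV-cache quantization or offloading.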

Core Capabilities

  • High-fantasy content generation
  • Flexible deployment options across different hardware configurations
  • Optimized performance with various quantization levels
  • Extended context handling capabilities

Frequently Asked Questions

Q: What makes this model unique?

This model stands out for its specialized focus on high-fantasy content and for offering quantization options across a wide range of hardware configurations. Imatrix quantization generally yields better quality at a given file size than static quantization methods.

Q: What are the recommended use cases?

For optimal performance, the Q4_K_M variant (5.0GB) is recommended as it provides the best balance of speed and quality. For systems with limited resources, the IQ2_M variant (3.0GB) offers a good compromise.
