BRIEF DETAILS: Vietnamese semantic model for text embeddings, built on a transformer architecture; input text should be word-segmented with pyvi. Well suited to semantic search and similarity tasks.
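A minimal usage sketch, assuming a sentence-transformers checkpoint; the repo id below is a placeholder, and the pyvi segmentation step follows the description above.

```python
from pyvi import ViTokenizer
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("your-org/vietnamese-embedding")  # placeholder repo id
sentences = ["Hà Nội là thủ đô của Việt Nam."]
# pyvi joins the syllables of compound words with underscores before encoding.
segmented = [ViTokenizer.tokenize(s) for s in sentences]
embeddings = model.encode(segmented)
print(embeddings.shape)
```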
BRIEF-DETAILS: Llama-3-8B is an 8B parameter Nordic-focused LLM, fine-tuned from Meta's Llama 3 on Swedish, Norwegian, and Danish data using 92 A100 GPUs
Brief Details: A multilingual pre-trained sequence-to-sequence model from Facebook AI supporting 25 languages. Well suited to fine-tuning for translation and summarization tasks.
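If this entry refers to mBART (facebook/mbart-large-cc25 is the 25-language checkpoint, an assumption on my part), loading it looks like the sketch below; note the raw pretrained weights are meant to be fine-tuned for a concrete task before use.

```python
from transformers import MBartForConditionalGeneration, MBartTokenizer

# Assumed checkpoint; fine-tune for translation or summarization before
# expecting useful generations from it.
name = "facebook/mbart-large-cc25"
tokenizer = MBartTokenizer.from_pretrained(name)
model = MBartForConditionalGeneration.from_pretrained(name)

inputs = tokenizer("Texto de ejemplo para resumir.", return_tensors="pt")
ids = model.generate(**inputs, max_length=40)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```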
Brief-details: Optimized Llama-3.2-1B-Instruct model with FP8 quantization, cutting the memory footprint by roughly 50% while recovering 99.7% of the unquantized model's accuracy.
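A sketch of serving an FP8 checkpoint with vLLM; the repo id is an assumption, and FP8 weights need a GPU with hardware FP8 support (e.g. Hopper or Ada).

```python
from vllm import LLM, SamplingParams

llm = LLM(model="neuralmagic/Llama-3.2-1B-Instruct-FP8")  # assumed repo id
params = SamplingParams(temperature=0.7, max_tokens=64)
outputs = llm.generate(["Explain FP8 quantization in one sentence."], params)
print(outputs[0].outputs[0].text)
```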
Brief-details: ConvNeXt-V2 Atto variant - lightweight image classification model with 3.7M params, pre-trained with FCMAE and fine-tuned on ImageNet-1k. Very efficient, at the cost of a modest 76.7% top-1 accuracy.
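The fine-tuned weights are available through timm; a minimal classification sketch, using a random tensor as a stand-in for a preprocessed image:

```python
import timm
import torch

# "convnextv2_atto.fcmae_ft_in1k" is timm's tag for the FCMAE-pretrained,
# ImageNet-1k fine-tuned Atto weights.
model = timm.create_model("convnextv2_atto.fcmae_ft_in1k", pretrained=True).eval()
x = torch.randn(1, 3, 224, 224)  # stand-in for a normalized 224x224 image
with torch.no_grad():
    probs = model(x).softmax(dim=-1)
print(probs.topk(5).indices)  # top-5 ImageNet-1k class indices
```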
Brief-details: A fast and efficient implementation of Meta's Llama 3 (8B) model, optimized by Unsloth for 2.4x faster fine-tuning with 58% less memory usage. Perfect for resource-conscious ML projects.
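A hedged sketch of Unsloth's loading path; the repo id, sequence length, and LoRA hyperparameters below are assumptions rather than the project's recommended settings.

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # assumed 4-bit variant id
    max_seq_length=2048,
    load_in_4bit=True,
)
# Attach LoRA adapters for parameter-efficient fine-tuning.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_alpha=16,
)
```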
Brief-details: A French-language NLI model based on DistilCamemBERT, optimized for natural language inference tasks with 2x faster inference than CamemBERT while maintaining good accuracy.
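NLI checkpoints like this are commonly used for zero-shot classification; a sketch, assuming the checkpoint is cmarkea/distilcamembert-base-nli (verify against the model card):

```python
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="cmarkea/distilcamembert-base-nli",  # assumed repo id
)
result = classifier(
    "Le film était absolument magnifique.",
    candidate_labels=["positif", "négatif", "neutre"],
    hypothesis_template="Ce commentaire est {}.",
)
print(result["labels"][0])  # highest-scoring label
```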
Brief-details: LeViT-256 is a hybrid vision transformer model with 18.9M parameters, achieving 81.5% top-1 accuracy on ImageNet-1k, optimized for fast inference through a ConvNet-style architecture.
BRIEF-DETAILS: Quantized ONNX version of paraphrase-multilingual-MiniLM-L12-v2 for efficient multilingual text embeddings and similarity search
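One way to run a quantized ONNX embedding model is through Optimum's ONNX Runtime wrapper; the repo id and file name below are placeholders, and the mean pooling mirrors the original sentence-transformers setup.

```python
from optimum.onnxruntime import ORTModelForFeatureExtraction
from transformers import AutoTokenizer

repo = "your-org/paraphrase-multilingual-MiniLM-L12-v2-onnx"  # placeholder id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = ORTModelForFeatureExtraction.from_pretrained(
    repo, file_name="model_quantized.onnx"  # assumed file name in the repo
)

inputs = tokenizer(["Hello world", "Bonjour le monde"], padding=True, return_tensors="pt")
token_embeddings = model(**inputs).last_hidden_state
# Mean-pool over tokens, masking out padding, to get sentence embeddings.
mask = inputs["attention_mask"].unsqueeze(-1).float()
sentence_embeddings = (token_embeddings * mask).sum(1) / mask.sum(1)
print(sentence_embeddings.shape)
```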
Brief-details: Llama-Guard-3-1B is Meta's 1B-parameter safety-focused language model designed to detect and filter harmful content, built on the Llama 3.2 architecture.
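Moderation runs through the model's chat template, and the output is a short verdict ("safe" or "unsafe" plus hazard category codes) rather than free-form text. A sketch (the repo is gated behind Meta's license):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-Guard-3-1B"  # gated repo; requires license acceptance
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

chat = [{"role": "user", "content": "How do I pick a lock?"}]
input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt")
out = model.generate(input_ids, max_new_tokens=20)
# Decode only the newly generated verdict tokens.
print(tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```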
Brief Details: LivePortrait_safetensors is a safetensors conversion of the LivePortrait model, uploaded by Kijai to Hugging Face, for creating dynamic portrait animations and transformations.
Brief-details: IBM's 34B parameter code-focused LLM, fine-tuned on permissively licensed instruction data. Specializes in code intelligence, problem-solving, and logical reasoning with an 8K context window.
Brief-details: LLaVA model fine-tuned from Meta-Llama-3-8B-Instruct with CLIP-ViT integration. Excels in vision-language tasks with strong performance on MMBench and other benchmarks.
Brief Details: A GPT-2-based code completion model specialized for Python, offering intelligent autocompletion for lines and blocks of code. Built by shibing624.
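A sketch of line completion with the text-generation pipeline; the repo id follows the author's naming (shibing624/code-autocomplete-gpt2-base) but should be verified on the model card.

```python
from transformers import pipeline

completer = pipeline(
    "text-generation",
    model="shibing624/code-autocomplete-gpt2-base",  # assumed repo id
)
prompt = "import numpy as np\ndef softmax(x):\n    "
print(completer(prompt, max_new_tokens=32)[0]["generated_text"])
```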
Brief Details: GPT-2 model fine-tuned on ~12k horoscopes from Horoscopes.com, specialized in generating horoscope predictions across 5 distinct categories.
Brief Details: ChemBERTa variant trained on the ZINC250k dataset (the 40k-step checkpoint), specialized for molecular structure analysis and chemical property prediction.
Brief Details: Turkish news classification model achieving 97% accuracy on both training and validation sets; supports 10 categories including Politics, Economy, and World News. Built with transformers.
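A sketch with the text-classification pipeline; the repo id is a hypothetical placeholder for whichever Turkish news checkpoint this entry describes.

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="your-org/turkish-news-classifier",  # hypothetical repo id
)
print(clf("Merkez bankası faiz kararını açıkladı."))  # expect an Economy-style label
```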
Brief Details: GPT-2-based emotion detection model that identifies the emotional content of written text.
Brief Details: Sentence-embedding model based on the MiniLM architecture that maps text to 384-dimensional vectors, optimized for semantic search and clustering.
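A semantic-search sketch; the repo id is an assumption (all-MiniLM-L6-v2 is a common 384-dimensional MiniLM checkpoint, but check the entry's actual model card):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # assumed id
corpus = ["The cat sits on the mat.", "Quarterly revenue rose 12%."]
query_emb = model.encode("How did earnings change?", convert_to_tensor=True)
corpus_emb = model.encode(corpus, convert_to_tensor=True)
best = util.cos_sim(query_emb, corpus_emb).argmax().item()
print(corpus[best])  # most semantically similar corpus entry
```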
Brief Details: Sentence-transformer model that maps text to 768-dimensional vectors, based on the T5-11B architecture and optimized for semantic search tasks.
Brief Details: Portuguese text summarization model that condenses multiple documents into Wikipedia-style abstracts, built on the PTT5 architecture.