Brief Details: A specialized 125M-parameter code vulnerability detection model based on RoBERTa, achieving 64.71% accuracy in identifying security flaws in C/C++ code.
Brief Details: VulBERTa-MLP-MVD is a 125M parameter RoBERTa-based model for detecting security vulnerabilities in source code, achieving 64.71% accuracy.
Brief Details: SEW-D tiny speech recognition model with 24.1M params, reaching a 10.47% word error rate (WER) on LibriSpeech clean. Optimized for efficiency and performance.
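A minimal sketch of running the model through the transformers ASR pipeline; the repo id "asapp/sew-d-tiny-100k-ft-ls100h" is assumed from the description, and "sample.wav" stands in for a local 16 kHz mono recording.

```python
# Minimal ASR sketch via the transformers pipeline; the repo id is an assumption
# and "sample.wav" is a placeholder audio file.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="asapp/sew-d-tiny-100k-ft-ls100h")
print(asr("sample.wav")["text"])
```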
Brief-details: ProstT5 is a protein language model specialized in translating between protein sequences and structures, built on T5 architecture with 17M protein training examples.
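A minimal embedding sketch for ProstT5's encoder, assuming the Rostlab/ProstT5 checkpoint and the model card's conventions (space-separated residues, "<AA2fold>" prefix for the sequence-to-structure direction).

```python
# Sketch: per-residue embeddings from ProstT5's encoder. The "<AA2fold>" prefix and
# space-separated residues follow the model card's convention (treat as assumptions).
import torch
from transformers import T5Tokenizer, T5EncoderModel

tokenizer = T5Tokenizer.from_pretrained("Rostlab/ProstT5", do_lower_case=False)
model = T5EncoderModel.from_pretrained("Rostlab/ProstT5")

seq = "MKTAYIAKQR"  # amino-acid sequence (uppercase for AA input)
inputs = tokenizer("<AA2fold> " + " ".join(seq), return_tensors="pt")
with torch.no_grad():
    emb = model(**inputs).last_hidden_state  # (1, tokens, hidden)
print(emb.shape)
```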
Brief Details: VulBERTa-MLP-VulDeePecker is a 125M parameter RoBERTa-based model for detecting security vulnerabilities in C/C++ source code with state-of-the-art performance.
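For the VulBERTa classifiers listed above (MVD and VulDeePecker variants), a minimal loading sketch with transformers is shown below. The repo id is a placeholder, and using the stock tokenizer is an assumption: VulBERTa's official pipeline preprocesses code with a custom clang-based tokenizer.

```python
# Minimal sketch: treating a VulBERTa checkpoint as a standard RoBERTa sequence classifier.
# The repo id is a placeholder; the official VulBERTa pipeline tokenizes C/C++ with a
# custom clang-based tokenizer, so AutoTokenizer here is an assumption.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "claudios/VulBERTa-MLP-MVD"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

code = "void copy(char *dst, char *src) { strcpy(dst, src); }"
inputs = tokenizer(code, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # probabilities over the non-vulnerable / vulnerable labels
```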
BRIEF-DETAILS: Optimized 1B parameter Llama model in MLC format (q4f16_1) for web deployment and edge devices, supporting chat and REST API functionality
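A hedged sketch of querying the model through MLC-LLM's OpenAI-compatible REST API; the host, port, and model name below are placeholders for the actual local deployment (e.g. a server started with `mlc_llm serve`).

```python
# Sketch: querying a locally served MLC-LLM model over its OpenAI-compatible REST API.
# Host, port, and the model name are assumptions about the deployment.
import requests

resp = requests.post(
    "http://127.0.0.1:8000/v1/chat/completions",
    json={
        "model": "Llama-3.2-1B-Instruct-q4f16_1-MLC",  # placeholder model name
        "messages": [{"role": "user", "content": "Give me one sentence about edge inference."}],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```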
Brief Details: OpenLRM-mix-base-1.1 is a 260M parameter image-to-3D model trained on Objaverse + MVImgNet, featuring 12 layers and 768-dim features for 3D generation.
Brief Details: An improved VAE (Variational Autoencoder) for Stable Diffusion, fine-tuned with MSE loss focus for smoother image reconstructions. Popular choice for enhanced image quality.
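A minimal sketch of swapping this VAE into a Stable Diffusion pipeline with diffusers; the checkpoint ids (the commonly used ft-MSE VAE and a v1.5 base model) are assumptions.

```python
# Sketch: plugging the MSE-finetuned VAE into a Stable Diffusion pipeline with diffusers.
# Checkpoint ids are assumptions; the .to("cuda") call assumes a GPU is available.
import torch
from diffusers import AutoencoderKL, StableDiffusionPipeline

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder base model
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")
pipe("a watercolor painting of a lighthouse").images[0].save("lighthouse.png")
```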
Brief Details: A RoBERTa-based cross-encoder model trained on STS benchmark dataset for semantic similarity scoring, with 91K+ downloads and Apache 2.0 license.
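A minimal scoring sketch with sentence-transformers, assuming the repo id cross-encoder/stsb-roberta-base matches the model described.

```python
# Sketch: scoring sentence pairs with the STS cross-encoder via sentence-transformers.
# The repo id is assumed from the description.
from sentence_transformers import CrossEncoder

model = CrossEncoder("cross-encoder/stsb-roberta-base")
scores = model.predict([
    ("A man is playing guitar.", "Someone plays a guitar."),
    ("A man is playing guitar.", "The stock market fell today."),
])
print(scores)  # higher score = more semantically similar
```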
Brief-details: A compact BERT variant (4 layers, 512 hidden size, 8 attention heads) optimized for resource-constrained environments, achieving a 71.2 GLUE score
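A minimal feature-extraction sketch, assuming the described configuration corresponds to the google/bert_uncased_L-4_H-512_A-8 checkpoint.

```python
# Sketch: mean-pooled sentence features from the compact BERT variant.
# The repo id is an assumption matching the 4-layer / 512-hidden / 8-head configuration.
import torch
from transformers import AutoTokenizer, AutoModel

name = "google/bert_uncased_L-4_H-512_A-8"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

inputs = tokenizer("Compact BERT for edge devices.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, tokens, 512)
print(hidden.mean(dim=1).shape)  # simple mean-pooled sentence vector
```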
Brief-details: Latest 32B parameter code-specialized LLM with 128K context, optimized for code generation, reasoning & fixing. Reported to deliver state-of-the-art performance matching GPT-4 on coding benchmarks.
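A hedged generation sketch with transformers; the Qwen2.5-Coder-32B-Instruct repo id is an assumption for the model described, and a 32B checkpoint realistically needs multi-GPU memory or quantization.

```python
# Sketch: chat-style code generation with transformers. The repo id is an assumption;
# device_map="auto" is only a starting point for a checkpoint this large.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "Qwen/Qwen2.5-Coder-32B-Instruct"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```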
BRIEF DETAILS: 7B parameter instruction-tuned code generation model from Meta's Code Llama family. Optimized for code synthesis with BF16 precision and commercial license.
Brief-details: Portuguese hate speech detection model based on BERT, achieving 0.716 validation score. Monolingual training approach with Apache 2.0 license.
Brief Details: A tokenizer-free multilingual T5 variant that processes raw UTF-8 bytes, supporting 102 languages and excelling at noisy text processing.
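A small sketch of the byte-level encoding, assuming the google/byt5-small checkpoint for illustration: token ids are simply UTF-8 byte values offset by 3.

```python
# Sketch: ByT5's "tokenizer" is just UTF-8 bytes offset by 3 (ids 0-2 are reserved
# for pad/</s>/unk); "google/byt5-small" is an assumed checkpoint for illustration.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/byt5-small")
text = "héllo wörld"  # accented/noisy text never hits an out-of-vocabulary token
print(tokenizer(text).input_ids)
print([b + 3 for b in text.encode("utf-8")] + [1])  # identical: bytes + 3, then </s> (id 1)
```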
BRIEF DETAILS: Vision Transformer model trained with DINO self-supervised learning, 85.8M parameters, optimized for image feature extraction and classification tasks at 224x224 resolution.
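A minimal feature-extraction sketch, assuming the checkpoint is facebook/dino-vitb16 and a local test image is available.

```python
# Sketch: DINO ViT-B/16 feature extraction; the repo id is assumed from the description
# and "cat.jpg" is a local placeholder image.
import torch
from PIL import Image
from transformers import AutoImageProcessor, ViTModel

processor = AutoImageProcessor.from_pretrained("facebook/dino-vitb16")
model = ViTModel.from_pretrained("facebook/dino-vitb16")

inputs = processor(images=Image.open("cat.jpg"), return_tensors="pt")
with torch.no_grad():
    feats = model(**inputs).last_hidden_state  # (1, 197, 768): [CLS] + 14x14 patches
print(feats[:, 0].shape)  # [CLS] embedding, commonly used as the image descriptor
```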
Brief Details: A robust emotion classification model capable of detecting 20 distinct emotions from text, built on the RoBERTa architecture with 82.1M parameters.
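A hedged sketch of scoring every emotion label with the text-classification pipeline; the repo id below is a hypothetical placeholder for the 20-emotion checkpoint described above.

```python
# Sketch: scoring all emotion labels with the text-classification pipeline.
# The repo id is a hypothetical placeholder.
from transformers import pipeline

classifier = pipeline("text-classification", model="your-org/roberta-emotions-20", top_k=None)
print(classifier("I can't believe we finally shipped it!"))  # scores for all 20 labels
```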
BRIEF DETAILS: An 8B parameter medical LLM built on Llama-3, achieving 70.33% average accuracy across medical subjects with enhanced clinical knowledge and diagnostic capabilities.
Brief-details: A fine-tuned ViT model achieving 89.13% accuracy on food image classification, trained on Food-101 dataset with 5 epochs using PyTorch.
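A minimal inference sketch with the image-classification pipeline; the repo id is a placeholder for the actual fine-tuned Food-101 checkpoint, and "pizza.jpg" is a local test image.

```python
# Sketch: image classification with a ViT fine-tuned on Food-101.
# Repo id and image path are placeholders.
from transformers import pipeline

classifier = pipeline("image-classification", model="your-org/vit-base-food101")  # placeholder id
for pred in classifier("pizza.jpg")[:3]:
    print(pred["label"], round(pred["score"], 3))
```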
BRIEF DETAILS: Meta's Llama 3.1 8B instruction-tuned model optimized by Unsloth for 2.4x faster performance and 58% less memory usage. Supports efficient finetuning.
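A hedged sketch of loading the checkpoint with Unsloth for 4-bit LoRA finetuning; the repo id and LoRA hyperparameters are illustrative assumptions.

```python
# Sketch: loading the Unsloth-optimized checkpoint for 4-bit LoRA finetuning.
# The repo id and LoRA hyperparameters are illustrative assumptions.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B-Instruct",  # assumed repo id
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # illustrative subset
)
```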
Brief Details: PubMedBERT model fine-tuned on MS-MARCO for medical text similarity, offering 768-dimensional embeddings for healthcare information retrieval tasks
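A minimal bi-encoder retrieval sketch with sentence-transformers; the repo id is a placeholder for the PubMedBERT/MS-MARCO checkpoint described above.

```python
# Sketch: bi-encoder retrieval with sentence-transformers. The repo id is a placeholder.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("your-org/pubmedbert-msmarco")  # placeholder id
query = model.encode("treatment options for type 2 diabetes", convert_to_tensor=True)
passage = model.encode("Metformin is a first-line therapy for type 2 diabetes.", convert_to_tensor=True)
print(util.cos_sim(query, passage))  # cosine similarity of the 768-dim embeddings
```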
Brief Details: ESM-2 lightweight variant with 35M parameters. Protein language model for sequence analysis. Efficient 12-layer architecture, MIT licensed.
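A minimal embedding sketch, assuming the 35M variant corresponds to facebook/esm2_t12_35M_UR50D.

```python
# Sketch: per-residue protein embeddings with ESM-2. The repo id is an assumption
# matching the described 12-layer, 35M-parameter variant.
import torch
from transformers import AutoTokenizer, EsmModel

name = "facebook/esm2_t12_35M_UR50D"
tokenizer = AutoTokenizer.from_pretrained(name)
model = EsmModel.from_pretrained(name)

inputs = tokenizer("MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVAT", return_tensors="pt")
with torch.no_grad():
    per_residue = model(**inputs).last_hidden_state  # (1, length + 2 special tokens, 480)
print(per_residue.mean(dim=1).shape)  # mean-pooled sequence representation
```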