Brief-details: LLaVA-NeXT 34B - Advanced multimodal vision-language model with improved OCR and reasoning capabilities, built on the Nous-Hermes-2-Yi-34B base
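A minimal usage sketch with Hugging Face transformers, assuming the llava-hf/llava-v1.6-34b-hf checkpoint id and the ChatML-style prompt template that the 34B variant inherits from its Nous-Hermes-2-Yi-34B base; the image path and question are illustrative:

```python
# Minimal sketch, assuming the llava-hf/llava-v1.6-34b-hf repo id;
# the image file and prompt are illustrative.
import torch
from PIL import Image
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

processor = LlavaNextProcessor.from_pretrained("llava-hf/llava-v1.6-34b-hf")
model = LlavaNextForConditionalGeneration.from_pretrained(
    "llava-hf/llava-v1.6-34b-hf", torch_dtype=torch.float16, device_map="auto"
)

image = Image.open("receipt.png")  # illustrative input image
# ChatML-style template used by the 34B variant
prompt = (
    "<|im_start|>user\n<image>\nWhat text appears in this image?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output[0], skip_special_tokens=True))
```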
Brief-details: 12B parameter multilingual model based on Mistral, fine-tuned for high-quality prose generation in 9 languages, aiming for Claude 3-like writing quality
Brief-details: Training adapter for the FLUX.1-schnell model, enabling direct LoRA training with fast sampling (1-4 steps) and Apache 2.0 licensing flexibility.
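A minimal sampling sketch with diffusers' FluxPipeline, assuming the black-forest-labs/FLUX.1-schnell base checkpoint; the LoRA file path and prompt are hypothetical:

```python
# Minimal sketch, assuming diffusers' FluxPipeline and the
# black-forest-labs/FLUX.1-schnell base model.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # trade speed for lower VRAM use

# A LoRA trained against the adapter loads the usual way (hypothetical file):
# pipe.load_lora_weights("path/to/my_flux_lora.safetensors")

image = pipe(
    "a watercolor fox",       # illustrative prompt
    num_inference_steps=4,    # schnell is distilled for 1-4 steps
    guidance_scale=0.0,       # schnell does not use CFG
).images[0]
image.save("fox.png")
```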
Brief-details: A 33B parameter cybersecurity-focused LLM based on the DeepSeek architecture, specialized in offensive and defensive security analysis with advanced coding capabilities.
Brief-details: Image captioning model combining CLIP and an LLM for accurate descriptions, supporting batch processing and NSFW content with natural-language output
Brief-details: DynamiCrafter_1024 is an advanced image-to-video generation model that animates still images into dynamic video clips at 576x1024 resolution, developed by CUHK & Tencent AI Lab.
Brief-details: GLiNER-base is a 209M parameter NER model capable of identifying custom entity types using a BERT-like architecture, offering flexible entity recognition without LLM overhead.
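A minimal sketch with the gliner Python package, assuming the urchade/gliner_base checkpoint id; entity labels are free-form strings chosen at inference time, and the sample text is illustrative:

```python
# Minimal sketch, assuming the gliner package (pip install gliner)
# and the urchade/gliner_base checkpoint.
from gliner import GLiNER

model = GLiNER.from_pretrained("urchade/gliner_base")

text = "Ada Lovelace wrote the first program for the Analytical Engine in 1843."
labels = ["person", "machine", "date"]  # custom types, no retraining needed

for entity in model.predict_entities(text, labels):
    print(entity["text"], "->", entity["label"])
```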
Brief-details: SecGPT is a 13B parameter security-focused LLM based on Baichuan, trained specifically for cybersecurity tasks without built-in ethical restrictions, requiring 30GB+ VRAM.
Brief-details: A 70B parameter uncensored Llama 2 chat model in GGML format, offering reduced filtering and more direct, straightforward responses than standard Llama 2.
Brief-details: A 3B parameter code model instruction-tuned on the CodeAlpaca & GPTeacher datasets, supporting multiple programming languages.
Brief-details: A lightweight 164M parameter Python code generation model based on the StarCoder architecture, trained on GitHub code and scoring 7.84% pass@1 on HumanEval.
Brief-details: E5-large is a 335M parameter English text embedding model trained via weakly-supervised contrastive learning, optimized for semantic similarity and retrieval tasks.
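A minimal retrieval sketch with sentence-transformers, assuming the intfloat/e5-large checkpoint id; note that E5 expects "query: " and "passage: " prefixes, which materially affect retrieval quality. The sample texts are illustrative:

```python
# Minimal sketch, assuming sentence-transformers and intfloat/e5-large.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("intfloat/e5-large")

query = "query: how do solar panels work"
passages = [
    "passage: Photovoltaic cells convert sunlight directly into electricity.",
    "passage: The stock market closed higher on Tuesday.",
]

q_emb = model.encode(query, normalize_embeddings=True)
p_emb = model.encode(passages, normalize_embeddings=True)
print(util.cos_sim(q_emb, p_emb))  # the on-topic passage should score higher
```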
Brief-details: A Stable Diffusion 2.1 model fine-tuned with DreamBooth to generate images of "Vishu" cat across diverse artistic styles and scenarios.
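A minimal generation sketch with diffusers; the repo id is hypothetical (the entry does not give one), and placing the "Vishu" trigger word in the prompt follows common DreamBooth conventions:

```python
# Minimal sketch, assuming diffusers; the repo id below is hypothetical.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "someuser/vishu-cat-sd21",  # hypothetical repo id
    torch_dtype=torch.float16,
).to("cuda")

# DreamBooth models respond to their trigger word in the prompt
image = pipe("a watercolor painting of Vishu cat sleeping on a bookshelf").images[0]
image.save("vishu.png")
```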
Brief-details: A 7.24B parameter instruction synthesis model that generates high-quality instruction-response pairs from raw text, supporting efficient instruction pre-training.
Brief-details: A pixel art embedding for Stable Diffusion 2.0 that steers generation toward retro-style pixel art, offering multiple embedding variants for different artistic styles.
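A minimal sketch using diffusers' load_textual_inversion; the embedding repo id and the <pixelart> trigger token are hypothetical, and the SD 2.0 base checkpoint is an assumption:

```python
# Minimal sketch, assuming diffusers; the embedding repo id and
# <pixelart> trigger token below are hypothetical.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-base", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("someuser/pixelart-embedding", token="<pixelart>")

# Include the trigger token in the prompt to activate the embedding
image = pipe("a castle on a hill, <pixelart>").images[0]
image.save("castle.png")
```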
Brief-details: A powerful 141B parameter MoE model with 35B active params, supporting 5 languages and multiple quantization options for efficient deployment
Brief-details: A powerful 7B parameter Mistral-based model optimized for coding and general tasks, featuring GGUF quantization, a 16k context window, and the ChatML prompt format.
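A minimal sketch with llama-cpp-python showing the 16k context and ChatML prompting; the GGUF filename is illustrative:

```python
# Minimal sketch, assuming llama-cpp-python; the local GGUF
# filename below is illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="model-q4_k_m.gguf",  # hypothetical quantized file
    n_ctx=16384,                     # the model supports a 16k context window
)

# The model is trained on the ChatML format
prompt = (
    "<|im_start|>user\nWrite a Python function that reverses a string.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```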
Brief-details: Gemma 2B Instruct GGUF - Google's 2.51B parameter instruction-tuned language model distributed in GGUF format, balancing performance and efficiency
Brief-details: A Stable Diffusion textual inversion embedding trained on depth-map imagery, MIT-licensed with 72 community likes. Enables depth-aware image generation.
Brief-details: Japanese-optimized Llama 2 variant (7B params) with enhanced tokenization (45K vocab), supporting both Japanese and English instruction-following tasks
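A minimal sketch with transformers; the repo id is hypothetical since the entry does not name one. The enlarged 45K vocabulary mainly shows up as fewer tokens per Japanese sentence:

```python
# Minimal sketch, assuming transformers; the repo id below is hypothetical.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("someorg/japanese-llama2-7b")
model = AutoModelForCausalLM.from_pretrained(
    "someorg/japanese-llama2-7b", device_map="auto"
)

prompt = "日本の首都はどこですか？"  # "What is the capital of Japan?"
# The expanded vocabulary yields fewer tokens than stock Llama 2 on Japanese text
print(len(tokenizer(prompt)["input_ids"]))

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```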
Brief-details: AnimateLCM-I2V is a fast image-to-video generation model that creates animated videos from still images in just 4 sampling steps, enabling efficient, personalized-style video generation.