Brief Details: A specialized LoRA model for 3D portrait-style cartoons, built on FLUX.1-dev. Trained for 15 epochs on 19 images with a network dimension of 64.
Brief Details: An 8B parameter LLaMA-based model with strong instruction-following capabilities, scoring 42.93% on IFEval and optimized for text generation tasks.
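Loading an instruction-tuned 8B checkpoint like this follows the standard transformers chat pattern. A minimal sketch; the repo id below is a placeholder assumption, not necessarily this card's model:

```python
# Hedged sketch: chat generation with an 8B LLaMA-style instruct model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"  # placeholder; substitute the card's repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "List three uses of instruction-tuned models."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```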
Brief Details: Luna-NSFW is a FLUX-based LoRA model for text-to-image generation, built on FLUX.1-dev under a non-commercial license and activated by the LUNA trigger word.
Brief Details: A Halloween-themed LoRA model for FLUX.1-dev that generates spooky seasonal images with pumpkins, bats, and ghosts. Trained on 19 curated images with the AdamW optimizer; a loading sketch for these FLUX LoRAs follows below.
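The FLUX.1-dev LoRAs above (the 3D-cartoon, Luna-NSFW, and Halloween entries) all load the same way in diffusers. A minimal sketch; the LoRA repo id and prompt are placeholders, and each card documents its own trigger word:

```python
# Hedged sketch: FLUX.1-dev base plus a style LoRA via diffusers.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights("your-username/halloween-flux-lora")  # placeholder repo id

# Include the card's trigger word (e.g. LUNA for the Luna-NSFW LoRA) in the prompt.
image = pipe(
    "a spooky pumpkin patch with bats and ghosts, 3D cartoon style",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("out.png")
```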
Brief Details: Qwen-modelstock2-15B is a 14.8B parameter LLM built with the Model Stock merge technique, combining multiple Qwen variants for improved performance.
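Model Stock merges are typically produced with mergekit. A hedged sketch of such a config driven from Python; the constituent model ids are illustrative assumptions, not this model's actual recipe:

```python
# Hedged sketch: a mergekit Model Stock config, run via the mergekit-yaml CLI.
import pathlib
import subprocess

config = """\
merge_method: model_stock
base_model: Qwen/Qwen2.5-14B
models:
  - model: Qwen/Qwen2.5-14B-Instruct
  - model: Qwen/Qwen2.5-Coder-14B   # illustrative second variant
dtype: bfloat16
"""
pathlib.Path("model_stock.yml").write_text(config)
subprocess.run(["mergekit-yaml", "model_stock.yml", "./merged-qwen"], check=True)
```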
Brief Details: A 2.6B parameter Gemma model fine-tuned with ORPO on Alpaca-style data, optimized for multilingual text generation and code tasks, with abliteration applied at layer 17.
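ORPO (odds-ratio preference optimization) folds preference alignment into supervised fine-tuning, so it trains on prompt/chosen/rejected triples. A minimal sketch with TRL, assuming a recent TRL release; the base checkpoint, dataset, and hyperparameters are placeholders, not this card's recipe:

```python
# Hedged sketch: ORPO fine-tuning with TRL on a preference dataset.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_id = "google/gemma-2-2b-it"  # stand-in base model (assumption)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# ORPO expects preference pairs: rows with "prompt", "chosen", "rejected".
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train[:1%]")

args = ORPOConfig(output_dir="gemma-orpo", beta=0.1, per_device_train_batch_size=1)
trainer = ORPOTrainer(model=model, args=args, train_dataset=dataset, processing_class=tokenizer)
trainer.train()
```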
Brief Details: A TrOCR model fine-tuned for Russian handwriting OCR, with 334M parameters, trained on a Cyrillic handwriting dataset and achieving a character error rate (CER) of 0.048.
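TrOCR pairs a vision encoder with a text decoder, so inference is plain seq2seq generation. A sketch using the stock transformers API; the checkpoint id is a stand-in (the base large handwritten TrOCR), not the Russian fine-tune itself:

```python
# Hedged sketch: TrOCR inference on a cropped handwriting line image.
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

checkpoint = "microsoft/trocr-large-handwritten"  # stand-in; swap in the Russian fine-tune
processor = TrOCRProcessor.from_pretrained(checkpoint)
model = VisionEncoderDecoderModel.from_pretrained(checkpoint)

pixel_values = processor(images=Image.open("line.png").convert("RGB"), return_tensors="pt").pixel_values
ids = model.generate(pixel_values)
# CER = (substitutions + deletions + insertions) / reference length.
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```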
Brief Details: A multimodal biomedical foundation model for small molecules using the MMELON approach, combining image, graph, and text representations for robust molecular property prediction.
Brief Details: A 123B parameter Mistral-based model optimized for creative text generation and roleplay, provided as an 8-bit EXL2 quantization.
Brief Details: A multimodal biomedical foundation model for molecular property prediction using the MMELON architecture, combining image, graph, and text representations for drug discovery applications.
Brief Details: A multimodal reranker built on Qwen2-VL-2B, specialized in image-query relevance scoring and reporting consistent NDCG@5 gains across multiple retrieval datasets.
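NDCG@5 is the metric cited above; for clarity, a small reference implementation (not the model's evaluation harness):

```python
# Reference NDCG@k: graded relevance of ranked results, discounted by rank.
import math

def dcg_at_k(relevances, k):
    # Each result's relevance is divided by log2(rank + 1), ranks starting at 1.
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k=5):
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Relevance grades of the top-ranked images for one query, in ranked order:
print(round(ndcg_at_k([3, 2, 3, 0, 1, 2], k=5), 4))
```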
Brief Details: A specialized LoRA model for generating 3D Sketchfab-style images, trained on FLUX.1-dev with 39 training images and parameters tuned for 3D object generation.
Brief Details: A multimodal biomedical foundation model for small molecule analysis using the MMELON approach. With 84.6M parameters, it combines image, graph, and text representations for drug discovery.
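The three MMELON entries above all describe the same multi-view idea: encode a molecule as an image, a graph, and text, then aggregate the embeddings for property prediction. A purely conceptual sketch of late fusion; every class, dimension, and the aggregation choice here is a hypothetical stand-in, not the model's real API:

```python
# Hypothetical illustration of multi-view (image/graph/text) late fusion.
import torch
import torch.nn as nn

class MultiViewMoleculeEncoder(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.image_enc = nn.Linear(1024, dim)  # stand-in for an image-of-structure encoder
        self.graph_enc = nn.Linear(300, dim)   # stand-in for a molecular-graph encoder
        self.text_enc = nn.Linear(768, dim)    # stand-in for a SMILES/text encoder
        self.head = nn.Linear(dim, 1)          # property-prediction head

    def forward(self, img_feat, graph_feat, text_feat):
        views = torch.stack([self.image_enc(img_feat),
                             self.graph_enc(graph_feat),
                             self.text_enc(text_feat)])
        return self.head(views.mean(dim=0))    # late fusion by averaging the views

model = MultiViewMoleculeEncoder()
pred = model(torch.randn(2, 1024), torch.randn(2, 300), torch.randn(2, 768))
print(pred.shape)  # torch.Size([2, 1])
```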
Brief Details: A 14.8B parameter GGUF-format instruction model derived from Qwen2.5, quantized to Q6_K for llama.cpp and suited to conversational AI tasks.
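A Q6_K GGUF like this runs directly in llama.cpp or its Python binding. A minimal sketch with llama-cpp-python; the file name is a placeholder for whichever quant you download:

```python
# Hedged sketch: chat completion against a local Q6_K GGUF file.
from llama_cpp import Llama

llm = Llama(model_path="qwen2.5-14b-instruct-q6_k.gguf", n_ctx=4096)  # placeholder path
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what Q6_K quantization trades off."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```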
Brief Details: A specialized 33.4M parameter embedding model fine-tuned for medical and clinical information retrieval, achieving strong performance on healthcare NLP tasks.
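Usage follows the standard sentence-transformers retrieval pattern; the repo id is a placeholder for the model described above:

```python
# Hedged sketch: embed a clinical query and passages, rank by cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("your-org/medical-embed-small")  # placeholder id
query = model.encode("first-line treatment for type 2 diabetes")
docs = model.encode([
    "Metformin is recommended as initial pharmacologic therapy for type 2 diabetes.",
    "Statins reduce LDL cholesterol in patients with hyperlipidemia.",
])
print(util.cos_sim(query, docs))  # higher score = more relevant passage
```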
Brief Details: Aya Expanse 32B is a powerful multilingual LLM supporting 23 languages with 128K context length, optimized for research and non-commercial use.
Brief Details: MASC LoRA - A specialized FLUX.1-schnell-based model for generating photorealistic masculine imagery, trained on 400 curated images with a focus on diverse male representation.
Brief Details: A 3.2B parameter Llama-based model merged using MergeKit, combining scientific capabilities with general instruction following for enhanced reasoning tasks.
Brief Details: A French tax law classification model with 118M parameters, fine-tuned from multilingual-e5-base and achieving 90.61% accuracy across 8 tax categories.
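Inference against such a classifier is a one-liner with the transformers pipeline; the repo id below is a placeholder assumption standing in for the actual fine-tuned checkpoint:

```python
# Hedged sketch: classify a French tax question into one of the 8 categories.
from transformers import pipeline

clf = pipeline("text-classification", model="your-org/french-tax-e5-classifier")  # placeholder id
# "What VAT rate applies to renovation work?"
print(clf("Quel est le taux de TVA applicable aux travaux de rénovation ?"))
# -> [{'label': '<one of the 8 tax categories>', 'score': ...}]
```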
Brief Details: A quantized version of Mistral-8B-Instruct with 8B parameters, multilingual support for 10 languages, and a 128k context window, licensed for research use under the MRL.
Brief Details: A 14.8B parameter Qwen-based model focused on Python coding assistance with an uncensored, rebellious personality, built via a TIES merge of multiple Qwen2.5 variants.
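TIES merging, unlike Model Stock's weight interpolation, trims low-magnitude parameter deltas and resolves sign conflicts before summing. A hedged mergekit sketch by analogy; the model ids, density, and weight values are illustrative assumptions, not this model's actual recipe:

```python
# Hedged sketch: a mergekit TIES config, run via the mergekit-yaml CLI.
import pathlib
import subprocess

config = """\
merge_method: ties
base_model: Qwen/Qwen2.5-14B
models:
  - model: Qwen/Qwen2.5-Coder-14B-Instruct
    parameters: {density: 0.5, weight: 0.5}
  - model: Qwen/Qwen2.5-14B-Instruct
    parameters: {density: 0.5, weight: 0.5}
dtype: bfloat16
"""
pathlib.Path("ties.yml").write_text(config)
subprocess.run(["mergekit-yaml", "ties.yml", "./merged-coder"], check=True)
```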