🧬 Breaking news in Clinical AI: Introducing the OpenMed NER Model Discovery App on Hugging Face 🔬
OpenMed is back! 🔥 Finding the right biomedical NER model just became as precise as a PCR assay!
I'm thrilled to unveil my comprehensive OpenMed Named Entity Recognition (NER) Model Discovery App, which puts 384 specialized biomedical AI models at your fingertips.
🎯 Why This Matters in Healthcare AI: Traditional clinical text mining required hours of manual model evaluation. My Discovery App instantly connects researchers, clinicians, and data scientists with the exact NER models they need for their biomedical entity extraction tasks.
🔬 What You Can Discover:
✅ Pharmacological Models - extract chemical compounds, drug interactions, and pharmaceutical entities from clinical notes
✅ Genomics & Proteomics - identify DNA sequences, RNA transcripts, gene variants, protein complexes, and cell lines
✅ Pathology & Disease Detection - recognize pathological formations, cancer types, and disease entities in medical literature
✅ Anatomical Recognition - map anatomical systems, tissue types, organ structures, and cellular components
✅ Clinical Entity Extraction - detect organism species, amino acids, protein families, and multi-tissue structures
💡 Advanced Features:
🔍 Intelligent Entity Search - find models by specific biomedical entities (e.g., "show me models detecting CHEM + DNA + Protein")
🏥 Domain-Specific Filtering - browse by Oncology, Pharmacology, Genomics, Pathology, Hematology, and more
📊 Model Architecture Insights - compare BERT, RoBERTa, and DeBERTa implementations
⚡ Real-Time Search - auto-filtering as you type, no search button needed
🎨 Clinical-Grade UI - a clean, intuitive interface designed for medical professionals
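For a feel of how entity-based search like this can work, here is a minimal, self-contained sketch of the filtering idea: each model advertises the entity types it detects, and a query returns only the models that cover every requested type. The model names and entity tags below are illustrative placeholders, not the app's actual catalog or code.

```python
# Illustrative catalog: hypothetical model names and entity-type tags,
# not the real OpenMed model list.
MODELS = [
    {"name": "openmed-ner-pharma", "arch": "BERT",    "entities": {"CHEM", "DRUG"}},
    {"name": "openmed-ner-genome", "arch": "RoBERTa", "entities": {"DNA", "RNA", "PROTEIN"}},
    {"name": "openmed-ner-patho",  "arch": "DeBERTa", "entities": {"DISEASE", "CANCER"}},
]

def find_models(required_entities, models=MODELS):
    """Return names of models whose entity set covers every requested type."""
    wanted = {e.upper() for e in required_entities}
    return [m["name"] for m in models if wanted <= m["entities"]]

print(find_models(["dna", "protein"]))  # → ['openmed-ner-genome']
```

Because the query is a simple set-containment check, it is cheap enough to rerun on every keystroke, which is what makes "auto-filtering as you type" practical.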
Ready to revolutionize your biomedical NLP pipeline?
🚀 Try it now: OpenMed/openmed-ner-models
🧬 Built with: Gradio, Transformers, Advanced Entity Mapping
nanoLLaVA-1.5 is here! Same size (1B parameters), better performance 🔥🔥🔥 It is much more powerful than v1.0.
Try it out now on HF Spaces: qnguyen3/nanoLLaVA
Model: qnguyen3/nanoLLaVA-1.5
🚀 Introducing nanoLLaVA, a powerful multimodal AI model that packs the capabilities of a 1B-parameter vision language model into just 5GB of VRAM. This makes it an ideal choice for edge devices, bringing cutting-edge visual understanding and generation to your devices like never before. 📱💻
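A rough back-of-envelope shows where the ~5GB figure plausibly comes from. Assuming full-precision (fp32, 4 bytes per parameter) weights and the approximate component sizes given below (~0.5B for the language model, ~0.4B for the vision encoder; both are my estimates, not official numbers), the weights alone land in the 3-4GB range, with activations and the KV cache accounting for the rest:

```python
def model_memory_gb(n_params, bytes_per_param=4):
    """Approximate weight memory in GB (fp32 by default: 4 bytes/param)."""
    return n_params * bytes_per_param / 1e9

lm_gb = model_memory_gb(0.5e9)  # ~0.5B-param language model -> 2.0 GB
vt_gb = model_memory_gb(0.4e9)  # ~0.4B-param vision encoder  -> 1.6 GB
print(round(lm_gb + vt_gb, 1))  # weights alone ≈ 3.6 GB; runtime buffers
                                # push the total toward the quoted ~5 GB
```

Running in half precision (2 bytes per parameter) would roughly halve the weight footprint, which is why small VLMs like this fit comfortably on consumer and edge hardware.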
Under the hood, nanoLLaVA is based on the powerful vilm/Quyen-SE-v0.1 (my Qwen1.5-0.5B finetune) as the language model and Google's impressive google/siglip-so400m-patch14-384 as the vision encoder. 🧠 The model is trained using a data-centric approach to ensure optimal performance. 📈
In the spirit of transparency and collaboration, all code and model weights are open-sourced under the Apache 2.0 license. 🤗