slug
stringlengths
15
15
content
listlengths
1
129
rawContent
stringlengths
1
2k
author
dict
attachments
listlengths
0
49
mentions
listlengths
0
49
reactions
listlengths
0
12
publishedAt
stringlengths
24
24
updatedAt
stringlengths
24
24
commentators
listlengths
0
52
url
stringlengths
25
46
totalUniqueImpressions
int64
1
42.1k
โŒ€
numComments
int64
0
621
986806198797388
[ { "type": "text", "value": "๐Ÿ๐ŸŽ๐Ÿ๐Ÿ’, ๐ญ๐ก๐ž ๐ฒ๐ž๐š๐ซ ๐จ๐Ÿ ๐š๐ ๐ž๐ง๐ญ ๐ฐ๐จ๐ซ๐ค๐Ÿ๐ฅ๐จ๐ฐ๐ฌ ๐Ÿ”ง๐Ÿฆพ๐Ÿค–", "raw": "๐Ÿ๐ŸŽ๐Ÿ๐Ÿ’, ๐ญ๐ก๐ž ๐ฒ๐ž๐š๐ซ ๐จ๐Ÿ ๐š๐ ๐ž๐ง๐ญ ๐ฐ๐จ๐ซ๐ค๐Ÿ๐ฅ๐จ๐ฐ๐ฌ ๐Ÿ”ง๐Ÿฆพ๐Ÿค–", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null...
๐Ÿ๐ŸŽ๐Ÿ๐Ÿ’, ๐ญ๐ก๐ž ๐ฒ๐ž๐š๐ซ ๐จ๐Ÿ ๐š๐ ๐ž๐ง๐ญ ๐ฐ๐จ๐ซ๐ค๐Ÿ๐ฅ๐จ๐ฐ๐ฌ ๐Ÿ”ง๐Ÿฆพ๐Ÿค– I've just watched Andrew Ng's talk at Sequoia last week. If you're interested in Agents, you should really watch it! ๐—ช๐—ต๐˜† ๐˜‚๐˜€๐—ฒ ๐—ฎ๐—ด๐—ฒ๐—ป๐˜ ๐˜„๐—ผ๐—ฟ๐—ธ๐—ณ๐—น๐—ผ๐˜„๐˜€? The current LLM task solving workflow is not very intuitive: We ask it โ€œwrite an essay all in one shot, without ever using backspace.โ€ Why not allow the LLM a more similar process to what we would do? - โ€œWrite an essay outlineโ€ - โ€œDo you need wen research?โ€ - โ€œWrite a first draftโ€ - โ€œConsider improvementsโ€ โ€ฆ This is called an Agentic workflow. Existing ones bring a huge performance boost. With HumanEval: GPT-4 zero-shot gets 67% score, agentic with either one of tool use or reflection goes over 90%, and the combination of the two scores even higher! ๐—”๐—ด๐—ฒ๐—ป๐˜๐—ถ๐—ฐ ๐—ฟ๐—ฒ๐—ฎ๐˜€๐—ผ๐—ป๐—ถ๐—ป๐—ด ๐—ฑ๐—ฒ๐˜€๐—ถ๐—ด๐—ป ๐—ฝ๐—ฎ๐˜๐˜๐—ฒ๐—ฟ๐—ป๐˜€ On the following two points, the tech is robust: โš™๏ธ ๐—ฅ๐—ฒ๐—ณ๐—น๐—ฒ๐˜…๐—ถ๐—ผ๐—ป: For instance: add a critic step after the writing step ๐Ÿ› ๏ธ ๐—ง๐—ผ๐—ผ๐—น ๐˜‚๐˜€๐—ฒ: extends the capabilities of the LLM by allowing it to call tools, like search or calculator The next two will be needed to go further, but the tech for them is more emerging and not reliable yet: ๐Ÿ—บ๏ธ ๐—ฃ๐—น๐—ฎ๐—ป๐—ป๐—ถ๐—ป๐—ด forward to decompose task into subtasks. This allows great behaviours like an AI Agent re-routing after a failure ๐Ÿ ๐— ๐˜‚๐—น๐˜๐—ถ-๐—ฎ๐—ด๐—ฒ๐—ป๐˜ ๐—ฐ๐—ผ๐—น๐—น๐—ฎ๐—ฏ๐—ผ๐—ฟ๐—ฎ๐˜๐—ถ๐—ผ๐—ป: Program a flock of agents with tasks. Improving the two above points will unlock huge performance boosts! Andrew NG says Research agents are already part of his workflow! ๐—–๐—น๐—ผ๐˜€๐—ถ๐—ป๐—ด ๐˜๐—ต๐—ผ๐˜‚๐—ด๐—ต๐˜๐˜€ Andrew speculates that through agentic workflows, maybe generating many tokens fast from a small LLM will give better results than slower throughput from a powerful LLM like GPT-5. ๐ŸŽฌ Watch the talk here ๐Ÿ‘‰ https://www.youtube.com/watch?v=sal78ACtGTc ๐Ÿ“š I've added his recommended reads to https://huggingface.co/collections/m-ric/agents-65ba776fbd9e29f771c07d4e
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63d10d4e8eaa4831005e92b5/7p7-OmWM6PqqCs7ZStPGD.jpeg", "fullname": "Aymeric Roucher", "name": "m-ric", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 494, "isFollowing": false }
[]
[]
[ { "reaction": "โค๏ธ", "users": [ "osanseviero", "clem", "neelam91721", "samusenps", "nikgr", "JoPmt", "Moibe", "abidlabs", "poGlingus" ], "count": 9 }, { "reaction": "๐Ÿ‘", "users": [ "apol", "abidlabs" ], "count": 2 ...
2024-04-02T16:08:03.000Z
2024-04-03T08:45:03.935Z
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61b9df9b22e5b0fdd501a113/i2yTGbK7pFnw9YLwZ7elp.jpeg", "fullname": "Akhil B", "name": "hakunamatata1997", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 4, "isFollowing": fals...
/posts/m-ric/986806198797388
1,879
1
704809400668436
[ { "type": "text", "value": "Aurora-M", "raw": "Aurora-M", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url...
Aurora-M The First Open Source Multilingual Language Model Red-teamed according to the U.S. Executive Order https://huggingface.co/papers/2404.00399 Pretrained language models underpin several AI applications, but their high computational cost for training limits accessibility. Initiatives such as BLOOM and StarCoder aim to democratize access to pretrained models for collaborative community development. However, such existing models face challenges: limited multilingual capabilities, continual pretraining causing catastrophic forgetting, whereas pretraining from scratch is computationally expensive, and compliance with AI safety and development laws. This paper presents Aurora-M, a 15B parameter multilingual open-source model trained on English, Finnish, Hindi, Japanese, Vietnamese, and code. Continually pretrained from StarCoderPlus on 435 billion additional tokens, Aurora-M surpasses 2 trillion tokens in total training token count. It is the first open-source multilingual model fine-tuned on human-reviewed safety instructions, thus aligning its development not only with conventional red-teaming considerations, but also with the specific concerns articulated in the Biden-Harris Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Aurora-M is rigorously evaluated across various tasks and languages, demonstrating robustness against catastrophic forgetting and outperforming alternatives in multilingual settings, particularly in safety evaluations.
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "fullname": "AK", "name": "akhaliq", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 5205, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/60f1abe7544c2adfd699860c/7C3ctlbiugipQMW9FLjpJ.png" } ]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "osanseviero", "CulturedMan", "DmitryRyumin", "samusenps", "clem", "Catliba", "mayank-mishra", "sugatoray" ], "count": 8 }, { "reaction": "โค๏ธ", "users": [ "huu-ontocord", "Xa9aX", "mayank-mis...
2024-04-02T16:03:20.000Z
2024-04-02T16:03:33.451Z
[]
/posts/akhaliq/704809400668436
2,194
0
748359845705883
[ { "type": "text", "value": "How would you benchmark performance estimation algorithms vs data drift signals?", "raw": "How would you benchmark performance estimation algorithms vs data drift signals?", "href": null, "resource": null, "url": null, "code": null, "user": null, "labe...
How would you benchmark performance estimation algorithms vs data drift signals? I'm working on a benchmarking analysis, and I'm currently doing the following: - Get univariate and multivariate drift signals and measure their correlation with realized performance. - Use drift signals as features of a regression model to predict the model's performance. - Use drift signals as features of a classification model to predict a performance drop. - Compare all the above experiments with results from Performance Estimation algorithms. Any other ideas?
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1657144463525-629a173153a72d997d3f57d0.jpeg", "fullname": "Santiago Viquez", "name": "santiviquez", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 84, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "osanseviero" ], "count": 1 } ]
2024-04-02T15:04:45.000Z
2024-04-02T15:04:45.903Z
[]
/posts/santiviquez/748359845705883
1,350
0
700636875300688
[ { "type": "text", "value": "๐Ÿฅ‡Open CoT Leaderboard", "raw": "๐Ÿฅ‡Open CoT Leaderboard", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, ...
๐Ÿฅ‡Open CoT Leaderboard We're delighted to announce the [Open CoT Leaderboard](https://huggingface.co/spaces/logikon/open_cot_leaderboard) on ๐Ÿค— Spaces. Unlike other LLM performance leaderboards, the Open CoT Leaderboard is not tracking absolute benchmark accuracies, but relative **accuracy gains** due to **chain-of-thought**. Eval datasets that underpin the leaderboard are hosted [here](https://huggingface.co/cot-leaderboard). Feedback and suggestions more than welcome. @clefourrier
{ "avatarUrl": "/avatars/78be882adf32b808686713e9b457797d.svg", "fullname": "Gregor Betz", "name": "ggbetz", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 4, "isFollowing": false }
[]
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1644340617257-noauth.png", "fullname": "Clรฉmentine Fourrier", "name": "clefourrier", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 459 } ]
[ { "reaction": "โค๏ธ", "users": [ "clefourrier", "lunarflu", "osanseviero", "samusenps", "giux78", "davidhought0n", "reuank", "victor", "nikgr", "yakazimir", "clem" ], "count": 11 } ]
2024-04-02T13:32:28.000Z
2024-04-08T08:50:14.370Z
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1644340617257-noauth.png", "fullname": "Clรฉmentine Fourrier", "name": "clefourrier", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 459, "isFollowing": false }, { "ava...
/posts/ggbetz/700636875300688
1,438
5
985506609005199
[ { "type": "text", "value": "We would like to announce our Aurora-M multilingual models which is based on Starcoderplus.", "raw": "We would like to announce our Aurora-M multilingual models which is based on Starcoderplus.", "href": null, "resource": null, "url": null, "code": null, "...
We would like to announce our Aurora-M multilingual models which is based on Starcoderplus. Twitter: https://twitter.com/ontocord/status/1772778544051155029 LinkedIn: https://www.linkedin.com/feed/update/urn:li:activity:7178521998845759488/ Blog post: https://huggingface.co/blog/mayank-mishra/aurora Arxiv: https://huggingface.co/papers/2404.00399 Current LLMs are very susceptible to generating toxic, harmful and even dangerous content. They can also generate outputs with gender or racial biases. The Biden-Harris Executive Order https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence) sets forth guidelines on what is considered a safe AI system. Following up on these guidelines, we present the world's first open source Biden-Harris Executive Order Red teamed Multilingual Language Model: Aurora-M. Inspired by BigScience, the model is trained on 5 languages: English, Hindi, Japanese, Vietnamese and Finnish. * Red teamed model: https://huggingface.co/aurora-m/aurora-m-biden-harris-redteamed(safety tuned according to the order mentioned above) * Base model: https://huggingface.co/aurora-m/aurora-m-base (not safety tuned) * Instruct model: https://huggingface.co/aurora-m/aurora-m-instruct (not safety tuned) @mayank-mishra @cabbage972 @sted97 @Xa9aX @Taishi-N324 @Muennighoff @vumichien @prateeky2806 @felfri @spyysalo and many many others!
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/5fc6879e1c5ee87b1164876d/Tjnm_lv0Bq0gPbFOTDH6E.jpeg", "fullname": "Huu Nguyen", "name": "huu-ontocord", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 42, "isFollowing": false }
[]
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/627bda4f1b02d7b5bf63f1f8/sqK5uRemxbv-zBQ9B8ugE.jpeg", "fullname": "VM", "name": "cabbage972", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 2 }, { "avatarUrl": "https://...
[ { "reaction": "โค๏ธ", "users": [ "samusenps", "Xa9aX", "lunarflu", "osanseviero", "mayank-mishra", "vumichien", "kenhktsui", "nikgr", "sted97", "muhtasham", "Taishi-N324", "Gokumase" ], "count": 12 }, { "reaction": "๐Ÿ”ฅ", ...
2024-04-02T13:25:39.000Z
2024-04-02T13:25:39.370Z
[]
/posts/huu-ontocord/985506609005199
1,615
0
730068367902681
[ { "type": "text", "value": "โšก AutoQuant", "raw": "โšก AutoQuant", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, ...
โšก AutoQuant AutoQuant is the evolution of my previous AutoGGUF notebook (https://colab.research.google.com/drive/1P646NEg33BZy4BfLDNpTz0V0lwIU3CHu). It allows you to quantize your models in five different formats: - GGUF: perfect for inference on CPUs (and LM Studio) - GPTQ/EXL2: fast inference on GPUs - AWQ: super fast inference on GPUs with vLLM (https://github.com/vllm-project/vllm) - HQQ: extreme quantization with decent 2-bit and 3-bit models Once the model is converted, it automatically uploads it on the Hugging Face Hub. To quantize a 7B model, GGUF only needs a T4 GPU, while the other methods require an A100 GPU. Here's an example of a model I quantized using HQQ and AutoQuant: https://huggingface.co/mlabonne/AlphaMonarch-7B-2bit-HQQ I hope you'll enjoy it and quantize lots of models! :) ๐Ÿ’ป AutoQuant: https://colab.research.google.com/drive/1b6nqC7UZVt8bx4MksX7s656GXPM-eWw4
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61b8e2ba285851687028d395/JtUGAwVh_4cDEsjNcfpye.png", "fullname": "Maxime Labonne", "name": "mlabonne", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 3486, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿš€", "users": [ "marcsun13", "mishig", "Outrun32", "johko", "kgourgou", "DataSoul", "samusenps", "lunarflu", "osanseviero", "julien-c", "OxxoCodes", "musfiqdehan", "seyf1elislam", "nasiruddin15", "Heng66...
2024-04-02T09:46:56.000Z
2024-04-26T08:25:12.877Z
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/5dd96eb166059660ed1ee413/NQtzmrDdbG0H8qkZvRyGk.jpeg", "fullname": "Julien Chaumond", "name": "julien-c", "type": "user", "isPro": true, "isHf": true, "isMod": false, "followerCount": 1580, "isFollowing": fal...
/posts/mlabonne/730068367902681
9,234
15
588552545989925
[ { "type": "text", "value": "Check out our work Symbol-LLM ! We have open-sourced both 7B / 13B model weights, as well as part of the symbolic collections ! Try it !", "raw": "Check out our work Symbol-LLM ! We have open-sourced both 7B / 13B model weights, as well as part of the symbolic collections ! T...
Check out our work Symbol-LLM ! We have open-sourced both 7B / 13B model weights, as well as part of the symbolic collections ! Try it ! Paper link: https://huggingface.co/papers/2311.09278 Model weights: https://huggingface.co/Symbol-LLM/Symbol-LLM-7B-Instruct
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/656d73ed0bbc114fe6449704/gpteBU9GmKSHRVkRBUHld.png", "fullname": "Symbol-LLM", "name": "Symbol-LLM", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 29, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿš€", "users": [ "Symbol-LLM", "xufangzhi", "FeYuan", "vladbogo", "lunarflu", "osanseviero", "nikgr" ], "count": 7 }, { "reaction": "๐Ÿ‘", "users": [ "xufangzhi", "FeYuan", "QiushiSun", "lunarflu", "osan...
2024-04-02T07:00:36.000Z
2024-04-02T07:00:36.678Z
[]
/posts/Symbol-LLM/588552545989925
1,735
0
252508216375531
[ { "type": "text", "value": "Introducing Indic Chat!", "raw": "Introducing Indic Chat!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, ...
Introducing Indic Chat! Try out best opensource Indic LLMs now on https://www.indic.chat/ Models available: โ€ข Telugu-LLM-Labs/Indic-gemma-7b-finetuned-sft-Navarasa-2.0 โ€ข GenVRadmin/AryaBhatta-GemmaOrca โ€ข BhabhaAI/Gajendra-v0.1 โ€ข ai4bharat/Airavata Additionally: 1. We open up our discord for everyone to collaborate & accelerate Indic LLMs: https://bhabha.ai/discord 2. We release ~600K rows filtered & Hindi translated version of OpenHermes-2.5 instruction dataset: https://huggingface.co/datasets/BhabhaAI/openhermes-2.5-hindi Also, thanks to our compute sponsor - Telugu LLM Labs & Bhabha AI in helping us serve models for Indic Chat. If youโ€™d like to be a sponsor too, checkout https://www.indic.chat/sponsor
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/614efbb6ddd8df0d8bfd0a5a/0oMIv-WwL7sqPEQZv63cu.jpeg", "fullname": "Satpal Singh Rathore", "name": "satpalsr", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 10, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/614efbb6ddd8df0d8bfd0a5a/aA2ei7e97aLXvNfjzefri.png" } ]
[]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "monsoon-nlp", "lunarflu", "osanseviero", "alielfilali01", "ajibawa-2023", "nikgr", "ashimksaha", "Saugatkafley" ], "count": 8 }, { "reaction": "โค๏ธ", "users": [ "samusenps", "lunarflu", "osan...
2024-04-02T06:31:39.000Z
2024-04-02T06:31:39.614Z
[]
/posts/satpalsr/252508216375531
1,669
0
463436477837552
[ { "type": "text", "value": "On evaluating fine tuned 7B Italian open source LLMs I have collected many data points and I created a super simple explorative analyses. My hypothesis based on data are:", "raw": "On evaluating fine tuned 7B Italian open source LLMs I have collected many data points and I c...
On evaluating fine tuned 7B Italian open source LLMs I have collected many data points and I created a super simple explorative analyses. My hypothesis based on data are: - mmlu is hard to improve when fine tuning a base model on a different language - fine tuning also on single GPUs can improve by 5% to 10% the base model on common tasks but a lot more on specific cases with the right training time and data - fine tuning can specialize well but at cost of loosing some foundational knowledge. Here the data https://docs.google.com/spreadsheets/d/1MBcxy1loK8eIycZG4DN84Q2ejZ0jSjxUBgoShHDR6IY/edit?usp=sharing Here the colab https://colab.research.google.com/drive/1ra4_skG5QYWSYOzvagOoIoj4bibQD8Gw?usp=sharing Here an article with some considerations https://medium.com/@giuxale/an-analyses-on-italian-llms-models-evaluations-51bffe1d44d1
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/5fef4eb7770b06e11c2c6381/1NMdigjCGtn0yvQZSi5NJ.png", "fullname": "Alessandro Ercolani", "name": "giux78", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 44, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿš€", "users": [ "Osd111", "beekay24", "lunarflu", "nikgr", "giux78" ], "count": 5 } ]
2024-04-01T22:24:36.000Z
2024-04-01T22:24:36.481Z
[]
/posts/giux78/463436477837552
1,779
0
738432237563121
[ { "type": "text", "value": "Google DeepMind introduces Gecko a new text embedding! Gecko uses a two-step process that leverages synthetic data generation and reranking.", "raw": "Google DeepMind introduces Gecko a new text embedding! Gecko uses a two-step process that leverages synthetic data generation...
Google DeepMind introduces Gecko a new text embedding! Gecko uses a two-step process that leverages synthetic data generation and reranking. Keypoints: * Uses an LLM to generate diverse synthetic queries and tasks from web passages * Refines the data by retrieving candidate passages and relabeling positives/negatives using the same LLM * Achieves very good results on the Massive Text Embedding Benchmark, where compact 256D Gecko outperforms 768D models. * 768D Gecko achieves state-of-the-art performance competing with models a lot larger larger. Paper: https://huggingface.co/papers/2403.20327 More details in my blog: https://huggingface.co/blog/vladbogo/gecko Congrats to the authors for their work!
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/657217faabb25ed8aedd5e48/UUHAXeGtOnQBXFD3nYtf2.jpeg", "fullname": "Vlad Bogolin", "name": "vladbogo", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 109, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/657217faabb25ed8aedd5e48/W0n9HjNewR63RiI8IcjFx.png" } ]
[]
[ { "reaction": "โค๏ธ", "users": [ "samusenps", "clem", "monsoon-nlp", "beekay24", "mohammedbriman", "lunarflu", "nikgr", "tatv047", "super-cinnamon" ], "count": 9 } ]
2024-04-01T20:33:03.000Z
2024-04-01T20:33:03.780Z
[]
/posts/vladbogo/738432237563121
2,572
0
366590839467245
[ { "type": "text", "value": "Jamba", "raw": "Jamba", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": nul...
Jamba A Hybrid Transformer-Mamba Language Model https://huggingface.co/papers/2403.19887 We present Jamba, a new base large language model based on a novel hybrid Transformer-Mamba mixture-of-experts (MoE) architecture. Specifically, Jamba interleaves blocks of Transformer and Mamba layers, enjoying the benefits of both model families. MoE is added in some of these layers to increase model capacity while keeping active parameter usage manageable. This flexible architecture allows resource- and objective-specific configurations. In the particular configuration we have implemented, we end up with a powerful model that fits in a single 80GB GPU. Built at large scale, Jamba provides high throughput and small memory footprint compared to vanilla Transformers, and at the same time state-of-the-art performance on standard language model benchmarks and long-context evaluations. Remarkably, the model presents strong results for up to 256K tokens context length. We study various architectural decisions, such as how to combine Transformer and Mamba layers, and how to mix experts, and show that some of them are crucial in large scale modeling. We also describe several interesting properties of these architectures which the training and evaluation of Jamba have revealed, and plan to release checkpoints from various ablation runs, to encourage further exploration of this novel architecture. We make the weights of our implementation of Jamba publicly available under a permissive license.
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "fullname": "AK", "name": "akhaliq", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 5205, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/60f1abe7544c2adfd699860c/1ZvoI687Tw4BTtaua5gtV.png" } ]
[]
[ { "reaction": "โค๏ธ", "users": [ "samusenps", "osanseviero", "Benyucong", "clem", "beekay24", "mohammedbriman", "vladbogo", "adhi01", "lunarflu", "nikgr", "kramp", "ABX-AI" ], "count": 12 }, { "reaction": "๐Ÿ˜Ž", "users"...
2024-04-01T18:27:08.000Z
2024-04-01T18:27:08.367Z
[]
/posts/akhaliq/366590839467245
2,280
0
788381176898132
[ { "type": "text", "value": "๐ŸŽฏ๐Ÿ–ผ๏ธ๐ŸŒŸ New Research Alert - ICLR 2024! ๐ŸŒŸ ๐Ÿ–ผ๏ธ๐ŸŽฏ", "raw": "๐ŸŽฏ๐Ÿ–ผ๏ธ๐ŸŒŸ New Research Alert - ICLR 2024! ๐ŸŒŸ ๐Ÿ–ผ๏ธ๐ŸŽฏ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", ...
๐ŸŽฏ๐Ÿ–ผ๏ธ๐ŸŒŸ New Research Alert - ICLR 2024! ๐ŸŒŸ ๐Ÿ–ผ๏ธ๐ŸŽฏ ๐Ÿ“„ Title: Adversarial AutoMixup ๐Ÿ–ผ๏ธ ๐Ÿ“ Description: Adversarial AutoMixup is an approach to image classification augmentation. By alternately optimizing a classifier and a mixed-sample generator, it attempts to generate challenging samples and improve the robustness of the classifier against overfitting. ๐Ÿ‘ฅ Authors: Huafeng Qin et al. ๐Ÿ“… Conference: ICLR, May 7-11, 2024 | Vienna, Austria ๐Ÿ‡ฆ๐Ÿ‡น ๐Ÿ”— Paper: https://huggingface.co/papers/2312.11954 ๐Ÿ“ Repository: https://github.com/JinXins/Adversarial-AutoMixup ๐Ÿ“š More Papers: more cutting-edge research presented at other conferences in the https://huggingface.co/spaces/DmitryRyumin/NewEraAI-Papers curated by @DmitryRyumin ๐Ÿ” Keywords: #AutoMixup #ImageClassification #ImageAugmentation #AdversarialLearning #ICLR2024 #DeepLearning #Innovation
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/nRCxbVng_PPBqKd-Z3KVc.jpeg", "fullname": "Dmitry Ryumin", "name": "DmitryRyumin", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 377, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/6493306970d925ae80523a53/PrpxY45EnzC_SYovZANjZ.png" }, { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/6493306970d925ae80523a53/YLwo41WQ9YKN5QSea82PE.png" }, { "type": "image...
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/nRCxbVng_PPBqKd-Z3KVc.jpeg", "fullname": "Dmitry Ryumin", "name": "DmitryRyumin", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 377 } ]
[ { "reaction": "โค๏ธ", "users": [ "samusenps", "osanseviero", "clem", "beekay24", "lunarflu", "atrxial", "MiSTe-R", "nikgr", "PantheonAI" ], "count": 9 }, { "reaction": "๐Ÿ”ฅ", "users": [ "DmitryRyumin", "clem", "lunarflu...
2024-04-01T17:52:04.000Z
2024-04-04T23:40:30.370Z
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/nRCxbVng_PPBqKd-Z3KVc.jpeg", "fullname": "Dmitry Ryumin", "name": "DmitryRyumin", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 377, "isFollowing": false }, { ...
/posts/DmitryRyumin/788381176898132
1,414
4
630974928940204
[ { "type": "text", "value": "Little know gem: the Open-source Cookbook", "raw": "Little know gem: the Open-source Cookbook", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null,...
Little know gem: the Open-source Cookbook A collection of notebooks for building practical AI applications using open-source tools and models: https://lnkd.in/e6m6Jmwu Doc: https://lnkd.in/e3FE6TUq Currently contains 16 notebooks in English (and some in Chinese): 1. Using LLM-as-a-judge ๐Ÿง‘โ€โš–๏ธ for an automated and versatile evaluation 2. Create a legal preference dataset 3. Suggestions for Data Annotation with SetFit in Zero-shot Text Classification 4. Implementing semantic cache to improve a RAG system 5. Building A RAG Ebook โ€œLibrarianโ€ Using LlamaIndex 6. Stable Diffusion Interpolation 7. Building A RAG System with Gemma, MongoDB and Open Source Models 8. Prompt Tuning with PEFT Library 9. Migrating from OpenAI to Open LLMs Using TGIโ€™s Messages API 10. Automatic Embeddings with TEI through Inference Endpoints 11. Simple RAG for GitHub issues using Hugging Face Zephyr and LangChain 12. Embedding multimodal data for similarity search using ๐Ÿค— transformers, ๐Ÿค— datasets and FAISS 13. Fine-tuning a Code LLM on Custom Code on a single GPU 14. RAG Evaluation Using Synthetic data and LLM-As-A-Judge 15. Advanced RAG on HuggingFace documentation using LangChain 16. Detecting Issues in a Text Dataset with Cleanlab
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1583857746553-5df7e9e5da6d0311fd3d53f9.jpeg", "fullname": "Thomas Wolf", "name": "thomwolf", "type": "user", "isPro": true, "isHf": true, "isMod": false, "followerCount": 704, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "giux78", "samusenps", "osanseviero", "clem", "beekay24", "ajibawa-2023", "vladbogo", "peaceAsh", "lunarflu", "nikgr", "vikas" ], "count": 11 }, { "reaction": "โค๏ธ", "users": [ "samusenp...
2024-04-01T14:06:19.000Z
2024-04-01T14:06:19.658Z
[]
/posts/thomwolf/630974928940204
1,992
0
402648201818133
[ { "type": "text", "value": "Diaries of Open Source. Part 13 ๐Ÿค—", "raw": "Diaries of Open Source. Part 13 ๐Ÿค—", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\...
Diaries of Open Source. Part 13 ๐Ÿค— ๐ŸคTwo different bitnet 1.5 open-source replications Original paper: https://hf.co/papers/2402.17764 1bitllm experiment: https://hf.co/blog/joey00072/experiments-with-bitnet-1-5 NousResearch experiment https://hf.co/NousResearch/OLMo-Bitnet-1B ๐ŸฅณTiny and large multimodal models great for embeddings GitHub: https://github.com/unum-cloud/uform Encoders: https://hf.co/collections/unum-cloud/multimodal-encoders-660553903617c5297eb16838 ONNX weights: https://hf.co/collections/unum-cloud/uform-vl-english-large-onnx-66055a57c182d846f3bc1949 ๐Ÿ“œ SMPLer-X: Expressive Human Pose and Shape Estimation Project website: https://caizhongang.com/projects/SMPLer-X/ Demo: https://huggingface.co/spaces/caizhongang/SMPLer-X Paper: https://hf.co/papers/2309.17448 ๐Ÿง™GeoWizard: 3D Geometry Estimation Project website: https://fuxiao0719.github.io/projects/geowizard/ Demo: https://hf.co/spaces/lemonaddie/geowizard Misc models and datasets - Dolphin-2.8-mistral-7b-v0.2 https://hf.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02 - Hermes-2-Pro-11B, a self-frankenmerge 11B variant https://hf.co/mattshumer/Hermes-2-Pro-11B - Large conversational dataset based on Usenet data in the Italian language https://hf.co/datasets/mii-community/UsenetArchiveIT-conversations
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6032802e1f993496bc14d9e3/w6hr-DEQot4VVkoyRIBiy.png", "fullname": "Omar Sanseviero", "name": "osanseviero", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 2868, "isFollowing": false }
[]
[]
[ { "reaction": "โค๏ธ", "users": [ "ruggsea", "mrinaldi", "DmitryRyumin", "samusenps", "giux78", "clem", "vladbogo", "lunarflu", "nikgr", "abidlabs" ], "count": 10 }, { "reaction": "๐Ÿ”ฅ", "users": [ "samusenps", "giux78",...
2024-04-01T11:02:12.000Z
2024-04-01T14:04:53.783Z
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6032802e1f993496bc14d9e3/w6hr-DEQot4VVkoyRIBiy.png", "fullname": "Omar Sanseviero", "name": "osanseviero", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 2868, "isFollowing":...
/posts/osanseviero/402648201818133
2,272
3
581937223513516
[ { "type": "text", "value": "This is the closest Iโ€™ve seen of a scalable AI/LLM Operating System - it has all the major ingredients of a feasible AI OS 1 architecture:", "raw": "This is the closest Iโ€™ve seen of a scalable AI/LLM Operating System - it has all the major ingredients of a feasible AI OS 1 ar...
This is the closest Iโ€™ve seen of a scalable AI/LLM Operating System - it has all the major ingredients of a feasible AI OS 1 architecture: - Extends classical OS functionalities with an LLM Kernel. - Multi agent-centric approach. - Optimized resource allocation system that allows for LLM-based tasks and Classical OS tasks to coexist. - An Agent Scheduler that can perform classical os operations (FIFO, RR). - A Context Manager to improve alignment. - Lazy Memory Manager for agents (ensures data is stored and accessible only while the agent is active) - An Enhanced security module for the AI-driven environment. It does hit all checkpoints, doesnโ€™t it? An upscale version of @karpathyโ€™s. Code: https://github.com/agiresearch/AIOS
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6438a9027de34e8ea7e4b257/vib8QSd1AWMr_bR9ig_xJ.jpeg", "fullname": "Jaward Sesay", "name": "Jaward", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 191, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/6438a9027de34e8ea7e4b257/wFxgCDc4qA6KCxtvxErAp.jpeg" }, { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/6438a9027de34e8ea7e4b257/1SXcuxpfYIShdA39WyP3a.jpeg" }, { "type": "ima...
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1660434061546-62f83661fe21cc4875221c0f.jpeg", "fullname": "Andrej K", "name": "karpathy", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 476 } ]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "ajibawa-2023", "osanseviero", "phuong-d-h-nguyen", "Dlbk", "rsamantha", "KvrParaskevi", "AIAdventurist", "JoshuaGu", "NeuralNovel", "thomwolf", "kazeMace", "disham993", "kubo-vn", "abhinit21",...
2024-03-30T12:17:22.000Z
2024-03-31T04:13:32.107Z
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64aea8ff67511bd3d965697b/Jxn52EmDF5RApJh8antxn.jpeg", "fullname": "Feynman Innovations", "name": "ajibawa-2023", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 138, "isFollow...
/posts/Jaward/581937223513516
4,511
4
424262013167981
[ { "type": "text", "value": "๐Ÿš€๐Ÿš€ Exciting times for the document AI community! ", "raw": "๐Ÿš€๐Ÿš€ Exciting times for the document AI community! ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line"...
๐Ÿš€๐Ÿš€ Exciting times for the document AI community! We're thrilled to announce the release of some of the largest OCR datasets available to the public. ๐Ÿ”ฅ With over 26 million pages , 18 billion text tokens, and 6TB of data, these resources are a significant leap forward for document AI research. Here's how to access these datasets quickly: ``` from datasets import load_dataset pdfa_dataset = load_dataset('pixparse/pdfa-eng-wds', streaming=True) IDL_dataset = load_dataset('pixparse/idl-wds', streaming=True) ``` This enables you to stream them directly, integrating seamlessly with your projects using the Hugging Face datasets library. On the hub, you can find them here: https://huggingface.co/datasets/pixparse/pdfa-eng-wds https://huggingface.co/datasets/pixparse/idl-wds For lean data loading, the new [chug](https://github.com/huggingface/chug) library offers a solution with pdf decoding: ``` import chug task_cfg = chug.DataTaskDocReadCfg( page_sampling='all', ) data_cfg = chug.DataCfg( source='pixparse/pdfa-eng-wds', split='train', batch_size=None, format='hfids', num_workers=0, ) data_loader = chug.create_loader( data_cfg, task_cfg, ) sample = next(iter(data_loader)) ``` We owe a huge thank you to Peter Wyatt, Kate Tasker, Rachel Taketa, Ali Furkan Biten, Ruben Tito, and their colleagues for their contributions. Their work putting these datasets together has been invaluable. ๐Ÿค— Looking Ahead: We're on a mission to enhance document AI capabilities, and these datasets are just the beginning. With your engagement and innovation, we're confident in the community's ability to develop robust OCR solutions. We encourage you to explore these datasets, experiment with the code, and contribute to the collective progress in document AI. For detailed information on usage and licensing, please refer to the dataset cards on the Hugging Face hub.
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64789feb79f2d49511ed7db4/IzaIwiVgnkTZHrcLDQk0C.jpeg", "fullname": "Pablo Montalvo", "name": "Molbap", "type": "user", "isPro": true, "isHf": true, "isMod": false, "followerCount": 97, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "abidlabs", "VictorSanh", "lhoestq", "Dlbk", "thomwolf", "manishiitg", "clefourrier", "IlyasMoutawwakil", "osanseviero", "samusenps", "DmitryRyumin", "ajibawa-2023", "Pclanglais", "loubnabnl", ...
2024-03-29T22:12:58.000Z
2024-04-02T18:55:27.055Z
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1667002643224-604a5184dca2c7ac7508b849.jpeg", "fullname": "Ross Wightman", "name": "rwightman", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 221, "isFollowing": false }, ...
/posts/Molbap/424262013167981
5,000
4
712385114238442
[ { "type": "text", "value": "Diaries of Open Source. Part 12 ๐Ÿค—", "raw": "Diaries of Open Source. Part 12 ๐Ÿค—", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\...
Diaries of Open Source. Part 12 ๐Ÿค— ๐Ÿš€Alibaba releases Qwen1.5-MoE-A2.7B, an interesting MoE with 2.7B activated parameters and 64 experts Blog https://qwenlm.github.io/blog/qwen-moe/ Demo: https://hf.co/spaces/Qwen/qwen1.5-MoE-A2.7B-Chat-demo Models: https://hf.co/Qwen GitHub: https://github.com/QwenLM/Qwen1.5 ๐ŸŽตVoiceCraft, SOTA speech editing and text to speech GitHub: https://github.com/jasonppy/VoiceCraft Model: https://huggingface.co/pyp1/VoiceCraft ๐Ÿ AI21Labs release Jamba, an SSM-Transformer, pretrained MoE which allows a large context window (256K) and high throughput Blog https://www.ai21.com/blog/announcing-jamba Model https://huggingface.co/ai21labs/Jamba-v0.1 โœจ Berkeley releases Starling-LM-7B, an RLHF-ed model, and -RM-34B, a Yi-based reward model very good for its size Starling Beta: https://hf.co/Nexusflow/Starling-LM-7B-beta Starling RM: https://hf.co/Nexusflow/Starling-RM-34B ๐Ÿ–ฅ๏ธStability releases Stable Code Instruct 3B, an instruct model for code generation Blog: https://stability.ai/news/introducing-stable-code-instruct-3b Demo: https://hf.co/spaces/stabilityai/stable-code-instruct-3b Report: https://stability.ai/s/Stable_Code_TechReport_release.pdf ๐Ÿ“šCommon Corpus: the largest public domain dataset for training LLMs Blog: https://hf.co/blog/Pclanglais/common-corpus Dataset: https://hf.co/collections/PleIAs/common-corpus-65d46e3ea3980fdcd66a5613 Misc: โšกGaLore: a very memory-efficient technique that allows pretraining models in consumer GPUs https://hf.co/blog/galore Moirai ๐Ÿ“ˆMoirai, foundation models for time series forecasting https://hf.co/collections/Salesforce/moirai-10-r-models-65c8d3a94c51428c300e0742 ๐Ÿ”ฅ Mistral-ORPO-Capybara-7K, a high-quality Mistral fine-tune using ORPO, a new alignment technique https://hf.co/kaist-ai/mistral-orpo-capybara-7k ๐ŸคฏAPISR, an anime super-resolution upscaling model https://hf.co/spaces/HikariDawn/APISR
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6032802e1f993496bc14d9e3/w6hr-DEQot4VVkoyRIBiy.png", "fullname": "Omar Sanseviero", "name": "osanseviero", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 2868, "isFollowing": false }
[]
[]
[ { "reaction": "โค๏ธ", "users": [ "samusenps", "Dlbk", "manishiitg", "Joseph717171", "DmitryRyumin", "ajibawa-2023", "HikariDawn", "ElenaRyumina", "Chunte", "gabrielmbmb", "mmhamdy", "callmeraxz", "clem", "lunarflu", "nik...
2024-03-29T21:24:04.000Z
2024-03-31T09:22:45.335Z
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6032802e1f993496bc14d9e3/w6hr-DEQot4VVkoyRIBiy.png", "fullname": "Omar Sanseviero", "name": "osanseviero", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 2868, "isFollowing":...
/posts/osanseviero/712385114238442
3,514
4
857297626421239
[ { "type": "text", "value": "Happy to share Living Images and the demo video of the product outpainting model behind it ๐Ÿš€", "raw": "Happy to share Living Images and the demo video of the product outpainting model behind it ๐Ÿš€", "href": null, "resource": null, "url": null, "code": null, ...
Happy to share Living Images and the demo video of the product outpainting model behind it ๐Ÿš€ Send your generation requests in the thread ๐Ÿงตor use https://img.coframe.ai
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1626345802673-60ec109280b434965987d1de.jpeg", "fullname": "Aleksey Korshuk", "name": "AlekseyKorshuk", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 153, "isFollowing": false }
[ { "type": "video", "url": "https://cdn-uploads.huggingface.co/production/uploads/60ec109280b434965987d1de/zeDFWoDjxjYIMZQOWzF65.mp4" } ]
[]
[ { "reaction": "๐Ÿš€", "users": [ "AlekseyKorshuk", "Glavin001", "tunahfish", "onum", "TamasZumpf", "clem", "lunarflu", "nikgr", "qyxs" ], "count": 9 }, { "reaction": "๐Ÿค", "users": [ "shiv2050", "AlekseyKorshuk", "luna...
2024-03-29T19:53:20.000Z
2024-03-29T19:53:20.856Z
[]
/posts/AlekseyKorshuk/857297626421239
3,427
0
174172922896344
[ { "type": "text", "value": "Fun fact about evaluation!", "raw": "Fun fact about evaluation!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": ...
Fun fact about evaluation! Did you know that, if you evaluate the same model, with the same prompt formatting & the same fixed few-shot examples, only changing โ™ป๏ธthe order in which the few shot examples are added to the prompt โ™ป๏ธ you get a difference of up to 3 points in evaluation score? I did a small experiment using some MMLU subsets on the best performing 7B and lower pretrained models from the leaderboard. I tried 8 different prompting methods (containing more or less information, such as just the question, or Question: question, or Question: question Choices: ..., see the x axis) that are commonly used in evaluation. I then compared the results for all these methods, in 5-shot, during 2 runs. The *only difference* between the first and second run being that the samples used in few-shot are not introduced in the same order. For example, run one would have been "A B C D E Current sample", vs, in run 2, "D C E A B Current sample". All the other experiment parameters stayed exactly the same. As you can see on the attached picture, you get a difference of up to 3 points between the 2 few-shot samples shuffling. So, when just changing *the order of the few shot samples* can change your results by several points, what is the impact of all other "minimal" and unreported prompting changes? -> Any kind of model score, provided without an evaluation script for reproducibility, is basically bullshit (or coms). -> This is why we need reproducible evaluation in a fair and exactly similar setup, using evaluation suites such as `lm_eval` from the Harness, `lighteval` from HF, or the Open LLM Leaderboard.
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1644340617257-noauth.png", "fullname": "Clรฉmentine Fourrier", "name": "clefourrier", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 459, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/6202a599216215a22221dea9/tweU_jkcnWjnF3GProUia.png" } ]
[]
[ { "reaction": "๐Ÿ‘", "users": [ "shiv2050", "ssml2050", "InferenceIllusionist", "mmhamdy", "ricfergas", "giux78", "clem", "lunarflu", "nikgr" ], "count": 9 }, { "reaction": "๐Ÿ”ฅ", "users": [ "osanseviero", "clem", "lun...
2024-03-29T19:11:35.000Z
2024-04-02T14:44:47.873Z
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63e80664e02ee67e8e570ec4/rGfRhywmjd_lbqfYzOEdd.png", "fullname": "EsKa", "name": "SerialKicked", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 12, "isFollowing": false }, ...
/posts/clefourrier/174172922896344
2,350
4
175193132662887
[ { "type": "text", "value": "Teraflop AI is excited to help support the Caselaw Access Project and Harvard Library Innovation Lab, in the release of over 6.6 million state and federal court decisions published throughout U.S. history. It is important to democratize fair access to data to the public, legal co...
Teraflop AI is excited to help support the Caselaw Access Project and Harvard Library Innovation Lab, in the release of over 6.6 million state and federal court decisions published throughout U.S. history. It is important to democratize fair access to data to the public, legal community, and researchers. This is a processed and cleaned version of the original CAP data. During the digitization of these texts, there were erroneous OCR errors that occurred. We worked to post-process each of the texts for model training to fix encoding, normalization, repetition, redundancy, parsing, and formatting. Teraflop AIโ€™s data engine allows for the massively parallel processing of web-scale datasets into cleaned text form. Link to the processed dataset: https://huggingface.co/datasets/TeraflopAI/Caselaw_Access_Project The Caselaw Access Project dataset is licensed under the CC0 License. We plan to release trillions of commercially licensed text tokens, images, audio, videos, and other datasets spanning numerous domains and modalities over the next months. If you are interested in contributing commercially licensed data be sure to reach out: https://twitter.com/EnricoShippole Follow us for the next collaborative dataset releases: https://twitter.com/TeraflopAI
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6276ba3c2d26ac639e5a2b01/k7LHkSbNjPR31ma4EereF.png", "fullname": "Enrico Shippole", "name": "conceptofmind", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 128, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/6276ba3c2d26ac639e5a2b01/AHQkbn1YQnTu926M552z9.png" } ]
[]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "osanseviero", "samusenps", "rsamantha", "Pclanglais", "muhtasham", "ssml2050", "clem", "nikgr" ], "count": 8 }, { "reaction": "โค๏ธ", "users": [ "samusenps", "clem" ], "count": 2 }, { ...
2024-03-29T16:44:42.000Z
2024-03-29T16:44:42.733Z
[]
/posts/conceptofmind/175193132662887
2,501
0
819003891866897
[ { "type": "text", "value": "New - add your bluesky account to your HF profile:", "raw": "New - add your bluesky account to your HF profile:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", ...
New - add your bluesky account to your HF profile: https://huggingface.co/settings/profile Is the grass greener, the sky bluer? Will try and figure it out at https://bsky.app/profile/jeffboudier.bsky.social By the way, HF people starter pack https://bsky.app/starter-pack/huggingface.bsky.social/3laz5x7naiz22
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1605114051380-noauth.jpeg", "fullname": "Jeff Boudier", "name": "jeffboudier", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 195, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/5fac18fb5eec0323e9470ba2/XOdGKkHok6Ebxci1wO0Ml.png" } ]
[]
[ { "reaction": "๐Ÿค—", "users": [ "rwightman", "John6666", "vinhnx90" ], "count": 3 } ]
2024-11-22T20:17:38.000Z
2024-11-22T20:17:38.713Z
[]
/posts/jeffboudier/819003891866897
688
0
483522010371958
[ { "type": "text", "value": "Weekend Dribble ๐Ÿ“ฆ๐Ÿบ", "raw": "Weekend Dribble ๐Ÿ“ฆ๐Ÿบ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "r...
Weekend Dribble ๐Ÿ“ฆ๐Ÿบ Adapters for Product Ad Backdrops, Smooth Polaroids, Minimalist Sketch cards, Super Blends!! ๐ŸคDemo on: https://huggingface.co/spaces/prithivMLmods/FLUX-LoRA-DLC Stranger Zones : ๐Ÿ‘‰๐Ÿผ{ Super Blend } : https://huggingface.co/strangerzonehf/Flux-Super-Blend-LoRA ๐Ÿ‘‰๐Ÿผ{ Product Concept Ad } : https://huggingface.co/prithivMLmods/Flux-Product-Ad-Backdrop ๐Ÿ‘‰๐Ÿผ{ Frosted Mock-ups } : https://huggingface.co/prithivMLmods/Flux.1-Dev-Frosted-Container-LoRA ๐Ÿ‘‰๐Ÿผ{ Polaroid Plus } : https://huggingface.co/prithivMLmods/Flux-Polaroid-Plus ๐Ÿ‘‰๐Ÿผ{Sketch Cards} : https://huggingface.co/prithivMLmods/Flux.1-Dev-Sketch-Card-LoRA ๐Ÿ‘‰Stranger Zone: https://huggingface.co/strangerzonehf ๐Ÿ‘‰Flux LoRA Collections: https://huggingface.co/collections/prithivMLmods/flux-lora-collections-66dd5908be2206cfaa8519be . . . @prithivMLmods ๐Ÿค—
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65bb837dbfb878f46c77de4c/UVtVbF_3rdt0DC8xTkpL1.jpeg", "fullname": "Prithiv Sakthi", "name": "prithivMLmods", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 393, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/bJU0-GPle-FgfsAtNGEbx.png" }, { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/uF8WYBLOpwtPQnMZC-XNI.png" }, { "type": "image...
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65bb837dbfb878f46c77de4c/UVtVbF_3rdt0DC8xTkpL1.jpeg", "fullname": "Prithiv Sakthi", "name": "prithivMLmods", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 393 } ]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "Ngrthm", "darksfx", "John6666", "rdrede", "ai4life44", "victor", "loubnabnl", "nzhenev", "linoyts" ], "count": 9 }, { "reaction": "๐Ÿค—", "users": [ "RenderIo", "Ngrthm", "darksfx", ...
2024-11-22T18:09:16.000Z
2024-11-22T18:15:45.165Z
[]
/posts/prithivMLmods/483522010371958
1,440
0
308810445844150
[ { "type": "text", "value": "Can language models replace developers? #RepoCod says โ€œNot Yetโ€, because GPT-4o and other LLMs have <30% accuracy/pass@1 on real-world code generation tasks. ", "raw": "Can language models replace developers? #RepoCod says โ€œNot Yetโ€, because GPT-4o and other LLMs have <30% ac...
Can language models replace developers? #RepoCod says โ€œNot Yetโ€, because GPT-4o and other LLMs have <30% accuracy/pass@1 on real-world code generation tasks. - Leaderboard https://lt-asset.github.io/REPOCOD/ - Dataset: https://huggingface.co/datasets/lt-asset/REPOCOD @jiang719 @shanchao @Yiran-Hu1007 Compared to #SWEBench, RepoCod tasks are - General code generation tasks, while SWE-Bench tasks resolve pull requests from GitHub issues. - With 2.6X more tests per task (313.5 compared to SWE-Benchโ€™s 120.8). Compared to #HumanEval, #MBPP, #CoderEval, and #ClassEval, RepoCod has 980 instances from 11 Python projects, with - Whole function generation - Repository-level context - Validation with test cases, and - Real-world complex tasks: longest average canonical solution length (331.6 tokens) and the highest average cyclomatic complexity (9.00) Introducing hashtag #RepoCod-Lite ๐ŸŸ for faster evaluations: 200 of the toughest tasks from RepoCod with: - 67 repository-level, 67 file-level, and 66 self-contains tasks - Detailed problem descriptions (967 tokens) and long canonical solutions (918 tokens) - GPT-4o and other LLMs have < 10% accuracy/pass@1 on RepoCod-Lite tasks. - Dataset: https://huggingface.co/datasets/lt-asset/REPOCOD_Lite #LLM4code #LLM #CodeGeneration #Security
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65831342f9c5cda913df366a/h7MLYt--shRYQj4-q5XmR.jpeg", "fullname": "Lin Tan", "name": "lin-tan", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 5, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/65831342f9c5cda913df366a/YGqQYEKVWnIzuUWX4WorD.png" }, { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/65831342f9c5cda913df366a/as5x1o-983nkjp2ke9MiL.png" }, { "type": "image...
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/629e4ca2f2bda18349b6d330/gSnGTLm2ugpINECylbuuQ.jpeg", "fullname": "Nan Jiang", "name": "jiang719", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 4 }, { "avatarUrl": "/av...
[ { "reaction": "๐Ÿ”ฅ", "users": [ "John6666", "AdinaY", "clem", "lin-tan", "shanchao", "bhadresh-savani", "iky1e" ], "count": 7 }, { "reaction": "๐Ÿค—", "users": [ "prithivMLmods", "lin-tan", "shanchao", "mvelic" ], "co...
2024-11-22T17:14:55.000Z
2024-11-23T02:39:03.479Z
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/649a54b896d5747b35e2163b/tdZmsov6fN1VHztaE5kX9.jpeg", "fullname": "Vezora", "name": "Vezora", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 116, "isFollowing": false } ]
/posts/lin-tan/308810445844150
1,077
1
475162100629292
[ { "type": "text", "value": "401 Client Error: Unauthorized for url: ", "raw": "401 Client Error: Unauthorized for url: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, ...
401 Client Error: Unauthorized for url: https://huggingface.co/black-forest-labs/FLUX.1-dev/resolve/main/vae/config.json The above exception was the direct cause of the following exception: huggingface_hub.errors.GatedRepoError: 401 Client Error. (Request ID: Root=1-6740a1d6-26b6f3b44563a26a49aea19d;fa54ac02-6068-44e1-b499-f793dd20335c) Cannot access gated repo for url https://huggingface.co/black-forest-labs/FLUX.1-dev/resolve/main/vae/config.json. Access to model black-forest-labs/FLUX.1-dev is restricted. You must have access to it and be authenticated to access it. Please log in. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/user/app/app.py", line 32, in <module> good_vae = AutoencoderKL.from_pretrained(base_model, subfolder="vae", torch_dtype=dtype).to(device)
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/66d45453cc638c79e6447240/CqV6FDv3Wv6ORUBuM02R6.png", "fullname": "John", "name": "Keltezaa", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 18, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/66d45453cc638c79e6447240/ty9lwSVT7jvMLq28XsLcg.png" }, { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/66d45453cc638c79e6447240/CWl-lTx0n8FbqxdgULkmp.png" } ]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666" ], "count": 1 } ]
2024-11-22T15:35:26.000Z
2024-11-22T17:25:10.247Z
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6740ab9e864cf5f619a70ee4/kLxfZKMB08I0F4XSD47MX.jpeg", "fullname": "Miguel Saraiva", "name": "n1k0p0l", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": null, "isFollowing": fal...
/posts/Keltezaa/475162100629292
333
2
557320861915536
[ { "type": "text", "value": "Marco-o1๐Ÿ”ฅ an open Reasoning Models by AIDC team", "raw": "Marco-o1๐Ÿ”ฅ an open Reasoning Models by AIDC team", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", ...
Marco-o1๐Ÿ”ฅ an open Reasoning Models by AIDC team Model: https://huggingface.co/AIDC-AI/Marco-o1 Paper: https://huggingface.co/papers/2411.14405 โœจFine-tuned with CoT data (open-source + synthetic). โœจExpands solution space with MCTS, guided by model confidence. โœจNovel reasoning strategies & self-reflection enhance complex problem-solving. โœจPioneers LRM in multilingual machine translation.
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63a369d98c0c89dcae3b8329/6OUJ7Hc9T1jXynYH3FGaf.png", "fullname": "Adina Yakefu", "name": "AdinaY", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 240, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿš€", "users": [ "enzostvs", "John6666", "ai-everyday", "AtAndDev" ], "count": 4 }, { "reaction": "๐Ÿ”ฅ", "users": [ "prithivMLmods", "AtAndDev" ], "count": 2 } ]
2024-11-22T15:24:14.000Z
2024-11-22T16:22:03.111Z
[]
/posts/AdinaY/557320861915536
787
0
425637070931186
[ { "type": "text", "value": "๐Ÿš€๐Ÿš€๐Ÿš€Introducing Insight-V! An early attempt towards o1-like multi-modal reasoning. ", "raw": "๐Ÿš€๐Ÿš€๐Ÿš€Introducing Insight-V! An early attempt towards o1-like multi-modal reasoning. ", "href": null, "resource": null, "url": null, "code": null, "user": null,...
๐Ÿš€๐Ÿš€๐Ÿš€Introducing Insight-V! An early attempt towards o1-like multi-modal reasoning. We offer a structured long-chain visual reasoning data generation pipeline and a multi-agent system to unleash the reasoning potential of MLLMs. ๐Ÿ“œ Paper: https://arxiv.org/abs/2411.14432 ๐Ÿ› ๏ธ Github: https://github.com/dongyh20/Insight-V ๐Ÿ’ผ Model Weight: https://huggingface.co/collections/THUdyh/insight-v-673f5e1dd8ab5f2d8d332035
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/652965773a416e1f2173443b/y9MB8YgHzbwCXAc4EI9T3.jpeg", "fullname": "Yuhao Dong", "name": "THUdyh", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 24, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "AdinaY", "liuziwei7", "Mackya", "Mondayfelix" ], "count": 4 }, { "reaction": "๐Ÿ‘€", "users": [ "John6666", "AdinaY", "Mackya" ], "count": 3 }, { "reaction": "๐Ÿ˜Ž", "users": [ "Mackya" ],...
2024-11-22T14:51:23.000Z
2024-11-22T14:51:23.815Z
[]
/posts/THUdyh/425637070931186
781
0
519030878351508
[ { "type": "text", "value": "Made a new app to visualize the LLM race โ‡’ ๐—ก๐—ผ ๐—˜๐˜‚๐—ฟ๐—ผ๐—ฝ๐—ฒ๐—ฎ๐—ป ๐—ฐ๐—ผ๐—บ๐—ฝ๐—ฎ๐—ป๐˜† ๐—ถ๐—ป ๐˜๐—ต๐—ฒ ๐˜๐—ผ๐—ฝ ๐Ÿญ๐Ÿฌ ๐Ÿ‡ช๐Ÿ‡บโŒ", "raw": "Made a new app to visualize the LLM race โ‡’ ๐—ก๐—ผ ๐—˜๐˜‚๐—ฟ๐—ผ๐—ฝ๐—ฒ๐—ฎ๐—ป ๐—ฐ๐—ผ๐—บ๐—ฝ๐—ฎ๐—ป๐˜† ๐—ถ๐—ป ๐˜๐—ต๐—ฒ ๐˜๐—ผ๐—ฝ ๐Ÿญ๐Ÿฌ ๐Ÿ‡ช๐Ÿ‡บโŒ", "href": null, "resource": null, ...
Made a new app to visualize the LLM race โ‡’ ๐—ก๐—ผ ๐—˜๐˜‚๐—ฟ๐—ผ๐—ฝ๐—ฒ๐—ฎ๐—ป ๐—ฐ๐—ผ๐—บ๐—ฝ๐—ฎ๐—ป๐˜† ๐—ถ๐—ป ๐˜๐—ต๐—ฒ ๐˜๐—ผ๐—ฝ ๐Ÿญ๐Ÿฌ ๐Ÿ‡ช๐Ÿ‡บโŒ See the app here ๐Ÿ‘‰ https://huggingface.co/spaces/m-ric/llm-race-to-the-top I've adapted an app by @andrewrreed that tracks progress of LLMs (https://huggingface.co/spaces/andrewrreed/closed-vs-open-arena-elo), on the Chatbot Arena leaderboard, to compare companies from different countries. The outcome is quite sad, as a Frenchman and European. The top 10 is exclusively US ๐Ÿ‡บ๐Ÿ‡ธ and Chinese ๐Ÿ‡จ๐Ÿ‡ณ companies (after great Chinese LLM releases recently, like the Qwen2.5 series), with the notable exception of Mistral AI ๐Ÿ‡ซ๐Ÿ‡ท. American companies are making fast progress, Chinese ones even faster. Europe is at risk of being left behind. And the EU AI Act hasn't even come into force yet to slow down the EU market. We need to wake up ๐Ÿ˜ฌ โš ๏ธ Caution: This Chatbot Arena ELO ranking is not the most accurate, especially at high scores like this, because LLM makers can game it to some extent.
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63d10d4e8eaa4831005e92b5/7p7-OmWM6PqqCs7ZStPGD.jpeg", "fullname": "Aymeric Roucher", "name": "m-ric", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 494, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/63d10d4e8eaa4831005e92b5/S6JeMca06C9mnh8DYpkEV.png" } ]
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61d375fd733d3a83ecd1bba9/oIXwvvs1-HaCnJXMCZgkc.jpeg", "fullname": "Andrew Reed", "name": "andrewrreed", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 106 } ]
[ { "reaction": "โค๏ธ", "users": [ "andrewrreed", "LiteSoulAI", "John6666", "clem", "AdinaY", "k-young", "okedialf", "AtAndDev", "Tanvir1337" ], "count": 9 } ]
2024-11-22T13:13:31.000Z
2024-11-22T13:18:31.612Z
[]
/posts/m-ric/519030878351508
932
0
247264884415285
[ { "type": "text", "value": "What a week! A recap for everything you missed โ„๏ธ", "raw": "What a week! A recap for everything you missed โ„๏ธ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", ...
What a week! A recap for everything you missed โ„๏ธ https://huggingface.co/collections/merve/nov-22-releases-673fbbcfc1c97c4f411def07 Multimodal โœจ > Mistral AI released Pixtral 124B, a gigantic open vision language model > Llava-CoT (formerly known as Llava-o1) was released, a multimodal reproduction of o1 model by PKU > OpenGVLab released MMPR: a new multimodal reasoning dataset > Jina has released Jina-CLIP-v2 0.98B multilingual multimodal embeddings > Apple released new SotA vision encoders AIMv2 LLMs ๐Ÿฆ™ > AllenAI dropped a huge release of models, datasets and scripts for Tรผlu, a family of models based on Llama 3.1 aligned with SFT, DPO and a new technique they have developed called RLVR > Jina has released embeddings-v3: new multilingual embeddings with longer context > Hugging Face released SmolTalk: synthetic dataset used to align SmolLM2 using supervised fine-tuning > Microsoft released orca-agentinstruct-1M-v1: a gigantic instruction dataset of 1M synthetic instruction pairs Image Generation ๐Ÿ–ผ๏ธ > Black Forest Labs released Flux 1. tools: four new models for different image modifications and two LoRAs to do image conditioning and better steer generations Lastly Hugging Face released a new library Observers: a lightweight SDK for monitoring interactions with AI APIs and easily store and browse them ๐Ÿ“š $ pip install observers
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1648113222875-6141a88b3a0ec78603c9e784.png", "fullname": "Merve Noyan", "name": "merve", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 5589, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "Steveeeeeeen", "victor", "loubnabnl", "John6666", "enzostvs", "clem", "AdinaY", "veeraleto", "LeonceNsh", "rwightman", "FranckAbgrall", "SherlockRamos", "Mouradology" ], "count": 13 }, {...
2024-11-22T12:31:25.000Z
2024-11-24T12:42:10.966Z
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/M9Ee6_TUrgsgmZ4IccQok.jpeg", "fullname": "prinz tim", "name": "myprime", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": null, "isFollowing": false } ]
/posts/merve/247264884415285
1,950
2
479387783609004
[ { "type": "text", "value": "Excited to share my analysis of the most groundbreaking DCN-V2 paper from ", "raw": "Excited to share my analysis of the most groundbreaking DCN-V2 paper from ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, ...
Excited to share my analysis of the most groundbreaking DCN-V2 paper from @Google, which introduces significant improvements to deep learning recommendation systems! Key technical highlights: >> Core Architecture - Starts with an embedding layer that handles both sparse categorical and dense features - Unique capability to handle variable embedding sizes from small to large vocabulary sizes - Cross network creates explicit bounded-degree feature interactions - Deep network complements with implicit feature interactions - Two combination modes: stacked and parallel architectures >> Key Technical Innovations - Enhanced cross layers with full matrix-based feature interaction learning instead of vector-based - Mixture of Low-Rank architecture with: * Multiple expert networks learning in different subspaces * Dynamic gating mechanism to adaptively combine experts * Efficient time complexity when specific conditions are met * Support for non-linear transformations in projected spaces >> Production Optimizations - Low-rank matrix approximation leveraging singular value decay patterns - Mixture-of-Experts decomposition into smaller subspaces - Efficient parameter allocation between cross and deep networks - Automatic feature interaction learning for higher-order interactions in multi-layered networks - Support for both homogeneous and heterogeneous polynomial patterns >> Real-World Impact - Successfully deployed across Google's recommendation systems - Significant gains in both offline accuracy and online metrics - Better performance-latency tradeoffs through low-rank approximations - Proven effectiveness on large-scale data with billions of training examples This represents a major leap forward in making deep learning recommendation systems more practical and efficient at scale. Thoughts? Would love to hear your experiences implementing similar architectures in production!
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/662bf5bfe93bb73804ef9344/WXYLnjjJ4SROkoveIi7If.png", "fullname": "Kuldeep Singh Sidhu", "name": "singhsidhukuldeep", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 219, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/662bf5bfe93bb73804ef9344/_eTkYBdP52SmSKwr2SEZY.jpeg" } ]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666", "clem" ], "count": 2 }, { "reaction": "โค๏ธ", "users": [ "clem" ], "count": 1 } ]
2024-11-22T11:01:09.000Z
2024-11-22T11:01:09.090Z
[]
/posts/singhsidhukuldeep/479387783609004
691
0
765598269884749
[ { "type": "text", "value": "Apple released AIMv2 ๐Ÿ a family of state-of-the-art open-set vision encoders", "raw": "Apple released AIMv2 ๐Ÿ a family of state-of-the-art open-set vision encoders", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": nu...
Apple released AIMv2 ๐Ÿ a family of state-of-the-art open-set vision encoders https://huggingface.co/collections/apple/aimv2-6720fe1558d94c7805f7688c > like CLIP, but add a decoder and train on autoregression ๐Ÿคฏ > 19 open models come in 300M, 600M, 1.2B, 2.7B with resolutions of 224, 336, 448 > Load and use with ๐Ÿค— transformers
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1648113222875-6141a88b3a0ec78603c9e784.png", "fullname": "Merve Noyan", "name": "merve", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 5589, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/6141a88b3a0ec78603c9e784/6CesZUCcICywT2AEF9r1s.png" } ]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "RedSparkie", "crun885", "John6666", "clem" ], "count": 4 }, { "reaction": "โค๏ธ", "users": [ "esab", "ivanfioravanti", "atasoglu", "wchai" ], "count": 4 }, { "reaction": "๐Ÿ‘", "users": [ ...
2024-11-22T10:07:29.000Z
2024-11-22T10:07:29.993Z
[]
/posts/merve/765598269884749
1,136
0
220418838710285
[ { "type": "text", "value": "Import any dataset from the Hub and configure your labeling tasks without needing any code!", "raw": "Import any dataset from the Hub and configure your labeling tasks without needing any code!", "href": null, "resource": null, "url": null, "code": null, "...
Import any dataset from the Hub and configure your labeling tasks without needing any code! Really excited about extending the Hugging Face Hub integration with many more streamlined features and workflows, and we would love to hear your feedback and ideas, so don't feel shy and reach out ๐Ÿซถ๐Ÿฝ https://huggingface.co/blog/argilla-ui-hub
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1677141720071-634ff41ff32062e9eb7b06a3.jpeg", "fullname": "David Berenstein", "name": "davidberenstein1957", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 167, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666", "dvilasuero", "NickyNicky" ], "count": 3 } ]
2024-11-04T16:15:53.000Z
2024-11-04T16:15:53.521Z
[]
/posts/davidberenstein1957/220418838710285
2,082
0
376713378079900
[ { "type": "text", "value": "Discovered an outrageous bug on the ChatGPT official website, especially for those using ad-blocking plugins. Please make sure to add ", "raw": "Discovered an outrageous bug on the ChatGPT official website, especially for those using ad-blocking plugins. Please make sure to a...
Discovered an outrageous bug on the ChatGPT official website, especially for those using ad-blocking plugins. Please make sure to add `browser-intake-datadoghq.com` to your ad block whitelist. The ChatGPT webpage relies on this site for heartbeat detection, but since it belongs to an ad tracking network, it's included in major ad-blocking lists. (If you're using Clash, also remember to add it to the whitelist.) Failing to do so may cause the ChatGPT web interface to display a greyed-out send button after clicking, with no response. For users with Chinese IP addresses, consider adding this URL to the rules of your U.S. node, as the response headers from this site will report the user's physical location to GPT.
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64bce15bafd1e46c5504ad38/bQFX1iFbXEBXcQvUNL811.png", "fullname": "Di Zhang", "name": "qq8933", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 108, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ‘", "users": [ "aust-t", "dingangui", "csabakecskemeti", "Csplk", "ai-everyday" ], "count": 5 }, { "reaction": "๐Ÿ‘€", "users": [ "John6666", "TouchNight" ], "count": 2 } ]
2024-11-04T15:09:31.000Z
2024-11-14T11:32:01.358Z
[ { "avatarUrl": "/avatars/b2725bb163fa15d6c5856121780d52eb.svg", "fullname": "Ci Splunk", "name": "Csplk", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 43, "isFollowing": false }, { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/produ...
/posts/qq8933/376713378079900
2,289
3
380609696467893
[ { "type": "text", "value": "๐ŸŽ‰ Celebrating One Year of #SauerkrautLM with Two Groundbreaking Releases! ", "raw": "๐ŸŽ‰ Celebrating One Year of #SauerkrautLM with Two Groundbreaking Releases! ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, ...
๐ŸŽ‰ Celebrating One Year of #SauerkrautLM with Two Groundbreaking Releases! We're thrilled to announce the release of SauerkrautLM-v2-14b in two specialized versions: https://huggingface.co/VAGOsolutions/SauerkrautLM-v2-14b-SFT and https://huggingface.co/VAGOsolutions/SauerkrautLM-v2-14b-DPO. Built on the robust Qwen2.5-14B foundation, these models represent a significant leap forward in multilingual AI capabilities. ๐Ÿ”ฌ Technical Breakthroughs: ๐Ÿ’  Innovative three-phase Fine-Tuning approach ๐Ÿ’  Two-step Spectrum SFT + one-step Spectrum DPO optimization phase for enhanced performance ๐Ÿ’  Balance of German and English language capabilities ๐Ÿ’  Advanced function calling - almost on par with Claude-3.5-Sonnet-20240620 ๐Ÿ‡ฉ๐Ÿ‡ช German Language Excellence: What sets this release apart is our unique achievement in simultaneously improving both German and English capabilities. Through our specialized training approach with over 1.2B tokens across two phases, we've managed to: ๐Ÿ’  Enhance German language understanding and generation (SFT Version > DPO Version) ๐Ÿ’  Maintain authentic German linguistic nuances ๐Ÿ’  Improve cross-lingual capabilities ๐Ÿ’  Preserve cultural context awareness ๐Ÿ“Š Training Innovation: Our three-phase approach targeted specific layer percentages (15%, 20% and 25%) with carefully curated datasets, including: ๐Ÿ’  Mathematics-focused content (proprietary classifier-selected) ๐Ÿ’  High-quality German training data ๐Ÿ’  Specialized function calling datasets ๐Ÿ’  Premium multilingual content ๐ŸŽ Community Contribution: We're also releasing two new datasets in a few days: 1๏ธโƒฃ SauerkrautLM-Fermented-GER-DPO: 3,300 high-quality German training samples 2๏ธโƒฃ SauerkrautLM-Fermented-Irrelevance-GER-DPO: 2,000 specialized samples for optimized function call irrelevance handling Thank you to our incredible community and partners who have supported us throughout this journey. Here's to another year of AI innovation!ย ๐Ÿš€
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64b999a40b24527e9c25583a/xFHCewJdf5EGn8qDPypqy.jpeg", "fullname": "David Golchinfar", "name": "DavidGF", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 49, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/64b999a40b24527e9c25583a/p2DKcLtfuRnsNBlsG6qSI.png" }, { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/64b999a40b24527e9c25583a/veWKzsQQtRA-RZ2A7-mIF.jpeg" }, { "type": "imag...
[]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "rizky-gumelar", "ZeroXClem", "John6666", "foscraft", "djuna", "not-lain" ], "count": 6 }, { "reaction": "๐Ÿ‘", "users": [ "flozi00", "Jason233" ], "count": 2 } ]
2024-11-04T14:34:23.000Z
2024-11-04T14:35:08.561Z
[]
/posts/DavidGF/380609696467893
2,973
0
878599389112746
[ { "type": "text", "value": "Early Morning Before Work Project:", "raw": "Early Morning Before Work Project:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\...
Early Morning Before Work Project: ๐ŸŒŒ Introducing Cascade of Semantically Integrated Layers (CaSIL): A Humorously Over-Engineered Algorithm That Actuallyโ€ฆ Works ๐Ÿคทโ€โ™‚๏ธ Let me introduce CaSIL โ€“ the Cascade of Semantically Integrated Layers. Imagine giving a single question the level of introspection typically reserved for philosophical debates or maybe therapy. In short, CaSIL is a pure Python reasoning algorithm that, in a series of semantically rich layers, takes any input and rebuilds it into a nuanced response thatโ€™s (surprisingly) meaningful to a human. Iโ€™ve been experimenting with various reasoning and agent approaches lately and decided to contribute my own quirky take on layered processing. Itโ€™s built without agent frameworksโ€”just good ol' Python and mathโ€”and it plays nicely with any LLM. The result? A transformation from simple responses to deeper, interconnected insights. Hereโ€™s a quick peek at the steps: โœจ How CaSIL Works: Initial Understanding: The first layer captures the basic concepts in your input, just as a warm-up. Relationship Analysis: A lightweight knowledge graph (because why not?) maps out related ideas and builds interconnections. Context Integration: Adds historical or contextual knowledge, bringing a bit of depth and relevance. Response Synthesis: Pieces it all together, aiming to produce a response that feels more like a conversation than an outdated search result. Does it work? Yes! And in record time, too. Admittedly, the code is roughโ€”two days of intense coding with some friendly help from Claude. The beauty of CaSIL is its simplicity and versatility; itโ€™s a pure algorithm without complex dependencies, making it easy to integrate into your own LLM setups. ๐Ÿ”— Explore the repo here: https://github.com/severian42/Cascade-of-Semantically-Integrated-Layers ๐Ÿ“œ Example outputs: https://github.com/severian42/Cascade-of-Semantically-Integrated-Layers/blob/main/examples.md
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64740cf7485a7c8e1bd51ac9/CXZCJm2x4ToT83pEIYyQR.png", "fullname": "Beckett Dillon", "name": "Severian", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 175, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666" ], "count": 1 } ]
2024-11-04T14:19:39.000Z
2024-11-04T14:19:39.731Z
[]
/posts/Severian/878599389112746
470
0
915742231261639
[ { "type": "text", "value": "I just shipped ", "raw": "I just shipped ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "inline_code", "value": null, "raw": "`retrain-pipelines 0.1.1`", "hre...
I just shipped `retrain-pipelines 0.1.1` today. The doc is also pimped compared to previous release. That was clearly not mature then. I'll have to focus on another project for the next couple weeks but, anyone feel free to open issues on the GitHub repo and discuss any interest you'd have there if you will (please?) ! In the meantime, you may enjoy retrying this : https://huggingface.co/blog/Aurelien-Morgan/stateful-metaflow-on-colab
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/651e93137b2a2e027f9e55df/5oXWJeEDCrMJLA4s_0I93.png", "fullname": "Aurรฉlien-Morgan CLAUDON", "name": "Aurelien-Morgan", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 9, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/651e93137b2a2e027f9e55df/Uwll3V6Tnc7p6LQ2ac5mh.png" } ]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666" ], "count": 1 } ]
2024-11-04T12:58:30.000Z
2024-11-04T12:58:30.527Z
[]
/posts/Aurelien-Morgan/915742231261639
454
0
169924015276572
[ { "type": "text", "value": "๐Ÿ™‹๐Ÿปโ€โ™‚๏ธhey there folks,", "raw": "๐Ÿ™‹๐Ÿปโ€โ™‚๏ธhey there folks,", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, ...
๐Ÿ™‹๐Ÿปโ€โ™‚๏ธhey there folks, periodic reminder : if you are experiencing โš ๏ธ500 errors โš ๏ธ or โš ๏ธ abnormal `spaces` behavior on load or launch โš ๏ธ we have a thread ๐Ÿ‘‰๐Ÿป https://discord.com/channels/879548962464493619/1295847667515129877 if you can record the problem and share it there , or on the forums in your own post , please dont be shy because i'm not sure but i do think it helps ๐Ÿค—๐Ÿค—๐Ÿค—
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62a3bb1cd0d8c2c2169f0b88/eT2TS0IlQbZtz-F_zHLz9.jpeg", "fullname": "Joseph [open/acc] Pollack", "name": "Tonic", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 313, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ‘", "users": [ "prithivMLmods", "Nymbo", "John6666", "AtAndDev", "clem", "eyov", "Zaws", "rrg92" ], "count": 8 }, { "reaction": "โค๏ธ", "users": [ "clem", "John6666" ], "count": 2 }, { "reaction": "...
2024-11-04T11:41:55.000Z
2024-11-09T02:33:22.761Z
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6640bbd0220cfa8cbfdce080/wiAHUu5ewawyipNs0YFBR.png", "fullname": "John Smith", "name": "John6666", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 398, "isFollowing": false }...
/posts/Tonic/169924015276572
3,233
2
788696446784520
[ { "type": "text", "value": "New Style, New Mix, New Drop ๐Ÿงค", "raw": "New Style, New Mix, New Drop ๐Ÿงค", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", ...
New Style, New Mix, New Drop ๐Ÿงค ๐ŸงจFlux LoRA DLC: https://huggingface.co/spaces/prithivMLmods/FLUX-LoRA-DLC ๐ŸŽ†Glowing-Body: https://huggingface.co/prithivMLmods/Glowing-Body-Flux-LoRA ๐ŸŽ†Electric-Blue: https://huggingface.co/prithivMLmods/Electric-Blue-Flux-LoRA ๐ŸŽ†Intense-Red: https://huggingface.co/prithivMLmods/Intense-Red-Flux-LoRA ๐ŸŽ†Clouds-Illusion: https://huggingface.co/prithivMLmods/Clouds-Illusion-Flux-LoRA ๐ŸŽ†Digital-Yellow: https://huggingface.co/prithivMLmods/Digital-Yellow-Flux-LoRA ๐ŸงจFlux LoRA Collection: https://huggingface.co/collections/prithivMLmods/flux-lora-collections-66dd5908be2206cfaa8519be . . . @prithivMLmods
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65bb837dbfb878f46c77de4c/UVtVbF_3rdt0DC8xTkpL1.jpeg", "fullname": "Prithiv Sakthi", "name": "prithivMLmods", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 393, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/VifKrKv_kxDWXE1dZLr6X.png" }, { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/tksVqqDwdOz9tRyxLfdf3.png" }, { "type": "image...
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65bb837dbfb878f46c77de4c/UVtVbF_3rdt0DC8xTkpL1.jpeg", "fullname": "Prithiv Sakthi", "name": "prithivMLmods", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 393 } ]
[ { "reaction": "โค๏ธ", "users": [ "Tonic", "realaliarain", "speedchemistry", "JayNagose", "prithivMLmods", "multimodalart", "radames", "d8rt8v", "Hamed744", "rdrede", "hypergod", "darksfx", "ai4life44", "faruqhrp", "Ngrth...
2024-11-04T10:35:40.000Z
2024-11-05T07:56:16.114Z
[]
/posts/prithivMLmods/788696446784520
5,603
0
301983274684168
[ { "type": "text", "value": "Vector Search (most) datasets on the Hugging Face Hub ๐Ÿ”ฆ", "raw": "Vector Search (most) datasets on the Hugging Face Hub ๐Ÿ”ฆ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": ...
Vector Search (most) datasets on the Hugging Face Hub ๐Ÿ”ฆ Powered by: Polars, DuckDB, Gradio and model2vec (lightning-fast embeddings by Stรฉphan Tulkens). Should work fast enough for datasets up to 100K. https://huggingface.co/spaces/davidberenstein1957/vectorsearch-hub-datasets
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1677141720071-634ff41ff32062e9eb7b06a3.jpeg", "fullname": "David Berenstein", "name": "davidberenstein1957", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 167, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿค—", "users": [ "davidberenstein1957", "Tonic", "prithivMLmods", "xi0v", "Nymbo", "djuna", "thanhkt" ], "count": 7 }, { "reaction": "๐Ÿš€", "users": [ "davidberenstein1957", "Tonic", "xi0v", "Nymbo", "Go...
2024-11-04T10:19:03.000Z
2024-11-04T10:19:03.151Z
[]
/posts/davidberenstein1957/301983274684168
3,077
0
419590652134553
[ { "type": "text", "value": "Hello researchers! Here are scripts to generate reviews on HF Daily Papers:", "raw": "Hello researchers! Here are scripts to generate reviews on HF Daily Papers:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, ...
Hello researchers! Here are scripts to generate reviews on HF Daily Papers: ๐Ÿ‘‰ https://github.com/averkij/top_papers โš™๏ธ Works on GitHub Actions ๐Ÿค– Claude, GPT-4o, FLUX ๐ŸŒ Multiple languages ๐Ÿ“š Classification by 38 topics (#agents, #multimodal, #plp, etc.) ๐Ÿ”บ https://HFday.ru
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1635314457124-5f32b2367e583543386214d9.jpeg", "fullname": "Sergei Averkiev", "name": "averoo", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 20, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/5f32b2367e583543386214d9/eNrWiDtpRGgBk-aRBJCGX.png" }, { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/5f32b2367e583543386214d9/-kPHSc3clhpzSvRTp_1PA.png" } ]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666" ], "count": 1 } ]
2024-11-04T10:07:24.000Z
2024-11-04T10:08:40.111Z
[]
/posts/averoo/419590652134553
348
0
808388125499602
[ { "type": "text", "value": "It's work like this that in some way signal the eventual โ€œdominanceโ€ of AI over all the sciences.", "raw": "It's work like this that in some way signal the eventual โ€œdominanceโ€ of AI over all the sciences.", "href": null, "resource": null, "url": null, "code":...
It's work like this that in some way signal the eventual โ€œdominanceโ€ of AI over all the sciences. โ€œWe train our model on the six-dimensional N-body phase space, predicting particle velocities as the time derivative of the modelโ€™s displacement outputsโ€ The emulator is capable of predicting the nonlinear displacement and velocity fields for 128^3 particles in half a second on a single GPU๐Ÿคฏ
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6438a9027de34e8ea7e4b257/vib8QSd1AWMr_bR9ig_xJ.jpeg", "fullname": "Jaward Sesay", "name": "Jaward", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 191, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/6438a9027de34e8ea7e4b257/yxh8K8-a8AR8Wyl0SYG4y.png" }, { "type": "video", "url": "https://cdn-uploads.huggingface.co/production/uploads/6438a9027de34e8ea7e4b257/s7puQYqsEnc0SkSzs-FX7.qt" }, { "type": "image"...
[]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "mediiiiii3", "ajibawa-2023", "John6666", "Chief-Inspector", "ai-everyday" ], "count": 5 } ]
2024-11-04T07:12:47.000Z
2024-11-05T07:42:43.778Z
[ { "avatarUrl": "/avatars/8aaab676f66023255d397ba82b4bcb6e.svg", "fullname": "James Hunter Carter", "name": "jameshuntercarter", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 2, "isFollowing": false } ]
/posts/Jaward/808388125499602
2,089
1
241878000943561
[ { "type": "text", "value": "Hey, I just added three useful advanced use cases to ", "raw": "Hey, I just added three useful advanced use cases to ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resou...
Hey, I just added three useful advanced use cases to https://huggingface.co/datasets/do-me/SemanticFinder/blob/main/README.md#advanced-use-cases. SemanticFinder is a collection of embeddings for public documents or books. You can create your own index file from any text or pdf and save it without installing or downloading anything. Try yourself: 1. Translating from 100+ languages to English (even though it might confuse a strawberry with a grapefruit ;D): https://do-me.github.io/SemanticFinder/?hf=List_of_the_Most_Common_English_Words_70320cde&firstOnly=true&inferencingActive=False 2. Finding English synonyms: https://do-me.github.io/SemanticFinder/?hf=List_of_the_Most_Common_English_Words_0d1e28dc&firstOnly=true&inferencingActive=False 3. The "universal index idea": create an embedding index with 30k English words and reuse it on unseen texts. You can decide to fill the gaps in the index by additional inferencing or just stick to the 30k index for instant semantic similarity. Initial idea: https://github.com/do-me/SemanticFinder/discussions/48 Try here: https://do-me.github.io/SemanticFinder/?hf=List_of_the_Most_Common_English_Words_0d1e28dc&inferencingActive=False&universalIndexSettingsWordLevel with a text of your choice. This could be enhanced by adding duplets or triplets like "climate change" or "green house gas". Eventually I'd like to set up vector DB integrations. Super happy to hear your feedback, ideas and maybe even contributions! :) --- Edit: Apparently markdown url formatting does only work for HF links.
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/IiercF_qxHWize2kitl9X.jpeg", "fullname": "Dominik Weckmรผller", "name": "do-me", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 38, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/64c4da8719565937fb268b32/_oXEubLaDx7pNQan3_z9I.png" }, { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/64c4da8719565937fb268b32/BtcJXsuC1Fp-vdYjjieFg.png" }, { "type": "image...
[]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "reach-vb", "osanseviero" ], "count": 2 }, { "reaction": "โค๏ธ", "users": [ "samusenps", "fahmimmaliki" ], "count": 2 }, { "reaction": "๐Ÿ‘", "users": [ "shiv2050" ], "count": 1 } ]
2024-03-29T13:55:41.000Z
2024-03-29T13:59:26.360Z
[]
/posts/do-me/241878000943561
1,899
0
430691527295062
[ { "type": "text", "value": "โ˜๏ธโ˜” New Research Alert! โ„๏ธ๐ŸŒ™", "raw": "โ˜๏ธโ˜” New Research Alert! โ„๏ธ๐ŸŒ™", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "hre...
โ˜๏ธโ˜” New Research Alert! โ„๏ธ๐ŸŒ™ ๐Ÿ“„ Title: CoDA: Instructive Chain-of-Domain Adaptation with Severity-Aware Visual Prompt Tuning ๐Ÿ“ Description: CoDA is a UDA methodology that boosts models to understand all adverse scenes (โ˜๏ธ,โ˜”,โ„๏ธ,๐ŸŒ™) by highlighting the discrepancies within these scenes. CoDA achieves state-of-the-art performances on widely used benchmarks. ๐Ÿ‘ฅ Authors: Ziyang Gong, Fuhao Li, Yupeng Deng, Deblina Bhattacharjee, Xiangwei Zhu, Zhenming Ji ๐Ÿ”— Paper: https://huggingface.co/papers/2403.17369 ๐Ÿ“ Repository: https://github.com/Cuzyoung/CoDA ๐Ÿ“š More Papers: more cutting-edge research presented at other conferences in the https://huggingface.co/spaces/DmitryRyumin/NewEraAI-Papers curated by @DmitryRyumin ๐Ÿ” Keywords: #CoDA #DomainAdaptation #VisualPromptTuning #SAVPT #DeepLearning #Innovation
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/nRCxbVng_PPBqKd-Z3KVc.jpeg", "fullname": "Dmitry Ryumin", "name": "DmitryRyumin", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 377, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/6493306970d925ae80523a53/lX46WOvwgIq9KJ-Mz_JVk.png" }, { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/6493306970d925ae80523a53/SmO5hUxZ1TBBAXutPbGv8.png" }, { "type": "image...
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/nRCxbVng_PPBqKd-Z3KVc.jpeg", "fullname": "Dmitry Ryumin", "name": "DmitryRyumin", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 377 } ]
[ { "reaction": "๐Ÿ‘", "users": [ "DmitryRyumin", "Cusyoung", "reach-vb", "osanseviero", "samusenps", "shiv2050", "ssml2050", "Makya" ], "count": 8 }, { "reaction": "๐Ÿ”ฅ", "users": [ "DmitryRyumin", "nikgr", "reach-vb", ...
2024-03-29T09:44:13.000Z
2024-03-29T09:44:13.353Z
[]
/posts/DmitryRyumin/430691527295062
2,134
0
839109593076616
[ { "type": "text", "value": "Are you looking for the perfect leaderboard/arena for your use case? ๐Ÿ‘€", "raw": "Are you looking for the perfect leaderboard/arena for your use case? ๐Ÿ‘€", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lan...
Are you looking for the perfect leaderboard/arena for your use case? ๐Ÿ‘€ There's a new tool for this! https://huggingface.co/spaces/leaderboards/LeaderboardFinder Select your modality, language, task... then search! ๐Ÿ” Some categories of interest: - does the leaderboard accept submissions? - is the test set private or public? - is it using an automatic metric, human evaluators, or llm as a judge? The spaces list is build from space metadata, and reloaded every hour. Enjoy!
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1644340617257-noauth.png", "fullname": "Clรฉmentine Fourrier", "name": "clefourrier", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 459, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "m-ric", "nikgr", "VictorSanh", "osanseviero", "bmorphism" ], "count": 5 }, { "reaction": "โค๏ธ", "users": [ "samusenps", "jiuxing123" ], "count": 2 }, { "reaction": "๐Ÿ‘", "users": [ "shiv205...
2024-03-29T07:47:34.000Z
2024-03-29T07:47:34.949Z
[]
/posts/clefourrier/839109593076616
2,011
0
786994188068359
[ { "type": "text", "value": "Excited to introduce Jamba by AI21", "raw": "Excited to introduce Jamba by AI21", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\...
Excited to introduce Jamba by AI21 https://huggingface.co/ai21labs/Jamba-v0.1 We are thrilled to announce Jamba, the worldโ€™s first production-grade Mamba based model. Key Features: - First production-grade Mamba based model built on a novel SSM-Transformer hybrid architecture - 3X throughput on long contexts compared to Mixtral 8x7B - Democratizes access to a massive 256K context window - The only model in its size class that fits up to 140K context on a single GPU Jamba is based on a novel architecture that combines Mamba and Transformer. While our initial results show great efficiency gains, we expect this to be further explored and improved with the help of the community. Check out our blog post for more info: https://ai21-labs.webflow.io/blog/announcing-jamba
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65e60c0ed5313c06372446ff/tX-xYqpVkE0yTUfRN3SWB.jpeg", "fullname": "Or Dagan", "name": "ordagan", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 20, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/65e60c0ed5313c06372446ff/Iz_qA2KJZJQkj34s-5-sg.jpeg" }, { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/65e60c0ed5313c06372446ff/QskiyLYkBUGpkErZi_2Dn.png" } ]
[]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "osanseviero", "nikgr", "taufiqdp", "samusenps", "ajibawa-2023", "Srulikbdd", "theKibster", "victor", "VictorSanh", "reach-vb", "apollomarv", "Asaf-Yehudai", "MaziyarPanahi", "DataSoul", ...
2024-03-28T22:45:26.000Z
2024-03-29T15:00:09.926Z
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64aea8ff67511bd3d965697b/Jxn52EmDF5RApJh8antxn.jpeg", "fullname": "Feynman Innovations", "name": "ajibawa-2023", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 138, "isFollow...
/posts/ordagan/786994188068359
2,161
2
474909441976843
[ { "type": "text", "value": "A new paper titled \"Long-Form Factuality in Large Language Models\" proposes a new approach to evaluate the long-form factuality of large language models using an AI agent! They introduce SAFE (Search-Augmented Factuality Evaluator) which leverages an LLM to break down responses...
A new paper titled "Long-Form Factuality in Large Language Models" proposes a new approach to evaluate the long-form factuality of large language models using an AI agent! They introduce SAFE (Search-Augmented Factuality Evaluator) which leverages an LLM to break down responses into individual facts, query Google to verify each fact, and perform multi-step reasoning. Keypoints: * SAFE (Search-Augmented Factuality Evaluator) is an automated method using an LLM agent to evaluate factuality * It also introduces LongFact, a 2,280 prompt set spanning 38 topics to test open-domain factual knowledge * SAFE achieves a 72% humans agreement while being 20x cheaper. It also wins 76% of the disagreements measured on a small scale experiment where a more thorough human procedure (researchers + full internet search) was used. * Larger models like GPT-4, Claude Opus and Gemini Ultra tend to exhibit better long-form factuality. Paper: https://huggingface.co/papers/2403.18802 Code and data: https://github.com/google-deepmind/long-form-factuality Congrats to the authors for their work!
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/657217faabb25ed8aedd5e48/UUHAXeGtOnQBXFD3nYtf2.jpeg", "fullname": "Vlad Bogolin", "name": "vladbogo", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 109, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/657217faabb25ed8aedd5e48/xEzUkOnojzRNKSUSw1wzO.png" }, { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/657217faabb25ed8aedd5e48/uE4DOZwEstJIeDIgl9aYc.png" }, { "type": "image...
[]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "nikgr", "zelus82", "dillfrescott" ], "count": 3 }, { "reaction": "๐Ÿ‘", "users": [ "shiv2050", "ssml2050", "dillfrescott" ], "count": 3 }, { "reaction": "โค๏ธ", "users": [ "samusenps", "dillf...
2024-03-28T22:34:25.000Z
2024-03-28T22:34:39.045Z
[]
/posts/vladbogo/474909441976843
1,701
0
706415412818350
[ { "type": "text", "value": "A Little guide to building Large Language Models in 2024", "raw": "A Little guide to building Large Language Models in 2024", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": ...
A Little guide to building Large Language Models in 2024 This is a post-recording of a 75min lecture I gave two weeks ago on how to train a LLM from scratch in 2024. I tried to keep it short and comprehensive โ€“ focusing on concepts that are crucial for training good LLM but often hidden in tech reports. In the lecture, I introduce the students to all the important concepts/tools/techniques for training good performance LLM: * finding, preparing and evaluating web scale data * understanding model parallelism and efficient training * fine-tuning/aligning models * fast inference There is of course many things and details missing and that I should have added to it, don't hesitate to tell me you're most frustrating omission and I'll add it in a future part. In particular I think I'll add more focus on how to filter topics well and extensively and maybe more practical anecdotes and details. Now that I recorded it I've been thinking this could be part 1 of a two-parts series with a 2nd fully hands-on video on how to run all these steps with some libraries and recipes we've released recently at HF around LLM training (and could be easily adapted to your other framework anyway): *`datatrove` for all things web-scale data preparation: https://github.com/huggingface/datatrove *`nanotron` for lightweight 4D parallelism LLM training: https://github.com/huggingface/nanotron *`lighteval` for in-training fast parallel LLM evaluations: https://github.com/huggingface/lighteval Here is the link to watch the lecture on Youtube: https://www.youtube.com/watch?v=2-SPH9hIKT8 And here is the link to the Google slides: https://docs.google.com/presentation/d/1IkzESdOwdmwvPxIELYJi8--K3EZ98_cL6c5ZcLKSyVg/edit#slide=id.p Enjoy and happy to hear feedback on it and what to add, correct, extend in a second part.
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1583857746553-5df7e9e5da6d0311fd3d53f9.jpeg", "fullname": "Thomas Wolf", "name": "thomwolf", "type": "user", "isPro": true, "isHf": true, "isMod": false, "followerCount": 704, "isFollowing": false }
[]
[]
[ { "reaction": "โค๏ธ", "users": [ "clem", "osanseviero", "giux78", "samusenps", "dillfrescott", "mohammedbriman", "reach-vb", "rwightman", "Venkman42", "gusthema", "9bow", "amenur", "lisuizhe", "The-Great-Genius", "danja"...
2024-03-28T21:44:02.000Z
2024-04-04T12:44:05.941Z
[ { "avatarUrl": "/avatars/cc5c07cc99c647da19cff5219f43869d.svg", "fullname": "noh au", "name": "noh-au-00", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 2, "isFollowing": false }, { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/prod...
/posts/thomwolf/706415412818350
4,913
2
506318409499980
[ { "type": "text", "value": "๐“๐ก๐ž ๐ซ๐ž๐ญ๐ฎ๐ซ๐ง ๐จ๐Ÿ ๐ญ๐ก๐ž ๐‘๐๐๐ฌ โš” ๐๐ž๐ฐ ๐Œ๐š๐ฆ๐›๐š-๐›๐š๐ฌ๐ž๐ ๐š๐ซ๐œ๐ก๐ข๐ญ๐ž๐œ๐ญ๐ฎ๐ซ๐ž \"๐‰๐š๐ฆ๐›๐š\"", "raw": "๐“๐ก๐ž ๐ซ๐ž๐ญ๐ฎ๐ซ๐ง ๐จ๐Ÿ ๐ญ๐ก๐ž ๐‘๐๐๐ฌ โš” ๐๐ž๐ฐ ๐Œ๐š๐ฆ๐›๐š-๐›๐š๐ฌ๐ž๐ ๐š๐ซ๐œ๐ก๐ข๐ญ๐ž๐œ๐ญ๐ฎ๐ซ๐ž \"๐‰๐š๐ฆ๐›๐š\"", "href": null, "resource": null, ...
๐“๐ก๐ž ๐ซ๐ž๐ญ๐ฎ๐ซ๐ง ๐จ๐Ÿ ๐ญ๐ก๐ž ๐‘๐๐๐ฌ โš” ๐๐ž๐ฐ ๐Œ๐š๐ฆ๐›๐š-๐›๐š๐ฌ๐ž๐ ๐š๐ซ๐œ๐ก๐ข๐ญ๐ž๐œ๐ญ๐ฎ๐ซ๐ž "๐‰๐š๐ฆ๐›๐š" Since the release of BERT by Google in 2019, Transformers architecture have taken over machine learning thanks to their ๐—ฎ๐˜๐˜๐—ฒ๐—ป๐˜๐—ถ๐—ผ๐—ป ๐—บ๐—ฒ๐—ฐ๐—ต๐—ฎ๐—ป๐—ถ๐˜€๐—บ, that gives them the ability to focus on important points of the input. But ๐™–๐™ฉ๐™ฉ๐™š๐™ฃ๐™ฉ๐™ž๐™ค๐™ฃ ๐™˜๐™ค๐™ข๐™ฅ๐™ช๐™ฉ๐™–๐™ฉ๐™ž๐™ค๐™ฃ ๐™ž๐™จ ๐™ฆ๐™ช๐™–๐™™๐™ง๐™–๐™ฉ๐™ž๐™˜ ๐™ž๐™ฃ ๐™ฉ๐™๐™š ๐™ž๐™ฃ๐™ฅ๐™ช๐™ฉ ๐™ก๐™š๐™ฃ๐™œ๐™ฉ๐™. ๐Ÿ’ซ The Mamba paper, published in December 2023, announced the return of the RNNs: it has no attention, but integrates a selection mechanism, which should be able to reproduce the โ€œfocusโ€ ability of attention, in an architecture for which the compute requirements ๐—ด๐—ฟ๐—ผ๐˜„ ๐—ผ๐—ป๐—น๐˜† ๐—น๐—ถ๐—ป๐—ฒ๐—ฎ๐—ฟ๐—น๐˜† ๐—ถ๐—ป ๐—ถ๐—ป๐—ฝ๐˜‚๐˜ ๐—น๐—ฒ๐—ป๐—ด๐˜๐—ต! ๐Ÿค” Would this work? We had yet to see a large Mamba model recovering the performance of Attention-based Transformers. ๐Ÿ’ฅ But now it's done! A (Mamba + Transformers) hybrid just beat Transformers! The AI21 Labs team just released Jamba. They insert a few Transformer layers to inject some attention in a big pile of Mamba layers, thus getting the best of both worlds. ๐™๐™‡;๐˜ฟ๐™: ๐Ÿ—๏ธ ๐—ก๐—ฒ๐˜„ ๐— ๐—ผ๐—˜ ๐—ฎ๐—ฟ๐—ฐ๐—ต๐—ถ๐˜๐—ฒ๐—ฐ๐˜๐˜‚๐—ฟ๐—ฒ: 4 Jamba blocks, each of these being 7 Mamba layers for 1 Transformer. ๐Ÿ‹๏ธ ๐Ÿฑ๐Ÿฎ๐—• ๐—ฝ๐—ฎ๐—ฟ๐—ฎ๐—บ๐—ฒ๐˜๐—ฒ๐—ฟ๐˜€, ๐Ÿญ๐Ÿฎ๐—• ๐—ฎ๐—ฐ๐˜๐—ถ๐˜ƒ๐—ฒ ๐—ฎ๐˜ ๐—ถ๐—ป๐—ณ๐—ฒ๐—ฟ๐—ฒ๐—ป๐—ฐ๐—ฒ: This reduction is enabled by Mixture of Experts, and similar to Mixtral (47B parameters - 13B active). ๐ŸŽ๏ธ ๐—ฆ๐—ฝ๐—ฒ๐—ฒ๐—ฑ: ๐˜…๐Ÿฏ ๐˜๐—ต๐—ฟ๐—ผ๐˜‚๐—ด๐—ต๐—ฝ๐˜‚๐˜. Jamba is much faster than similar-sized Transformer models on long contexts. ๐Ÿ“ ๐—–๐—ผ๐—ป๐˜๐—ฒ๐˜…๐˜ ๐—น๐—ฒ๐—ป๐—ด๐˜๐—ต: ๐Ÿญ๐Ÿฐ๐Ÿฌ๐—ž ๐˜๐—ผ๐—ธ๐—ฒ๐—ป๐˜€ on a single 80GB A100! ๐Ÿ’ช ๐—ฃ๐—ฒ๐—ฟ๐—ณ๐—ผ๐—ฟ๐—บ๐—ฎ๐—ป๐—ฐ๐—ฒ: ๐˜€๐˜๐—ฎ๐˜๐—ฒ-๐—ผ๐—ณ-๐˜๐—ต๐—ฒ-๐—ฎ๐—ฟ๐˜ ๐—ณ๐—ผ๐—ฟ ๐˜๐—ต๐—ถ๐˜€ ๐˜€๐—ถ๐˜‡๐—ฒ. The small injection of attention seems sufficient since Jamba beats the open-source reference Mixtral-8x7B on many benchmarks! Try it here ๐Ÿ‘‰ https://huggingface.co/ai21labs/Jamba-v0.1
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63d10d4e8eaa4831005e92b5/7p7-OmWM6PqqCs7ZStPGD.jpeg", "fullname": "Aymeric Roucher", "name": "m-ric", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 494, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/63d10d4e8eaa4831005e92b5/AsfDNZjOUiCENV-G1W6zv.png" } ]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "monsoon-nlp", "lunarflu", "carbonox-infernox", "dhuynh95", "osanseviero", "vonehrenheim", "nikgr", "samusenps", "Amano13", "reach-vb", "Chunte", "seyf1elislam", "blanchon" ], "count": 13 }...
2024-03-28T17:32:17.000Z
2024-03-29T08:35:18.879Z
[]
/posts/m-ric/506318409499980
1,798
0
545762059483943
[ { "type": "text", "value": "Just now, we release a small MoE model, Qwen1.5-MoE-A2.7B, a 14B model with 2.7B activated parameters. Leaving the hype, I would love to share more things here in HF. But if you don't know much about this, check our blog for more info: ", "raw": "Just now, we release a small ...
Just now, we release a small MoE model, Qwen1.5-MoE-A2.7B, a 14B model with 2.7B activated parameters. Leaving the hype, I would love to share more things here in HF. But if you don't know much about this, check our blog for more info: https://qwenlm.github.io/blog/qwen-moe/ At the beginning, it was trying with the MoE stuff, making Megatron work well with MegaBlocks. As always, we worked with small ones first. However, we have been struggling with a lot of details. With megablocks and so many tricks that make training MoE models work, it is almost impossible to fail. The challenge is actually how good your model is. Then things became more complex than I had expected. Finegrained experts actually pissed me off but damn it works for the model at this scale. However, it brings complexity to the model, and this is somehow why at this moment our codes are not merged into llama.cpp cuz it really brings problems. Shared experts might be good, but we need more engineering efforts to really unleash its benefits in inference acceleration. For the community, this is actually our first time releasing an MoE model. We don't know what will happen to us, but we are prepared for complaints. I just hope that we can really make things clear, and provide a good recipe to play with our MoE model just like people playing with Mixtral.
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/620760a26e3b7210c2ff1943/VC-rKqimF6yxGESNVlPoR.jpeg", "fullname": "Junyang Lin", "name": "JustinLin610", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 132, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "Nbardy", "lunarflu", "CulturedMan", "osanseviero", "nikgr", "manishiitg", "reach-vb", "pierrci", "aguspiza", "xiaoniqiu", "alielfilali01", "Shuaiii" ], "count": 12 }, { "reaction": "โค๏ธ", ...
2024-03-28T17:06:31.000Z
2024-03-28T21:59:49.971Z
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6032802e1f993496bc14d9e3/w6hr-DEQot4VVkoyRIBiy.png", "fullname": "Omar Sanseviero", "name": "osanseviero", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 2868, "isFollowing":...
/posts/JustinLin610/545762059483943
2,949
1
231419617229308
[ { "type": "text", "value": " ๐ŸŽˆ LLM Benchmarks Update!", "raw": " ๐ŸŽˆ LLM Benchmarks Update!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": ...
๐ŸŽˆ LLM Benchmarks Update! **tl;dr: do not depend on benchmark leaderboards to choose your "chatbot" model! (Especially for non-English languages.)** First of all, I'm discontinuing the Open #Dutch #LLM Leaderboard (https://lnkd.in/eFnsaFR6). It will stay online for now, but I urge the use of the ScandEval leaderboard instead (https://scandeval.com/dutch-nlg/) by @saattrupdan. It contains more tasks, has better reproducibility and statistics (CI) and a flexible back-end library (`scandeval`) to run your own benchmarks with. As part of project "Leesplank" (with Michiel Buisman and Maarten Lens-FitzGerald) we recently added GPT-4-1106-preview scores to add a good "target" to the leaderboard. An important note here is that benchmark leaderboards are not a golden truth. Especially evaluating generative models is hard. You run into issues like prompt engineering (and sensitivity of models to one or other prompt), structured output generation, and - quite simply - "how to automatically evaluate open-ended generation". ๐Ÿ’ก Another important but under-discussed facet is the discrepancy between models' capability of understanding vs. generating *in different languages* (so the NLU part of NLG benchmarking). In other words: some of the listed models score really well on, e.g., MCQ benchmarks but are not suitable to use as DUTCH chat bots. Interestingly, some of these models seem to understand questions in Dutch and are able to pick the right answer (because they have good knowledge or reasoning skills), but generating fluent and grammatical Dutch is something else entirely! This is perhaps also true for humans: it's easier to sort-of grasp the meaning of a new language and answer with "Yes" or "No", but answering fluently in the language is much harder! Yet, your language production fluency does not necessarily say anything about your knowledge and reasoning skills. Hopefully we can get a chat arena for Dutch some day - user feedback is the most powerful metric!
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1594192845975-5e1e17b6fcf41d740b6996a8.jpeg", "fullname": "Bram Vanroy", "name": "BramVanroy", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 173, "isFollowing": false }
[]
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1624975632470-60d368a613f774189902f555.jpeg", "fullname": "Dan Saattrup Nielsen", "name": "saattrupdan", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 37 } ]
[ { "reaction": "๐Ÿ‘", "users": [ "davanstrien", "lunarflu", "nikgr", "saattrupdan", "reach-vb", "shiv2050", "ssml2050" ], "count": 7 }, { "reaction": "๐Ÿค", "users": [ "Rijgersberg", "osanseviero", "lunarflu", "robinsmits", ...
2024-03-28T13:08:40.000Z
2024-03-30T15:52:48.997Z
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/643033401572f43a4818cea3/CiAPL-E-6p-W8ilJ3t9vM.jpeg", "fullname": "Carlo Moro", "name": "cnmoro", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 25, "isFollowing": false },...
/posts/BramVanroy/231419617229308
2,436
3
631118558281010
[ { "type": "text", "value": "Diaries of Open Source. Part 11 ๐Ÿš€", "raw": "Diaries of Open Source. Part 11 ๐Ÿš€", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\...
Diaries of Open Source. Part 11 ๐Ÿš€ ๐Ÿš€Databricks release DBRX, potentially the best open access model! A 132B Mixture of Experts with 36B active params and trained on 12 trillion tokens Blog: https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm Base and instruct models: https://hf.co/collections/databricks/dbrx-6601c0852a0cdd3c59f71962 Demo: https://hf.co/spaces/databricks/dbrx-instruct ๐Ÿค1-bit and 2-bit quantization exploration using HQQ+ Blog post: https://mobiusml.github.io/1bit_blog/ Models: https://hf.co/collections/mobiuslabsgmbh/llama2-7b-hqq-6604257a96fc8b9c4e13e0fe GitHub: https://github.com/mobiusml/hqq ๐Ÿ“šCosmopedia: a large-scale synthetic dataset for pre-training - it includes 25 billion tokens and 30 million files Dataset: https://hf.co/datasets/HuggingFaceTB/cosmopedia Blog: https://hf.co/blog/cosmopedia โญMini-Gemini: multi-modal VLMs, from 2B to 34B Models: https://hf.co/collections/YanweiLi/mini-gemini-6603c50b9b43d044171d0854 Paper: https://huggingface.co/papers/2403.18814 GitHub: https://github.com/dvlab-research/MiniGemini ๐Ÿ”ฅVILA - On Pre-training for VLMs Models: https://hf.co/collections/Efficient-Large-Model/vila-on-pre-training-for-visual-language-models-65d8022a3a52cd9bcd62698e Paper: https://hf.co/papers/2312.07533 Misc ๐Ÿ‘€ FeatUp: a framework for image features at any resolution: https://hf.co/spaces/mhamilton723/FeatUp https://hf.co/papers/2403.10516 ๐ŸžColBERTus Maxiums, a colbertialized embedding model https://hf.co/mixedbread-ai/mxbai-colbert-large-v1 ๐Ÿ–Œ๏ธSemantic Palette, a new drawing paradigm https://hf.co/spaces/ironjr/SemanticPalette ๐Ÿง‘โ€โš•๏ธHistoGPT, a vision model that generates accurate pathology reports https://hf.co/marr-peng-lab/histogpt https://www.medrxiv.org/content/10.1101/2024.03.15.24304211v1
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6032802e1f993496bc14d9e3/w6hr-DEQot4VVkoyRIBiy.png", "fullname": "Omar Sanseviero", "name": "osanseviero", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 2868, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "mobicham", "appoose", "loubnabnl", "davanstrien", "samusenps", "DmitryRyumin", "ajibawa-2023", "clem", "GO4code", "victor", "vladbogo", "nikgr", "VictorSanh", "reach-vb", "manuel-tran" ...
2024-03-28T10:06:27.000Z
2024-03-28T13:03:16.372Z
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6032802e1f993496bc14d9e3/w6hr-DEQot4VVkoyRIBiy.png", "fullname": "Omar Sanseviero", "name": "osanseviero", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 2868, "isFollowing":...
/posts/osanseviero/631118558281010
2,066
4
875756613096016
[ { "type": "text", "value": "๐ŸŽ‰ ๐ŸŽ‰ ๐ŸŽ‰ Happy to share our recent work. We noticed that image resolution plays an important role, either in improving multi-modal large language models (MLLM) performance or in Sora style any resolution encoder decoder, we hope this work can help lift restriction of 224x224 reso...
๐ŸŽ‰ ๐ŸŽ‰ ๐ŸŽ‰ Happy to share our recent work. We noticed that image resolution plays an important role, either in improving multi-modal large language models (MLLM) performance or in Sora style any resolution encoder decoder, we hope this work can help lift restriction of 224x224 resolution limit in ViT. https://huggingface.co/papers/2403.18361
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/650dde4ce14eeb01d42b37a1/n5Yv24uofZ2XJjXdYCrKd.png", "fullname": "Xiaotian Han", "name": "xiaotianhan", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 22, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ‘", "users": [ "ajibawa-2023", "samusenps", "DmitryRyumin", "clem", "lunarflu", "victor", "nikgr", "reach-vb", "shiv2050", "ak0601", "ncoop57", "talaviyabhavik" ], "count": 12 }, { "reaction": "๐Ÿš€", "...
2024-03-28T05:20:08.000Z
2024-04-23T18:22:53.659Z
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1648113222875-6141a88b3a0ec78603c9e784.png", "fullname": "Merve Noyan", "name": "merve", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 5589, "isFollowing": false }, { ...
/posts/xiaotianhan/875756613096016
2,087
2
149161309610662
[ { "type": "text", "value": "Compared Effect Of Image Captioning For SDXL Fine-tuning / DreamBooth Training for a Single Person, 10.3 GB VRAM via OneTrainer", "raw": "Compared Effect Of Image Captioning For SDXL Fine-tuning / DreamBooth Training for a Single Person, 10.3 GB VRAM via OneTrainer", "hre...
Compared Effect Of Image Captioning For SDXL Fine-tuning / DreamBooth Training for a Single Person, 10.3 GB VRAM via OneTrainer Sadly the post character count is limited so please read full info on Medium here https://medium.com/@furkangozukara/compared-effect-of-image-captioning-for-sdxl-fine-tuning-dreambooth-training-for-a-single-person-961087e42334
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1672531901326-6345bd89fe134dfd7a0dba40.png", "fullname": "Furkan Gรถzรผkara", "name": "MonsterMMORPG", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 376, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/gJjqOBUZJVWjq4h61tMHE.png" }, { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/oB6pmcPHurv5vrSZV-py3.png" }, { "type": "image...
[]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "ameerazam08", "clem", "lunarflu", "nikgr", "roktimsardar123" ], "count": 5 }, { "reaction": "๐Ÿ‘", "users": [ "shiv2050" ], "count": 1 } ]
2024-03-28T02:44:56.000Z
2024-03-28T02:44:56.968Z
[]
/posts/MonsterMMORPG/149161309610662
2,005
0
191716706611860
[ { "type": "text", "value": "๐Ÿš€๐ŸŽญ๐ŸŒŸ New Research Alert! ๐ŸŒŸ ๐ŸŽญ๐Ÿš€", "raw": "๐Ÿš€๐ŸŽญ๐ŸŒŸ New Research Alert! ๐ŸŒŸ ๐ŸŽญ๐Ÿš€", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\...
๐Ÿš€๐ŸŽญ๐ŸŒŸ New Research Alert! ๐ŸŒŸ ๐ŸŽญ๐Ÿš€ ๐Ÿ“„ Title: AniPortrait: Audio-Driven Synthesis of Photorealistic Portrait Animation ๐Ÿ” ๐Ÿ“ Description: AniPortrait is a novel framework for generating photorealistic portrait animations driven by audio and a reference image, with superior facial naturalness, pose variety, and visual quality, with potential applications in facial motion editing and facial reenactment. ๐Ÿ‘ฅ Authors: Huawei Wei, @ZJYang, Zhisheng Wang ๐Ÿ”— Paper: https://huggingface.co/papers/2403.17694 ๐Ÿ“ Repository: https://github.com/Zejun-Yang/AniPortrait ๐Ÿค— Demo: https://huggingface.co/spaces/ZJYang/AniPortrait_official ๐Ÿ”ฅ Model ๐Ÿค–: https://huggingface.co/ZJYang/AniPortrait ๐Ÿ“š More Papers: more cutting-edge research presented at other conferences in the https://huggingface.co/spaces/DmitryRyumin/NewEraAI-Papers curated by @DmitryRyumin ๐Ÿš€ Added to the Avatars Collection: https://huggingface.co/collections/DmitryRyumin/avatars-65df37cdf81fec13d4dbac36 ๐Ÿ” Keywords: #AniPortrait #Animation #AudioDriven #Photorealistic #FacialAnimation #DeepLearning #Innovation
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/nRCxbVng_PPBqKd-Z3KVc.jpeg", "fullname": "Dmitry Ryumin", "name": "DmitryRyumin", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 377, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/6493306970d925ae80523a53/Q8D8SBxzlvdIwMES1aX9u.png" }, { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/6493306970d925ae80523a53/fbwLJKfraISTw4Nj5wWkJ.png" }, { "type": "video...
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/nRCxbVng_PPBqKd-Z3KVc.jpeg", "fullname": "Dmitry Ryumin", "name": "DmitryRyumin", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 377 }, { "avatarUrl": "/avatars/26...
[ { "reaction": "๐Ÿ”ฅ", "users": [ "DmitryRyumin", "hinach4n", "samusenps", "MiSTe-R", "clem", "lunarflu", "victor", "reach-vb", "Poulain" ], "count": 9 }, { "reaction": "๐Ÿคฏ", "users": [ "DmitryRyumin", "clem", "lunarflu...
2024-03-27T22:52:30.000Z
2024-04-13T22:54:24.627Z
[ { "avatarUrl": "/avatars/afbc48df2e8c47c35be48168113d83c0.svg", "fullname": "s", "name": "Tom-Neverwinter", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 2, "isFollowing": false }, { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/pro...
/posts/DmitryRyumin/191716706611860
1,751
2
772576405157092
[ { "type": "text", "value": "The Unreasonable Ineffectiveness of the Deeper Layers", "raw": "The Unreasonable Ineffectiveness of the Deeper Layers", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_l...
The Unreasonable Ineffectiveness of the Deeper Layers https://huggingface.co/papers/2403.17887 We empirically study a simple layer-pruning strategy for popular families of open-weight pretrained LLMs, finding minimal degradation of performance on different question-answering benchmarks until after a large fraction (up to half) of the layers are removed. To prune these models, we identify the optimal block of layers to prune by considering similarity across layers; then, to "heal" the damage, we perform a small amount of finetuning. In particular, we use parameter-efficient finetuning (PEFT) methods, specifically quantization and Low Rank Adapters (QLoRA), such that each of our experiments can be performed on a single A100 GPU. From a practical perspective, these results suggest that layer pruning methods can complement other PEFT strategies to further reduce computational resources of finetuning on the one hand, and can improve the memory and latency of inference on the other hand. From a scientific perspective, the robustness of these LLMs to the deletion of layers implies either that current pretraining methods are not properly leveraging the parameters in the deeper layers of the network or that the shallow layers play a critical role in storing knowledge.
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "fullname": "AK", "name": "akhaliq", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 5205, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/60f1abe7544c2adfd699860c/m6g_c4r2tuwYJoe95rEEv.png" } ]
[]
[ { "reaction": "โค๏ธ", "users": [ "samusenps", "ajibawa-2023", "osanseviero", "clem", "ishmnnit", "Aryanne", "KvrParaskevi" ], "count": 7 } ]
2024-03-27T20:53:43.000Z
2024-03-27T20:53:43.465Z
[]
/posts/akhaliq/772576405157092
2,517
0
554406272304085
[ { "type": "text", "value": "SegGPT is a vision generalist on image segmentation, quite like GPTs for computer vision โœจ", "raw": "SegGPT is a vision generalist on image segmentation, quite like GPTs for computer vision โœจ", "href": null, "resource": null, "url": null, "code": null, "us...
SegGPT is a vision generalist on image segmentation, quite like GPTs for computer vision โœจ It comes with the last release of transformers ๐ŸŽ Demo and more in this post! SegGPT is an extension of the Painter, where you speak to images with images: the model takes in an image prompt, transformed version of the image prompt, the actual image you want to see the same transform, and expected to output the transformed image. SegGPT consists of a vanilla ViT with a decoder on top (linear, conv, linear). The model is trained on diverse segmentation examples, where they provide example image-mask pairs, the actual input to be segmented, and the decoder head learns to reconstruct the mask output. This generalizes pretty well! The authors do not claim state-of-the-art results as the model is mainly used zero-shot and few-shot inference. They also do prompt tuning, where they freeze the parameters of the model and only optimize the image tensor (the input context). Thanks to ๐Ÿค— transformers you can use this model easily! See here https://huggingface.co/docs/transformers/en/model_doc/seggpt I have built an app for you to try it out. I combined SegGPT with Depth Anything Model, so you don't have to upload image mask prompts in your prompt pair ๐Ÿค— Try it here https://huggingface.co/spaces/merve/seggpt-depth-anything Also check out the collection https://huggingface.co/collections/merve/seggpt-660466a303bc3cd7559d271b
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1648113222875-6141a88b3a0ec78603c9e784.png", "fullname": "Merve Noyan", "name": "merve", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 5589, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/6141a88b3a0ec78603c9e784/dfWOvGWApX7gPUCaXs-Fc.png" } ]
[]
[ { "reaction": "โค๏ธ", "users": [ "samusenps", "osanseviero", "clem", "lunarflu", "talaviyabhavik" ], "count": 5 }, { "reaction": "๐Ÿ˜Ž", "users": [ "samusenps", "osanseviero", "clem", "lunarflu" ], "count": 4 }, { "reactio...
2024-03-27T18:48:31.000Z
2024-03-27T18:48:31.826Z
[]
/posts/merve/554406272304085
2,899
0
698173564421191
[ { "type": "text", "value": "Thanks to the 4000+ attempts to jailbreak LLM chatbots and 2000+ votes cast on the Chatbot Guardrails Arena in the 4 days since release! ", "raw": "Thanks to the 4000+ attempts to jailbreak LLM chatbots and 2000+ votes cast on the Chatbot Guardrails Arena in the 4 days since ...
Thanks to the 4000+ attempts to jailbreak LLM chatbots and 2000+ votes cast on the Chatbot Guardrails Arena in the 4 days since release! Based on the players' feedback, we have updated the instructions to be clearer and to emphasize that players should vote after identifying a secure chatbot. Tell us what you think when you play again: https://huggingface.co/spaces/lighthouzai/guardrails-arena
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65304c99471261a41237c94a/Cl0H2RtemLeeW0VHGHZ0j.png", "fullname": "Srijan Kumar", "name": "srijankedia", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 14, "isFollowing": false }
[]
[]
[ { "reaction": "โค๏ธ", "users": [ "samusenps", "osanseviero", "Tom-Neverwinter", "clefourrier", "clem", "lunarflu", "reach-vb", "AtAndDev", "normanschizogh", "chuangxinlezhi", "sbrandeis" ], "count": 11 }, { "reaction": "๐Ÿง ", ...
2024-03-27T16:44:52.000Z
2024-03-27T16:44:52.435Z
[]
/posts/srijankedia/698173564421191
1,579
0
139271554761909
[ { "type": "text", "value": "Here, we share the technical details of InternLM2, the current state-of-the-art open-source LLMs!!", "raw": "Here, we share the technical details of InternLM2, the current state-of-the-art open-source LLMs!!", "href": null, "resource": null, "url": null, "code...
Here, we share the technical details of InternLM2, the current state-of-the-art open-source LLMs!! See collections https://huggingface.co/collections/internlm/internlm2-65b0ce04970888799707893c Paper: https://huggingface.co/papers/2403.17297
{ "avatarUrl": "/avatars/18958b8406d1ce492b54c1c839f18c54.svg", "fullname": "Wenwei Zhang", "name": "ZwwWayne", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 10, "isFollowing": false }
[]
[]
[ { "reaction": "โค๏ธ", "users": [ "samusenps", "osanseviero", "clem", "bedirt", "victor" ], "count": 5 } ]
2024-03-27T16:11:15.000Z
2024-03-27T16:11:15.797Z
[]
/posts/ZwwWayne/139271554761909
1,501
0
252914089485275
[ { "type": "text", "value": "Have we really squeezed out the capacity of a compact chat model? Thrilled to see our latest open model, Starling-7B, ranks 13th among all models in Chatbot Arena! ", "raw": "Have we really squeezed out the capacity of a compact chat model? Thrilled to see our latest open mod...
Have we really squeezed out the capacity of a compact chat model? Thrilled to see our latest open model, Starling-7B, ranks 13th among all models in Chatbot Arena! ๐Ÿš€ As a 7B model, Starling surpasses larger open and proprietary models, including Claude-2, GPT-3.5-Turbo, Gemini Pro, Mixtral 8x7B and Llama2-70B, and is currently the best 7B chat model in Chatbot Arena! Try out the model on HF here: https://huggingface.co/Nexusflow/Starling-LM-7B-beta
{ "avatarUrl": "/avatars/128ceae78490110ae41202851e84d58e.svg", "fullname": "Banghua Zhu", "name": "banghua", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 39, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/647b8885aba7062fe5c32000/aLY9sZyHnRtnYEtzxIgah.png" } ]
[]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "brian-yu-nexusflow", "banghua", "clem", "ThWu", "jian-zhang-nexusflow", "mahiatlinux", "osanseviero", "unnaymed", "Lewdiculous", "dalraf", "lunarflu", "mohammedbriman", "ucyang" ], "count": ...
2024-03-27T15:48:56.000Z
2024-03-27T15:48:56.245Z
[]
/posts/banghua/252914089485275
1,516
0
254221141271676
[ { "type": "text", "value": "New state-of-the-art open LLM! ๐Ÿš€ Databricks just released DBRX, a 132B MoE trained on 12T tokens. Claiming to surpass OpenAI GPT-3.5 and is competitive with Google Gemini 1.0 Pro. ๐Ÿคฏ", "raw": "New state-of-the-art open LLM! ๐Ÿš€ Databricks just released DBRX, a 132B MoE traine...
New state-of-the-art open LLM! ๐Ÿš€ Databricks just released DBRX, a 132B MoE trained on 12T tokens. Claiming to surpass OpenAI GPT-3.5 and is competitive with Google Gemini 1.0 Pro. ๐Ÿคฏ TL;DR ๐Ÿงฎ 132B MoE with 16 experts with 4 active in generation ๐ŸชŸ 32 000 context window ๐Ÿ“ˆ Outperforms open LLMs on common benchmarks, including MMLU ๐Ÿš€ Up to 2x faster inference than Llama 2 70B ๐Ÿ’ป Trained on 12T tokens ๐Ÿ”ก Uses the GPT-4 tokenizer ๐Ÿ“œ Custom License, commercially useable Collection: https://huggingface.co/collections/databricks/dbrx-6601c0852a0cdd3c59f71962 Demo: https://huggingface.co/spaces/databricks/dbrx-instruct Kudos to the Team at Databricks and MosaicML for this strong release in the open community! ๐Ÿค—
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1624629516652-5ff5d596f244529b3ec0fb89.png", "fullname": "Philipp Schmid", "name": "philschmid", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 657, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/5ff5d596f244529b3ec0fb89/0-YBFNDbvD_xLRMNjk_Nd.png" }, { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/5ff5d596f244529b3ec0fb89/H2QtWjot15itu9k_ypPLl.png" }, { "type": "image...
[]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "victor", "alielfilali01", "Shinku", "osanseviero", "samusenps", "giux78", "Tom-Neverwinter", "mmhamdy", "sepal", "giulianobr", "carlos-eduardo", "eppie2", "VictorSanh", "dalraf", "lunarf...
2024-03-27T13:59:58.000Z
2024-03-28T18:26:50.483Z
[ { "avatarUrl": "/avatars/afbc48df2e8c47c35be48168113d83c0.svg", "fullname": "s", "name": "Tom-Neverwinter", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 2, "isFollowing": false }, { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/pro...
/posts/philschmid/254221141271676
6,577
4
528922528677736
[ { "type": "text", "value": " ๐Ÿฐ Happy ML Easter! ๐Ÿฐ", "raw": " ๐Ÿฐ Happy ML Easter! ๐Ÿฐ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, ...
๐Ÿฐ Happy ML Easter! ๐Ÿฐ I recently had a wager with my colleagues which had me create AI-assisted videos of myself in an Easter Bunny costume singing an AI-generated easter song (in different languages). I did it in 3 simple steps: ๐Ÿ‡I created an easter Bunny version of myself with this space: https://huggingface.co/spaces/multimodalart/Ip-Adapter-FaceID by @multimodalart ๐ŸŽถ Created a song on suno.ai ๐ŸŽž๏ธ Used the Dreamtalk space by @fffiloni to animate the image, conditioned on the song: https://huggingface.co/spaces/fffiloni/dreamtalk
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/607feb037c746d01ecb19180/qs3NO5v-Ej5UaKns8yFW8.jpeg", "fullname": "Johannes Kolbe", "name": "johko", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 54, "isFollowing": false }
[ { "type": "video", "url": "https://cdn-uploads.huggingface.co/production/uploads/607feb037c746d01ecb19180/caigr9VEvW1oP_8-8W9O2.mp4" }, { "type": "video", "url": "https://cdn-uploads.huggingface.co/production/uploads/607feb037c746d01ecb19180/2lJHe9pCvwParuw6xYHbq.mp4" }, { "type": "video...
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61868ce808aae0b5499a2a95/F6BA0anbsoY_Z7M1JrwOe.jpeg", "fullname": "Sylvain Filoni", "name": "fffiloni", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 5185 }, { "avatarUr...
[ { "reaction": "โค๏ธ", "users": [ "fffiloni", "osanseviero", "clem", "victor", "linoyts", "Wauplin", "kramp", "yyassif", "MarinaraSpaghetti", "sayandafadar", "alielfilali01", "sbrandeis" ], "count": 12 }, { "reaction": "๐Ÿš€"...
2024-03-27T13:08:34.000Z
2024-03-27T13:09:55.557Z
[]
/posts/johko/528922528677736
1,545
0
160289342297994
[ { "type": "text", "value": "Would 1-2 sentence tl;dr summaries of datasets on the Hub be useful for you? ", "raw": "Would 1-2 sentence tl;dr summaries of datasets on the Hub be useful for you? ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": nu...
Would 1-2 sentence tl;dr summaries of datasets on the Hub be useful for you? For example, for the https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T dataset, would the following summary help give you a quick sense of its content? > tl;dr: RedPajama is a fully open-source implementation of the LLaMa dataset, consisting of 1.2 trillion tokens from sources like Commoncrawl, C4, GitHub, Books, ArXiv, Wikipedia, and StackExchange, primarily in English, and is structured with metadata for each text sample. I've created a dataset with example summaries of the 500 most liked datasets on the Hub: https://huggingface.co/datasets/davanstrien/dataset-tldr Would these kinds of summaries be helpful?
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1627505688463-60107b385ac3e86b3ea4fc34.jpeg", "fullname": "Daniel van Strien", "name": "davanstrien", "type": "user", "isPro": true, "isHf": true, "isMod": false, "followerCount": 410, "isFollowing": false }
[]
[]
[ { "reaction": "โค๏ธ", "users": [ "osanseviero", "merve", "clem", "samusenps", "sequelbox", "DataSoul", "Arakinas" ], "count": 7 } ]
2024-03-27T12:12:16.000Z
2024-03-27T15:00:31.046Z
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1648113222875-6141a88b3a0ec78603c9e784.png", "fullname": "Merve Noyan", "name": "merve", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 5589, "isFollowing": false }, { ...
/posts/davanstrien/160289342297994
1,321
2
223529654460707
[ { "type": "text", "value": "๐— ๐—ฎ๐—ท๐—ผ๐—ฟ ๐—ง๐—ข๐— : ๐—ฃ๐—น๐—ฎ๐—ป๐—ฒ๐˜ ๐—˜๐—ฎ๐—ฟ๐˜๐—ต ๐—ถ๐˜€ ๐—ฏฬถ๐—นฬถ๐˜‚ฬถ๐—ฒฬถ ๐Ÿฑ.๐Ÿฐ๐Ÿฌ๐Ÿฑ ๐—š๐—›๐˜‡", "raw": "๐— ๐—ฎ๐—ท๐—ผ๐—ฟ ๐—ง๐—ข๐— : ๐—ฃ๐—น๐—ฎ๐—ป๐—ฒ๐˜ ๐—˜๐—ฎ๐—ฟ๐˜๐—ต ๐—ถ๐˜€ ๐—ฏฬถ๐—นฬถ๐˜‚ฬถ๐—ฒฬถ ๐Ÿฑ.๐Ÿฐ๐Ÿฌ๐Ÿฑ ๐—š๐—›๐˜‡", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": nu...
๐— ๐—ฎ๐—ท๐—ผ๐—ฟ ๐—ง๐—ข๐— : ๐—ฃ๐—น๐—ฎ๐—ป๐—ฒ๐˜ ๐—˜๐—ฎ๐—ฟ๐˜๐—ต ๐—ถ๐˜€ ๐—ฏฬถ๐—นฬถ๐˜‚ฬถ๐—ฒฬถ ๐Ÿฑ.๐Ÿฐ๐Ÿฌ๐Ÿฑ ๐—š๐—›๐˜‡ ๐Ÿšจ EXPANSION RELEASE: ๐—ฆ๐—ฒ๐—ป๐˜๐—ถ๐—ป๐—ฒ๐—น-๐Ÿญ ๐—ถ๐˜€ ๐—ป๐—ผ๐˜„ ๐—ฎ๐˜ƒ๐—ฎ๐—ถ๐—น๐—ฎ๐—ฏ๐—น๐—ฒ in the MajorTOM-Core! https://huggingface.co/datasets/Major-TOM/Core-S1RTC ๐ŸŽ Together with @aliFrancis we've been racing to release the first official expansion to the Major TOM project. MajorTOM-Core-S1RTC contains 1,469,955 of SAR images paired to Sentinel-2 images from Core-S2. ๐Ÿ”We cover more than 65% of the optical coverage with an average time shift of 7 days. 16 TB of radiometrically calibrated SAR imagery, available in the exact same format as the existing Major-TOM data. ๐Ÿ—บ๏ธ You can explore instantly in our viewing app: https://huggingface.co/spaces/Major-TOM/MajorTOM-Core-Viewer So, what now? ๐Ÿงฑ ๐‚๐จ๐ฆ๐ฆ๐ฎ๐ง๐ข๐ญ๐ฒ ๐†๐ซ๐จ๐ฐ๐ญ๐ก: our community continues to grow! To coordinate the upcoming expansions as well as use cases of the open data, we will organise a meet up on 23 April, you can ๐ซ๐ž๐ ๐ข๐ฌ๐ญ๐ž๐ซ ๐ฒ๐จ๐ฎ๐ซ ๐ข๐ง๐ญ๐ž๐ซ๐ž๐ฌ๐ญ here: https://forms.gle/eBj8JvibJx9b6PLf9 ๐Ÿš‚ ๐Ž๐ฉ๐ž๐ง ๐ƒ๐š๐ญ๐š ๐Ÿ๐จ๐ซ ๐Ž๐ฉ๐ž๐ง ๐Œ๐จ๐๐ž๐ฅ๐ฌ: Major-TOM Core dataset is currently supporting several strands of ongoing research within and outwith our lab and we are looking forward to the time when we can release models that take advantage of that data! https://huggingface.co/Major-TOM ๐Ÿ“Œ ๐๐จ๐ฌ๐ญ๐ž๐ซ ๐š๐ญ ๐ˆ๐†๐€๐‘๐’๐’: We will present Major TOM project as a poster at IGARSS in Athens (July) - come talk to us if you're there! You can access the paper here: https://huggingface.co/papers/2402.12095 ๐ŸŒŒ Developed at European Space Agency ฮฆ-lab in partnership with Hugging Face
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1678741407493-6304c06eeb6d777a838eab63.png", "fullname": "Mikolaj Czerkawski", "name": "mikonvergence", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 25, "isFollowing": false }
[ { "type": "video", "url": "https://cdn-uploads.huggingface.co/production/uploads/6304c06eeb6d777a838eab63/qZtkQ9CZ4Nrrh5PVLGQW_.mp4" } ]
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/635011301d81beb8e2455ee9/NyDIbzavucEIyFDHnaAv0.jpeg", "fullname": "Alistair Francis", "name": "aliFrancis", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 21 } ]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "aliFrancis", "davanstrien", "osanseviero", "clem", "Tom-Neverwinter", "robmarkcole" ], "count": 6 }, { "reaction": "๐Ÿš€", "users": [ "samusenps", "clem", "Tom-Neverwinter" ], "count": 3 } ]
2024-03-27T10:25:46.000Z
2024-03-27T10:27:23.652Z
[]
/posts/mikonvergence/223529654460707
1,470
0
292464465490361
[ { "type": "text", "value": "๐Ÿ” Today's pick in Interpretability & Analysis of LMs: Have Faith in Faithfulness: Going Beyond Circuit Overlap When Finding Model Mechanisms by ", "raw": "๐Ÿ” Today's pick in Interpretability & Analysis of LMs: Have Faith in Faithfulness: Going Beyond Circuit Overlap When Fin...
๐Ÿ” Today's pick in Interpretability & Analysis of LMs: Have Faith in Faithfulness: Going Beyond Circuit Overlap When Finding Model Mechanisms by @mwhanna @sandropezzelle @belinkov Edge attribution patching (EAP) is a circuit discovery technique using gradients to approximate the effects of causal intervening on each model edge. In the literature, its effectiveness is validated by comparing the overlap of its resulting circuits with those found via causal interventions (much more expensive). This work: 1. Proposes a new method for faithful and efficient circuit discovery named edge attribution patching with integrated gradients (EAP-IG) 2. Evaluates the faithfulness EAP, EAP-IG and activation patching, i.e. whether behavior of the model remains consistent after all non-circuit edges are ablated. 3. Highlights that, while the no-overlap and full-overlap of EAP-like methods with activation patching results are generally good indicators of unfaithful and faithful (respectively) circuit identification, circuits with moderate overlap cannot generally assumed to be faithful to model behavior. An advantage of EAP-IG is enabling the usage of KL-Divergence as a target for gradient propagation, which is not possible in the case of raw gradient-based EAP. EAP-IG runtime is approximately similar to the one of EAP, with a small number of steps to approximate the gradient integral. Importantly, circuit faithfulness does not imply completeness, i.e. whether all components participating towards a specific task were accounted for. This aspect is identified as interesting for future work. ๐Ÿ“„ Paper: https://huggingface.co/papers/2403.17806 ๐Ÿ” All daily picks: https://huggingface.co/collections/gsarti/daily-picks-in-interpretability-and-analysis-of-lms-65ae3339949c5675d25de2f9
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1670231290373-5e7749883d77a72421292d07.jpeg", "fullname": "Gabriele Sarti", "name": "gsarti", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 205, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/5e7749883d77a72421292d07/4_nMQgwlvRITNWQLO4w86.png" }, { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/5e7749883d77a72421292d07/TXwT9VOLdivdURyZj6gXW.png" }, { "type": "image...
[ { "avatarUrl": "/avatars/186a9aed84681246f48ed2a012c50def.svg", "fullname": "Yonatan Belinkov", "name": "belinkov", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 3 }, { "avatarUrl": "/avatars/cee8dc17ab89b050a862dee09a0a0195.svg", "fullna...
[ { "reaction": "โค๏ธ", "users": [ "javifer", "osanseviero", "samusenps", "clem", "giux78", "mmhamdy", "alemiaschi" ], "count": 7 }, { "reaction": "๐Ÿ‘", "users": [ "shiv2050" ], "count": 1 } ]
2024-03-27T09:34:06.000Z
2024-03-27T09:34:06.994Z
[]
/posts/gsarti/292464465490361
1,261
0
285448939482538
[ { "type": "text", "value": "Based on the work of ", "raw": "Based on the work of ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@mrinaldi", "href": null...
Based on the work of @mrinaldi and @ruggsea we just released the biggest - ready for training - conversational dataset based on Usenet data in the Italian language ๐Ÿ‡ฎ๐Ÿ‡น๐Ÿ‡ฎ๐Ÿ‡น๐Ÿ‡ฎ๐Ÿ‡น๐Ÿ‡ฎ๐Ÿ‡น๐Ÿ‡ฎ๐Ÿ‡น๐Ÿ‡ฎ๐Ÿ‡น๐Ÿ‡ฎ๐Ÿ‡น. It contains about 9 millions of conversations made by real humans. https://huggingface.co/datasets/mii-community/UsenetArchiveIT-conversations
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/5fef4eb7770b06e11c2c6381/1NMdigjCGtn0yvQZSi5NJ.png", "fullname": "Alessandro Ercolani", "name": "giux78", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 44, "isFollowing": false }
[]
[ { "avatarUrl": "/avatars/641876a24d4ee45dab0f9723d7b9e7f1.svg", "fullname": "Matteo Rinaldi", "name": "mrinaldi", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 3 }, { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/635f...
[ { "reaction": "๐Ÿ‘", "users": [ "mrinaldi", "gsarti", "fgamezf", "osanseviero", "giux78", "merve", "clem" ], "count": 7 }, { "reaction": "โค๏ธ", "users": [ "gsarti", "osanseviero", "merve", "samusenps", "clem", "g...
2024-03-27T07:54:41.000Z
2024-03-27T08:15:52.907Z
[]
/posts/giux78/285448939482538
1,285
0
826730657864057
[ { "type": "text", "value": "Exciting news! ๐ŸŽ‰ I've created the OpenCerebrum datasets, open-source alternatives to Aether Research's proprietary Cerebrum dataset.", "raw": "Exciting news! ๐ŸŽ‰ I've created the OpenCerebrum datasets, open-source alternatives to Aether Research's proprietary Cerebrum dataset...
Exciting news! ๐ŸŽ‰ I've created the OpenCerebrum datasets, open-source alternatives to Aether Research's proprietary Cerebrum dataset. The first, OpenCerebrum SFT, is a text-generation and question-answering dataset with ~1.2M examples, curated from sources like Open-Orca, glaiveai, camel-ai, and more! ๐Ÿ“š The second, OpenCerebrum DPO, is a smaller dataset with ~21k examples, focusing on data point optimization. It's curated from sources like jondurbin, argilla, grimulkan, and others. ๐Ÿ“Š Both datasets are licensed under Apache-2.0 and are available in English. They're ready for use in your projects, and I welcome any feedback for future improvements! ๐Ÿš€ https://huggingface.co/datasets/Locutusque/OpenCerebrum-dpo https://huggingface.co/datasets/Locutusque/OpenCerebrum-SFT https://huggingface.co/Locutusque/OpenCerebrum-1.0-7b-SFT https://huggingface.co/Locutusque/OpenCerebrum-1.0-7b-DPO
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/YeFyz1AZVcCRsyNHHtwJG.jpeg", "fullname": "Sebastian Gabarain", "name": "Locutusque", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 180, "isFollowing": false }
[]
[]
[ { "reaction": "โค๏ธ", "users": [ "afrideva", "samusenps", "ajibawa-2023", "giux78", "osanseviero", "lewtun", "dvilasuero", "lhoestq", "victor", "Tom-Neverwinter", "InferenceIllusionist", "MexIvanov", "clefourrier", "mlabonne" ...
2024-03-27T00:07:00.000Z
2024-03-27T14:01:49.658Z
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/634262af8d8089ebaefd410e/pr6KcEebXTo5V2XAlpQNw.png", "fullname": "Fizz ๐Ÿณ๏ธโ€โšง๏ธ", "name": "Fizzarolli", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 49, "isFollowing": false ...
/posts/Locutusque/826730657864057
2,639
5
736768871803868
[ { "type": "text", "value": "Current LLMs are very susceptible to generating toxic, harmful and even dangerous content. They can also generate outputs with gender or racial biases.", "raw": "Current LLMs are very susceptible to generating toxic, harmful and even dangerous content. They can also generate ...
Current LLMs are very susceptible to generating toxic, harmful and even dangerous content. They can also generate outputs with gender or racial biases. The Biden-Harris Executive Order (https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence) sets forth guidelines on what is considered a safe AI system. Following up on these guidelines, we present the world's first open source Biden-Harris Executive Order Red teamed Multilingual Language Model: Aurora-M. The model is trained on 5 languages: English, Hindi, Japanese, Vietnamese and Finnish. Blog: https://huggingface.co/blog/mayank-mishra/aurora Paper coming out soon. Base model: https://huggingface.co/aurora-m/aurora-m-base (not safety tuned) Instruct model: https://huggingface.co/aurora-m/aurora-m-instruct (not safety tuned) Red teamed model: https://huggingface.co/aurora-m/aurora-m-biden-harris-redteamed (safety tuned according to the order mentioned above)
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62cd5057674cdb524450093d/f67rlrdsKPRTLdXCXoa_X.jpeg", "fullname": "Mayank Mishra", "name": "mayank-mishra", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 53, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ˜”", "users": [ "samusenps", "oneiroid", "clem", "lunarflu", "antiven0m", "adamelliotfields", "Joseph717171" ], "count": 7 }, { "reaction": "๐Ÿ”ฅ", "users": [ "samusenps", "mayank-mishra", "clem", "diwank", ...
2024-03-26T23:17:27.000Z
2024-03-27T06:32:58.201Z
[]
/posts/mayank-mishra/736768871803868
1,896
1
529620339719015
[ { "type": "text", "value": "A new paper introduces Visual CoT, a new approach that enhances multi-modal large language models with visual chain-of-thought reasoning capabilities. This allows language models to dynamically identify and focus on specific regions within images that are most relevant for answer...
A new paper introduces Visual CoT, a new approach that enhances multi-modal large language models with visual chain-of-thought reasoning capabilities. This allows language models to dynamically identify and focus on specific regions within images that are most relevant for answering questions, mimicking human-like efficient visual reasoning. Keypoints: * Introduces the 373k Visual CoT dataset with bounding box annotations highlighting essential image regions * Proposes a multi-turn pipeline for focusing on relevant visual inputs * Achieves strong results on multi-modal benchmarks Paper: https://huggingface.co/papers/2403.16999 Code, data and other resources: https://github.com/deepcs233/Visual-CoT Congrats to the authors for their work!
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/657217faabb25ed8aedd5e48/UUHAXeGtOnQBXFD3nYtf2.jpeg", "fullname": "Vlad Bogolin", "name": "vladbogo", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 109, "isFollowing": false }
[]
[]
[ { "reaction": "โค๏ธ", "users": [ "samusenps", "Csplk", "osanseviero", "thesouthfrog", "Deping", "sbrandeis" ], "count": 6 }, { "reaction": "โž•", "users": [ "samusenps", "osanseviero" ], "count": 2 } ]
2024-03-26T22:20:59.000Z
2024-03-26T22:20:59.712Z
[]
/posts/vladbogo/529620339719015
1,385
0
751873931507932
[ { "type": "text", "value": "๐—›๐—ผ๐˜„ ๐—ฑ๐—ผ๐—ฒ๐˜€ ๐—ฏ๐—ฒ๐—ฎ๐—บ ๐˜€๐—ฒ๐—ฎ๐—ฟ๐—ฐ๐—ต ๐—ฑ๐—ฒ๐—ฐ๐—ผ๐—ฑ๐—ถ๐—ป๐—ด ๐˜„๐—ผ๐—ฟ๐—ธ? โžก๏ธ ๐™‰๐™š๐™ฌ ๐™ซ๐™ž๐™จ๐™ช๐™–๐™ก๐™ž๐™ฏ๐™–๐™ฉ๐™ž๐™ค๐™ฃ ๐™ฉ๐™ค๐™ค๐™ก! ๐Ÿ‘€", "raw": "๐—›๐—ผ๐˜„ ๐—ฑ๐—ผ๐—ฒ๐˜€ ๐—ฏ๐—ฒ๐—ฎ๐—บ ๐˜€๐—ฒ๐—ฎ๐—ฟ๐—ฐ๐—ต ๐—ฑ๐—ฒ๐—ฐ๐—ผ๐—ฑ๐—ถ๐—ป๐—ด ๐˜„๐—ผ๐—ฟ๐—ธ? โžก๏ธ ๐™‰๐™š๐™ฌ ๐™ซ๐™ž๐™จ๐™ช๐™–๐™ก๐™ž๐™ฏ๐™–๐™ฉ๐™ž๐™ค๐™ฃ ๐™ฉ๐™ค๐™ค๐™ก! ๐Ÿ‘€", "href": null, "resource": ...
๐—›๐—ผ๐˜„ ๐—ฑ๐—ผ๐—ฒ๐˜€ ๐—ฏ๐—ฒ๐—ฎ๐—บ ๐˜€๐—ฒ๐—ฎ๐—ฟ๐—ฐ๐—ต ๐—ฑ๐—ฒ๐—ฐ๐—ผ๐—ฑ๐—ถ๐—ป๐—ด ๐˜„๐—ผ๐—ฟ๐—ธ? โžก๏ธ ๐™‰๐™š๐™ฌ ๐™ซ๐™ž๐™จ๐™ช๐™–๐™ก๐™ž๐™ฏ๐™–๐™ฉ๐™ž๐™ค๐™ฃ ๐™ฉ๐™ค๐™ค๐™ก! ๐Ÿ‘€ In Decoder-type LLMs like GPT4 or Mistral-Large, the output is generated one token (=word part) at a time. That's why they're nicknamed "stochastic parrots": the "thinking" process only happens one step at a time, so it can seem really myopic. ๐’๐จ ๐ก๐จ๐ฐ ๐ข๐ฌ ๐ญ๐ก๐ž ๐ง๐ž๐ฑ๐ญ ๐ญ๐จ๐ค๐ž๐ง ๐ฌ๐ž๐ฅ๐ž๐œ๐ญ๐ž๐? ๐Ÿ“Š Given its input sentence like "๐˜ž๐˜ฉ๐˜ข๐˜ต ๐˜ช๐˜ด ๐˜ต๐˜ฉ๐˜ฆ 7๐˜ต๐˜ฉ ๐˜๐˜ช๐˜ฃ๐˜ฐ๐˜ฏ๐˜ข๐˜ค๐˜ค๐˜ช ๐˜ฏ๐˜ถ๐˜ฎ๐˜ฃ๐˜ฆ๐˜ณ? ๐˜›๐˜ฉ๐˜ฆ 7๐˜ต๐˜ฉ ๐˜๐˜ช๐˜ฃ๐˜ฐ๐˜ฏ๐˜ข๐˜ค๐˜ค๐˜ช ๐˜ฏ๐˜ถ๐˜ฎ๐˜ฃ๐˜ฆ๐˜ณ", the Decoder LLM generates, for each token in its vocabulary, a score that represents this token's probability of coming next. For instance: "๐™ž๐™จ" gets score 0.56, and "๐™˜๐™–๐™ฃ" gets score 0.35. ๐Ÿค‘ ๐†๐ซ๐ž๐ž๐๐ฒ ๐๐ž๐œ๐จ๐๐ข๐ง๐  is the naive option where you simply take the next most probable token at each step. But this creates paths that maximize very short-term rewards, thus may overlook better paths for the long term (like this time when you played FIFA all evening and arrived unprepared to your school exam on the next day). In our example, the next highest score token might be "๐™ž๐™จ", but this will strongly bias the LLM towards giving an hasty response. On the opposite, starting with "๐™˜๐™–๐™ฃ" could have been completed with "๐˜ฃ๐˜ฆ ๐˜ฐ๐˜ฃ๐˜ต๐˜ข๐˜ช๐˜ฏ๐˜ฆ๐˜ฅ ๐˜ง๐˜ณ๐˜ฐ๐˜ฎ ๐˜ค๐˜ฐ๐˜ฎ๐˜ฑ๐˜ถ๐˜ต๐˜ช๐˜ฏ๐˜จ ๐˜ฑ๐˜ณ๐˜ฆ๐˜ท๐˜ช๐˜ฐ๐˜ถ๐˜ด ๐˜๐˜ช๐˜ฃ๐˜ฐ๐˜ฏ๐˜ข๐˜ค๐˜ค๐˜ช ๐˜ฏ๐˜ถ๐˜ฎ๐˜ฃ๐˜ฆ๐˜ณ๐˜ด ๐˜ง๐˜ช๐˜ณ๐˜ด๐˜ต", which steers the LLM towards a correct reasoning! ๐Ÿ—บ๏ธ ๐๐ž๐š๐ฆ ๐ฌ๐ž๐š๐ซ๐œ๐ก improves on greedy decoding by generating at each step several paths - called beams - instead of one. This allows the generation to explore a much larger space, thus find better completions. In our example, both the "๐™ž๐™จ" and the "๐™˜๐™–๐™ฃ" completion could be tested. โœ… ๐Ÿ‘‰ I've created a tool to let you visualize it, thank you @joaogante for your great help! ๐™๐™ง๐™ฎ ๐™ž๐™ฉ ๐™๐™š๐™ง๐™š: https://huggingface.co/spaces/m-ric/beam_search_visualizer
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63d10d4e8eaa4831005e92b5/7p7-OmWM6PqqCs7ZStPGD.jpeg", "fullname": "Aymeric Roucher", "name": "m-ric", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 494, "isFollowing": false }
[]
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1641203017724-noauth.png", "fullname": "Joao Gante", "name": "joaogante", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 96 } ]
[ { "reaction": "โค๏ธ", "users": [ "samusenps", "matlok", "joaogante", "clefourrier", "fieryTransition", "Jafta", "awagner-mainz" ], "count": 7 }, { "reaction": "๐Ÿ”ฅ", "users": [ "osanseviero", "Awal", "mmhamdy", "samusenps", ...
2024-03-26T17:23:28.000Z
2024-03-26T17:23:28.756Z
[]
/posts/m-ric/751873931507932
1,702
0
236311186560364
[ { "type": "text", "value": "Very glad to welcome ", "raw": "Very glad to welcome ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@josefprusa", "href": nu...
Very glad to welcome @josefprusa, pioneer of 3D printing and open source hardware, founder of https://www.prusa3d.com/, to the HF Hub ๐Ÿ‘‹ AI applied to 3D printing could be big.
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/5dd96eb166059660ed1ee413/NQtzmrDdbG0H8qkZvRyGk.jpeg", "fullname": "Julien Chaumond", "name": "julien-c", "type": "user", "isPro": true, "isHf": true, "isMod": false, "followerCount": 1580, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/5dd96eb166059660ed1ee413/Buyqz5oR6wuj5-C9iFUkE.png" } ]
[ { "avatarUrl": "/avatars/533162c0048b18e983a1b220b82230e1.svg", "fullname": "Josef Prลฏลกa", "name": "josefprusa", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 1143 } ]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "victor", "London12345", "satpalsr", "diwank", "ZennyKenny", "radames", "adamelliotfields", "VictorSanh", "mmhamdy", "samusenps", "pcuenq", "kramp", "osanseviero", "stattmone", "mdouglas"...
2024-03-26T16:25:55.000Z
2024-03-28T16:48:32.476Z
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1583857146757-5e67bdd61009063689407479.jpeg", "fullname": "Clem ๐Ÿค—", "name": "clem", "type": "user", "isPro": true, "isHf": true, "isMod": false, "followerCount": 1763, "isFollowing": false } ]
/posts/julien-c/236311186560364
3,160
1
161743256497155
[ { "type": "text", "value": "LLM Agent Operating System", "raw": "LLM Agent Operating System", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": ...
LLM Agent Operating System https://huggingface.co/papers/2403.16971 The integration and deployment of large language model (LLM)-based intelligent agents have been fraught with challenges that compromise their efficiency and efficacy. Among these issues are sub-optimal scheduling and resource allocation of agent requests over the LLM, the difficulties in maintaining context during interactions between agent and LLM, and the complexities inherent in integrating heterogeneous agents with different capabilities and specializations. The rapid increase of agent quantity and complexity further exacerbates these issues, often leading to bottlenecks and sub-optimal utilization of resources. Inspired by these challenges, this paper presents AIOS, an LLM agent operating system, which embeds large language model into operating systems (OS). Specifically, AIOS is designed to optimize resource allocation, facilitate context switch across agents, enable concurrent execution of agents, provide tool service for agents, and maintain access control for agents. We present the architecture of such an operating system, outline the core challenges it aims to resolve, and provide the basic design and implementation of the AIOS. Our experiments on concurrent execution of multiple agents demonstrate the reliability and efficiency of our AIOS modules. Through this, we aim to not only improve the performance and efficiency of LLM agents but also to pioneer for better development and deployment of the AIOS ecosystem in the future.
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "fullname": "AK", "name": "akhaliq", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 5205, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/60f1abe7544c2adfd699860c/kQUttU5GaHg-9HDxed9JT.png" } ]
[]
[ { "reaction": "โค๏ธ", "users": [ "samusenps", "QiushiSun", "osanseviero", "alielfilali01", "AtAndDev", "Jakaline" ], "count": 6 }, { "reaction": "๐Ÿ‘€", "users": [ "diwank", "AtAndDev", "osanseviero" ], "count": 3 }, { "re...
2024-03-26T15:32:12.000Z
2024-03-27T06:54:25.215Z
[ { "avatarUrl": "/avatars/88a391b315665c42bced2df0c300bf8d.svg", "fullname": "Vishwas", "name": "Vishwas1", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 5, "isFollowing": false }, { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/prod...
/posts/akhaliq/161743256497155
2,191
3
428244381121381
[ { "type": "text", "value": "MLX RAG with GGUF Models", "raw": "MLX RAG with GGUF Models", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null...
MLX RAG with GGUF Models Minimal, clean code implementation of RAG with mlx inferencing for GGUF models. Code: https://github.com/Jaykef/mlx-rag-gguf The code here builds on vegaluisjose's example, it has been optimized to support RAG-based inferencing for .gguf models. I am using BAAI/bge-small-en for the embedding model, tinyllama-1.1b-chat-v1.0.Q4_0.gguf as base model and the custom vector database script for indexing texts in a pdf file. Inference speeds can go up to ~413 tokens/sec for prompts and ~36 tokens/sec for generation on my M2 Air. Queries make use of both .gguf (base model) and .npz (retrieval model) simultaneouly resulting in much higher inferencing speeds.
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6438a9027de34e8ea7e4b257/vib8QSd1AWMr_bR9ig_xJ.jpeg", "fullname": "Jaward Sesay", "name": "Jaward", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 191, "isFollowing": false }
[ { "type": "video", "url": "https://cdn-uploads.huggingface.co/production/uploads/6438a9027de34e8ea7e4b257/_VswNFRiMFxPPy3todCWu.mp4" } ]
[]
[ { "reaction": "โค๏ธ", "users": [ "samusenps" ], "count": 1 } ]
2024-03-26T13:38:01.000Z
2024-03-26T13:38:01.882Z
[]
/posts/Jaward/428244381121381
1,429
0
481178346523516
[ { "type": "text", "value": "Diaries of Open Source. Part 10 ๐Ÿš€", "raw": "Diaries of Open Source. Part 10 ๐Ÿš€", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\...
Diaries of Open Source. Part 10 ๐Ÿš€ ๐ŸŒผMarigold-LCM: A super fast SOTA Depth Estimator Demo: https://hf.co/spaces/prs-eth/marigold-lcm Original paper: https://hf.co/papers/2312.02145 Model: https://hf.co/prs-eth/marigold-lcm-v1-0 ๐ŸŒŸQuiet-STaR: A self-teaching technique via internal monologue Paper: https://hf.co/papers/2403.09629 GitHub: https://github.com/ezelikman/quiet-star Tweetutorial: https://twitter.com/ericzelikman/status/1768663835106513041 ๐Ÿ–ผ๏ธ WebSight v0.2: A image-to-code dataset containing tailwind CSS, images in screenshots, and more! Dataset: https://hf.co/datasets/HuggingFaceM4/WebSight Paper: https://hf.co/papers/2403.09029 Blog: https://hf.co/blog/websight ๐Ÿ•ต๏ธAgent-FLAN - effective agent tuning for LLMs Paper: https://hf.co/papers/2403.12881 Model: https://hf.co/internlm/Agent-FLAN-7b Dataset: https://hf.co/datasets/internlm/Agent-FLAN Website: https://internlm.github.io/Agent-FLAN/ ๐Ÿ”ฅHPT, a family of multimodal LLMs from HyperGAI Blog post: https://hypergai.com/blog/introducing-hpt-a-family-of-leading-multimodal-llms Model: https://huggingface.co/HyperGAI/HPT GitHub: https://github.com/hyperGAI/HPT ๐ŸŒModels and datasets around the world - Tess-70B, a MiQu-70B fine-tune with high-quality data https://hf.co/migtissera/Tess-70B-v1.6 - UNI, a model trained on 100 million pathology images from 100k+ slides https://hf.co/MahmoodLab/UNI - CONCH, a VLM trained on 1.17 million pathology image-text pairs https://hf.co/MahmoodLab/CONCH
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6032802e1f993496bc14d9e3/w6hr-DEQot4VVkoyRIBiy.png", "fullname": "Omar Sanseviero", "name": "osanseviero", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 2868, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿค—", "users": [ "Bingxin", "toshas", "DmitryRyumin", "thomwolf", "migtissera", "julien-c", "London12345", "mmhamdy", "samusenps", "pcuenq", "victor", "sbrandeis" ], "count": 12 }, { "reaction": "โค๏ธ", "...
2024-03-26T11:48:14.000Z
2024-03-28T05:40:19.342Z
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/658a2e76367c76b8ee481b5c/SF1IA8CeQBdc7w1cNR6c0.jpeg", "fullname": "Bryan Alvarado Villalobos", "name": "thebryanalvarado", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": null, ...
/posts/osanseviero/481178346523516
1,615
4
204103548247572
[ { "type": "text", "value": "๐Ÿš€๐ŸŽญ๐ŸŒŸ New Research Alert! ๐ŸŒŸ ๐ŸŽญ๐Ÿš€", "raw": "๐Ÿš€๐ŸŽญ๐ŸŒŸ New Research Alert! ๐ŸŒŸ ๐ŸŽญ๐Ÿš€", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\...
๐Ÿš€๐ŸŽญ๐ŸŒŸ New Research Alert! ๐ŸŒŸ ๐ŸŽญ๐Ÿš€ ๐Ÿ“„ Title: FlashFace: Human Image Personalization with High-fidelity Identity Preservation ๐Ÿ” ๐Ÿ“ Description: FlashFace is a personalized photo editing tool that focuses on high-fidelity identity preservation and improved compliance through advanced encoding and integration strategies. ๐Ÿ‘ฅ Authors: Shilong Zhang, Lianghua Huang, @xichenhku et al. ๐Ÿ”— Paper: https://huggingface.co/papers/2403.17008 ๐ŸŒ Github Page: https://jshilong.github.io/flashface-page ๐Ÿ“š More Papers: more cutting-edge research presented at other conferences in the https://huggingface.co/spaces/DmitryRyumin/NewEraAI-Papers curated by @DmitryRyumin ๐Ÿš€ Added to the Avatars Collection: https://huggingface.co/collections/DmitryRyumin/avatars-65df37cdf81fec13d4dbac36 ๐Ÿ” Keywords: #FlashFace #Personalization #HighFidelityIdentity #DeepLearning #Innovation
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/nRCxbVng_PPBqKd-Z3KVc.jpeg", "fullname": "Dmitry Ryumin", "name": "DmitryRyumin", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 377, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/6493306970d925ae80523a53/o85gNpS2Ra-YYn8QDHZFV.png" }, { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/6493306970d925ae80523a53/F9ZnPt1ITuZ51rD0v0XTg.png" }, { "type": "image...
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/nRCxbVng_PPBqKd-Z3KVc.jpeg", "fullname": "Dmitry Ryumin", "name": "DmitryRyumin", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 377 }, { "avatarUrl": "https://cdn...
[ { "reaction": "๐Ÿ”ฅ", "users": [ "DmitryRyumin", "waqas700297", "kgourgou", "London12345", "samusenps", "KvrParaskevi", "whooray" ], "count": 7 }, { "reaction": "๐Ÿ‘", "users": [ "dashfunnydashdash", "loveisp", "AtonMountlook", ...
2024-03-26T08:40:00.000Z
2024-03-26T08:41:23.213Z
[]
/posts/DmitryRyumin/204103548247572
1,602
0
798001054518351
[ { "type": "text", "value": "Now You Can Full Fine Tune / DreamBooth Stable Diffusion XL (SDXL) with only 10.3 GB VRAM via OneTrainer โ€” Both U-NET and Text Encoder 1 is trained โ€” Compared 14 GB config vs slower 10.3 GB Config", "raw": "Now You Can Full Fine Tune / DreamBooth Stable Diffusion XL (SDXL) wi...
Now You Can Full Fine Tune / DreamBooth Stable Diffusion XL (SDXL) with only 10.3 GB VRAM via OneTrainer โ€” Both U-NET and Text Encoder 1 is trained โ€” Compared 14 GB config vs slower 10.3 GB Config Full config and instructions are shared here : https://www.patreon.com/posts/96028218 Used SG161222/RealVisXL_V4.0 as a base model and OneTrainer to train on Windows 10 : https://github.com/Nerogar/OneTrainer The posted example x/y/z checkpoint comparison images are not cherry picked. So I can get perfect images with multiple tries. Trained 150 epochs, 15 images and used my ground truth 5200 regularization images : https://www.patreon.com/posts/massive-4k-woman-87700469 In each epoch only 15 of regularization images used to make DreamBooth training affect As a caption only โ€œohwx manโ€ is used, for regularization images just โ€œmanโ€ You can download configs and full instructions here : https://www.patreon.com/posts/96028218 Hopefully full public tutorial coming within 2 weeks. I will show all configuration as well The tutorial will be on our channel : https://www.youtube.com/SECourses Training speeds are as below thus durations: RTX 3060 โ€” slow preset : 3.72 second / it thus 15 train images 150 epoch 2 (reg images concept) : 4500 steps = 4500 3.72 / 3600 = 4.6 hours RTX 3090 TI โ€” slow preset : 1.58 second / it thus : 4500 * 1.58 / 3600 = 2 hours RTX 3090 TI โ€” fast preset : 1.45 second / it thus : 4500 * 1.45 / 3600 = 1.8 hours A quick tutorial for how to use concepts in OneTrainer : https://youtu.be/yPOadldf6bI
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1672531901326-6345bd89fe134dfd7a0dba40.png", "fullname": "Furkan Gรถzรผkara", "name": "MonsterMMORPG", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 376, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/3ZiQ52_CdZxKFiXGw-eAs.jpeg" }, { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/bgk0UO27_wx8VDV-3LHsi.jpeg" }, { "type": "ima...
[]
[ { "reaction": "โค๏ธ", "users": [ "samusenps", "radames", "Reza2kn", "Stephane97480", "osanseviero", "Zazalael", "seyf1elislam", "MaziyarPanahi" ], "count": 8 }, { "reaction": "๐Ÿ‘", "users": [ "waqas700297" ], "count": 1 } ]
2024-03-26T04:36:46.000Z
2024-04-02T03:20:22.108Z
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1672531901326-6345bd89fe134dfd7a0dba40.png", "fullname": "Furkan Gรถzรผkara", "name": "MonsterMMORPG", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 376, "isFollowing": false ...
/posts/MonsterMMORPG/798001054518351
2,047
5
592209532314704
[ { "type": "text", "value": "The AI Comic Factory 1.2 has been released a few hours ago!", "raw": "The AI Comic Factory 1.2 has been released a few hours ago!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "t...
The AI Comic Factory 1.2 has been released a few hours ago! You can now tell the LLM in advance how many pages you will want, for richer and deeper stories https://www.loom.com/share/b8b2d55fc60249a78df60adaa2673d2f I the future I would like to improve how layouts are attributed to pages ( seeing the same layout 6x can be a bit boring) Also, I'm still waiting for some new improved AI models to drop, so stay tuned!
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/2RK8J_YSNAK2ob8XZH7w2.jpeg", "fullname": "Julian Bilcke", "name": "jbilcke-hf", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 1312, "isFollowing": false }
[]
[]
[ { "reaction": "โค๏ธ", "users": [ "osanseviero", "victor", "halbarba", "musfiqdehan", "szh", "clem", "samusenps", "ajibawa-2023", "radames", "kramp", "julien-c", "Krass1", "kuramacn", "Handoman", "tschacht", "Chris"...
2024-03-25T21:38:18.000Z
2024-05-10T21:03:45.272Z
[ { "avatarUrl": "/avatars/f4cdeeaf5817bac68fbbf6f1cd6f2a3c.svg", "fullname": "JACK", "name": "JACKTHERAPPA", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": null, "isFollowing": false }, { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/...
/posts/jbilcke-hf/592209532314704
8,100
2
242385169623874
[ { "type": "text", "value": "Introducing Marigold-LCM ๐ŸŒผ โ€” a FAST version of the now popular state-of-the-art depth estimator! Thanks to the latent consistency distillation, it retains the precision of the original Marigold but reaches the solution in just a few steps! ", "raw": "Introducing Marigold-LCM...
Introducing Marigold-LCM ๐ŸŒผ โ€” a FAST version of the now popular state-of-the-art depth estimator! Thanks to the latent consistency distillation, it retains the precision of the original Marigold but reaches the solution in just a few steps! Check out the teaser video attached below and play with the new demo - it accepts videos now! Also, meet the new team member: Tianfu Wang (@Tianfwang) ๐Ÿค— Demo: https://huggingface.co/spaces/prs-eth/marigold-lcm ๐Ÿค— Model: https://huggingface.co/prs-eth/marigold-lcm-v1-0 ๐Ÿค— Original Marigold post: https://huggingface.co/posts/toshas/656973498012745 ๐Ÿค— Paper: https://huggingface.co/papers/2312.02145 ๐ŸŒ Website: https://marigoldmonodepth.github.io ๐Ÿ‘พ Code: https://github.com/prs-eth/marigold ๐Ÿ‘พ Code: pip install diffusers
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62f93abbc4817cfc0756b6f8/rGYLaq-rmoJJYotkC1VXk.jpeg", "fullname": "Anton Obukhov", "name": "toshas", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 68, "isFollowing": false }
[ { "type": "video", "url": "https://cdn-uploads.huggingface.co/production/uploads/62f93abbc4817cfc0756b6f8/LZEpbAbimQH6JGca_5WO9.mp4" } ]
[ { "avatarUrl": "/avatars/a06aecf91bcc005e618460ab31228fb7.svg", "fullname": "Tianfu Wang", "name": "Tianfwang", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 3 } ]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "DmitryRyumin", "samusenps", "osanseviero", "clem", "radames", "Veneco" ], "count": 6 }, { "reaction": "๐Ÿคฏ", "users": [ "DmitryRyumin", "osanseviero", "clem" ], "count": 3 } ]
2024-03-25T18:23:23.000Z
2024-03-25T19:39:03.285Z
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1654292443715-noauth.png", "fullname": "Xin Yin", "name": "Kache", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 1, "isFollowing": false } ]
/posts/toshas/242385169623874
1,977
1
580395077003079
[ { "type": "text", "value": "๐Ÿš€ Just released version 0.22.0 of the ", "raw": "๐Ÿš€ Just released version 0.22.0 of the ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "inline_code", "value": null, ...
๐Ÿš€ Just released version 0.22.0 of the `huggingface_hub` Python library! Exciting updates include: โœจ Chat-completion API in the InferenceClient! ๐Ÿค– Official inference types in InferenceClient! ๐Ÿงฉ Better config and tags in `ModelHubMixin`! ๐Ÿ† Generate model cards for your `ModelHubMixin` integrations! ๐ŸŽ๏ธ x3 download speed in `HfFileSystem`!! Check out the full release notes for more details: https://huggingface.co/spaces/Wauplin/huggingface_hub/discussions/5 ๐Ÿ‘€
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1659336880158-6273f303f6d63a28483fde12.png", "fullname": "Lucain Pouget", "name": "Wauplin", "type": "user", "isPro": true, "isHf": true, "isMod": false, "followerCount": 157, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "pierrci", "raucheacho", "cbensimon", "clefourrier", "victor", "trysem", "nielsr", "monsoon-nlp", "samusenps", "osanseviero", "mmhamdy", "clem", "minhdang", "ajibawa-2023", "julien-c", ...
2024-03-25T14:10:32.000Z
2024-03-28T08:13:06.344Z
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6527e89a8808d80ccff88b7a/CuGNmF1Et8KMQ0mCd1NEJ.jpeg", "fullname": "Lain", "name": "not-lain", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 941, "isFollowing": false }, ...
/posts/Wauplin/580395077003079
2,323
2
402964363815778
[ { "type": "text", "value": "Just released a notebook showing how to finetune moondream: ", "raw": "Just released a notebook showing how to finetune moondream: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { ...
Just released a notebook showing how to finetune moondream: https://github.com/vikhyat/moondream/blob/main/notebooks/Finetuning.ipynb
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63117568fa95534e218da163/8h9zN8aKRxPLBnXW7sqY9.jpeg", "fullname": "Vik Korrapati", "name": "vikhyatk", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 375, "isFollowing": false }
[]
[]
[ { "reaction": "โค๏ธ", "users": [ "NickyNicky", "samusenps", "peaceAsh", "musfiqdehan", "Norod78", "osanseviero", "ajibawa-2023", "Tom-Neverwinter", "clem", "abdullah", "damerajee", "t1u1", "notune", "not-lain", "antiven0...
2024-03-25T04:29:04.000Z
2024-03-25T04:29:04.435Z
[]
/posts/vikhyatk/402964363815778
3,689
0
563404579814446
[ { "type": "text", "value": "Diaries of Open Source. Part 9!", "raw": "Diaries of Open Source. Part 9!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", ...
Diaries of Open Source. Part 9! โฐAmazon releases Chronos, a family of models for time series Base model: https://hf.co/amazon/chronos-t5-large Paper: https://hf.co/papers/2403.07815 Models: https://huggingface.co/collections/amazon/chronos-models-65f1791d630a8d57cb718444 ๐Ÿ’กORPO Alignment: align without a reference model nor SFT! Paper: https://hf.co/papers/2403.07691 Models: https://hf.co/collections/kaist-ai/orpo-65efef87544ba100aef30013 GitHub: https://github.com/xfactlab/orpo ๐Ÿ‡บ๐Ÿ‡ณCohere releases 250M Wikipedia Embeddings in 300+ languages Data: https://hf.co/datasets/Cohere/wikipedia-2023-11-embed-multilingual-v3 Announcement: https://twitter.com/Nils_Reimers/status/1767891859207057618 ๐ŸงฌSegmentNT: a LLM for annotating DNA at single nucleotide resolution Models: https://huggingface.co/collections/InstaDeepAI/segmentnt-65eb4941c57808b4a3fe1319 GitHub repo: https://github.com/instadeepai/nucleotide-transformer Paper: https://www.biorxiv.org/content/10.1101/2024.03.14.584712v1 ๐Ÿš€DynamiCrafter: video generation models for interpolation and looping are out! Project page: https://doubiiu.github.io/projects/DynamiCrafter/ GitHub: https://github.com/Doubiiu/DynamiCrafter Demo: https://hf.co/spaces/Doubiiu/DynamiCrafter_interp_loop ๐Ÿš€Stanford releases Anticipatory Music Transformer: GitHub: https://github.com/jthickstun/anticipation/ Models: https://hf.co/stanford-crfm Original blog announcement: https://crfm.stanford.edu/2023/06/16/anticipatory-music-transformer.html
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6032802e1f993496bc14d9e3/w6hr-DEQot4VVkoyRIBiy.png", "fullname": "Omar Sanseviero", "name": "osanseviero", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 2868, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "YaTharThShaRma999", "giux78", "DmitryRyumin", "mmhamdy", "samusenps", "clem" ], "count": 6 }, { "reaction": "๐Ÿ‘", "users": [ "FlipTip", "samusenps", "avinash02", "clem" ], "count": 4 } ]
2024-03-24T13:29:30.000Z
2024-03-24T13:31:48.176Z
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6032802e1f993496bc14d9e3/w6hr-DEQot4VVkoyRIBiy.png", "fullname": "Omar Sanseviero", "name": "osanseviero", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 2868, "isFollowing":...
/posts/osanseviero/563404579814446
3,268
2
997723311930747
[ { "type": "text", "value": "I and ", "raw": "I and ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@FinancialSupport", "href": null, "resource": null...
I and @FinancialSupport just release the new Italian Leaderboard: https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard It is based on lm-evaluation-harness and at the moment , mainly, on 7 billion models. In the next weeks we will add more models. If you have suggestion or need explanations join our community discord https://discord.gg/a26cRkBCNH
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/5fef4eb7770b06e11c2c6381/1NMdigjCGtn0yvQZSi5NJ.png", "fullname": "Alessandro Ercolani", "name": "giux78", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 44, "isFollowing": false }
[]
[ { "avatarUrl": "/avatars/83d446ebfeafb54b8e2bcfcfb57f13b7.svg", "fullname": "Samuele Colombo", "name": "FinancialSupport", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 20 } ]
[ { "reaction": "๐Ÿ‘€", "users": [ "osanseviero", "victor", "giux78", "clem" ], "count": 4 }, { "reaction": "โค๏ธ", "users": [ "clefourrier", "samusenps", "clem", "giux78" ], "count": 4 } ]
2024-03-24T12:46:56.000Z
2024-03-27T06:18:49.909Z
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1644340617257-noauth.png", "fullname": "Clรฉmentine Fourrier", "name": "clefourrier", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 459, "isFollowing": false }, { "ava...
/posts/giux78/997723311930747
2,661
9
595207139085023
[ { "type": "text", "value": "๐Ÿš€๐Ÿ’ƒ๐ŸŒŸ New Research Alert - ICASSP 2024! ๐ŸŒŸ ๐Ÿ•บ๐Ÿš€", "raw": "๐Ÿš€๐Ÿ’ƒ๐ŸŒŸ New Research Alert - ICASSP 2024! ๐ŸŒŸ ๐Ÿ•บ๐Ÿš€", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", ...
๐Ÿš€๐Ÿ’ƒ๐ŸŒŸ New Research Alert - ICASSP 2024! ๐ŸŒŸ ๐Ÿ•บ๐Ÿš€ ๐Ÿ“„ Title: Text2Avatar: Text to 3D Human Avatar Generation with Codebook-Driven Body Controllable Attribute ๐ŸŒŸ ๐Ÿ“ Description: Text2Avatar is a novel approach that can generate realistic 3D human avatars directly from textual descriptions, enabling multi-attribute control and realistic styling, overcoming the challenges of feature coupling and data scarcity in this domain. ๐Ÿ‘ฅ Authors: Chaoqun Gong et al. ๐Ÿ“… Conference: ICASSP, 14-19 April 2024 | Seoul, Korea ๐Ÿ‡ฐ๐Ÿ‡ท ๐Ÿ”— Paper: https://huggingface.co/papers/2401.00711 ๐ŸŒ Github Page: https://iecqgong.github.io/text2avatar/ ๐Ÿ“š More Papers: more cutting-edge research presented at other conferences in the https://huggingface.co/spaces/DmitryRyumin/NewEraAI-Papers curated by @DmitryRyumin ๐Ÿš€ Added to the Avatars Collection: https://huggingface.co/collections/DmitryRyumin/avatars-65df37cdf81fec13d4dbac36 ๐Ÿ“ Added to the ICASSP-2023-24-Papers: https://github.com/DmitryRyumin/ICASSP-2023-24-Papers ๐Ÿ” Keywords: #AvatarGeneration #Text2Avatar #ICASSP2024 #DeepLearning #Innovation
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/nRCxbVng_PPBqKd-Z3KVc.jpeg", "fullname": "Dmitry Ryumin", "name": "DmitryRyumin", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 377, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/6493306970d925ae80523a53/Xfa-kUaLVpLVIjN7MOrx6.png" }, { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/6493306970d925ae80523a53/CyHv9W4SAUMsgpAhTYj6o.png" }, { "type": "image...
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/nRCxbVng_PPBqKd-Z3KVc.jpeg", "fullname": "Dmitry Ryumin", "name": "DmitryRyumin", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 377 } ]
[ { "reaction": "๐Ÿ‘", "users": [ "DmitryRyumin", "samusenps", "xenagarage", "Dlbk", "palloo", "victor", "Tom-Neverwinter" ], "count": 7 }, { "reaction": "๐Ÿ”ฅ", "users": [ "palloo", "GaneshMystic", "clem", "Tom-Neverwinter" ...
2024-03-23T21:10:54.000Z
2024-03-27T23:58:47.831Z
[ { "avatarUrl": "/avatars/afbc48df2e8c47c35be48168113d83c0.svg", "fullname": "s", "name": "Tom-Neverwinter", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 2, "isFollowing": false } ]
/posts/DmitryRyumin/595207139085023
2,675
1
901395976811277
[ { "type": "text", "value": "Diaries of Open Source. Part 8!", "raw": "Diaries of Open Source. Part 8!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", ...
Diaries of Open Source. Part 8! ๐ŸคฏCRM: Image-to-3D Textured Mesh Demo: https://hf.co/spaces/Zhengyi/CRM Model: https://hf.co/Zhengyi/CRM Project page: https://ml.cs.tsinghua.edu.cn/~zhengyi/CRM/ Paper: https://huggingface.co/papers/2403.05034 ๐ŸคHalf Quadratic Quantization: super-fast quantization of very large models Blog post: https://mobiusml.github.io/hqq_blog/ Colab: https://colab.research.google.com/drive/1cG_5R_u9q53Uond7F0JEdliwvoeeaXVN?usp=sharing Repo: https://github.com/mobiusml/hqq ๐Ÿค—GemMoE -Gemma + MoE Model: https://hf.co/Crystalcareai/GemMoE-Base-Random Collection: https://huggingface.co/collections/Crystalcareai/gemmoe-65f11f4922af97ebe9943591 ๐Ÿ‘€VeCLIP and MOFI, new 0-shot and image retrieval models by Apple, are now open-source! GitHub: https://github.com/apple/ml-veclip/ and https://github.com/apple/ml-mofi VeCLIP paper: https://hf.co/papers/2310.07699 MOFI paper: https://hf.co/papers/2306.07952 โšกSPIN: Recipe for alignment with very little data Collection: https://hf.co/collections/argilla/dibt-prompt-collective-spin-65ef59062518776024395fc3 Tweetutorial: https://twitter.com/argilla_io/status/1767608154697699455 ๐Ÿ‘€ViT Prisma - an interoperability library for vision models GitHub: https://github.com/soniajoseph/ViT-Prisma โ˜•OpenLRM: full model and training code are open-sourced Codebase: https://github.com/3DTopia/OpenLRM Demo: https://hf.co/spaces/zxhezexin/OpenLRM Models: https://huggingface.co/zxhezexin โš—๏ธOxford releases an extensive PEFT evaluation for bio models Model: https://hf.co/NTaylor/bio-mobilebert-mimic-mp-lora GitHub: https://github.com/nlpie-research/efficient-ml Paper: https://hf.co/papers/2402.10597 ๐ŸŒData and models around the world Hermes 2 Pro 7B: an upgraded Nous Hermes 2 model with strong function calling and JSON capabilities https://hf.co/NousResearch/Hermes-2-Pro-Mistral-7B Navarasa-2.0โ€Š: Gemma fine-tuned in 15 indian language https://hf.co/collections/Telugu-LLM-Labs/navarasa-65f5e6ffdf29f02c6d7767ce
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6032802e1f993496bc14d9e3/w6hr-DEQot4VVkoyRIBiy.png", "fullname": "Omar Sanseviero", "name": "osanseviero", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 2868, "isFollowing": false }
[]
[]
[ { "reaction": "โค๏ธ", "users": [ "dvilasuero", "DmitryRyumin", "AugustYuYang", "samusenps", "ajibawa-2023", "DonAmit197", "Zhengyi", "victor", "clem", "radames" ], "count": 10 }, { "reaction": "๐Ÿš€", "users": [ "dvilasuero", ...
2024-03-23T11:28:50.000Z
2024-03-24T07:19:59.923Z
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6032802e1f993496bc14d9e3/w6hr-DEQot4VVkoyRIBiy.png", "fullname": "Omar Sanseviero", "name": "osanseviero", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 2868, "isFollowing":...
/posts/osanseviero/901395976811277
2,527
4
869933859098017
[ { "type": "text", "value": "Does anyone have experience with finetuning Gemma? Even the 2B variant feels more memory heavy than mistral 7B. I know that its vocabulary is much larger (250k) but I'm a bit surprised that the max batch size that I can get in an A100 80GB is only 2 whereas I could fit 4 with mis...
Does anyone have experience with finetuning Gemma? Even the 2B variant feels more memory heavy than mistral 7B. I know that its vocabulary is much larger (250k) but I'm a bit surprised that the max batch size that I can get in an A100 80GB is only 2 whereas I could fit 4 with mistral 7B - even though Gemma is much smaller except for the embedding layer. Both runs were using FA, same sequence length, same deepspeed zero 3 settings. Oh and yes I'm using the most recent hot fix of transformers that solves a memory issue with Gemma and others. Any prior experience that you can share or suggestions to improve throughout?
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1594192845975-5e1e17b6fcf41d740b6996a8.jpeg", "fullname": "Bram Vanroy", "name": "BramVanroy", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 173, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ‘", "users": [ "KnutJaegersberg", "victor", "DonAmit197", "svernek", "FlipTip", "soates", "clem", "Andcircle" ], "count": 8 }, { "reaction": "๐Ÿ‘€", "users": [ "osanseviero", "samusenps", "DonAmit197", ...
2024-03-23T08:39:50.000Z
2024-03-23T12:31:08.593Z
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1669551186189-63732ebbbd81fae2b3aaf3fb.jpeg", "fullname": "Knut Jรคgersberg", "name": "KnutJaegersberg", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 238, "isFollowing": fal...
/posts/BramVanroy/869933859098017
2,391
4
626553999371491
[ { "type": "text", "value": "Today we launch our space in colab with ", "raw": "Today we launch our space in colab with ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, ...
Today we launch our space in colab with @dvilasuero & @davanstrien so you can help us translate/correct our curated prompt dataset, that will be used to evaluate the performance of Arabic LLMs laterย and help our community to identify how open models perform on Arabic. How to Get Involved? 1. Visit our Argilla Space and start reviewing prompts. https://2a2i-prompt-translation-for-arabic.hf.space/ 2. Join our Discord channel in the HuggingFace's discord server to connect with the community and share your insights. https://discord.com/channels/879548962464493619/1217179730869096448
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/626237d9bbcbd1c34f1bb231/EJrOjvAL-68qMCYdnvOrq.png", "fullname": "Ali El Filali", "name": "alielfilali01", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 186, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/626237d9bbcbd1c34f1bb231/m8yh5bf9KztFIsON6irr_.jpeg" } ]
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1627505688463-60107b385ac3e86b3ea4fc34.jpeg", "fullname": "Daniel van Strien", "name": "davanstrien", "type": "user", "isPro": true, "isHf": true, "isMod": false, "followerCount": 410 }, { "avatarUrl": "...
[ { "reaction": "โค๏ธ", "users": [ "medmac01", "dvilasuero", "samusenps", "ZennyKenny", "SudhaSG", "osanseviero", "monsoon-nlp", "davanstrien", "clefourrier", "not-lain", "KBayoud", "clem" ], "count": 12 }, { "reaction": "๐Ÿ”ฅ...
2024-03-23T03:08:34.000Z
2024-03-23T15:33:24.438Z
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/60420dccc15e823a685f2b03/Dn7QTyy9SZ7jKN6xpufVD.png", "fullname": "Daniel Vila", "name": "dvilasuero", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 231, "isFollowing": false ...
/posts/alielfilali01/626553999371491
2,209
3
333899475285572
[ { "type": "text", "value": "MathVerse", "raw": "MathVerse", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "u...
MathVerse Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems? https://huggingface.co/papers/2403.14624 The remarkable progress of Multi-modal Large Language Models (MLLMs) has garnered unparalleled attention, due to their superior performance in visual contexts. However, their capabilities in visual math problem-solving remain insufficiently evaluated and understood. We investigate current benchmarks to incorporate excessive visual content within textual questions, which potentially assist MLLMs in deducing answers without truly interpreting the input diagrams. To this end, we introduce MathVerse, an all-around visual math benchmark designed for an equitable and in-depth evaluation of MLLMs. We meticulously collect 2,612 high-quality, multi-subject math problems with diagrams from publicly available sources. Each problem is then transformed by human annotators into six distinct versions, each offering varying degrees of information content in multi-modality, contributing to 15K test samples in total. This approach allows MathVerse to comprehensively assess whether and how much MLLMs can truly understand the visual diagrams for mathematical reasoning. In addition, we propose a Chain-of-Thought (CoT) evaluation strategy for a fine-grained assessment of the output answers. Rather than naively judging True or False, we employ GPT-4(V) to adaptively extract crucial reasoning steps, and then score each step with detailed error analysis, which can reveal the intermediate CoT reasoning quality by MLLMs. We hope the MathVerse benchmark may provide unique insights to guide the future development of MLLMs.
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "fullname": "AK", "name": "akhaliq", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 5205, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/60f1abe7544c2adfd699860c/kBHW9hfwHmPsS0ieEQ3Q4.png" } ]
[]
[ { "reaction": "โค๏ธ", "users": [ "samusenps", "AtAndDev", "shvhio", "jncarlo", "thomwolf" ], "count": 5 } ]
2024-03-22T20:21:53.000Z
2024-03-22T20:21:53.738Z
[]
/posts/akhaliq/333899475285572
2,898
0
619817116350533
[ { "type": "text", "value": "๐Ÿš€๐ŸŽญ๐ŸŒŸ New Research Alert - CVPR 2024! ๐ŸŒŸ ๐ŸŽญ๐Ÿš€", "raw": "๐Ÿš€๐ŸŽญ๐ŸŒŸ New Research Alert - CVPR 2024! ๐ŸŒŸ ๐ŸŽญ๐Ÿš€", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "val...
๐Ÿš€๐ŸŽญ๐ŸŒŸ New Research Alert - CVPR 2024! ๐ŸŒŸ ๐ŸŽญ๐Ÿš€ ๐Ÿ“„ Title: GaussianAvatars: Photorealistic Head Avatars with Rigged 3D Gaussians ๐Ÿ” ๐Ÿ“ Description: GaussianAvatars proposes a novel method for creating photorealistic and fully controllable head avatars by combining a parametric morphable face model with a dynamic 3D representation based on rigged 3D Gaussian splats, enabling high-quality rendering and precise animation control. ๐Ÿ‘ฅ Authors: Shenhan Qian, Tobias Kirschstein, Liam Schoneveld, Davide Davoli, Simon Giebenhain, Matthias NieรŸner ๐Ÿ“… Conference: CVPR, Jun 17-21, 2024 | Seattle WA, USA ๐Ÿ‡บ๐Ÿ‡ธ ๐Ÿ”— Paper: https://huggingface.co/papers/2312.02069 ๐ŸŒ Github Page: https://shenhanqian.github.io/gaussian-avatars ๐Ÿ“ Repository: https://github.com/ShenhanQian/GaussianAvatars ๐Ÿ“บ Video: https://www.youtube.com/watch?v=lVEY78RwU_I ๐Ÿ“š More Papers: more cutting-edge research presented at other conferences in the https://huggingface.co/spaces/DmitryRyumin/NewEraAI-Papers curated by @DmitryRyumin ๐Ÿš€ Added to the Avatars Collection: https://huggingface.co/collections/DmitryRyumin/avatars-65df37cdf81fec13d4dbac36 ๐Ÿ” Keywords: #HeadAvatar #GaussianAvatars #DynamicGaussians #3DModeling #AvatarGeneration #CVPR2024 #DeepLearning #Innovation
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/nRCxbVng_PPBqKd-Z3KVc.jpeg", "fullname": "Dmitry Ryumin", "name": "DmitryRyumin", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 377, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/6493306970d925ae80523a53/J1e1HBKKiRJ1xGHaq3cFH.png" }, { "type": "video", "url": "https://cdn-uploads.huggingface.co/production/uploads/6493306970d925ae80523a53/Ifoi55KR5Er2vG4IZy2aV.mp4" }, { "type": "video...
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/nRCxbVng_PPBqKd-Z3KVc.jpeg", "fullname": "Dmitry Ryumin", "name": "DmitryRyumin", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 377 } ]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "DmitryRyumin", "samusenps", "xenagarage", "wickedPixel", "clem", "victor", "pratikjadhav", "jonasmaltebecker" ], "count": 8 }, { "reaction": "๐Ÿคฏ", "users": [ "DmitryRyumin", "AngryBear9019", ...
2024-03-22T19:35:16.000Z
2024-03-22T19:35:16.001Z
[]
/posts/DmitryRyumin/619817116350533
1,861
0
376887867127881
[ { "type": "text", "value": "๐Ÿ… Quantized Embeddings are here! Unlike model quantization, embedding quantization is a post-processing step for embeddings that converts e.g. ", "raw": "๐Ÿ… Quantized Embeddings are here! Unlike model quantization, embedding quantization is a post-processing step for embeddi...
๐Ÿ… Quantized Embeddings are here! Unlike model quantization, embedding quantization is a post-processing step for embeddings that converts e.g. `float32` embeddings to binary or `int8` embeddings. This saves 32x or 4x memory & disk space, and these embeddings are much easier to compare! Our results show 25-45x speedups in retrieval compared to full-size embeddings, while keeping 96% of the performance! Learn more about it in our blogpost in collaboration with mixedbread.ai: https://huggingface.co/blog/embedding-quantization Or try out our demo where we use quantized embeddings to let you search all of Wikipedia (yes, 41,000,000 texts) in 1 second on a CPU Space: https://huggingface.co/spaces/sentence-transformers/quantized-retrieval
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6317233cc92fd6fee317e030/cJHSvvimr1kqgQfHOjO5n.png", "fullname": "Tom Aarsen", "name": "tomaarsen", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 1060, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "jaggernaut007", "snowdere", "osanseviero", "AtAndDev", "cnmoro", "mohammedbriman", "clem", "pascalnjue", "louisbrulenaudet", "iamrobotbear", "mrdbourke", "danielpark", "MichelleRuwen" ], "co...
2024-03-22T16:58:56.000Z
2024-03-22T23:41:30.051Z
[ { "avatarUrl": "/avatars/b2b429dd81d366b42a74a36f8e237286.svg", "fullname": "D", "name": "snowdere", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 1, "isFollowing": false } ]
/posts/tomaarsen/376887867127881
3,008
1
283998571804778
[ { "type": "text", "value": "๐Ÿ”ฅ๐Ÿ†•๐Ÿ†•๐Ÿ”ฅ Dataset Drop: 4 KTO signal transformed versions of the highly loved Argilla DPO datasets.", "raw": "๐Ÿ”ฅ๐Ÿ†•๐Ÿ†•๐Ÿ”ฅ Dataset Drop: 4 KTO signal transformed versions of the highly loved Argilla DPO datasets.", "href": null, "resource": null, "url": null, "code...
๐Ÿ”ฅ๐Ÿ†•๐Ÿ†•๐Ÿ”ฅ Dataset Drop: 4 KTO signal transformed versions of the highly loved Argilla DPO datasets. KTO formats for: - UltraFeedback Cleaned Binarized - Distilabel Intel Orca - Distilabel Capybara - DPO mix https://huggingface.co/collections/argilla/preference-datasets-for-kto-65f98314d7c1b04ab54d41a7 Paper claims :) https://arxiv.org/abs/2402.01306 KTO matches or exceeds DPO performance at scales from 1B to 30B parameters.1 That is, taking a preference dataset of n DPO pairs and breaking it up into 2n examples for KTO can yield better generations, despite the model ostensibly learning from a weaker signal. KTO can handle extreme data imbalances, matching DPO performance while using up to 90% fewer desirable examples (i.e., examples of good generations). Its success thus cannot be ascribed to the alignment data being sourced from a preference dataset. When the pretrained model is sufficiently good, one can skip supervised finetuning and go straight to KTO without a loss in generation quality. In contrast, we find that without doing SFT first, DPO-aligned models are significantly worse at all scales. Do you need something custom? Take a look at @davanstrien his guide on creating your own KTO dataset with Argilla and our community. https://github.com/huggingface/data-is-better-together/tree/main/kto-preference
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1677141720071-634ff41ff32062e9eb7b06a3.jpeg", "fullname": "David Berenstein", "name": "davidberenstein1957", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 167, "isFollowing": false }
[]
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1627505688463-60107b385ac3e86b3ea4fc34.jpeg", "fullname": "Daniel van Strien", "name": "davanstrien", "type": "user", "isPro": true, "isHf": true, "isMod": false, "followerCount": 410 } ]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "davanstrien", "ZennyKenny", "osanseviero", "giux78", "clem" ], "count": 5 }, { "reaction": "โค๏ธ", "users": [ "davanstrien", "samusenps", "osanseviero", "clem" ], "count": 4 } ]
2024-03-22T16:57:28.000Z
2024-03-22T16:57:28.693Z
[]
/posts/davidberenstein1957/283998571804778
1,625
0
896443101397392
[ { "type": "text", "value": "๐Ÿค— PEFT v0.10.0 release! ๐Ÿ”ฅ๐Ÿš€โœจ", "raw": "๐Ÿค— PEFT v0.10.0 release! ๐Ÿ”ฅ๐Ÿš€โœจ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", ...
๐Ÿค— PEFT v0.10.0 release! ๐Ÿ”ฅ๐Ÿš€โœจ Some highli๐Ÿ“ghts: 1. FSDP+QLoRA and DeepSpeed Stage-3+QLoRA 2. Layer expansion + LoRA 3. DoRA support for Conv2D layers and quantized bitsandbytes layers 4. New LoftQ utility 5. Batched inference for mixed LoRA adapters. http://Answer.AI team in collaboration with bitsandbytes and Hugging Face ๐Ÿค— open sourced code enabling the usage of FSDP+QLoRA and explained the whole process in their insightful blogpost https://lnkd.in/g6jgfXyv. This is now integrated into Hugging Face ecosystem. For an end-to-end example on FSDP+QLoRA, please refer https://lnkd.in/gT3yY-Rx. For an end-to-end example on DeepSpeed Stage-3+QLoRA, please refer https://lnkd.in/gkt-xZRE. With the PR https://lnkd.in/g5F348MN these changes are now upstreamed in https://lnkd.in/g5_MxYtY thanks to Wing Lian ! ๐Ÿš€ Kudos to http://Answer.AI team, Titus von Kรถller , Younes Belkada, Benjamin Bossan and Zachary Mueller for all the help without which this couldn't have been possible. ๐Ÿค— For efficient depthwise layer expansion akin to `passthrough` method of `mergekit` but without using additional memory and attaching LoRAs to it, refer to the details below! ๐Ÿ”ฅhttps://lnkd.in/ge95ztjA Now DoRA is supported for Conv2D layers as well as bitsandbytes quantized layers โœจ. For more details, please refer the below thread. https://lnkd.in/gsJbuWPD Now you can mix different LoRA adapters in a batch during inference which speeds-up the inference by avoiding computation of base model multiple times which would be the case for adaptive inference with batch_size=1! โšก๏ธ. Details below. https://lnkd.in/gD-pcX_B LoftQ reduces quantization error by appropriately initializing the LoRA adapter weights. Normally, this is a two-step process. Benjamin Bossan added new util `replace_lora_weights_loftq` for LoftQ to use it on the fly with bnb. For more details, refer to the release notes. ๐Ÿ“ https://lnkd.in/gg7-AmHA. As always, make sure losses go down and be happy to watch your model train!
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1638132956881-5fca176d1d7a08cb34d79d5d.jpeg", "fullname": "Sourab Mangrulkar", "name": "smangrul", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 224, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/5fca176d1d7a08cb34d79d5d/w8xSyBu3ISRKkVVu_EFJj.png" }, { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/5fca176d1d7a08cb34d79d5d/WHUCSrN2yNNiMzv4WEA3I.jpeg" }, { "type": "vide...
[]
[ { "reaction": "โค๏ธ", "users": [ "osanseviero", "clefourrier", "DmitryRyumin", "nbroad", "samusenps", "adamo1139", "giux78", "louisbrulenaudet", "sergey21000" ], "count": 9 }, { "reaction": "๐Ÿ”ฅ", "users": [ "osanseviero", "d...
2024-03-22T16:13:00.000Z
2024-03-23T21:22:55.452Z
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6569216f9c96f1a47bf45788/mCLqmAs4dOjKdxNQVAp1w.png", "fullname": "Sica Rius", "name": "SicariusSicariiStuff", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 135, "isFollowing...
/posts/smangrul/896443101397392
2,813
1
256979999653089
[ { "type": "text", "value": "Look at the beauty in the video โ€” four different embeddings on the same map! In another community blog post, I explore how you can use Nomic Atlas to view and clean your dataset. You can check it out here - ", "raw": "Look at the beauty in the video โ€” four different embedding...
Look at the beauty in the video โ€” four different embeddings on the same map! In another community blog post, I explore how you can use Nomic Atlas to view and clean your dataset. You can check it out here - https://huggingface.co/blog/visheratin/nomic-data-cleaning
{ "avatarUrl": "/avatars/b892a3d50b2f8ead0a5f7108564e45d0.svg", "fullname": "Alexander Visheratin", "name": "visheratin", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 55, "isFollowing": false }
[ { "type": "video", "url": "https://cdn-uploads.huggingface.co/production/uploads/609ede05121df5de54007033/xge8bMEz0hbEia8kWtI_f.mp4" } ]
[]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "osanseviero", "radames", "samusenps", "clem" ], "count": 4 }, { "reaction": "๐Ÿ˜Ž", "users": [ "osanseviero", "samusenps", "clem" ], "count": 3 }, { "reaction": "๐Ÿคฏ", "users": [ "osanseviero...
2024-03-22T14:30:25.000Z
2024-03-23T17:50:41.084Z
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64b1619f4c3cc95a751e6c41/oSUtmp1Gw0I-ve_1wFlNW.jpeg", "fullname": "Michal Zebrowski", "name": "M1cler", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 1, "isFollowing": false...
/posts/visheratin/256979999653089
1,939
1
843830562646766
[ { "type": "text", "value": "Worked on a short blog post discussing how we semi-automated the release process of the ", "raw": "Worked on a short blog post discussing how we semi-automated the release process of the ", "href": null, "resource": null, "url": null, "code": null, "user":...
Worked on a short blog post discussing how we semi-automated the release process of the `diffusers` library. The post delves deeper into the workflows responsible for: * Publishing the package on Test PyPI and main PyPI servers. * Notifying an internal Slack channel after a release is published on the repository. Check it out here ๐Ÿ‘‰ https://sayak.dev/posts/streamlined-releases.html
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1649681653581-5f7fbd813e94f16a85448745.jpeg", "fullname": "Sayak Paul", "name": "sayakpaul", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 459, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿค", "users": [ "awacke1", "osanseviero" ], "count": 2 } ]
2024-03-22T08:58:10.000Z
2024-03-22T08:58:10.867Z
[]
/posts/sayakpaul/843830562646766
2,381
0
781413226295833
[ { "type": "text", "value": "Diaries of Open Source. Part 7!", "raw": "Diaries of Open Source. Part 7!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", ...
Diaries of Open Source. Part 7! ๐Ÿ”ฅSakana releases Evolutionary Model Merge Blog post: https://sakana.ai/evolutionary-model-merge/ Paper: https://huggingface.co/papers/2403.13187 Models and demo: https://hf.co/SakanaAI ๐ŸžMixedBread releases new SoTA sentence embedding model Announcement: https://www.mixedbread.ai/blog/mxbai-embed-large-v1 Model: https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1 ๐ŸŽฅVideoMamba, a Mamba-based model for video understanding Blog: https://hf.co/blog/vladbogo/video-mamba Demo: https://huggingface.co/spaces/OpenGVLab/VideoMamba Model: https://huggingface.co/OpenGVLab/VideoMamba ๐Ÿ” MathVerse, a visual math benchmark for multimodal LLMs Paper page: https://mathverse-cuhk.github.io/ Dataset: https://huggingface.co/datasets/AI4Math/MathVerse Paper: https://huggingface.co/papers/2403.14624 ๐Ÿง GraphWiz, a family of instruct-tuned LLMs to solve graph problems Repos: https://hf.co/GraphWiz Paper: https://hf.co/papers/2402.16029 ๐Ÿช†NLLB-SigLIP-MRL: a combination of NLLB and SigLIP trained with Matryoshka representation learning Model: https://hf.co/visheratin/nllb-siglip-mrl-large Tweet: https://twitter.com/visheratin/status/1766643219909984734?s=46 ๐ŸงHDM and ProciGen: Template-free reconstruction of human-object interactions Paper page: https://virtualhumans.mpi-inf.mpg.de/procigen-hdm/ Demo: https://hf.co/spaces/xiexh20/HDM-interaction-recon?logs=build Models: https://hf.co/xiexh20/HDM-models ๐ŸŒŽModels and data around the world EagleX 7B, multi-lingual RNN-based model https://hf.co/spaces/recursal/EagleX-7B-1.7T-Gradio-Demo Tamil LLM https://hf.co/mervinpraison/tamil-large-language-model-7b-v1.0
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6032802e1f993496bc14d9e3/w6hr-DEQot4VVkoyRIBiy.png", "fullname": "Omar Sanseviero", "name": "osanseviero", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 2868, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿš€", "users": [ "osanseviero", "victor", "not-lain", "xiexh20", "visheratin", "giux78", "DmitryRyumin", "taufiqdp", "samusenps", "AtAndDev", "mmhamdy", "InferenceIllusionist", "clem" ], "count": 13 }, { ...
2024-03-22T08:44:11.000Z
2024-03-22T08:48:35.930Z
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6032802e1f993496bc14d9e3/w6hr-DEQot4VVkoyRIBiy.png", "fullname": "Omar Sanseviero", "name": "osanseviero", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 2868, "isFollowing":...
/posts/osanseviero/781413226295833
1,878
2
561697387289650
[ { "type": "text", "value": "How to let LLM acquire the Agentic Capability?", "raw": "How to let LLM acquire the Agentic Capability?", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "val...
How to let LLM acquire the Agentic Capability? Previous answers are direct imitation learning by collecting agentic data such as tool calling history (inefficient and introduces format hallucination). Agent-FLAN tells a different view: - Eliciting the foundational capability (e.g., reasoning, retrieval, and instruction following) is more important - Using chat data is more effective with less side effects than tool calling history Dataset: https://huggingface.co/datasets/internlm/Agent-FLAN HF Model: https://huggingface.co/internlm/Agent-FLAN-7b Paper: https://huggingface.co/papers/2403.12881 Project page๏ผšhttps://internlm.github.io/Agent-FLAN/
{ "avatarUrl": "/avatars/18958b8406d1ce492b54c1c839f18c54.svg", "fullname": "Wenwei Zhang", "name": "ZwwWayne", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 10, "isFollowing": false }
[]
[]
[ { "reaction": "โค๏ธ", "users": [ "Csplk", "samusenps", "osanseviero", "diwank", "clem" ], "count": 5 }, { "reaction": "โž•", "users": [ "samusenps", "osanseviero" ], "count": 2 }, { "reaction": "๐Ÿค", "users": [ "awacke1" ...
2024-03-22T04:07:29.000Z
2024-03-22T04:07:29.212Z
[]
/posts/ZwwWayne/561697387289650
1,710
0
476094875024894
[ { "type": "text", "value": "Here it is, an Adaptive AI Assistant (AAA) only made possible through open-source, incredibly inspiring, well done to the Open Interpreter team๐Ÿ‘๐Ÿผ๐Ÿ‘๐Ÿผ", "raw": "Here it is, an Adaptive AI Assistant (AAA) only made possible through open-source, incredibly inspiring, well done...
Here it is, an Adaptive AI Assistant (AAA) only made possible through open-source, incredibly inspiring, well done to the Open Interpreter team๐Ÿ‘๐Ÿผ๐Ÿ‘๐Ÿผ Glad this one did not start with big tech :) Code: https://github.com/OpenInterpreter/01
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6438a9027de34e8ea7e4b257/vib8QSd1AWMr_bR9ig_xJ.jpeg", "fullname": "Jaward Sesay", "name": "Jaward", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 191, "isFollowing": false }
[ { "type": "video", "url": "https://cdn-uploads.huggingface.co/production/uploads/6438a9027de34e8ea7e4b257/hpvSbrAs5WYUxiLuwJGdn.qt" } ]
[]
[ { "reaction": "๐Ÿ‘", "users": [ "EidosL", "awacke1", "alielfilali01", "Seuriin", "clem", "elsiddik" ], "count": 6 }, { "reaction": "๐Ÿ”ฅ", "users": [ "Kukedlc", "Seuriin", "clem", "bunnycore" ], "count": 4 } ]
2024-03-22T00:11:32.000Z
2024-03-22T00:11:32.964Z
[]
/posts/Jaward/476094875024894
1,742
0
688384427302302
[ { "type": "text", "value": "LLaVA-NeXT is recently merged to Hugging Face transformers and it outperforms many of the closed source models like Gemini on various benchmarks ๐Ÿคฉ Let's take a look!", "raw": "LLaVA-NeXT is recently merged to Hugging Face transformers and it outperforms many of the closed s...
LLaVA-NeXT is recently merged to Hugging Face transformers and it outperforms many of the closed source models like Gemini on various benchmarks ๐Ÿคฉ Let's take a look! Demo: https://huggingface.co/spaces/merve/llava-next Notebook: https://colab.research.google.com/drive/1afNudu72SNWZCYtCVrRlb9T9Vj9CFJEK?usp=sharing LLaVA is essentially a vision-language model that consists of ViT-based CLIP encoder, a MLP projection and Vicuna as decoder โœจ LLaVA 1.5 was released with Vicuna, but LLaVA NeXT (1.6) is released with four different LLMs: - Nous-Hermes-Yi-34B - Mistral-7B - Vicuna 7B & 13B Mistral and Nous-Hermes-Yi-34B are performing better and have better commercial use. Moreover, according to authors' findings, the improvements comes from more diverse and high quality data mixture and dynamic high resolution. LLaVA based on Nous-Hermes-Yi-34B outperforms many other models, including Gemini in various multimodal understanding and generation benchmarks ๐Ÿ˜Š
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1648113222875-6141a88b3a0ec78603c9e784.png", "fullname": "Merve Noyan", "name": "merve", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 5589, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/6141a88b3a0ec78603c9e784/gMtVBUhdBDKyKlHRLzyKg.png" } ]
[]
[ { "reaction": "๐Ÿš€", "users": [ "osanseviero", "samusenps", "yesdeepakmittalTA", "Dlbk", "InferenceIllusionist", "nqtruong", "mirellyssl", "acamilogg88", "clem", "LucienL", "shing3232", "jarvisx17" ], "count": 12 } ]
2024-03-21T21:20:54.000Z
2024-03-21T21:20:54.634Z
[]
/posts/merve/688384427302302
3,324
0
358885674905546
[ { "type": "text", "value": "New NVIDIA GTC24 paper ๐ŸŽŠ", "raw": "New NVIDIA GTC24 paper ๐ŸŽŠ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": nu...
New NVIDIA GTC24 paper ๐ŸŽŠ We generate high-quality 3D assets in only 400ms from text by combining (a) amortized optimization for speed, (b) surface rendering for quality, and (c) 3D data for robustness. โ˜• LATTE3D project details: https://research.nvidia.com/labs/toronto-ai/LATTE3D/
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/631b7370bf1351ed2bd0abdc/p0ZRMjgp5mt3sT5OhdHsp.png", "fullname": "Jonathan Lorraine", "name": "lorraine2", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 13, "isFollowing": false }
[ { "type": "video", "url": "https://cdn-uploads.huggingface.co/production/uploads/631b7370bf1351ed2bd0abdc/KLXFZVbUEPG6oQLrCmFXf.mp4" } ]
[]
[ { "reaction": "๐Ÿš€", "users": [ "ZennyKenny", "osanseviero", "merve", "NickyNicky", "Dlbk", "bvk1ng", "lorraine2" ], "count": 7 }, { "reaction": "โค๏ธ", "users": [ "merve", "samusenps", "Krass1", "bvk1ng", "clem", ...
2024-03-21T18:45:30.000Z
2024-03-21T22:19:54.919Z
[ { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/631b7370bf1351ed2bd0abdc/p0ZRMjgp5mt3sT5OhdHsp.png", "fullname": "Jonathan Lorraine", "name": "lorraine2", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 13, "isFollowing": f...
/posts/lorraine2/358885674905546
1,681
2
592846757318176
[ { "type": "text", "value": "๐ŸŒŠLaVague can compile Action Plans into actionable code to browse the internet!", "raw": "๐ŸŒŠLaVague can compile Action Plans into actionable code to browse the internet!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label"...
๐ŸŒŠLaVague can compile Action Plans into actionable code to browse the internet! In this example, you can see how an action plan with natural language instructions can be โ€œcompiledโ€ into executable Selenium code! ๐Ÿค–This shows the potential of #LAM (Large Action Models) to perform actions for us and automate mechanical tasks. This example leverages a local embedding model and OpenAI GPT-3.5, but we support many options, including local ones with Gemma! You can try this in our docs: https://docs.lavague.ai/en/latest/ LaVague is an open-source Large Action Model framework to automate automation. If you are interested in helping us on our mission to democratize automation tooling for devs, donโ€™t hesitate to visit our GitHub (https://github.com/lavague-ai/LaVague) or Discord (https://discord.gg/SDxn9KpqX9)!
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1661497922734-62f4ac43567dbf9a39f75474.jpeg", "fullname": "Daniel Huynh", "name": "dhuynh95", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 75, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/62f4ac43567dbf9a39f75474/YRzg_kR6ExOKpyTrI3aJQ.gif" } ]
[]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "AjasLearner", "radames", "osanseviero", "merve", "samusenps", "kramp", "clefourrier", "louisbrulenaudet", "TechxGenus", "mathiasn1", "clem" ], "count": 11 }, { "reaction": "โค๏ธ", "users": [ ...
2024-03-21T17:14:55.000Z
2024-03-21T17:14:55.338Z
[]
/posts/dhuynh95/592846757318176
1,759
0
887706953621123
[ { "type": "text", "value": "Mora", "raw": "Mora", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null,...
Mora Enabling Generalist Video Generation via A Multi-Agent Framework https://huggingface.co/papers/2403.13248 Sora is the first large-scale generalist video generation model that garnered significant attention across society. Since its launch by OpenAI in February 2024, no other video generation models have paralleled {Sora}'s performance or its capacity to support a broad spectrum of video generation tasks. Additionally, there are only a few fully published video generation models, with the majority being closed-source. To address this gap, this paper proposes a new multi-agent framework Mora, which incorporates several advanced visual AI agents to replicate generalist video generation demonstrated by Sora. In particular, Mora can utilize multiple visual agents and successfully mimic Sora's video generation capabilities in various tasks, such as (1) text-to-video generation, (2) text-conditional image-to-video generation, (3) extend generated videos, (4) video-to-video editing, (5) connect videos and (6) simulate digital worlds. Our extensive experimental results show that Mora achieves performance that is proximate to that of Sora in various tasks. However, there exists an obvious performance gap between our work and Sora when assessed holistically. In summary, we hope this project can guide the future trajectory of video generation through collaborative AI agents.
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "fullname": "AK", "name": "akhaliq", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 5205, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/60f1abe7544c2adfd699860c/Glfd4gkgkJKygDBFGPHky.gif" } ]
[]
[ { "reaction": "๐Ÿš€", "users": [ "osanseviero", "notune", "merve", "dn6", "AtAndDev", "ahmed-ai", "Shinku", "Lewdiculous", "uglysonic3121", "clem", "alielfilali01" ], "count": 11 }, { "reaction": "โค๏ธ", "users": [ "merv...
2024-03-21T16:26:47.000Z
2024-03-21T16:26:47.107Z
[]
/posts/akhaliq/887706953621123
2,240
0
745405285556121
[ { "type": "text", "value": "How talkative is your chatbot about your internal data? ๐Ÿ˜ฌ", "raw": "How talkative is your chatbot about your internal data? ๐Ÿ˜ฌ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "typ...
How talkative is your chatbot about your internal data? ๐Ÿ˜ฌ As more chatbots get deployed in production, with access to internal databases, we need to make sure they don't leak private information to anyone interacting with them. The Lighthouz AI team therefore introduced the Chatbot Guardrails Arena to stress test models and see how well guarded your private information is. Anyone can try to make models reveal information they should not share ๐Ÿ˜ˆ (which is quite fun to do for the strongest models)! The votes will then be gathered to create an Elo ranking of the safest models with respect to PII. In the future, with the support of the community, this arena could inform safety choices that company make, when choosing models and guardrails on their resistance to adversarial attacks. It's also a good way to easily demonstrate the limitations of current systems! Check out the arena: https://huggingface.co/spaces/lighthouzai/guardrails-arena Learn more in the blog: https://huggingface.co/blog/arena-lighthouz
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1644340617257-noauth.png", "fullname": "Clรฉmentine Fourrier", "name": "clefourrier", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 459, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿš€", "users": [ "osanseviero", "m-ric", "zelus82", "julien-c" ], "count": 4 } ]
2024-03-21T15:06:20.000Z
2024-04-17T11:13:17.614Z
[]
/posts/clefourrier/745405285556121
1,525
0
685682060907396
[ { "type": "text", "value": "๐Ÿš€๐ŸŽญ๐ŸŒŸ New Research Alert - CVPR 2024! ๐ŸŒŸ ๐ŸŽญ๐Ÿš€", "raw": "๐Ÿš€๐ŸŽญ๐ŸŒŸ New Research Alert - CVPR 2024! ๐ŸŒŸ ๐ŸŽญ๐Ÿš€", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "val...
๐Ÿš€๐ŸŽญ๐ŸŒŸ New Research Alert - CVPR 2024! ๐ŸŒŸ ๐ŸŽญ๐Ÿš€ ๐Ÿ“„ Title: Gaussian Head Avatar: Ultra High-fidelity Head Avatar via Dynamic Gaussians ๐Ÿ” ๐Ÿ“ Description: Gaussian Head Avatar is a method for generating highly detailed 3D head avatars using dynamic Gaussian functions controlled by a neural network, ensuring ultra-high quality visualization even under limited viewpoints. ๐Ÿ‘ฅ Authors: Yuelang Xu, @ben55, Zhe Li, @HongwenZhang, @wanglz14, Zerong Zheng, and @YebinLiu ๐Ÿ“… Conference: CVPR, Jun 17-21, 2024 | Seattle WA, USA ๐Ÿ‡บ๐Ÿ‡ธ ๐Ÿ”— Paper: https://huggingface.co/papers/2312.03029 ๐ŸŒ Github Page: https://yuelangx.github.io/gaussianheadavatar ๐Ÿ“ Repository: https://github.com/YuelangX/Gaussian-Head-Avatar ๐Ÿ“บ Video: https://www.youtube.com/watch?v=kvrrI3EoM5g ๐Ÿ“š More Papers: more cutting-edge research presented at other conferences in the https://huggingface.co/spaces/DmitryRyumin/NewEraAI-Papers curated by @DmitryRyumin ๐Ÿš€ Added to the Avatars Collection: https://huggingface.co/collections/DmitryRyumin/avatars-65df37cdf81fec13d4dbac36 ๐Ÿ” Keywords: #HeadAvatar #DynamicGaussians #3DModeling #AvatarGeneration #CVPR2024 #DeepLearning #Innovation
{ "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/nRCxbVng_PPBqKd-Z3KVc.jpeg", "fullname": "Dmitry Ryumin", "name": "DmitryRyumin", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 377, "isFollowing": false }
[ { "type": "image", "url": "https://cdn-uploads.huggingface.co/production/uploads/6493306970d925ae80523a53/_r24CiVCLjOBNkMI3oxhf.png" }, { "type": "video", "url": "https://cdn-uploads.huggingface.co/production/uploads/6493306970d925ae80523a53/65XbCWdopyV0Bo2AIknXJ.mp4" }, { "type": "video...
[ { "avatarUrl": "/avatars/1a73a63bb3fda6dfe326ced338d747e4.svg", "fullname": "chen ben wang", "name": "ben55", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": null }, { "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noaut...
[ { "reaction": "๐Ÿ”ฅ", "users": [ "DmitryRyumin", "mrfakename", "osanseviero", "samusenps", "akhaliq", "eieihihi", "Katiyar48" ], "count": 7 }, { "reaction": "๐Ÿคฏ", "users": [ "DmitryRyumin", "osanseviero", "akhaliq", "clefour...
2024-03-21T14:53:23.000Z
2024-03-21T14:53:23.754Z
[]
/posts/DmitryRyumin/685682060907396
1,421
0