nltpt-q

community

AI & ML interests

None defined yet.

cfahlgren1
posted an update about 2 months ago
I ran the Anthropic Misalignment Framework for a few top models and added it to a dataset: cfahlgren1/anthropic-agentic-misalignment-results

You can read the reasoning traces of the models trying to blackmail the user and perform other actions. It's very interesting!!

reach-vb
posted an update 2 months ago
Excited to onboard FeatherlessAI on Hugging Face as an Inference Provider - they bring a fleet of 6,700+ LLMs on-demand to the Hugging Face Hub 🤯

Starting today, you'll be able to access all those LLMs (OpenAI compatible) on HF model pages and via OpenAI client libraries too! 💥

Go play with it today: https://huggingface.co/blog/inference-providers-featherless

P.S. They're also bringing on more GPUs to support all your concurrent requests!
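The "OpenAI compatible" part means any OpenAI-style client can talk to these provider-served models. A minimal standard-library sketch of building such a request - the router URL is Hugging Face's OpenAI-compatible endpoint, while the token and model id below are placeholder assumptions:

```python
# Sketch of an OpenAI-compatible chat request to provider-served models on
# the Hub. Token ("hf_xxx") and model id are illustrative placeholders.
import json
import urllib.request

API_URL = "https://router.huggingface.co/v1/chat/completions"

def build_request(token, model, messages):
    """Build an OpenAI-style chat-completions request (not yet sent)."""
    payload = {"model": model, "messages": messages}
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

req = build_request(
    "hf_xxx",                    # your HF token
    "deepseek-ai/DeepSeek-V3",   # any provider-served model id
    [{"role": "user", "content": "Hello!"}],
)
# urllib.request.urlopen(req) would send it; omitted here since it needs a real token.
```

Pointing an official OpenAI client library at the same base URL with an HF token works the same way.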
ariG23498
posted an update 2 months ago
🚨 Implement KV Cache from scratch in pure PyTorch. 🚨

We have documented all of our learnings while adding a KV Cache to nanoVLM. Joint work with @kashif @lusxvr @andito @pcuenq

Blog: hf.co/blog/kv-cache
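The core idea can be sketched in a few lines of PyTorch (an illustrative single-head toy, not the nanoVLM code): cached keys/values from earlier steps are concatenated with the new token's projections, so each decoding step attends over the full prefix without recomputing it.

```python
# Minimal KV-cache sketch in pure PyTorch (illustrative toy, not nanoVLM code).
import torch
import torch.nn.functional as F

def attend_with_cache(q, k_new, v_new, cache=None):
    """q, k_new, v_new: (batch, 1, dim) projections for the current token."""
    if cache is None:
        k, v = k_new, v_new
    else:
        # Append this step's key/value to everything cached so far.
        k = torch.cat([cache[0], k_new], dim=1)  # (batch, seq, dim)
        v = torch.cat([cache[1], v_new], dim=1)
    scores = q @ k.transpose(1, 2) / k.shape[-1] ** 0.5  # (batch, 1, seq)
    out = F.softmax(scores, dim=-1) @ v                  # (batch, 1, dim)
    return out, (k, v)

# Decode 4 tokens; the cache grows by one entry per step.
torch.manual_seed(0)
cache = None
for step in range(4):
    q = k = v = torch.randn(1, 1, 8)
    out, cache = attend_with_cache(q, k, v, cache)
```

After the loop, the cached keys cover all four steps while each forward pass only projected one token.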
cfahlgren1
posted an update 3 months ago
Yesterday, we dropped a new conversational viewer for datasets on the Hub! 💬

Actually being able to view and inspect your data is extremely important. This is a big step in making data more accessible and actionable for everyone.

Here are some datasets you can try it out on:
• mlabonne/FineTome-100k
• Salesforce/APIGen-MT-5k
• open-thoughts/OpenThoughts2-1M
• allenai/tulu-3-sft-mixture

Any other good ones?
reach-vb
posted an update 3 months ago
hey hey @mradermacher - VB from Hugging Face here, we'd love to onboard you over to our optimised xet backend! 💥

as you know, we're in the process of upgrading our storage backend to xet (which helps us scale and offer blazingly fast upload/download speeds too): https://huggingface.co/blog/xet-on-the-hub. now that we are certain the backend can scale with even big models like Llama 4/Qwen 3, we're moving to the next phase of inviting impactful orgs and users on the hub over. as you are a big part of the open source ML community, we would love to onboard you next and create some excitement about it in the community too!

in terms of actual steps - it should be as simple as having one of the org admins join hf.co/join/xet - we'll take care of the rest.

p.s. you'd need the latest hf_xet version of the huggingface_hub lib, but everything else should be the same: https://huggingface.co/docs/hub/storage-backends#using-xet-storage

p.p.s. this is fully backwards compatible so everything will work as it should! 🤗
cfahlgren1
posted an update 7 months ago
If you haven't seen yet, we just released Inference Providers 🔀

> 4 new serverless inference providers on the Hub 🤯
> Use your HF API key or personal key with all providers 🔑
> Chat with Deepseek R1, V3, and more on the HF Hub 🐋
> We support Sambanova, TogetherAI, Replicate, and Fal.ai 💪

Best of all, we don't charge any markup on top of the provider 🫰 Have you tried it out yet? HF Pro accounts get $2 of free usage for provider inference.
ariG23498
posted an update 7 months ago
Tried my hand at simplifying the derivations of Direct Preference Optimization.

I cover how one can reformulate RLHF into DPO. The idea of implicit reward modeling is chef's kiss.

Blog: https://huggingface.co/blog/ariG23498/rlhf-to-dpo
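The "implicit reward" punchline lands in very little code: DPO scores each completion by β·log(π_θ/π_ref) and pushes the chosen completion's score above the rejected one's via a logistic loss. A tiny sketch of the standard DPO loss (not code from the blog; the log-probabilities and β below are made-up numbers):

```python
# Standard DPO loss on one preference pair; all numbers are illustrative.
import math

def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    # Implicit reward for each completion: beta * log(pi_theta / pi_ref),
    # i.e. beta * (logp_theta - logp_ref) in log space.
    margin = beta * ((logp_chosen - ref_chosen) - (logp_rejected - ref_rejected))
    # Loss = -log sigmoid(margin): small when the chosen completion's
    # implicit reward exceeds the rejected one's.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

loss = dpo_loss(-10.0, -14.0, -11.0, -13.0)  # margin = 0.1 * (1 - (-1)) = 0.2
```

No explicit reward model is trained; the policy's own log-ratios against the reference play that role, which is the reformulation the post walks through.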
cfahlgren1
posted an update 7 months ago
Wow, I just added Langfuse tracing to the Deepseek Artifacts app and it's really nice 🔥

It allows me to visualize and track more things along with the cfahlgren1/react-code-instructions dataset.

It was just added as a one-click Docker Space template, so it's super easy to self-host 💪
cfahlgren1
posted an update 7 months ago
You'll notice the AI in the SQL Console is much better at working with chatml conversations.

Here's an example of unnesting the cfahlgren1/react-code-instructions dataset in less than 10 seconds just by asking. Check it out here: cfahlgren1/react-code-instructions

- "show me the average assistant response length"
- "extract user, system, and assistant messages into separate columns"

It's super easy to work with conversational datasets now with natural language 🗣️
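What "unnesting" means for chatml-style rows, as a plain-Python sketch: each row holds a list of `{"role", "content"}` messages, and both queries above reduce to flattening that list into per-role columns. The sample row and field names are assumptions for illustration, not the real dataset contents.

```python
# Unnest chatml-style conversations into per-role columns (illustrative data).
rows = [
    {"messages": [
        {"role": "system", "content": "You are a React expert."},
        {"role": "user", "content": "Build a counter component."},
        {"role": "assistant", "content": "Here is a counter..."},
    ]},
]

def unnest(row):
    """Flatten a messages list into system/user/assistant columns."""
    by_role = {m["role"]: m["content"] for m in row["messages"]}
    return {
        "system": by_role.get("system"),
        "user": by_role.get("user"),
        "assistant": by_role.get("assistant"),
    }

flat = [unnest(r) for r in rows]

# "show me the average assistant response length"
avg_assistant_len = sum(len(r["assistant"]) for r in flat) / len(flat)
```

The SQL Console generates the equivalent SQL for you from the natural-language ask.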
reach-vb
posted an update 8 months ago
VLMs are going through quite an open revolution AND in on-device friendly sizes:

1. Google DeepMind w/ PaliGemma 2 - 3B, 10B & 28B: google/paligemma-2-release-67500e1e1dbfdd4dee27ba48

2. OpenGVLab w/ InternVL 2.5 - 1B, 2B, 4B, 8B, 26B, 38B & 78B: https://huggingface.co/collections/OpenGVLab/internvl-25-673e1019b66e2218f68d7c1c

3. Qwen w/ Qwen 2 VL - 2B, 7B & 72B: Qwen/qwen2-vl-66cee7455501d7126940800d

4. Microsoft w/ FlorenceVL - 3B & 8B: @jiuhai

5. Moondream2 w/ 0.5B: https://huggingface.co/vikhyatk/

What a time to be alive! 🔥
cfahlgren1
posted an update 8 months ago
You can just ask things 🗣️

"show me messages in the coding category that are in the top 10% of reward model scores"

Download really high quality instructions from the Llama 3.1 405B synthetic dataset 🔥

argilla/magpie-ultra-v1.0
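The query above boils down to a category filter plus a 90th-percentile cutoff on the reward scores. A plain-Python sketch of that logic - the rows and the `category`/`score` field names are invented stand-ins, not the actual magpie-ultra schema:

```python
# Filter to one category, then keep rows at or above the 90th-percentile score.
# Rows and field names are illustrative stand-ins for the real dataset.
rows = [{"category": "coding", "score": s / 100} for s in range(100)]
rows += [{"category": "math", "score": 0.99}]  # other categories are ignored

coding = [r for r in rows if r["category"] == "coding"]
scores = sorted(r["score"] for r in coding)
cutoff = scores[int(0.9 * len(scores))]  # 90th-percentile threshold
top = [r for r in coding if r["score"] >= cutoff]
```

A SQL engine or pandas would express the same thing with a `QUANTILE`/`quantile(0.9)` threshold; the natural-language interface just writes that query for you.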