modelId (string, length 5 to 138) | author (string, length 2 to 42) | last_modified (date, 2020-02-15 11:33:14 to 2025-05-13 18:27:33) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 457 classes) | tags (sequence, length 1 to 4.05k) | pipeline_tag (string, 54 classes) | createdAt (date, 2022-03-02 23:29:04 to 2025-05-13 18:26:52) | card (string, length 11 to 1.01M) |
---|---|---|---|---|---|---|---|---|---|
tungdqzenai/df7371d1-e77b-42d0-93fc-a3560c5d3d8d | tungdqzenai | "2025-05-13T06:36:55Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-05-13T06:36:30Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p>
</div>
</main>
</body>
</html> |
yesbreaddog/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-meek_insectivorous_cow | yesbreaddog | "2025-05-13T06:32:12Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am meek insectivorous cow",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-28T18:54:26Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p>
</div>
</main>
</body>
</html> |
ayyanc719/Hnmk | ayyanc719 | "2025-05-13T06:32:07Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-05-13T06:32:07Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p>
</div>
</main>
</body>
</html> |
luningning0324/luningning0324 | luningning0324 | "2025-05-13T06:31:40Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-05-13T06:31:39Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p>
</div>
</main>
</body>
</html> |
ellsant2dy/bnm | ellsant2dy | "2025-05-13T06:30:05Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-05-13T06:30:05Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p>
</div>
</main>
</body>
</html> |
malviwyli/seaef | malviwyli | "2025-05-13T06:30:04Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-05-13T06:30:04Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p>
</div>
</main>
</body>
</html> |
AshikAi96/qwen_lora | AshikAi96 | "2025-05-13T06:29:58Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-1.8B",
"base_model:adapter:Qwen/Qwen1.5-1.8B",
"license:other",
"region:us"
] | null | "2025-05-13T06:29:26Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p>
</div>
</main>
</body>
</html> |
Raexcfdhd/Fields | Raexcfdhd | "2025-05-13T06:29:35Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-05-13T06:29:35Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p>
</div>
</main>
</body>
</html> |
Carltongdfgh/Fields | Carltongdfgh | "2025-05-13T06:29:35Z" | 0 | 0 | null | [
"license:bsl-1.0",
"region:us"
] | null | "2025-05-13T06:29:35Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p>
</div>
</main>
</body>
</html> |
backendboom/c834dd96-a367-48e4-ab62-76cec0606390 | backendboom | "2025-05-13T06:28:58Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-05-13T06:28:36Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p>
</div>
</main>
</body>
</html> |
Holland94/Haley | Holland94 | "2025-05-13T06:28:30Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-05-13T06:28:29Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p>
</div>
</main>
</body>
</html> |
Odom91/Haley | Odom91 | "2025-05-13T06:28:29Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-05-13T06:28:29Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p>
</div>
</main>
</body>
</html> |
linoyts/Hunyuan-LoRA | linoyts | "2025-05-13T06:28:07Z" | 0 | 0 | null | [
"text-to-video",
"base_model:tencent/HunyuanVideo",
"base_model:finetune:tencent/HunyuanVideo",
"region:us"
] | text-to-video | "2025-05-12T10:08:22Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p>
</div>
</main>
</body>
</html> |
anyforge/anyocr | anyforge | "2025-05-13T06:24:32Z" | 0 | 1 | null | [
"onnx",
"ocr",
"image-to-text",
"license:apache-2.0",
"region:us"
] | image-to-text | "2025-04-25T01:13:32Z" | ---
license: apache-2.0
pipeline_tag: image-to-text
tags:
- ocr
---
# AnyOCR
<a href="https://huggingface.co/anyforge/anyocr" target="_blank"><img src="https://img.shields.io/badge/%F0%9F%A4%97-HuggingFace-blue"></a>
<a href="https://www.modelscope.cn/models/anyforge/anyocr" target="_blank"><img alt="Static Badge" src="https://img.shields.io/badge/%E9%AD%94%E6%90%AD-ModelScope-blue"></a>
<a href=""><img src="https://img.shields.io/badge/Python->=3.6-aff.svg"></a>
<a href=""><img src="https://img.shields.io/badge/OS-Linux%2C%20Win%2C%20Mac-pink.svg"></a>
<a href=""><img alt="Static Badge" src="https://img.shields.io/badge/engine-cpu_gpu_onnxruntime-blue"></a>
```
___ ____ __________
/ | ____ __ __/ __ \/ ____/ __ \
/ /| | / __ \/ / / / / / / / / /_/ /
/ ___ |/ / / / /_/ / /_/ / /___/ _, _/
/_/ |_/_/ /_/\__, /\____/\____/_/ |_|
/____/
```
Simplified Chinese | [English](./README_en.md)
## 1. Introduction
We are excited to introduce `AnyOCR`, a cross-platform OCR tool in ONNX format. Its core highlight is using ONNXRuntime as the inference engine, which ensures efficient and stable operation compared with the PaddlePaddle inference engine.
- GitHub: [AnyOCR](https://github.com/anyforge/anyocr)
- Hugging Face: [AnyOCR](https://huggingface.co/anyforge/anyocr)
- ModelScope: [AnyOCR](https://www.modelscope.cn/models/anyforge/anyocr)
## 2. Background
The PaddlePaddle team built PaddleOCR, a powerful and feature-rich OCR tool based on PaddlePaddle. In some scenarios, however, the PaddlePaddle inference engine suffers from speed and stability problems. So we collected a large amount of new OCR data to fine-tune PaddleOCR, exported the models to ONNX format, and run inference directly with ONNXRuntime, avoiding the pitfalls of the PaddlePaddle inference engine while supporting CPU, GPU, and more.
PaddleOCR does not perform very well on some newer or domain-specific data, so we collected a large amount of data for fine-tuning, covering many domains, including:
- cc-ocr
- Industrial
- Medical
- Health checkups
- Chinese
- English
- Academic papers
- Web
- Self-built
- And more
Total dataset size: more than `385K` samples.
### Extended training
- Training set: `385K`
- Test set: `5k`
- Accuracy: `0.952`
### Models
- Detection model: `anyocr_det_ch_v4_lite.onnx`, fine-tuned from `ch_PP-OCRv4_det` on our dataset.
- Recognition model: `anyocr_rec_v4_server.onnx`, fine-tuned from `ch_PP-OCRv4_server_rec` on our dataset.
- Orientation classifier: `anyocr_cls_v4.onnx`, taken from `ch_ppocr_mobile_v2.0_cls` without further training.
- Character dictionary: `anyocr_keys_v4.txt`, taken from `ppocr/utils/ppocr_keys_v1.txt`.
- Bigger and stronger: we also trained a larger, stronger text recognition model supporting Chinese, English, and digits, covering 15,000+ characters including some rare ones; email us to request access.
### Evaluation
Self-built evaluation set: `1.1K`
We sampled 1,150 pairs of unseen (untrained) data for evaluation, covering Chinese, English, digits, symbols, and more.
Accuracy on our evaluation set, compared with other OCR systems:
- anyocr: 0.97
- Baidu PaddleOCR: 0.92
- Alibaba TongYi DuGuang OCR: 0.86
- StepFun GOT_OCR2.0: 0.89
- olm-ocr: 0.46
## 3. Usage
### Install dependencies
```bash
## for cpu
pip install -r requirements.txt
## for gpu
pip install -r requirements-gpu.txt
```
### Basic usage
```python
## simple
# use_det = True or False: whether to run text detection
# use_cls = True or False: whether to run text-orientation classification
# use_rec = True or False: whether to run text recognition
from anyocr.pipeline import anyocr
model = anyocr()
res = model.raw_completions('/to/your/image', use_cls=True, use_det=True)
print(res)

## return per-character box coordinates
from anyocr.pipeline import anyocr
model = anyocr()
res = model.raw_completions('/to/your/image', use_cls=True, use_det=True, return_word_box=True)

### custom model paths
from anyocr.pipeline import anyocr
from anyocr.pipeline import anyocrConfig
config = anyocrConfig(
    det_model_path="anyocr/models/anyocr_det_ch_v4_lite.onnx",
    rec_model_path="anyocr/models/anyocr_rec_v4_server.onnx",
    cls_model_path="anyocr/models/anyocr_cls_v4.onnx",
    rec_keys_path="anyocr/models/anyocr_keys_v4.txt"
)
config = config.model_dump()
model = anyocr(config)
res = model.raw_completions('/to/your/image',use_cls=True,use_det=True)
print(res)
```
### Use paddleocr integration
```python
from paddleocr import PaddleOCR, draw_ocr

ocrmodel = PaddleOCR(
    use_gpu=False,  # or True
    det_model_dir="anyocr/paddlemodels/det/ch_PP-OCRv4_det_infer",
    cls_model_dir="anyocr/paddlemodels/cls/ch_ppocr_mobile_v2.0_cls_infer",
    rec_model_dir="anyocr/paddlemodels/rec/anyocr_rec_v4_server",
    rec_char_dict_path="anyocr/paddlemodels/anyocr_keys_v4.txt",
    use_dilation=True,
)
img_path = '/to/your/image'
result = ocrmodel.ocr(img_path, cls=True)
for idx in range(len(result)):
    res = result[idx]
    for line in res:
        print(line)
```
- If you have better text detection or recognition models, you can also use just part of ours.
- You can also export PaddleOCR models to ONNX format and run them with AnyOCR, including your own fine-tuned PaddleOCR models; see the export sketch below.
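For reference, a PaddleOCR inference model can usually be converted to ONNX with the `paddle2onnx` tool. This is a minimal sketch, assuming the model directory follows PaddleOCR's default export layout (`inference.pdmodel` / `inference.pdiparams`); adjust the paths to your own model:
```bash
pip install paddle2onnx
# Convert a PaddleOCR detection model to ONNX (paths are illustrative)
paddle2onnx --model_dir ./ch_PP-OCRv4_det_infer \
    --model_filename inference.pdmodel \
    --params_filename inference.pdiparams \
    --save_file ./anyocr_det_custom.onnx \
    --opset_version 11
```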
### Parameter configuration
```python
from typing import List, Optional

from pydantic import BaseModel

class anyocrConfig(BaseModel):
    text_score: float = 0.5  # Confidence threshold for text recognition results; range [0, 1]
    use_det: bool = True  # Whether to run text detection
    use_cls: bool = True  # Whether to run text-line orientation classification
    use_rec: bool = True  # Whether to run text-line recognition
    print_verbose: bool = False  # Print progress
    min_height: int = 30  # Minimum image height in pixels; below this, detection is skipped and recognition runs directly.
    width_height_ratio: float = 8  # If the input image's width/height ratio exceeds this value, detection is skipped and recognition runs directly
    max_side_len: int = 2000  # If the longest side exceeds max_side_len, the image is scaled down to max_side_len, preserving aspect ratio
    min_side_len: int = 30  # If the shortest side is below min_side_len, the image is scaled up to min_side_len, preserving aspect ratio
    return_word_box: bool = False  # Whether to return per-character box coordinates.
    det_use_cuda: bool = False  # Whether to use the GPU
    det_model_path: Optional[str] = None  # Path to the text detection model
    det_limit_side_len: float = 736  # Pixel limit on the image side length.
    det_limit_type: str = "min"  # Whether limit_side_len constrains the shortest or the longest side; one of [min, max]
    det_max_candidates: int = 1000  # Maximum number of candidate boxes
    det_thresh: float = 0.3  # Threshold separating text from background; larger values shrink the text regions. Range [0, 1]
    det_box_thresh: float = 0.5  # Threshold for keeping detected boxes; larger values lower recall. Range [0, 1]
    det_unclip_ratio: float = 1.6  # Controls the size of detected boxes; larger values enlarge them. Range [1.6, 2.0]
    det_donot_use_dilation: bool = False  # Whether to skip dilation, which morphologically expands the detected text regions.
    det_score_mode: str = "slow"  # How box scores are computed; one of [slow, fast]
    cls_use_cuda: bool = False  # Whether to use the GPU
    cls_model_path: Optional[str] = None  # Path to the text-line orientation model
    cls_image_shape: List[int] = [3, 48, 192]  # Input image shape (CHW) for the orientation model
    cls_label_list: List[str] = ["0", "180"]  # Orientation labels, 0° or 180°; do not change.
    cls_batch_num: int = 6  # Inference batch size; the default is usually fine. Larger values bring no clear speedup and may hurt quality. Default 6.
    cls_thresh: float = 0.9  # Confidence threshold for orientation results. Range [0, 1]
    rec_use_cuda: bool = False  # Whether to use the GPU
    rec_keys_path: Optional[str] = None  # Dictionary file for the text recognition model
    rec_model_path: Optional[str] = None  # Path to the text recognition model
    rec_img_shape: List[int] = [3, 48, 320]  # Input image shape (CHW) for the recognition model
    rec_batch_num: int = 6  # Inference batch size; the default is usually fine. Larger values bring no clear speedup and may hurt quality. Default 6.
```
## Buy me a coffee
- WeChat
<div align="left">
<img src="./zanshan.jpg" width="30%" height="30%">
</div>
|
VNCNGDC/HGKF | VNCNGDC | "2025-05-13T06:23:51Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-05-13T06:23:51Z" | ---
license: apache-2.0
---
|
NEWWWWWbie/cybertron_merge_02-Q8_0-GGUF | NEWWWWWbie | "2025-05-13T06:20:45Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:NEWWWWWbie/cybertron_merge_02",
"base_model:quantized:NEWWWWWbie/cybertron_merge_02",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-05-13T06:20:09Z" | ---
base_model: NEWWWWWbie/cybertron_merge_02
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# NEWWWWWbie/cybertron_merge_02-Q8_0-GGUF
This model was converted to GGUF format from [`NEWWWWWbie/cybertron_merge_02`](https://huggingface.co/NEWWWWWbie/cybertron_merge_02) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/NEWWWWWbie/cybertron_merge_02) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NEWWWWWbie/cybertron_merge_02-Q8_0-GGUF --hf-file cybertron_merge_02-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NEWWWWWbie/cybertron_merge_02-Q8_0-GGUF --hf-file cybertron_merge_02-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NEWWWWWbie/cybertron_merge_02-Q8_0-GGUF --hf-file cybertron_merge_02-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NEWWWWWbie/cybertron_merge_02-Q8_0-GGUF --hf-file cybertron_merge_02-q8_0.gguf -c 2048
```
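For Python use, the same GGUF file can also be loaded with the llama-cpp-python bindings. This is a minimal sketch, assuming `llama-cpp-python` and `huggingface-hub` are installed:
```python
from llama_cpp import Llama

# Downloads the GGUF file from the Hub on first use, then runs a completion.
llm = Llama.from_pretrained(
    repo_id="NEWWWWWbie/cybertron_merge_02-Q8_0-GGUF",
    filename="cybertron_merge_02-q8_0.gguf",
    n_ctx=2048,  # matches the context size used in the server example above
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```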
|
MaestrAI/sarah_trent-lora-1747116411 | MaestrAI | "2025-05-13T06:13:05Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-05-13T06:06:50Z" | # sarah_trent LORA Model
This is a LORA model for character Sarah Trent
Created at 2025-05-13 08:06:51
|
NEWWWWWbie/cybertron_merge_02 | NEWWWWWbie | "2025-05-13T06:11:41Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"base_model:fdtn-ai/Foundation-Sec-8B",
"base_model:merge:fdtn-ai/Foundation-Sec-8B",
"base_model:trendmicro-ailab/Llama-Primus-Reasoning",
"base_model:merge:trendmicro-ailab/Llama-Primus-Reasoning",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-13T05:34:32Z" | ---
base_model:
- trendmicro-ailab/Llama-Primus-Reasoning
- fdtn-ai/Foundation-Sec-8B
library_name: transformers
tags:
- mergekit
- merge
---
# testing_reasoning
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [trendmicro-ailab/Llama-Primus-Reasoning](https://huggingface.co/trendmicro-ailab/Llama-Primus-Reasoning) as a base.
### Models Merged
The following models were included in the merge:
* [fdtn-ai/Foundation-Sec-8B](https://huggingface.co/fdtn-ai/Foundation-Sec-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
# Mergekit configuration for merging two Llama 3.1 8B models using DARE TIES
models:
  - model: trendmicro-ailab/Llama-Primus-Reasoning
    # This is the base model. DARE TIES merges deltas into this model.
    # No specific parameters needed here for the base model itself in DARE TIES.
  - model: fdtn-ai/Foundation-Sec-8B
    # This is the model whose delta (difference from the base) will be merged.
    parameters:
      weight: 0.3   # Scaling factor applied to the pruned delta; 0.3 means 30% of the pruned delta is merged in.
      density: 0.7  # Fraction of delta parameters retained after DARE's random pruning.
merge_method: dare_ties  # Use the DARE TIES merge method
base_model: trendmicro-ailab/Llama-Primus-Reasoning  # Explicitly define the base model
dtype: bfloat16  # Data type for computation and output model.
                 # Use bfloat16 for Llama 3 models if possible, otherwise float16.

# Alternative configuration (kept for reference):
# models:
#   - model: trendmicro-ailab/Llama-Primus-Merged
#   - model: fdtn-ai/Foundation-Sec-8B
#     parameters:
#       density: 0.1
#       weight: 0.2
# tokenizer_source: union
# merge_method: dare_ties
# base_model: trendmicro-ailab/Llama-Primus-Merged
# parameters:
#   normalize: true
#   int8_mask: true
# dtype: float16
```
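For reference, a configuration like the one above is normally executed with mergekit's `mergekit-yaml` entry point. This is a minimal sketch; the config file name and output directory are illustrative:
```bash
pip install mergekit
# Runs the DARE TIES merge described by the YAML and writes the merged model.
mergekit-yaml config.yaml ./merged-model --cuda
```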
|
fahmiaziz/Llama-3.2-11B-qlora-Information-Extraction-float16 | fahmiaziz | "2025-05-13T06:10:02Z" | 0 | 0 | transformers | [
"transformers",
"mllama",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2025-05-13T06:09:51Z" | ---
base_model: unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mllama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** fahmiaziz
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
This mllama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
vmpsergio/0a0c1273-7cc2-4e14-8f1a-b93b1d4d2f71 | vmpsergio | "2025-05-13T06:08:32Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-7B",
"base_model:quantized:Qwen/Qwen2.5-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2025-05-13T05:44:01Z" | ---
base_model: Qwen/Qwen2.5-7B
library_name: transformers
model_name: 0a0c1273-7cc2-4e14-8f1a-b93b1d4d2f71
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for 0a0c1273-7cc2-4e14-8f1a-b93b1d4d2f71
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vmpsergio/0a0c1273-7cc2-4e14-8f1a-b93b1d4d2f71", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-8/runs/kmyx65fo)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
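For orientation, a minimal DPO fine-tuning loop with TRL looks roughly like the sketch below; the preference dataset and hyperparameters here are illustrative assumptions, not the settings used to produce this checkpoint:
```python
# Minimal DPO sketch with TRL (illustrative, not the original recipe).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B")

# Any dataset with "prompt"/"chosen"/"rejected" preference pairs works here.
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

args = DPOConfig(output_dir="qwen2.5-7b-dpo", beta=0.1)  # beta scales the implicit reward
trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # named `tokenizer=` in older TRL releases
)
trainer.train()
```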
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
benjminbinqle/AlkodivinCapsules | benjminbinqle | "2025-05-13T06:07:59Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-05-13T06:07:41Z" | <p><strong>Що таке Alkodivin Capsules?</strong></p>
<p><strong><a href="https://www.wellbioways.com/uk/alkodivin/">Alkodivin Capsules</a></strong>– це дієтична добавка, призначена для допомоги людям, які мають труднощі з тягою до алкоголю, детоксикацією та відновленням після абстиненції. Капсули виготовлені з гармонізуючої суміші рослинних екстрактів та вітамінів, які запобігають дисбалансу в нервовій системі, підтримують функцію печінки та пригнічують психічну тягу до алкоголю.</p>
<p>Ці капсули не є ліками чи замінниками терапії. Вони радше призначені для пришвидшення процесу одужання — незалежно від того, чи прагне людина зменшити споживання алкоголю, позбутися залежності чи взагалі утриматися від алкоголю.</p>
<p><a href="https://www.wellbioways.com/uk/alkodivin/">https://www.wellbioways.com/uk/alkodivin/</a></p>
<p><a href="https://www.wellbioways.com/Buy-Alkodivin">https://www.wellbioways.com/Buy-Alkodivin</a></p>
<p><a href="https://www.facebook.com/groups/alkodivin">https://www.facebook.com/groups/alkodivin</a></p>
<p><a href="https://www.facebook.com/groups/alkodivin/posts/1035141431459124/">https://www.facebook.com/groups/alkodivin/posts/1035141431459124/</a></p>
<p><a href="https://www.facebook.com/share/p/1EZo836vDV/">https://www.facebook.com/share/p/1EZo836vDV/</a></p>
<p><a href="https://www.facebook.com/groups/alkodivincapsules">https://www.facebook.com/groups/alkodivincapsules</a></p>
<p><a href="https://www.facebook.com/groups/alkodivincapsules/posts/1030698535243723/">https://www.facebook.com/groups/alkodivincapsules/posts/1030698535243723/</a></p>
<p><a href="https://www.facebook.com/share/p/15GXts9YSF/">https://www.facebook.com/share/p/15GXts9YSF/</a></p>
<p><a href="https://www.facebook.com/events/1868662117232589/">https://www.facebook.com/events/1868662117232589/</a></p>
<p><a href="https://uk.pinterest.com/Alkodivin/">https://uk.pinterest.com/Alkodivin/</a></p>
<p><a href="https://uk.pinterest.com/AlkodivinCapsules/">https://uk.pinterest.com/AlkodivinCapsules/</a></p>
<p><a href="https://colab.research.google.com/drive/1M8bRHDXY1UY-rQCxIj1PwSn7tfnd0SCX?usp=sharing">https://colab.research.google.com/drive/1M8bRHDXY1UY-rQCxIj1PwSn7tfnd0SCX?usp=sharing</a></p>
<p><a href="https://colab.research.google.com/drive/1Byib5MHn4bWjgyQSTU0qi19ci4Cm9cVV?usp=sharing">https://colab.research.google.com/drive/1Byib5MHn4bWjgyQSTU0qi19ci4Cm9cVV?usp=sharing</a></p>
<p><a href="https://github.com/benjminbinqle/Alkodivin-Capsules/discussions/1">https://github.com/benjminbinqle/Alkodivin-Capsules/discussions/1</a></p>
<p><a href="https://github.com/benjminbinqle/Alkodivin-Capsules/discussions/2">https://github.com/benjminbinqle/Alkodivin-Capsules/discussions/2</a></p>
<p><a href="https://alkodivincapsules.quora.com/">https://alkodivincapsules.quora.com/</a></p>
<p><a href="https://www.quora.com/Alkodivin-Capsules-Ukraine-Price-And-Work/answer/Lilianmulliws">https://www.quora.com/Alkodivin-Capsules-Ukraine-Price-And-Work/answer/Lilianmulliws</a></p>
<p><a href="https://www.commudle.com/users/Alkodivin">https://www.commudle.com/users/Alkodivin</a></p>
<p><a href="https://www.commudle.com/users/AlkodivinOffer">https://www.commudle.com/users/AlkodivinOffer</a></p>
<p><a href="https://knowt.com/note/86e3bad9-7f8b-4cd4-a2c9-3df28cb094bb/Alkodivin-Ukraine-Best-Quality-Capsule">https://knowt.com/note/86e3bad9-7f8b-4cd4-a2c9-3df28cb094bb/Alkodivin-Ukraine-Best-Quality-Capsule</a></p>
<p><a href="https://knowt.com/note/264a1c62-0dbb-40c2-b0b5-29e92ef1b629/Alkodivin-Capsules-WATCH-Its-Seriousness">https://knowt.com/note/264a1c62-0dbb-40c2-b0b5-29e92ef1b629/Alkodivin-Capsules-WATCH-Its-Seriousness</a> </p> |
yang-z/CodeV-QC-7B | yang-z | "2025-05-13T06:07:25Z" | 0 | 0 | null | [
"qwen2",
"license:apache-2.0",
"region:us"
] | null | "2025-05-13T03:35:31Z" | ---
license: apache-2.0
---
|
scb10x/typhoon2.1-gemma3-12b | scb10x | "2025-05-13T06:02:56Z" | 176 | 1 | null | [
"safetensors",
"gemma3_text",
"text-generation",
"conversational",
"arxiv:2412.13702",
"license:gemma",
"region:us"
] | text-generation | "2025-05-01T15:11:16Z" | ---
license: gemma
pipeline_tag: text-generation
---
**Typhoon2.1-Gemma3-12B**: Thai Large Language Model (Instruct)
**Typhoon2.1-Gemma3-12B** is an instruct Thai 🇹🇭 large language model with 12 billion parameters, a 128K context length, and function-calling capabilities. It is based on Gemma3 12B.
Remark: This is a text-only model. We removed the vision encoder for this version due to complexity. Stay tuned for a version with a vision encoder soon.
## **Performance**

## **Model Description**
- **Model type**: A 12B instruct decoder-only model based on Gemma3 architecture.
- **Requirement**: transformers 4.50.0 or newer.
- **Primary Language(s)**: Thai 🇹🇭 and English 🇬🇧
- **Context Length**: 128K
- **License**: [Gemma License](https://github.com/google-deepmind/gemma/blob/main/LICENSE)
## Usage Example
This code snippet shows how to use the Typhoon2.1-Gemma3-12B model for Thai or English text generation using the transformers library. It includes setting up the model and tokenizer, formatting chat messages in a system-user style, and generating a response.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "scb10x/typhoon2.1-gemma3-12b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [
    {"role": "system", "content": "You are a male AI assistant named Typhoon created by SCB 10X to be helpful, harmless, and honest. Typhoon is happy to help with analysis, question answering, math, coding, creative writing, teaching, role-play, general discussion, and all sorts of other tasks. Typhoon responds directly to all human messages without unnecessary affirmations or filler phrases like “Certainly!”, “Of course!”, “Absolutely!”, “Great!”, “Sure!”, etc. Specifically, Typhoon avoids starting responses with the word “Certainly” in any way. Typhoon follows this information in all languages, and always responds to the user in the language they use or request. Typhoon is now being connected with a human. Write in fluid, conversational prose, Show genuine interest in understanding requests, Express appropriate emotions and empathy. Also showing information in term that is easy to understand and visualized."},
    {"role": "user", "content": "ขอสูตรไก่ย่าง"},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
    enable_thinking=False  # Switches between thinking and non-thinking modes. Default is False.
).to(model.device)
outputs = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
## Deploy as Server
This section shows how to run Typhoon2.1 as an OpenAI-compatible API server using vllm.
```bash
pip install vllm
vllm serve scb10x/typhoon2.1-gemma3-12b --max-model-len 16000 --dtype bfloat16 --tool-call-parser pythonic --enable-auto-tool-choice
# adjust --max-model-len based on your available memory
# you can use --quantization bitsandbytes to reduce memory use at the cost of inference speed
```
## Using Tools
You can provide tools to the vLLM-powered OpenAI-compatible API to enable function calling.
```python
from openai import OpenAI
import json
client = OpenAI(base_url="http://localhost:8000/v1", api_key="dummy")
def get_weather(location: str, unit: str):
    return f"Getting the weather for {location} in {unit}..."

tool_functions = {"get_weather": get_weather}

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City and state, e.g., 'San Francisco, CA'"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
            },
            "required": ["location", "unit"]
        }
    }
}]

response = client.chat.completions.create(
    model=client.models.list().data[0].id,
    messages=[{"role": "user", "content": "What's the weather like in San Francisco?"}],
    tools=tools,
    tool_choice="auto"
)
tool_call = response.choices[0].message.tool_calls[0].function
print(f"Function called: {tool_call.name}")
print(f"Arguments: {tool_call.arguments}")
print(f"Result: {get_weather(**json.loads(tool_call.arguments))}")
```
## Switching Between Thinking and Non-Thinking Mode
Typhoon supports two modes:
- Non-thinking mode (default): fast response generation without extra reasoning steps.
- Thinking mode: the model first reasons internally, then provides a clearer and potentially more accurate final answer.
You can turn on thinking mode by either
- adding `enable_thinking=True` to `apply_chat_template`:
```python
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
enable_thinking=True # Switches between thinking and non-thinking modes. Default is False.
).to(model.device)
```
- manually supply a thinking-mode system prompt
```
You are a helpful assistant. First, think through the reasoning internally, then present the reasoning within <think>...</think>. After thinking, clearly state a response that addresses the user's request and aligns with their preferences, not just providing a direct answer.
```
- in a vLLM-powered OpenAI-compatible client, add `chat_template_kwargs` to the POST payload
```json
{
"model": "scb10x/typhoon2.1-gemma3-12b",
"messages": [
{"role": "user", "content": "Give me a short introduction to large language models."}
],
"chat_template_kwargs": {"enable_thinking": true}
}
```
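For example, sending that payload with curl (assuming the server from the deployment section above):
```bash
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "scb10x/typhoon2.1-gemma3-12b",
    "messages": [{"role": "user", "content": "Give me a short introduction to large language models."}],
    "chat_template_kwargs": {"enable_thinking": true}
  }'
```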
## Budget Forcing
This section introduces budget forcing, an advanced technique that lets the model spend more time and tokens reasoning before producing a final answer, which can improve performance on complex questions.
```python
from typing import List

from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
class BudgetForcingHandler:
def __init__(self, model_name: str, max_think_token: int, max_ignore=5, temperature=0.6, seed=32):
self.temperature = temperature
self.seed = seed
self.max_think_token = max_think_token
self.max_ignore = max_ignore
self.model = LLM(model_name, dtype='bfloat16', enforce_eager=True)
self.tokenizer = AutoTokenizer.from_pretrained(model_name)
self.alternative_str = '\nAlternatively'
self.system = """You are a reasoning assistant. First, think through the reasoning internally, then present the reasoning within <think>...</think>. After thinking, clearly state the final answer."""
def __call__(self, prompts: List[str]):
count_prompt = len(prompts)
prompts = [self.tokenizer.apply_chat_template([{'role': 'system', 'content': self.system}, {'role': 'user', 'content': f'Please solve this math question, and put your final answer within \\boxed{{}}.\n{p}'}], add_generation_prompt=True, tokenize=False) for p in prompts]
sampling_params = SamplingParams(
max_tokens=self.max_think_token,
seed=self.seed,
stop=["</think>"],
skip_special_tokens=False,
temperature=self.temperature,
)
o = self.model.generate(
prompts,
sampling_params=sampling_params
)
outputs = [output.outputs[0].text for output in o]
token_count = [len(output.outputs[0].token_ids) for output in o]
for i in range(len(prompts)):
prompts[i] = prompts[i] + outputs[i]
for _ in range(self.max_ignore): # Num of times to skip stop token
inference_loop_prompts = []
inference_idx = []
max_inference_token = 0
print('current token count: ', token_count)
for i in range(len(prompts)):
left_budget = self.max_think_token - token_count[i]
if left_budget > 0:
prompts[i] = prompts[i] + self.alternative_str
inference_loop_prompts.append(prompts[i])
inference_idx.append(i)
if left_budget > max_inference_token:
max_inference_token = left_budget
outputs = ['' for _ in range(len(prompts))]
if max_inference_token == 0 or len(inference_loop_prompts) == 0:
break
sampling_params = SamplingParams(
max_tokens=max_inference_token,
min_tokens=1,
seed=self.seed,
stop=["</think>"],
skip_special_tokens=False,
temperature=self.temperature,
)
o = self.model.generate(
inference_loop_prompts,
sampling_params=sampling_params
)
assert len(inference_idx) == len(inference_loop_prompts)
assert len(inference_idx) == len(o)
for i, output in zip(inference_idx, o):
outputs[i] = output.outputs[0].text
for i, idx in enumerate(inference_idx):
token_count[idx] = token_count[idx] + len(o[i].outputs[0].token_ids)
for i in range(len(prompts)):
prompts[i] = prompts[i] + outputs[i]
print('generating answer...')
    prompts = [p + '\nTime\'s up. End of thinking process. Will answer immediately.\n</think>' for p in prompts]
sampling_params = SamplingParams(
max_tokens=2048,
min_tokens=0,
seed=self.seed,
skip_special_tokens=False,
temperature=self.temperature,
)
o = self.model.generate(
prompts,
sampling_params=sampling_params,
)
for i in range(len(prompts)):
prompts[i] = prompts[i] + o[i].outputs[0].text
assert len(prompts) == count_prompt
return prompts
handler = BudgetForcingHandler("scb10x/typhoon2.1-gemma3-12b", max_think_token=2048)
handler(["How many r in raspberry?"])
```
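A note on the design: the handler withholds the `</think>` stop condition up to `max_ignore` times, appending `\nAlternatively` each round to push the model into exploring another line of reasoning until the `max_think_token` budget is spent, and then forces an immediate answer by manually closing the thinking block.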
## **Intended Uses & Limitations**
This model is an instructional model. However, it’s still undergoing development. It incorporates some level of guardrails, but it still may produce answers that are inaccurate, biased, or otherwise objectionable in response to user prompts. We recommend that developers assess these risks in the context of their use case.
## **Follow us**
**https://twitter.com/opentyphoon**
## **Support**
**https://discord.gg/us5gAYmrxw**
## **Citation**
- If you find Typhoon2 useful for your work, please cite it using:
```
@misc{typhoon2,
title={Typhoon 2: A Family of Open Text and Multimodal Thai Large Language Models},
author={Kunat Pipatanakul and Potsawee Manakul and Natapong Nitarach and Warit Sirichotedumrong and Surapon Nonesung and Teetouch Jaknamon and Parinthapat Pengpun and Pittawat Taveekitworachai and Adisai Na-Thalang and Sittipong Sripaisarnmongkol and Krisanapong Jirayoot and Kasima Tharnpipitchai},
year={2024},
eprint={2412.13702},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2412.13702},
}
``` |
mlfoundations-dev/openthoughts3_100k | mlfoundations-dev | "2025-05-13T06:02:49Z" | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-10T11:11:38Z" | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: openthoughts3_100k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openthoughts3_100k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/openthoughts3_100k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 256
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1
- Datasets 3.1.0
- Tokenizers 0.20.3
|
cryptocurry/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-quiet_dense_rhino | cryptocurry | "2025-05-13T06:02:18Z" | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am quiet dense rhino",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-03T08:03:08Z" | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-quiet_dense_rhino
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am quiet dense rhino
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-quiet_dense_rhino
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="cryptocurry/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-quiet_dense_rhino", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
jyp96/backpack | jyp96 | "2025-05-13T05:59:18Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"sd3",
"sd3-diffusers",
"base_model:stabilityai/stable-diffusion-3-medium-diffusers",
"base_model:adapter:stabilityai/stable-diffusion-3-medium-diffusers",
"license:other",
"region:us"
] | text-to-image | "2025-05-05T08:43:16Z" | ---
base_model: stabilityai/stable-diffusion-3-medium-diffusers
library_name: diffusers
license: other
instance_prompt: a photo of sks backpack
widget:
- text: A photo of sks backpack in a bucket
output:
url: image_0.png
- text: A photo of sks backpack in a bucket
output:
url: image_1.png
- text: A photo of sks backpack in a bucket
output:
url: image_2.png
- text: A photo of sks backpack in a bucket
output:
url: image_3.png
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- sd3
- sd3-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SD3 DreamBooth LoRA - jyp96/backpack
<Gallery />
## Model description
These are jyp96/backpack DreamBooth LoRA weights for stabilityai/stable-diffusion-3-medium-diffusers.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [SD3 diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sd3.md).
Was LoRA for the text encoder enabled? False.
## Trigger words
You should use `a photo of sks backpack` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](jyp96/backpack/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-3-medium-diffusers', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('jyp96/backpack', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('A photo of sks backpack in a bucket').images[0]
```
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`diffusers_lora_weights.safetensors` here 💾](/jyp96/backpack/blob/main/diffusers_lora_weights.safetensors)**.
- Rename it and place it on your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:your_new_name:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
SDSDASDGHTJH/FFGGDDFFGGFDGFDG | SDSDASDGHTJH | "2025-05-13T05:58:46Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2025-05-13T05:58:46Z" | ---
license: creativeml-openrail-m
---
|
FGDFGZ/SDEGH | FGDFGZ | "2025-05-13T05:57:10Z" | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | "2025-05-13T05:57:10Z" | ---
license: bigscience-openrail-m
---
|
cs2764/bge-m3-Q8_0-GGUF | cs2764 | "2025-05-13T05:52:12Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"gguf",
"feature-extraction",
"sentence-similarity",
"llama-cpp",
"gguf-my-repo",
"base_model:BAAI/bge-m3",
"base_model:quantized:BAAI/bge-m3",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2025-05-13T05:52:05Z" | ---
base_model: BAAI/bge-m3
license: mit
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- llama-cpp
- gguf-my-repo
---
# cs2764/bge-m3-Q8_0-GGUF
This model was converted to GGUF format from [`BAAI/bge-m3`](https://huggingface.co/BAAI/bge-m3) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/BAAI/bge-m3) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo cs2764/bge-m3-Q8_0-GGUF --hf-file bge-m3-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo cs2764/bge-m3-Q8_0-GGUF --hf-file bge-m3-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo cs2764/bge-m3-Q8_0-GGUF --hf-file bge-m3-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo cs2764/bge-m3-Q8_0-GGUF --hf-file bge-m3-q8_0.gguf -c 2048
```
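Note that bge-m3 is an embedding model, so the text-generation prompts above mainly serve as a smoke test. For actual embeddings, a minimal sketch using llama.cpp's embedding tool (assuming a recent build that ships the `llama-embedding` binary):
```bash
llama-embedding --hf-repo cs2764/bge-m3-Q8_0-GGUF --hf-file bge-m3-q8_0.gguf -p "What is BGE-M3?"
```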
|
MrRobotoAI/119-Q4_K_M-GGUF | MrRobotoAI | "2025-05-13T05:50:42Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:MrRobotoAI/119",
"base_model:quantized:MrRobotoAI/119",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-05-13T05:50:20Z" | ---
base_model: MrRobotoAI/119
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# MrRobotoAI/119-Q4_K_M-GGUF
This model was converted to GGUF format from [`MrRobotoAI/119`](https://huggingface.co/MrRobotoAI/119) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/MrRobotoAI/119) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo MrRobotoAI/119-Q4_K_M-GGUF --hf-file 119-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo MrRobotoAI/119-Q4_K_M-GGUF --hf-file 119-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo MrRobotoAI/119-Q4_K_M-GGUF --hf-file 119-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo MrRobotoAI/119-Q4_K_M-GGUF --hf-file 119-q4_k_m.gguf -c 2048
```
|
mradermacher/QwenGuard-v1.2-3B-GGUF | mradermacher | "2025-05-13T05:49:25Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-factory",
"freeze",
"generated_from_trainer",
"en",
"dataset:AIML-TUDA/LlavaGuard",
"base_model:AIML-TUDA/QwenGuard-v1.2-3B",
"base_model:quantized:AIML-TUDA/QwenGuard-v1.2-3B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-05-13T05:40:41Z" | ---
base_model: AIML-TUDA/QwenGuard-v1.2-3B
datasets: AIML-TUDA/LlavaGuard
extra_gated_fields:
Affiliation: text
Country: text
Email: text
? I have explicitly checked that downloading LlavaGuard is legal in my jurisdiction,
in the country/region where I am located right now, and for the use case that
I have described above, I have also read and accepted the relevant Terms of Use
: checkbox
Name: text
extra_gated_prompt: By filling out the form below I understand that LlavaGuard is
a derivative model based on webscraped images and the SMID dataset that use individual
licenses and their respective terms and conditions apply. I understand that all
content uses are subject to the terms of use. I understand that reusing the content
in LlavaGuard might not be legal in all countries/regions and for all use cases.
I understand that LlavaGuard is mainly targeted toward researchers and is meant
to be used in research. LlavaGuard authors reserve the right to revoke my access
to this data. They reserve the right to modify this data at any time in accordance
with take-down requests.
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- llama-factory
- freeze
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/AIML-TUDA/QwenGuard-v1.2-3B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
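As a minimal sketch, multi-part quants (hypothetical filenames ending in `-00001-of-0000N.gguf`) can be merged back into a single file with llama.cpp's `llama-gguf-split` tool:
```bash
# merge split GGUF parts back into one file (filenames are illustrative)
llama-gguf-split --merge QwenGuard-v1.2-3B.Q8_0-00001-of-00002.gguf QwenGuard-v1.2-3B.Q8_0.gguf
```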
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/QwenGuard-v1.2-3B-GGUF/resolve/main/QwenGuard-v1.2-3B.Q2_K.gguf) | Q2_K | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/QwenGuard-v1.2-3B-GGUF/resolve/main/QwenGuard-v1.2-3B.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/QwenGuard-v1.2-3B-GGUF/resolve/main/QwenGuard-v1.2-3B.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/QwenGuard-v1.2-3B-GGUF/resolve/main/QwenGuard-v1.2-3B.Q3_K_L.gguf) | Q3_K_L | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/QwenGuard-v1.2-3B-GGUF/resolve/main/QwenGuard-v1.2-3B.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/QwenGuard-v1.2-3B-GGUF/resolve/main/QwenGuard-v1.2-3B.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QwenGuard-v1.2-3B-GGUF/resolve/main/QwenGuard-v1.2-3B.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QwenGuard-v1.2-3B-GGUF/resolve/main/QwenGuard-v1.2-3B.Q5_K_S.gguf) | Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/QwenGuard-v1.2-3B-GGUF/resolve/main/QwenGuard-v1.2-3B.Q5_K_M.gguf) | Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/QwenGuard-v1.2-3B-GGUF/resolve/main/QwenGuard-v1.2-3B.Q6_K.gguf) | Q6_K | 2.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/QwenGuard-v1.2-3B-GGUF/resolve/main/QwenGuard-v1.2-3B.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/QwenGuard-v1.2-3B-GGUF/resolve/main/QwenGuard-v1.2-3B.f16.gguf) | f16 | 6.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
ddmfsdgse/fkmtjk | ddmfsdgse | "2025-05-13T05:46:04Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-05-13T05:46:04Z" | ---
license: apache-2.0
---
|
ndfjm/djrdfjy | ndfjm | "2025-05-13T05:46:04Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-05-13T05:46:04Z" | ---
license: apache-2.0
---
|
djrsge/hseeh | djrsge | "2025-05-13T05:46:04Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-05-13T05:46:04Z" | ---
license: apache-2.0
---
|
fyuuki0jp/outputs | fyuuki0jp | "2025-05-13T05:44:15Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"conversational",
"base_model:unsloth/gemma-3-1b-it",
"base_model:finetune:unsloth/gemma-3-1b-it",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-13T05:43:40Z" | ---
base_model: unsloth/gemma-3-1b-it
library_name: transformers
model_name: outputs
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for outputs
This model is a fine-tuned version of [unsloth/gemma-3-1b-it](https://huggingface.co/unsloth/gemma-3-1b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="fyuuki0jp/outputs", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Xem-idol-tiktoker-tung-clip-moi/Hot.Video.Xem.ma.muon.ngat.luon.idol.tiktoker.mun.lai.them.sieu.pham | Xem-idol-tiktoker-tung-clip-moi | "2025-05-13T05:43:25Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-05-13T05:35:27Z" | <animated-image data-catalyst=""><a href="https://tinyurl.com/fn84hrnu?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
TikTok idol Mun drops another "masterpiece," leaving the guys stunned.
Recently, the gaming community was shaken once again by a so-called masterpiece clip, allegedly of Mun, that spread across social media at a dizzying speed.
Zelma89/Blackburn | Zelma89 | "2025-05-13T05:42:50Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-05-13T05:42:50Z" | ---
license: apache-2.0
---
|
Elliott87/Blackburn | Elliott87 | "2025-05-13T05:42:50Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-05-13T05:42:50Z" | ---
license: apache-2.0
---
|
ramonruedebu/sdfsad | ramonruedebu | "2025-05-13T05:42:09Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-05-13T05:41:54Z" | ---
license: apache-2.0
---
|
TheRamsay/ClTRUS-gpt2-74M-transformer-adapter | TheRamsay | "2025-05-13T05:42:09Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-05-13T05:41:58Z" | ---
license: apache-2.0
---
|
trehardy/kai-ai-vids-lora | trehardy | "2025-05-13T05:41:17Z" | 0 | 0 | null | [
"license:other",
"region:us"
] | null | "2025-05-13T04:56:56Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
QLU-NLP/BianCang-Qwen2.5-7B | QLU-NLP | "2025-05-13T05:41:05Z" | 5 | 2 | null | [
"safetensors",
"qwen2",
"arxiv:2411.11027",
"region:us"
] | null | "2024-11-16T03:53:12Z" | # 扁仓中医大模型(BianCang: A Traditional Chinese Medicine Large Language Model)
<div align="center">
<p>
<img src="assets/BianCang-logo.png" width="500px"/>
</p>
</div>
## 💡 Introduction
Hello, and welcome to the open-source repository of the BianCang TCM large language model.
To promote the real-world application of large language models in traditional Chinese medicine (TCM), assist physicians in disease diagnosis, support patients in self-assessment, and empower TCM with large models, this repository releases the **BianCang** (扁仓) family of TCM large language models. "BianCang" joins the names of the ancient physicians Bian Que (扁鹊) and Cang Gong (仓公) and traditionally refers to renowned doctors. We hope BianCang can contribute to carrying on the TCM tradition and to improving public health.
BianCang is built on Qwen2/2.5 as the base and trained with a two-stage approach that first injects domain knowledge and then activates and aligns that knowledge. BianCang achieves state-of-the-art performance on TCM-specific tasks such as disease diagnosis and syndrome differentiation, and performs strongly on various medical licensing exams.
This repository open-sources the following resources:
- BianCang base model weights: BianCang-Qwen2-7B, BianCang-Qwen2.5-7B, and BianCang-Qwen2.5-14B.
- BianCang instruction-tuned model weights: BianCang-Qwen2-7B-Instruct, BianCang-Qwen2.5-7B-Instruct, and BianCang-Qwen2.5-14B-Instruct.
For more information, see [GitHub](https://github.com/QLU-NLP/BianCang).
## 🚀 Inference
### Using SWIFT
#### Environment Setup
Download the SWIFT source code from [Release v2.4.2 · modelscope/ms-swift](https://github.com/modelscope/ms-swift/releases/tag/v2.4.2), change into the corresponding directory, and run the install command:
```shell
cd swift
pip install -e .
```
You can substitute a torch version that matches your GPU driver; SWIFT requires at least torch >= 1.13, and torch >= 2.0.0 is recommended.
Note: our SFT training used the *qwen* chat template, so a SWIFT version newer than the one above may mismatch the Qwen2.5 chat template; in that case, manually set the chat template to *qwen* instead of *qwen2_5*. For the underlying reason, see: [fix qwen2.5 template by Jintao-Huang · Pull Request #2081 · modelscope/ms-swift](https://github.com/modelscope/ms-swift/pull/2081)
#### Inference Option 1: Python Code
```python
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
from swift.llm import (
get_model_tokenizer, get_template, inference, ModelType
)
from swift.utils import seed_everything
model_type = ModelType.qwen2_5_7b_instruct
template_type = 'qwen'
model_id_or_path = 'QLU-NLP/BianCang-Qwen2.5-7B-Instruct'
model, tokenizer = get_model_tokenizer(model_type, model_id_or_path=model_id_or_path, model_kwargs={'device_map': 'auto'})
model.generation_config.max_new_tokens = 256
template = get_template(template_type, tokenizer)
seed_everything(42)
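# The query below asks, in Chinese: "Hello, who are you?"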
query = '你好,你是谁?'
response, history = inference(model, template, query)
print(f'query: {query}')
print(f'response: {response}')
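# The query below presents a TCM case in Chinese (78-year-old woman, chest pain on exertion for one week, plus inspection/listening/palpation findings) and asks for the TCM disease, syndrome pattern, and diagnostic rationale.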
query = '下面是一名患者的基本情况。年龄:78岁,性别:女。主 诉:活动后胸痛一周。现病史:患者一周前活动后出现胸口隐隐作痛,如针刺样乏力气短,活动后汗出,偏头痛。中医望闻切诊:表情自然,面色红润,形体正常,语气清,气息平;无异常气味,舌暗红,苔少。请你根据上述患者的主诉、病史和中医望闻切诊情况,判断该患者的主要中医疾病和中医证型,并给出中医辨病辨证的依据。'
response, history = inference(model, template, query, history)
print(f'query: {query}')
print(f'response: {response}')
print(f'history: {history}')
```
Output:
```
query: 你好,你是谁?
response: 你好!我是一个名为扁仓中医大模型的人工智能,由齐鲁工业大学(山东省科学院)计算机科学与技术学部(国家超级计算济南中心)自然语言处理与认知计算团队研发。我被设计成能够理解和生成自然语言文本,以便与人类进行中医辩证、中医处方推荐、中医知识问答、中医问题咨询等方面的对话交流,辅助人们完成疾病诊断相关的任务。请问有什么我可以帮助您的吗?
query: 下面是一名患者的基本情况。年龄:78岁,性别:女。主 诉:活动后胸痛一周。现病史:患者一周前活动后出现胸口隐隐作痛,如针刺样乏力气短,活动后汗出,偏头痛。中医望闻切诊:表情自然,面色红润,形体正常,语气清,气息平;无异常气味,舌暗红,苔少。请你根据上述患者的主诉、病史和中医望闻切诊情况,判断该患者的主要中医疾病和中医证型,并给出中医辨病辨证的依据。
response: 根据中医的诊断方法,患者患有胸痹心痛,中医证型属于气虚血瘀证。综合脉证,四诊合参,本病当属祖国医学“胸痹心痛病”范畴,证属“气虚血瘀”。患者素体虚弱,久病伤正,伤及心气,心气衰微,机能不健,致阴邪易于上乘阳位,况心脉为宗气之所,百脉朝会之枢,宗气的鼓动形成了心气推动血液运行全身,心气不足则血行无力瘀滞,发为本病,舌脉俱为佐证。
history: [['你好,你是谁?', '你好!我是一个名为扁仓中医大模型的人工智能,由齐鲁工业大学(山东省科学院)计算机科学与技术学部(国家超级计算济南中心)自然语言处理与认知计算团队研发。我被设计成能够理解和生成自然语言文本,以便与人类进行中医辩证、中医处方推荐、中医知识问答、中医问题咨询等方面的对话交流,辅助人们完成疾病诊断相关的任务。请问有什么我可以帮助您的吗?'], ['下面是一名患者的基本情况。年龄:78岁,性别:女。主 诉:活动后胸痛一周。现病史:患者一周前活动后出现胸口隐隐作痛,如针刺样乏力气短,活动后汗出,偏头痛。中医望闻切诊:表情自然,面色红润,形体正常,语气清,气息平;无异常气味,舌暗红,苔少。请你根据上述患者的主诉、病史和中医望闻切诊情况,判断该患者的主要中医疾病和中医证型,并给出中医辨病辨证的依据。', '根据中医的诊断方法,患者患有胸痹心痛,中医证型属于气虚血瘀证。综合脉证,四诊合参,本病当属祖国医学“胸痹心痛病”范畴,证属“气虚血瘀”。患者素体虚弱,久病伤正,伤及心气,心气衰微,机能不健,致阴邪易于上乘阳位,况心脉为宗气之所,百脉朝会之枢,宗气的鼓动形成了心气推动血液运行全身,心气不足则血行无力瘀滞,发为本病,舌脉俱为佐证。']]
```
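(In English: the model introduces itself as BianCang, developed by the NLP and Cognitive Computing team at Qilu University of Technology (Shandong Academy of Sciences), and then diagnoses the case as chest impediment and heart pain (胸痹心痛) with a qi-deficiency-and-blood-stasis (气虚血瘀) pattern, citing the patient's weak constitution, tongue, and pulse as supporting evidence.)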
#### Inference Option 2: Deploying an API
Deploy the API with the following command:
```shell
CUDA_VISIBLE_DEVICES=0 swift deploy --model_type qwen2_5-7b-instruct --model_id_or_path QLU-NLP/BianCang-Qwen2.5-7B-Instruct --port 8090 --template_type qwen
```
Test it with curl:
```shell
curl http://localhost:8090/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "qwen2_5-7b-instruct",
"messages": [{"role": "user", "content": "你好,你是谁?"}],
"max_tokens": 256,
"temperature": 0.3
}'
```
The response:
```json
{"model":"qwen2_5-7b-instruct",
"choices":[{"index":0,"message":{"role":"assistant","content":"你好!我是一个名为扁仓中医大模型的人工智能,由齐鲁工业大学(山东省科学院)计算机科学与技术学部(国家超级计算济南中心)自然语言处理与认知计算团队研发。我被设计成能够理解和生成自然语言文本,以便与人类进行中医辩证、中医处方推荐、中医知识问答、中医问题咨询等方面的对话交流,辅助人们完成疾病诊断相关的任务。请问有什么我可以帮助您的吗?",
"tool_calls":null},"finish_reason":null,"logprobs":null}],
"usage":{"prompt_tokens":24,"completion_tokens":92,"total_tokens":116},
"id":"chatcmpl-6b4a02dee57a42238b27b5c40085df16",
"object":"chat.completion","created":1730209011}
```
Test it with code:
```python
from swift.llm import get_model_list_client, XRequestConfig, inference_client
model_list = get_model_list_client(port=8090)
model_type = model_list.data[0].id
print(f'model_type: {model_type}')
query = "你好,你是谁?"
request_config = XRequestConfig(seed=42)
resp = inference_client(model_type, query, request_config=request_config, port=8090)
response = resp.choices[0].message.content
print(f'query: {query}')
print(f'response: {response}')
history = [(query, response)]
query = '下面是一名患者的基本情况。年龄:78岁,性别:女。主 诉:活动后胸痛一周。现病史:患者一周前活动后出现胸口隐隐作痛,如针刺样乏力气短,活动后汗出,偏头痛。中医望闻切诊:表情自然,面色红润,形体正常,语气清,气息平;无异常气味,舌暗红,苔少。请你根据上述患者的主诉、病史和中医望闻切诊情况,判断该患者的主要中医疾病和中医证型,并给出中医辨病辨证的依据。'
request_config = XRequestConfig(stream=True, seed=42)
stream_resp = inference_client(model_type, query, history, request_config=request_config, port=8090)
print(f'query: {query}')
print('response: ', end='')
for chunk in stream_resp:
print(chunk.choices[0].delta.content, end='', flush=True)
print()
```
Output:
```
model_type: qwen2_5-7b-instruct
query: 你好,你是谁?
response: 你好!我是一个名为扁仓中医大模型的人工智能,由齐鲁工业大学(山东省科学院)计算机科学与技术学部(国家超级计算济南中心)自然语言处理与认知计算团队研发。我被设计成能够理解和生成自然语言文本,以便与人类进行中医辩证、中医处方推荐、中医知识问答、中医问题咨询等方面的对话交流,辅助人们完成疾病诊断相关的任务。请问有什么我可以帮助您的吗?
query: 下面是一名患者的基本情况。年龄:78岁,性别:女。主 诉:活动后胸痛一周。现病史:患者一周前活动后出现胸口隐隐作痛,如针刺样乏力气短,活动后汗出,偏头痛。中医望闻切诊:表情自然,面色红润,形体正常,语气清,气息平;无异常气味,舌暗红,苔少。请你根据上述患者的主诉、病史和中医望闻切诊情况,判断该患者的主要中医疾病和中医证型,并给出中医辨病辨证的依据。
response: 根据中医的诊断方法,患者患有胸痹心痛,中医证型属于气虚血瘀证。综合脉证,四诊合参,本病当属祖国医学“胸痹心痛病”范畴,证属“气虚血瘀”。患者素体虚弱,久病伤正,伤及心气,心气衰微,机能不健,致阴邪易于上乘阳位,况心脉为宗气之所,百脉朝会之枢,宗气的鼓动形成了心气推动血液运行全身,心气不足则血行无力瘀滞,发为本病,舌脉俱为佐证。
```
### Using Transformers
You can also run inference with the transformers package:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "QLU-NLP/BianCang-Qwen2.5-7B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "你好,你是谁?"
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=256
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
### Using the Web UI
We provide a simple demo web UI.
Install streamlit:
```shell
pip install streamlit
```
Deploy the API with SWIFT:
```shell
CUDA_VISIBLE_DEVICES=0 swift deploy --model_type qwen2_5-7b-instruct --model_id_or_path QLU-NLP/BianCang-Qwen2.5-7B-Instruct --port 8090 --template_type qwen
```
Launch streamlit:
```shell
streamlit run web_ui.py
```

## 🥇 TCM Capability Evaluation
<table border="1" cellpadding="5" cellspacing="0">
<thead>
<tr>
<th rowspan="2">Model</th>
<th colspan="4">TCM Syndrome Differentiation</th>
<th colspan="4">TCM Disease Diagnosis</th>
<th colspan="4">TCM Exam</th>
</tr>
<tr>
<th colspan="2">TCMSD <br>Acc.(%)</th>
<th colspan="2">TCMSD-BC<br> Acc.(%)</th>
<th colspan="2">TCMDD<br> Acc.(%)</th>
<th colspan="2">TCMDD-BC<br> Acc.(%)</th>
<th colspan="2">MLEC-TCM<br> Acc.(%)</th>
<th colspan="2">MLEC-CWM<br> Acc.(%)</th>
</tr>
<tr>
<th></th><th>DI</th><th>CoT</th><th>DI</th><th>CoT</th><th>DI</th><th>CoT</th><th>DI</th><th>CoT</th><th>ZS</th><th>FS</th><th>ZS</th><th>FS</th>
</tr>
</thead>
<tbody align="center" valign="center">
<tr><td>GPT-4</td><td>24.53</td><td>45.21</td><td>16.67</td><td>70.73</td><td>27.83</td><td>54.54</td><td>41.80</td><td>68.33</td><td>74.70</td><td>76.35</td><td>76.26</td><td>76.37</td></tr>
<tr><td>DeepSeek-V3</td><td>34.62</td><td>40.74</td><td>24.53</td><td>72.00</td><td>46.97</td><td>59.08</td><td>82.67</td><td>72.93</td><td>84.97</td><td>88.56</td><td>85.05</td><td>87.81</td></tr>
<tr><td>DeepSeek-R1</td><td>37.17</td><td>55.67</td><td>25.53</td><td>76.07</td><td>50.66</td><td>80.75</td><td>79.27</td><td>94.53</td><td>92.68</td><td>93.10</td><td>90.92</td><td>90.77</td></tr>
<tr><td>Qwen2-7B</td><td>31.74</td><td>27.18</td><td>32.73</td><td>28.40</td><td>41.60</td><td>54.59</td><td>74.87</td><td>77.93</td><td>86.01</td><td>89.18</td><td>84.45</td><td>87.89</td></tr>
<tr><td>Qwen2-7B-Instruct</td><td>25.70</td><td>33.41</td><td>14.27</td><td>57.00</td><td>32.87</td><td>52.92</td><td>60.40</td><td>60.13</td><td>83.61</td><td>84.22</td><td>79.89</td><td>82.99</td></tr>
<tr><td>Qwen2.5-7B</td><td>30.44</td><td>21.29</td><td>17.87</td><td>35.73</td><td>23.71</td><td>43.88</td><td>63.87</td><td>71.27</td><td>83.32</td><td>85.52</td><td>82.02</td><td>84.04</td></tr>
<tr><td>Qwen2.5-7B-Instruct</td><td>24.30</td><td>32.19</td><td>9.93</td><td>57.07</td><td>36.29</td><td>51.51</td><td>62.93</td><td>55.53</td><td>78.72</td><td>79.88</td><td>77.27</td><td>78.43</td></tr>
<tr><td>Qwen2.5-14B</td><td>35.62</td><td>25.21</td><td>33.93</td><td>30.13</td><td>24.33</td><td>36.64</td><td>33.33</td><td>32.80</td><td>86.59</td><td>89.93</td><td>87.10</td><td>90.06</td></tr>
<tr><td>Qwen2.5-14B-Instruct</td><td>25.94</td><td>35.03</td><td>16.07</td><td>60.00</td><td>38.30</td><td>49.31</td><td>46.27</td><td>53.67</td><td>82.25</td><td>84.81</td><td>81.79</td><td>85.68</td></tr>
<tr><td>BianCang-Qwen2-7B</td><td>42.14</td><td>30.30</td><td>57.80</td><td>48.00</td><td>43.73</td><td>54.67</td><td>74.73</td><td>80.67</td><td>90.86</td><td>91.87</td><td>89.08</td><td>90.36</td></tr>
<tr><td>BianCang-Qwen2-7B-Instruct</td><td>68.88</td><td>75.96</td><td>57.33</td><td>75.40</td><td>64.42</td><td>77.71</td><td><b>89.07</b></td><td>85.67</td><td><b>92.39</b></td><td><b>92.39</b></td><td>91.14</td><td>91.48</td></tr>
<tr><td>BianCang-Qwen2.5-7B</td><td>46.57</td><td>26.72</td><td>52.93</td><td>45.47</td><td>49.80</td><td>53.15</td><td>68.13</td><td>61.73</td><td>86.46</td><td>86.30</td><td>83.93</td><td>85.35</td></tr>
<tr><td>BianCang-Qwen2.5-7B-Instruct</td><td>78.90</td><td><b>82.10</b></td><td><b>66.73</b></td><td><b>77.73</b></td><td>73.73</td><td><b>82.65</b></td><td>87.87</td><td><b>89.40</b></td><td>90.22</td><td>90.57</td><td>90.32</td><td>90.62</td></tr>
<tr><td>BianCang-Qwen2.5-14B</td><td>43.77</td><td>33.96</td><td>61.93</td><td>53.47</td><td>66.61</td><td>60.39</td><td>82.93</td><td>77.07</td><td>89.28</td><td>90.86</td><td>89.42</td><td>90.58</td></tr>
<tr><td>BianCang-Qwen2.5-14B-Instruct</td><td><b>79.38</b></td><td>75.54</td><td>62.27</td><td>70.73</td><td><b>77.63</b></td><td>82.05</td><td>86.33</td><td>88.73</td><td>92.29</td><td>92.29</td><td><b>92.75</b></td><td><b>92.86</b></td></tr>
</tbody>
</table>
<br>
<table border="1">
<tr>
<th>Model</th>
<th>CMB Acc.(%)</th>
<th colspan="2">MLEC-Clinic <br>Acc.(%)</th>
<th colspan="2">MLEC-PublicHealth<br> Acc.(%)</th>
<th colspan="2">MLEC-Stomatology<br> Acc.(%)</th>
</tr>
<tr>
<th></th>
<th>ZS/FS</th>
<th>ZS</th>
<th>FS</th>
<th>ZS</th>
<th>FS</th>
<th>ZS</th>
<th>FS</th>
</tr>
<tr>
<td>GPT-4</td>
<td>59.46*</td>
<td>82.63</td>
<td>82.69</td>
<td>81.55</td>
<td>82.58</td>
<td>72.97</td>
<td>75.43</td>
</tr>
<tr>
<td>DeepSeek-V3</td>
<td>82.33</td>
<td>86.83</td>
<td>89.41</td>
<td>85.38</td>
<td>87.59</td>
<td>79.09</td>
<td>81.97</td>
</tr>
<tr>
<td>DeepSeek-R1</td>
<td>86.38</td>
<td>92.51</td>
<td>92.36</td>
<td>91.42</td>
<td>90.40</td>
<td>87.03</td>
<td>86.16</td>
</tr>
<tr>
<td>Qwen2-7B</td>
<td>81.63</td>
<td>87.63</td>
<td>90.63</td>
<td>82.63</td>
<td>86.79</td>
<td>80.34</td>
<td>84.65</td>
</tr>
<tr>
<td>Qwen2-7B-Instruct</td>
<td>83.45</td>
<td>85.16</td>
<td>83.35</td>
<td>81.61</td>
<td>81.07</td>
<td>76.29</td>
<td>75.88</td>
</tr>
<tr>
<td>Qwen2.5-7B</td>
<td>79.60</td>
<td>86.65</td>
<td>88.55</td>
<td>83.39</td>
<td>85.17</td>
<td>78.03</td>
<td>80.79</td>
</tr>
<tr>
<td>Qwen2.5-7B-Instruct</td>
<td>79.51</td>
<td>82.81</td>
<td>83.73</td>
<td>80.96</td>
<td>80.85</td>
<td>72.93</td>
<td>74.40</td>
</tr>
<tr>
<td>Qwen2.5-14B</td>
<td>84.07</td>
<td>90.40</td>
<td>93.13</td>
<td>86.46</td>
<td>89.54</td>
<td>84.31</td>
<td>88.20</td>
</tr>
<tr>
<td>Qwen2.5-14B-Instruct</td>
<td>83.69</td>
<td>86.47</td>
<td>88.02</td>
<td>83.17</td>
<td>86.14</td>
<td>78.94</td>
<td>82.57</td>
</tr>
<tr>
<td>BianCang-Qwen2-7B (Ours)</td>
<td>83.27</td>
<td>91.88</td>
<td>93.31</td>
<td>88.57</td>
<td>90.72</td>
<td>85.29</td>
<td>88.47</td>
</tr>
<tr>
<td>BianCang-Qwen2-7B-Instruct (Ours)</td>
<td>84.08</td>
<td>94.35</td>
<td>94.35</td>
<td>91.37</td>
<td><b>91.64</b></td>
<td>89.19</td>
<td>90.02</td>
</tr>
<tr>
<td>BianCang-Qwen2.5-7B (Ours)</td>
<td>80.13</td>
<td>90.43</td>
<td>91.32</td>
<td>85.65</td>
<td>87.22</td>
<td>82.19</td>
<td>82.65</td>
</tr>
<tr>
<td>BianCang-Qwen2.5-7B-Instruct (Ours)</td>
<td>80.71</td>
<td>93.40</td>
<td>93.43</td>
<td>89.91</td>
<td>89.91</td>
<td>86.43</td>
<td>86.77</td>
</tr>
<tr>
<td>BianCang-Qwen2.5-14B (Ours)</td>
<td><b>84.34</b></td>
<td>91.70</td>
<td>93.37</td>
<td>87.92</td>
<td>89.97</td>
<td>86.16</td>
<td>87.94</td>
</tr>
<tr>
<td>BianCang-Qwen2.5-14B-Instruct (Ours)</td>
<td>83.80</td>
<td><b>94.74</b></td>
<td><b>94.97</b></td>
<td><b>91.86</b></td>
<td>91.53</td>
<td><b>90.43</b></td>
<td><b>90.51</b></td>
</tr>
</table>
For more evaluation results, please see our technical report.
## 🧡 Acknowledgements
This project builds on open-source work; we thank the following projects and their research and development teams.
- [Qwen2](https://github.com/vitanova/Qwen2)
- [Qwen2.5](https://github.com/QwenLM/Qwen2.5)
- [SWIFT](https://github.com/modelscope/ms-swift)
- [ModelScope](https://github.com/modelscope/modelscope)
- [ShenNong-TCM-LLM](https://github.com/michael-wzhu/ShenNong-TCM-LLM?tab=readme-ov-file)
- [HuatuoGPT-II](https://github.com/FreedomIntelligence/HuatuoGPT-II)
- [DISC-MedLLM](https://github.com/FudanDISC/DISC-MedLLM)
- [MLEC-QA](https://github.com/Judenpech/MLEC-QA)
- [CMB](https://github.com/FreedomIntelligence/CMB?tab=readme-ov-file)
- [ZY-BERT](https://github.com/Borororo/ZY-BERT)
- [COIG](https://github.com/BAAI-Zlab/COIG)
- [APE210k](https://github.com/Chenny0808/ape210k)
- [Evol-Instruction-66K](https://github.com/Continuum-Labs-HQ/EvolInstruct)
## ❔ About Us
This project was completed jointly by the Natural Language Processing and Cognitive Computing team of the Faculty of Computer Science and Technology (National Supercomputing Center in Jinan), Qilu University of Technology (Shandong Academy of Sciences), and the Clinical Research Center of the Affiliated Hospital of Shandong University of Traditional Chinese Medicine.
<div align="center">
<p>
<img src="assets/QLU-NLP-logo.png" width="500px"/>
</p>
</div>
<div align="center">
<p>
<img src="assets/超算logo.png" width="500px"/>
</p>
</div>
<div align="center">
<p>
<img src="assets/山中医logo.png" width="500px"/>
</p>
</div>
## ❕ Disclaimer
- The resources in this project are intended for academic research only.
- As a language-model-based assistant, BianCang has limitations and cannot guarantee the accuracy of all responses; it cannot replace TCM or Western-medicine practitioners in making diagnoses or giving medical advice. If needed, please consult a professional physician or visit a hospital.
- Because inaccurate data in the medical domain can have serious consequences, we strongly advise users to handle generated information with caution and to seek expert advice.
## 📖 Citation
```
@article{Wei2024BianCang,
title={BianCang: A Traditional Chinese Medicine Large Language Model},
author={Sibo, Wei and Xueping, Peng and Yi-fei, Wang and Jiasheng, Si and Weiyu, Zhang and Wenpeng, Lu and Xiaoming, Wu and Yinglong, Wang},
journal={arXiv preprint arXiv:2411.11027},
year={2024}
}
```
|
Laudando-Associates-LLC/d-fine | Laudando-Associates-LLC | "2025-05-13T05:40:23Z" | 20 | 0 | pytorch | [
"pytorch",
"d_fine",
"object-detection",
"onnx",
"safetensors",
"AgTech",
"transformers",
"custom_code",
"en",
"dataset:Laudando-Associates-LLC/pucks",
"arxiv:2410.13842",
"license:apache-2.0",
"region:us"
] | object-detection | "2025-05-09T19:22:02Z" | ---
language:
- en
license: apache-2.0
tags:
- object-detection
- onnx
- safetensors
- AgTech
- transformers
library_name: pytorch
inference: false
datasets:
- Laudando-Associates-LLC/pucks
---
<h1 align="center"><strong>D-FINE</strong></h1>
<p align="center">
<a href="https://huggingface.co/Laudando-Associates-LLC/d-fine">
<img src="https://img.shields.io/badge/HuggingFace-Model-yellow?logo=huggingface&style=for-the-badge">
</a>
</p>
<div align="justify">
[D-FINE: Redefine Regression Task in DETRs as Fine-grained Distribution Refinement](https://arxiv.org/abs/2410.13842) is a family of real-time object detectors that improve localization accuracy by rethinking how bounding boxes are predicted in DETR-style models. Instead of directly regressing box coordinates, D-FINE introduces a distribution based refinement approach that progressively sharpens predictions over multiple stages.
It also includes a self-distillation mechanism that passes refined localization knowledge to earlier layers, improving training efficiency and model robustness. Combined with lightweight architectural optimizations, D-FINE achieves a strong balance between speed and accuracy.
This repository provides five pretrained variants — Nano, Small, Medium, Large, and Extra Large — offering a trade-off between speed and accuracy for different deployment needs.
</div>
<h3 align="left">Sample Predictions Across D-FINE Variants</h3>
<table align="center">
<tr>
<td align="center"><img src="assets/nano.png" alt="Nano" style="width:100%; max-width:300px;"><br><strong>Nano</strong></td>
<td align="center"><img src="assets/small.png" alt="Small" style="width:100%; max-width:300px;"><br><strong>Small</strong></td>
</tr>
<tr>
<td align="center"><img src="assets/medium.png" alt="Medium" style="width:100%; max-width:300px;"><br><strong>Medium</strong></td>
<td align="center"><img src="assets/large.png" alt="Large" style="width:100%; max-width:300px;"><br><strong>Large</strong></td>
</tr>
</table>
## Try it in the Browser
You can test the model(s) using our interactive Gradio demo:
<p align="center">
<a href="https://huggingface.co/spaces/Laudando-Associates-LLC/d-fine-demo">
<img src="https://img.shields.io/badge/Launch%20Demo-Gradio-FF4B4B?logo=gradio&logoColor=white&style=for-the-badge">
</a>
</p>
## D-FINE Variants
The D-FINE family includes five model sizes trained on the [L&A Pucks Dataset](https://huggingface.co/datasets/Laudando-Associates-LLC/pucks), each offering a different balance between model size and detection accuracy.
| Variant | Parameters | mAP@[0.50:0.95] | Model Card | ONNX | PyTorch |
|:------------:|:----------:|:---------------:|:-----------:|:--------------:|:-------:|
| Nano | 3.76M | 0.825 | <a href="https://huggingface.co/Laudando-Associates-LLC/d-fine-nano"><img src="https://img.shields.io/badge/HuggingFace-Model-yellow?logo=huggingface&style=for-the-badge"></a> | <a href="https://huggingface.co/Laudando-Associates-LLC/d-fine-nano/resolve/main/model.onnx"><img src="https://img.shields.io/badge/-ONNX-005CED?style=for-the-badge&logo=onnx&logoColor=white"></a> | <a href="https://huggingface.co/Laudando-Associates-LLC/d-fine-nano/resolve/main/pytorch_model.bin"><img src="https://img.shields.io/badge/PyTorch-EE4C2C?style=for-the-badge&logo=pytorch&logoColor=white"></a> |
| Small | 10.3M | 0.816 | <a href="https://huggingface.co/Laudando-Associates-LLC/d-fine-small"><img src="https://img.shields.io/badge/HuggingFace-Model-yellow?logo=huggingface&style=for-the-badge"></a> | <a href="https://huggingface.co/Laudando-Associates-LLC/d-fine-small/resolve/main/model.onnx"><img src="https://img.shields.io/badge/-ONNX-005CED?style=for-the-badge&logo=onnx&logoColor=white"></a> | <a href="https://huggingface.co/Laudando-Associates-LLC/d-fine-small/resolve/main/pytorch_model.bin"><img src="https://img.shields.io/badge/PyTorch-EE4C2C?style=for-the-badge&logo=pytorch&logoColor=white"></a> |
| Medium | 19.6M | 0.840 | <a href="https://huggingface.co/Laudando-Associates-LLC/d-fine-medium"><img src="https://img.shields.io/badge/HuggingFace-Model-yellow?logo=huggingface&style=for-the-badge"></a> | <a href="https://huggingface.co/Laudando-Associates-LLC/d-fine-medium/resolve/main/model.onnx"><img src="https://img.shields.io/badge/-ONNX-005CED?style=for-the-badge&logo=onnx&logoColor=white"></a> | <a href="https://huggingface.co/Laudando-Associates-LLC/d-fine-medium/resolve/main/pytorch_model.bin"><img src="https://img.shields.io/badge/PyTorch-EE4C2C?style=for-the-badge&logo=pytorch&logoColor=white"></a> |
| Large | 31.2M | 0.828 | <a href="https://huggingface.co/Laudando-Associates-LLC/d-fine-large"><img src="https://img.shields.io/badge/HuggingFace-Model-yellow?logo=huggingface&style=for-the-badge"></a> | <a href="https://huggingface.co/Laudando-Associates-LLC/d-fine-large/resolve/main/model.onnx"><img src="https://img.shields.io/badge/-ONNX-005CED?style=for-the-badge&logo=onnx&logoColor=white"></a> | <a href="https://huggingface.co/Laudando-Associates-LLC/d-fine-large/resolve/main/pytorch_model.bin"><img src="https://img.shields.io/badge/PyTorch-EE4C2C?style=for-the-badge&logo=pytorch&logoColor=white"></a> |
| Extra Large | 62.7M | 0.803 | <a href="https://huggingface.co/Laudando-Associates-LLC/d-fine-xlarge"><img src="https://img.shields.io/badge/HuggingFace-Model-yellow?logo=huggingface&style=for-the-badge"></a> | <a href="https://huggingface.co/Laudando-Associates-LLC/d-fine-xlarge/resolve/main/model.onnx"><img src="https://img.shields.io/badge/-ONNX-005CED?style=for-the-badge&logo=onnx&logoColor=white"></a> | <a href="https://huggingface.co/Laudando-Associates-LLC/d-fine-xlarge/resolve/main/pytorch_model.bin"><img src="https://img.shields.io/badge/PyTorch-EE4C2C?style=for-the-badge&logo=pytorch&logoColor=white"></a> |
> mAP values are evaluated on the validation set of the [L&A Pucks Dataset](https://huggingface.co/datasets/Laudando-Associates-LLC/pucks).
## Installation
```bash
pip install -r requirements.txt
```
> Tip: Use a virtual environment (venv or conda) to avoid dependency conflicts.
## Quick start on [L&A Pucks Dataset](https://huggingface.co/datasets/Laudando-Associates-LLC/pucks)
```python
from datasets import load_dataset
from transformers import AutoProcessor, AutoModel
from PIL import ImageDraw, ImageFont
# Load the test split (or 'train')
ds = load_dataset("Laudando-Associates-LLC/pucks", split="test")
# Grab an example image
image = ds[1]["image"]
# Load processor and model
processor = AutoProcessor.from_pretrained("Laudando-Associates-LLC/d-fine", trust_remote_code=True)
model = AutoModel.from_pretrained(f"Laudando-Associates-LLC/d-fine-nano", trust_remote_code=True)
# Process the image: resize and pad
inputs = processor(image)
# Run inference
outputs = model(**inputs, conf_threshold=0.4)
# Draw boxes
draw = ImageDraw.Draw(image)
font = ImageFont.truetype("DejaVuSans-Bold.ttf", size=24)
for result in outputs:
boxes = result["boxes"]
labels = result["labels"]
scores = result["scores"]
for box, label, score in zip(boxes, labels, scores):
x1, y1, x2, y2 = box.tolist()
draw.rectangle([x1, y1, x2, y2], outline="blue", width=5)
draw.text((x1, max(0, y1 - 25)), f"{score:.2f}", fill="blue", font=font)
# Save result
image.save("output.jpg")
```
## How to Use
The D-FINE model family uses a shared processor and variant-specific models. All components are compatible with Hugging Face's `transformers` library via `trust_remote_code=True`.
### Step 1: Load the Preprocessor
The preprocessor is common to all D-FINE variants and handles resizing and padding.
```python
from transformers import AutoProcessor
# Load the shared D-FINE processor
processor = AutoProcessor.from_pretrained("Laudando-Associates-LLC/d-fine", trust_remote_code=True)
```
### Step 2: Load a D-FINE model variant
You can choose from any of the five variants: Nano, Small, Medium, Large, or Extra Large.
```python
from transformers import AutoModel
model_variant = "nano" # small, medium, large, xlarge
# Load the D-FINE model variant
model = AutoModel.from_pretrained(f"Laudando-Associates-LLC/d-fine-{model_variant}", trust_remote_code=True)
```
### Step 3: Run Inference
Using Pillow with a single or batch images:
```python
from PIL import Image
# Single image
image = Image.open("your_image.jpg").convert("RGB")
inputs = processor(image)
# Batch of images
batch_images = [
Image.open("image1.jpg").convert("RGB"),
Image.open("image2.jpg").convert("RGB")
]
inputs = processor(batch_images)
# Run inference
outputs = model(**inputs, conf_threshold=0.4)
for result in outputs:
boxes = result["boxes"] # [N, 4] bounding boxes (x1, y1, x2, y2)
labels = result["labels"] # [N] class indices
scores = result["scores"] # [N] confidence scores
```
Using OpenCV with a single or batch images:
```python
import cv2
# Single OpenCV image (BGR)
image = cv2.imread("your_image.jpg")
inputs = processor(image)
# Batch of OpenCV images
batch_images = [
cv2.imread("image1.jpg"),
cv2.imread("image2.jpg")
]
inputs = processor(batch_images)
# Run inference
outputs = model(**inputs, conf_threshold=0.4)
for result in outputs:
boxes = result["boxes"] # [N, 4] bounding boxes (x1, y1, x2, y2)
labels = result["labels"] # [N] class indices
scores = result["scores"] # [N] confidence scores
```
## License
The D-FINE models use [Apache License 2.0](https://github.com/Peterande/D-FINE/blob/master/LICENSE). The L&A Pucks Dataset which the models have been trained on use [L&Aser Dataset Replication License (Version 1.0)](https://huggingface.co/datasets/Laudando-Associates-LLC/pucks/blob/main/LICENSE).
## Citation
If you use `D-FINE` or its methods in your work, please cite the following BibTeX entries:
```latex
@misc{peng2024dfine,
title={D-FINE: Redefine Regression Task in DETRs as Fine-grained Distribution Refinement},
author={Yansong Peng and Hebei Li and Peixi Wu and Yueyi Zhang and Xiaoyan Sun and Feng Wu},
year={2024},
eprint={2410.13842},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
``` |
lisabdunlap/qwen_3_8b_on_pretrained_new_set_ft_e20 | lisabdunlap | "2025-05-13T05:39:21Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"unsloth",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-13T05:37:02Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
shinheewon/m2m100_en-ja | shinheewon | "2025-05-13T05:39:00Z" | 0 | 0 | null | [
"safetensors",
"m2m_100",
"dataset:sentence-transformers/parallel-sentences-opensubtitles",
"base_model:facebook/m2m100_418M",
"base_model:finetune:facebook/m2m100_418M",
"license:mit",
"region:us"
] | null | "2025-05-13T05:15:28Z" | ---
license: mit
datasets:
- sentence-transformers/parallel-sentences-opensubtitles
base_model:
- facebook/m2m100_418M
--- |
Dukeeeeee/Thompson | Dukeeeeee | "2025-05-13T05:36:27Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-05-13T05:36:27Z" | ---
license: apache-2.0
---
|
JOSESMOKE/tear_522 | JOSESMOKE | "2025-05-13T05:33:31Z" | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | "2025-05-13T03:13:10Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
DevQuasar/ValiantLabs.Qwen3-8B-Esper3-GGUF | DevQuasar | "2025-05-13T05:31:59Z" | 0 | 0 | null | [
"gguf",
"text-generation",
"base_model:ValiantLabs/Qwen3-8B-Esper3",
"base_model:quantized:ValiantLabs/Qwen3-8B-Esper3",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-05-13T03:57:07Z" | ---
base_model:
- ValiantLabs/Qwen3-8B-Esper3
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [ValiantLabs/Qwen3-8B-Esper3](https://huggingface.co/ValiantLabs/Qwen3-8B-Esper3)
'Make knowledge free for everyone'
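A minimal local-inference sketch, assuming `llama-cpp-python` is installed; the quant filename pattern below is an assumption, so check the repo's file list for the exact name:
```python
# Minimal sketch: load one of this repo's GGUF quants with llama-cpp-python.
# The filename glob is an assumption -- substitute a quant that actually exists.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="DevQuasar/ValiantLabs.Qwen3-8B-Esper3-GGUF",
    filename="*Q4_K_M.gguf",  # hypothetical quant level
)
out = llm("Explain GGUF quantization in one sentence:", max_tokens=64)
print(out["choices"][0]["text"])
```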
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
JoshMe1/dd2ed4f2-ef80-41b8-9b9f-8c881cd890fd | JoshMe1 | "2025-05-13T05:31:50Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"gptj",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:furiosa-ai/mlperf-gpt-j-6b",
"base_model:finetune:furiosa-ai/mlperf-gpt-j-6b",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-12T23:09:05Z" | ---
base_model: furiosa-ai/mlperf-gpt-j-6b
library_name: transformers
model_name: dd2ed4f2-ef80-41b8-9b9f-8c881cd890fd
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for dd2ed4f2-ef80-41b8-9b9f-8c881cd890fd
This model is a fine-tuned version of [furiosa-ai/mlperf-gpt-j-6b](https://huggingface.co/furiosa-ai/mlperf-gpt-j-6b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JoshMe1/dd2ed4f2-ef80-41b8-9b9f-8c881cd890fd", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/fareljtm12-uty/Gradients-On-Demand/runs/isa1wyi2)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
lisabdunlap/qwen_3_8b_on_pretrained_new_set_ft_e10 | lisabdunlap | "2025-05-13T05:31:43Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"unsloth",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-13T05:29:20Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
chevyperez/tutorialmayo2025 | chevyperez | "2025-05-13T05:29:29Z" | 0 | 0 | null | [
"license:other",
"region:us"
] | null | "2025-05-13T03:24:13Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
0utis9/sentiment-analyzer | 0utis9 | "2025-05-13T05:29:03Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-05-13T05:29:03Z" | ---
license: apache-2.0
---
|
Wallace51/Lawrence | Wallace51 | "2025-05-13T05:27:05Z" | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | "2025-05-13T05:27:05Z" | ---
license: bigscience-openrail-m
---
|
Alec57/Newton | Alec57 | "2025-05-13T05:27:05Z" | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
] | null | "2025-05-13T05:27:05Z" | ---
license: bigcode-openrail-m
---
|
Brice54/Elinor | Brice54 | "2025-05-13T05:27:05Z" | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
] | null | "2025-05-13T05:27:05Z" | ---
license: bigcode-openrail-m
---
|
marianoiry/gensyn-checkpoints-sturdy_twitchy_jay | marianoiry | "2025-05-13T05:23:32Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am sturdy twitchy jay",
"unsloth",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-19T00:44:29Z" | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: gensyn-checkpoints-sturdy_twitchy_jay
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am sturdy twitchy jay
- unsloth
- trl
licence: license
---
# Model Card for gensyn-checkpoints-sturdy_twitchy_jay
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="marianoiry/gensyn-checkpoints-sturdy_twitchy_jay", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
RaamT/aadhaar_annotations | RaamT | "2025-05-13T05:20:50Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-05-13T04:47:55Z" | # Aadhaar Annotations
YOLO model for extracting text fields from Aadhaar cards.
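A minimal usage sketch with the Ultralytics API; the weights filename `best.pt` and the sample image are assumptions, not confirmed files in this repo:
```python
# Minimal sketch: detect Aadhaar text fields with Ultralytics YOLO.
# "best.pt" and the image path are assumptions -- adjust to the repo's files.
from ultralytics import YOLO

model = YOLO("best.pt")
results = model.predict("aadhaar_sample.jpg", conf=0.5)
for box in results[0].boxes:
    label = results[0].names[int(box.cls)]  # e.g. Aadhaar_Names, Aadhaar_DOB
    print(label, box.xyxy[0].tolist())      # bounding box as [x1, y1, x2, y2]
```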
- Task: Object Detection
- Framework: Ultralytics YOLO
- Classes: Aadhaar_Names, Aadhaar_DOB, Aadhaar_Gender, Aadhaar_Address
- License: MIT |
SM0K1NGUN/gemma-medical-finetuned | SM0K1NGUN | "2025-05-13T05:18:05Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-1.1-2b-it",
"base_model:adapter:google/gemma-1.1-2b-it",
"region:us"
] | null | "2025-05-13T05:04:05Z" | ---
base_model: google/gemma-1.1-2b-it
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
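Pending the official snippet, here is a minimal sketch grounded only in the card's metadata (a `peft` adapter on `google/gemma-1.1-2b-it`); the adapter layout is an assumption:
```python
# Minimal sketch: attach this PEFT adapter to its Gemma base model.
# Assumption: the repo stores a LoRA/PEFT adapter, per the card metadata.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("google/gemma-1.1-2b-it")
model = PeftModel.from_pretrained(base, "SM0K1NGUN/gemma-medical-finetuned")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-2b-it")
```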
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
RaamT/voter_id_annotations | RaamT | "2025-05-13T05:17:49Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-05-13T04:49:08Z" | # ID Classifier
YOLO model for classifying ID documents.
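A minimal sketch that labels a scan by its highest-confidence detection; `best.pt` and the input path are assumptions:
```python
# Minimal sketch: classify an ID scan by its top-scoring detection.
# "best.pt" and "id_scan.jpg" are assumptions -- adjust to the repo's files.
from ultralytics import YOLO

model = YOLO("best.pt")
result = model.predict("id_scan.jpg")[0]
if len(result.boxes) > 0:
    top = result.boxes[int(result.boxes.conf.argmax())]
    print("Document type:", result.names[int(top.cls)])  # e.g. voter_id_front
else:
    print("No ID document detected")
```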
- Task: Object Detection
- Framework: Ultralytics YOLO
- Classes: aadhar_front, aadhar_back, pan_card_front, dl_front, passport, voter_id_front
- License: MIT |
Tiba/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-purring_feathered_hawk | Tiba | "2025-05-13T05:17:05Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am purring feathered hawk",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | "2025-05-10T17:17:36Z" | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-purring_feathered_hawk
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am purring feathered hawk
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-purring_feathered_hawk
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Tiba/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-purring_feathered_hawk", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
naser1973/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-snorting_tawny_quail | naser1973 | "2025-05-13T05:17:03Z" | 17 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am snorting tawny quail",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-12T13:20:21Z" | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-snorting_tawny_quail
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am snorting tawny quail
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-snorting_tawny_quail
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="naser1973/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-snorting_tawny_quail", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mci29/sn29_q2m5_e4so | mci29 | "2025-05-13T05:10:10Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-13T05:05:41Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
awdaw182/sdadawsda | awdaw182 | "2025-05-13T05:05:00Z" | 0 | 1 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2025-05-13T05:05:00Z" | ---
license: creativeml-openrail-m
---
|
awdaw183/sdadaswda | awdaw183 | "2025-05-13T05:05:00Z" | 0 | 1 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2025-05-13T05:05:00Z" | ---
license: creativeml-openrail-m
---
|
Naga1289/AdvUnlearn_Pegasus | Naga1289 | "2025-05-13T05:01:53Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2025-05-13T04:59:48Z" | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
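Pending details, a minimal sketch grounded in the repo's `StableDiffusionPipeline` tag; the prompt and dtype choice are assumptions:
```python
# Minimal sketch: load this repo as a Stable Diffusion pipeline.
# Assumption: standard diffusers layout, per the StableDiffusionPipeline tag.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Naga1289/AdvUnlearn_Pegasus", torch_dtype=torch.float16
).to("cuda")
image = pipe("a watercolor pegasus in flight").images[0]
image.save("pegasus.png")
```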
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Kepalakaki/Kakiku | Kepalakaki | "2025-05-13T04:56:39Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2025-05-13T04:56:39Z" | ---
license: creativeml-openrail-m
---
|
kjamesh/Reinforce-PixelCopter_v1 | kjamesh | "2025-05-13T04:51:25Z" | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2025-05-13T01:08:55Z" | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter_v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 13.60 +/- 14.49
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
hophop1/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-winged_fanged_mallard | hophop1 | "2025-05-13T04:50:03Z" | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am winged fanged mallard",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-08T14:14:10Z" | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-winged_fanged_mallard
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am winged fanged mallard
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-winged_fanged_mallard
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="hophop1/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-winged_fanged_mallard", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
AngelRaychev/0.5B-value-iteration_1 | AngelRaychev | "2025-05-13T04:48:31Z" | 311 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:AngelRaychev/0.5B-value-iteration_0",
"base_model:finetune:AngelRaychev/0.5B-value-iteration_0",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-23T06:40:19Z" | ---
library_name: transformers
license: apache-2.0
base_model: AngelRaychev/0.5B-value-iteration_0
tags:
- generated_from_trainer
model-index:
- name: 0.5B-value-iteration_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.5B-value-iteration_1
This model is a fine-tuned version of [AngelRaychev/0.5B-value-iteration_0](https://huggingface.co/AngelRaychev/0.5B-value-iteration_0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3933
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 1024
- eval_batch_size: 1024
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: constant
- num_epochs: 50
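For reference, a sketch of the values above expressed as `transformers.TrainingArguments`; model and dataset wiring are omitted, and only the card's listed settings are mirrored:
```python
# Sketch: the hyperparameters above as TrainingArguments (values mirror the card).
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="0.5B-value-iteration_1",
    learning_rate=1e-6,
    per_device_train_batch_size=1024,
    per_device_eval_batch_size=1024,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="constant",
    num_train_epochs=50,
)
```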
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 5.9606 | 0.8130 | 100 | 5.9124 |
| 4.1873 | 1.6260 | 200 | 4.2078 |
| 2.6983 | 2.4390 | 300 | 2.6579 |
| 1.0651 | 3.2520 | 400 | 1.0912 |
| 0.5126 | 4.0650 | 500 | 0.5242 |
| 0.4485 | 4.8780 | 600 | 0.4762 |
| 0.4188 | 5.6911 | 700 | 0.4344 |
| 0.4063 | 6.5041 | 800 | 0.4181 |
| 0.4025 | 7.3171 | 900 | 0.4204 |
| 0.3972 | 8.1301 | 1000 | 0.4092 |
| 0.3981 | 8.9431 | 1100 | 0.4093 |
| 0.3995 | 9.7561 | 1200 | 0.4161 |
| 0.3945 | 10.5691 | 1300 | 0.4101 |
| 0.3933 | 11.3821 | 1400 | 0.4063 |
| 0.3941 | 12.1951 | 1500 | 0.4039 |
| 0.3901 | 13.0081 | 1600 | 0.4029 |
| 0.3908 | 13.8211 | 1700 | 0.4024 |
| 0.3878 | 14.6341 | 1800 | 0.4007 |
| 0.3859 | 15.4472 | 1900 | 0.4011 |
| 0.3882 | 16.2602 | 2000 | 0.4004 |
| 0.3851 | 17.0732 | 2100 | 0.3990 |
| 0.3834 | 17.8862 | 2200 | 0.3991 |
| 0.3842 | 18.6992 | 2300 | 0.3979 |
| 0.3838 | 19.5122 | 2400 | 0.3971 |
| 0.3846 | 20.3252 | 2500 | 0.3971 |
| 0.381 | 21.1382 | 2600 | 0.3978 |
| 0.3837 | 21.9512 | 2700 | 0.3975 |
| 0.3805 | 22.7642 | 2800 | 0.3957 |
| 0.3811 | 23.5772 | 2900 | 0.3973 |
| 0.3814 | 24.3902 | 3000 | 0.3953 |
| 0.3821 | 25.2033 | 3100 | 0.3957 |
| 0.3813 | 26.0163 | 3200 | 0.3951 |
| 0.3794 | 26.8293 | 3300 | 0.3953 |
| 0.3824 | 27.6423 | 3400 | 0.3945 |
| 0.3779 | 28.4553 | 3500 | 0.3944 |
| 0.3796 | 29.2683 | 3600 | 0.3953 |
| 0.3793 | 30.0813 | 3700 | 0.3948 |
| 0.3809 | 30.8943 | 3800 | 0.3949 |
| 0.3796 | 31.7073 | 3900 | 0.3946 |
| 0.3785 | 32.5203 | 4000 | 0.3939 |
| 0.3791 | 33.3333 | 4100 | 0.3940 |
| 0.3791 | 34.1463 | 4200 | 0.3942 |
| 0.3785 | 34.9593 | 4300 | 0.3937 |
| 0.3784 | 35.7724 | 4400 | 0.3939 |
| 0.3789 | 36.5854 | 4500 | 0.3941 |
| 0.3775 | 37.3984 | 4600 | 0.3940 |
| 0.3784 | 38.2114 | 4700 | 0.3939 |
| 0.3795 | 39.0244 | 4800 | 0.3940 |
| 0.3768 | 39.8374 | 4900 | 0.3938 |
| 0.3789 | 40.6504 | 5000 | 0.3938 |
| 0.378 | 41.4634 | 5100 | 0.3939 |
| 0.3794 | 42.2764 | 5200 | 0.3938 |
| 0.3792 | 43.0894 | 5300 | 0.3941 |
| 0.3786 | 43.9024 | 5400 | 0.3936 |
| 0.3785 | 44.7154 | 5500 | 0.3938 |
| 0.3793 | 45.5285 | 5600 | 0.3933 |
| 0.3782 | 46.3415 | 5700 | 0.3936 |
| 0.3789 | 47.1545 | 5800 | 0.3956 |
| 0.3765 | 47.9675 | 5900 | 0.3936 |
| 0.3781 | 48.7805 | 6000 | 0.3946 |
| 0.3805 | 49.5935 | 6100 | 0.3947 |
### Framework versions
- Transformers 4.51.2
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
cchyun/hf-tutorial | cchyun | "2025-05-13T04:47:02Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-04-01T13:18:59Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
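In the meantime, a minimal sketch based on the repo's `bert` and `text-classification` tags; the label set is unknown, so treat the output as illustrative only:
```python
# Minimal sketch: run this checkpoint as a text-classification pipeline.
# Assumption: a standard BERT sequence-classification head, per the repo tags.
from transformers import pipeline

classifier = pipeline("text-classification", model="cchyun/hf-tutorial")
print(classifier("This is a short tutorial example sentence."))
```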
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
chocckaka/Qwen2_5_7B_inst_bespoke_agentflan_full_sft | chocckaka | "2025-05-13T04:46:43Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-13T04:37:19Z" | ---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: sft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the Bespoke-Stratos-17k and Agent-FLANv2 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 12
- total_train_batch_size: 96
- total_eval_batch_size: 64
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
carlaluiza10/carlaluizaa | carlaluiza10 | "2025-05-13T04:41:14Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-05-13T04:26:07Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: CLOC
---
# Carlaluizaa
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `CLOC` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "CLOC",
"lora_weights": "https://huggingface.co/carlaluiza10/carlaluizaa/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('carlaluiza10/carlaluizaa', weight_name='lora.safetensors')
image = pipeline('CLOC').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/carlaluiza10/carlaluizaa/discussions) to add images that show off what you’ve made with this LoRA.
|
abnv22107/deepseek-r1-medical-cot | abnv22107 | "2025-05-13T04:40:24Z" | 0 | 0 | null | [
"safetensors",
"deepseek",
"medical",
"reasoning",
"llama",
"qlora",
"en",
"dataset:FreedomIntelligence/medical-o1-reasoning-SFT",
"license:apache-2.0",
"region:us"
] | null | "2025-05-12T18:21:11Z" | ---
language: en
license: apache-2.0
tags:
- deepseek
- medical
- reasoning
- llama
- qlora
datasets:
- FreedomIntelligence/medical-o1-reasoning-SFT
---
# DeepSeek-R1-Medical-CoT
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) on medical reasoning data using QLoRA. It's specifically trained to improve clinical reasoning, diagnostics, and treatment planning capabilities.
## Training Details
- Base model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B
- Training dataset: FreedomIntelligence/medical-o1-reasoning-SFT (3000 samples)
- Fine-tuning method: QLoRA with Unsloth
- LoRA rank: 16
- Training epochs: 1
- Max sequence length: 2048
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "abnv22107/deepseek-r1-medical-cot"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# For inference
prompt = """Below is a task description along with additional context provided in the input section. Your goal is to provide a well-reasoned response that effectively addresses the request.
Before crafting your answer, take a moment to carefully analyze the question. Develop a clear, step-by-step thought process to ensure your response is both logical and accurate.
### Task:
You are a medical expert specializing in clinical reasoning, diagnostics, and treatment planning. Answer the medical question below using your advanced knowledge.
### Query:
Your medical question here
### Answer:
<think>
"""
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=1200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Limitations
This model is intended for research and educational purposes only and should not be used for actual medical diagnosis or treatment decisions.
|
ktam204/Foundation-Sec-8B-Pentest-merged-16bit-nothink | ktam204 | "2025-05-13T04:36:24Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"gguf",
"llama",
"text-generation",
"unsloth",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-13T02:01:29Z" | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nomeyslf/may_13_o4 | nomeyslf | "2025-05-13T04:32:17Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-13T04:26:31Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
florian-morel22/my-resnet | florian-morel22 | "2025-05-13T04:31:24Z" | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | "2025-05-13T03:51:29Z" | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
abharadwaj123/skywork-3b-fine-tuned-vagueness-750-3 | abharadwaj123 | "2025-05-13T04:27:34Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-05-13T04:27:32Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
XiaomiMiMo/MiMo-7B-SFT | XiaomiMiMo | "2025-05-13T04:27:32Z" | 981 | 22 | transformers | [
"transformers",
"safetensors",
"mimo",
"text-generation",
"conversational",
"custom_code",
"arxiv:2505.07608",
"license:mit",
"autotrain_compatible",
"region:us"
] | text-generation | "2025-04-29T23:30:47Z" | ---
license: mit
library_name: transformers
---
<div align="center">
<picture>
<source srcset="https://github.com/XiaomiMiMo/MiMo/raw/main/figures/Xiaomi_MiMo_darkmode.png?raw=true" media="(prefers-color-scheme: dark)">
<img src="https://github.com/XiaomiMiMo/MiMo/raw/main/figures/Xiaomi_MiMo.png?raw=true" width="60%" alt="Xiaomi-MiMo" />
</picture>
</div>
<h3 align="center">
<b>
<span>━━━━━━━━━━━━━━━━━━━━━━━━━</span>
<br/>
Unlocking the Reasoning Potential of Language Model<br/>From Pretraining to Posttraining
<br/>
<span>━━━━━━━━━━━━━━━━━━━━━━━━━</span>
<br/>
</b>
</h3>
<br/>
<div align="center" style="line-height: 1;">
|
<a href="https://huggingface.co/XiaomiMiMo" target="_blank">🤗 HuggingFace</a>
|
<a href="https://www.modelscope.cn/organization/XiaomiMiMo" target="_blank">🤖️ ModelScope</a>
|
<a href="https://arxiv.org/abs/2505.07608" target="_blank">📔 Technical Report</a>
|
<br/>
</div>
<br/>
> This model repository is licensed under the MIT License.
## I. Introduction
Currently, most successful RL work, including open-source research, relies on relatively large base models, e.g., 32B models, particularly for enhancing code reasoning capabilities. Moreover, it was widely believed that achieving uniform, simultaneous improvements in both mathematical and code capabilities within a small model is challenging. Nonetheless, we believe that the effectiveness of an RL-trained reasoning model relies on the inherent reasoning potential of the base model. To fully unlock the reasoning potential of language models, efforts must focus not only on post-training but also on pre-training strategies tailored to reasoning.
In this work, we present MiMo-7B, a series of models trained from scratch and born for reasoning tasks. Our RL experiments starting from MiMo-7B-Base show that the model possesses extraordinary reasoning potential, even surpassing much larger 32B models. Additionally, we perform RL training on a cold-started SFT model, resulting in MiMo-7B-RL, which demonstrates superior performance on both mathematics and code reasoning tasks, matching the performance of OpenAI o1-mini.
<p align="center">
<img width="80%" src="https://github.com/XiaomiMiMo/MiMo/raw/main/figures/curve.png?raw=true">
</p>
We open-source MiMo-7B series, including checkpoints of the base model, SFT model, RL model trained from base model, and RL model trained from the SFT model.
We believe this report along with the models will provide valuable insights to develop powerful reasoning LLMs that benefit the larger community.
### 🌟 Highlights
- **Pre-Training: Base Model Born for Reasoning**
  - We optimize the data preprocessing pipeline, enhancing text extraction toolkits and applying multi-dimensional data filtering to increase the density of reasoning patterns in the pre-training data. We also employ multiple strategies to generate a large volume of diverse synthetic reasoning data.
- We adopt a three-stage data mixture strategy for pre-training. Overall, MiMo-7B-Base is pre-trained on approximately 25 trillion tokens.
- We incorporate Multiple-Token Prediction as an additional training objective, which enhances model performance and accelerates inference.
- **Post-Training Recipe: Pioneering Reasoning Model**
- We curate 130K mathematics and code problems as RL training data, which can be verified by rule-based verifiers. Each problem undergoes careful cleaning and difficulty assessment to ensure quality. We employ only rule-based accuracy rewards to avoid potential reward hacking.
  - To mitigate the sparse reward issue for challenging code problems, we introduce a test-difficulty-driven code reward. By assigning fine-grained scores to test cases of varying difficulty, the policy can be optimized more effectively via a dense reward signal (see the sketch after this list).
- We implement a data re-sampling strategy for easy problems to enhance rollout sampling efficiency and stabilize policy updates, particularly in the later phases of RL training.
- **RL Infrastructure**
- We develop a Seamless Rollout Engine to accelerate RL training and validation. Our design integrates continuous rollout, asynchronous reward computation, and early termination to minimize GPU idle time, achieving $2.29\times$ faster training and $1.96\times$ faster validation.
- We support MTP in vLLM and enhance the robustness of the inference engine in the RL system.
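The report does not give the exact reward formula, so the following is only a minimal sketch of a difficulty-weighted code reward under our own assumptions: each test case carries a weight that grows with its difficulty, and the reward is the weighted fraction of passed cases rather than an all-or-nothing 0/1 signal.
```py
def code_reward(passed: list[bool], difficulty: list[float]) -> float:
    """Difficulty-weighted code reward (illustrative sketch, not MiMo's exact scheme).
    passed[i]     -- whether the solution passed test case i
    difficulty[i] -- difficulty weight of test case i (harder => larger)
    Returns a dense score in [0, 1], so partially correct solutions
    on hard problems still receive a learning signal.
    """
    total = sum(difficulty)
    if total == 0:
        return 0.0
    return sum(d for ok, d in zip(passed, difficulty) if ok) / total
# Example: passing only the two easiest of four test cases
print(code_reward([True, True, False, False], [1.0, 1.0, 2.0, 3.0]))  # ~0.29
```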
## II. Model Details
The MTP layers of MiMo-7B are tuned during pretraining and SFT and frozen during RL. With one MTP layer used for speculative decoding, the acceptance rate is about 90%.
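As a rough back-of-the-envelope estimate (our simplification, not a figure from the report): with a single MTP draft token and acceptance rate $p \approx 0.9$, each decoding step yields the verified token plus the accepted draft, i.e. about $1 + p \approx 1.9$ tokens per forward pass, before accounting for verification overhead.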
<p align="center">
<img width="80%" src="https://github.com/XiaomiMiMo/MiMo/raw/main/figures/architecture.png?raw=true">
</p>
> Models are available at [https://huggingface.co/XiaomiMiMo](https://huggingface.co/XiaomiMiMo) and [https://www.modelscope.cn/organization/XiaomiMiMo](https://www.modelscope.cn/organization/XiaomiMiMo)
| **Model** | **Description** | **Download (HuggingFace)** | **Download (ModelScope)** |
| :-------------: | :---------------------------------------------------------------------------: | :-------------------------------------------------------------------------------: | :-----------------------------------------------------------------------------------------: |
| MiMo-7B-Base | Base model with extraordinary reasoning potential | [🤗 XiaomiMiMo/MiMo-7B-Base](https://huggingface.co/XiaomiMiMo/MiMo-7B-Base) | [🤖️ XiaomiMiMo/MiMo-7B-Base](https://www.modelscope.cn/models/XiaomiMiMo/MiMo-7B-Base) |
| MiMo-7B-RL-Zero | RL model trained from base model | [🤗 XiaomiMiMo/MiMo-7B-RL-Zero](https://huggingface.co/XiaomiMiMo/MiMo-7B-RL-Zero) | [🤖️ XiaomiMiMo/MiMo-7B-RL-Zero](https://www.modelscope.cn/models/XiaomiMiMo/MiMo-7B-RL-Zero) |
| MiMo-7B-SFT | SFT model trained from base model | [🤗 XiaomiMiMo/MiMo-7B-SFT](https://huggingface.co/XiaomiMiMo/MiMo-7B-SFT) | [🤖️ XiaomiMiMo/MiMo-7B-SFT](https://www.modelscope.cn/models/XiaomiMiMo/MiMo-7B-SFT) |
| MiMo-7B-RL | RL model trained from SFT model, superior performance matching OpenAI o1-mini | [🤗 XiaomiMiMo/MiMo-7B-RL](https://huggingface.co/XiaomiMiMo/MiMo-7B-RL) | [🤖️ XiaomiMiMo/MiMo-7B-RL](https://www.modelscope.cn/models/XiaomiMiMo/MiMo-7B-RL) |
## III. Evaluation Results
| Benchmark | GPT-4o-0513 | Claude-3.5-Sonnet-1022 | OpenAI o1-mini | QwQ-32B-Preview | R1-Distill-Qwen-14B | R1-Distill-Qwen-7B | MiMo-7B-RL |
| ----------------------------- | :---------: | :--------------------: | :------------: | :-------------: | :-----------------: | :----------------: | :--------: |
| **General** | | | | | | | |
| GPQA Diamond<br/>(Pass@1) | 49.9 | 65.0 | 60.0 | 54.5 | 59.1 | 49.1 | 54.4 |
| SuperGPQA<br/>(Pass@1) | 42.4 | 48.2 | 45.2 | 43.6 | 40.6 | 28.9 | 40.5 |
| DROP<br/>(3-shot F1) | 83.7 | 88.3 | 83.9 | 71.2 | 85.5 | 77.0 | 78.7 |
| MMLU-Pro<br/>(EM) | 72.6 | 78.0 | 80.3 | 52.0 | 68.8 | 53.5 | 58.6 |
| IF-Eval<br/>(Prompt Strict) | 84.3 | 86.5 | 84.8 | 40.4 | 78.3 | 60.5 | 61.0 |
| **Mathematics** | | | | | | | |
| MATH-500<br/>(Pass@1) | 74.6 | 78.3 | 90.0 | 90.6 | 93.9 | 92.8 | 95.8 |
| AIME 2024<br/>(Pass@1) | 9.3 | 16.0 | 63.6 | 50.0 | 69.7 | 55.5 | 68.2 |
| AIME 2025<br/>(Pass@1) | 11.6 | 7.4 | 50.7 | 32.4 | 48.2 | 38.8 | 55.4 |
| **Code** | | | | | | | |
| LiveCodeBench v5<br/>(Pass@1) | 32.9 | 38.9 | 53.8 | 41.9 | 53.1 | 37.6 | 57.8 |
| LiveCodeBench v6<br/>(Pass@1) | 30.9 | 37.2 | 46.8 | 39.1 | 31.9 | 23.9 | 49.3 |
MiMo-7B series
| Benchmark | MiMo-7B-Base | MiMo-7B-RL-Zero | MiMo-7B-SFT | MiMo-7B-RL |
| ----------------------------- | :----------: | :-------------: | :---------: | :--------: |
| **Mathematics** | | | | |
| MATH500<br/>(Pass@1) | 37.4 | 93.6 | 93.0 | 95.8 |
| AIME 2024<br/>(Pass@1) | 32.9 | 56.4 | 58.7 | 68.2 |
| AIME 2025<br/>(Pass@1) | 24.3 | 46.3 | 44.3 | 55.4 |
| **Code** | | | | |
| LiveCodeBench v5<br/>(Pass@1) | 32.9 | 49.1 | 52.3 | 57.8 |
| LiveCodeBench v6<br/>(Pass@1) | 29.1 | 42.9 | 45.5 | 49.3 |
> [!IMPORTANT]
> The evaluations are conducted with `temperature=0.6`.
>
> AIME24 and AIME25 scores are averaged over 32 repetitions. LiveCodeBench v5 (20240801-20250201), LiveCodeBench v6 (20250201-20250501), GPQA-Diamond and IF-Eval scores are averaged over 8 repetitions. MATH500 and SuperGPQA are single-run results.
## IV. Deployment
### SGLang Inference
Thanks to the [contribution](https://github.com/sgl-project/sglang/pull/5921) from the SGLang team, MiMo was supported in mainline SGLang within 24 hours of release, with MTP support coming soon.
Example Script
```bash
# Install the latest SGLang from the main branch
python3 -m uv pip install "sglang[all] @ git+https://github.com/sgl-project/sglang.git/@main#egg=sglang&subdirectory=python"
# Launch SGLang Server
python3 -m sglang.launch_server --model-path XiaomiMiMo/MiMo-7B-SFT --host 0.0.0.0 --trust-remote-code
```
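Once the server is running, requests can be sent through SGLang's OpenAI-compatible endpoint. A minimal sketch, assuming the server listens on the default port 30000 (adjust `base_url` if you passed a different `--port`):
```py
from openai import OpenAI
# SGLang exposes an OpenAI-compatible API; the api_key is required but unused.
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="XiaomiMiMo/MiMo-7B-SFT",
    messages=[
        {"role": "system", "content": ""},  # empty system prompt, as recommended below
        {"role": "user", "content": "Write an essay about the importance of higher education."},
    ],
    temperature=0.6,
)
print(response.choices[0].message.content)
```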
Detailed usage can be found in the [SGLang documentation](https://docs.sglang.ai/backend/send_request.html). MTP will also be supported within 24 hours.
### vLLM inference
1. [Recommended] We officially support inference with MiMo-MTP using [our fork of vLLM](https://github.com/XiaomiMiMo/vllm/tree/feat_mimo_mtp_stable_073).
Example script
```py
from vllm import LLM, SamplingParams
model_path = "/path/to/MiMo"
llm = LLM(
model=model_path,
trust_remote_code=True,
num_speculative_tokens=1,
disable_log_stats=False
)
sampling_params = SamplingParams(temperature=0.6)
conversation = [
{
"role": "system",
"content": ""
},
{
"role": "user",
"content": "Write an essay about the importance of higher education.",
},
]
outputs = llm.chat(conversation,
sampling_params=sampling_params,
use_tqdm=False)
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
print("=" * 80)
```
2. Or, you can register a vLLM loader for MiMo without loading MTP parameters.
You can copy the [`registry/register_mimo_in_vllm.py`](https://github.com/XiaomiMiMo/MiMo/blob/main/registry/register_mimo_in_vllm.py) to your directory and import it with
```py
import register_mimo_in_vllm
from vllm import LLM, SamplingParams
model_path = "/path/to/MiMo"
llm = LLM(
model=model_path,
trust_remote_code=True,
# num_speculative_tokens=1,
disable_log_stats=False
)
sampling_params = SamplingParams(temperature=0.6)
```
### HuggingFace inference
Example script
```py
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "XiaomiMiMo/MiMo-7B-SFT"
# trust_remote_code is required because MiMo ships a custom model definition
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)
inputs = tokenizer(["Today is"], return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output.tolist()[0]))
```
### Recommended environment and prompts
- We recommend using [our fork of vLLM](https://github.com/XiaomiMiMo/vllm/tree/feat_mimo_mtp_stable_073) which is developed based on vLLM 0.7.3.
- We recommend using empty system prompt.
> We haven't verified MiMo with other inference engines and welcome contributions based on the model definition in the Huggingface repo 💻.
## V. Citation
```bibtex
@misc{coreteam2025mimounlockingreasoningpotential,
title={MiMo: Unlocking the Reasoning Potential of Language Model -- From Pretraining to Posttraining},
author={{Xiaomi LLM-Core Team}},
year={2025},
eprint={2505.07608},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.07608},
}
```
## VI. Contact
Please contact us at [[email protected]](mailto:[email protected]) or open an issue if you have any questions.
|
VXMCC/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bristly_stubby_gecko | VXMCC | "2025-05-13T04:26:37Z" | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am bristly stubby gecko",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-06T15:02:16Z" | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bristly_stubby_gecko
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am bristly stubby gecko
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bristly_stubby_gecko
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="VXMCC/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bristly_stubby_gecko", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
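The card does not include the training script itself; below is a minimal hedged sketch of a TRL GRPO setup. The dataset, reward function, and hyperparameters are placeholders for illustration, not the swarm's actual configuration.
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer
# Placeholder reward: favors longer completions (illustrative only).
def reward_len(completions, **kwargs):
    return [len(c) / 100.0 for c in completions]
dataset = load_dataset("trl-lib/tldr", split="train")  # placeholder dataset
trainer = GRPOTrainer(
    model="unsloth/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="qwen2.5-grpo", per_device_train_batch_size=2),
    train_dataset=dataset,
)
trainer.train()
```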
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Villanuevaaaaaaaaa/VanceCrina | Villanuevaaaaaaaaa | "2025-05-13T04:24:10Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-05-13T04:24:10Z" | ---
license: apache-2.0
---
|
abharadwaj123/skywork-2b-fine-tuned-vagueness-750-3 | abharadwaj123 | "2025-05-13T04:22:04Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-05-13T04:22:03Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/jaspionjader_-_Kosmos-EVAA-PRP-v30-8B-gguf | RichardErkhov | "2025-05-13T04:19:25Z" | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | "2025-05-13T03:33:59Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Kosmos-EVAA-PRP-v30-8B - GGUF
- Model creator: https://huggingface.co/jaspionjader/
- Original model: https://huggingface.co/jaspionjader/Kosmos-EVAA-PRP-v30-8B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Kosmos-EVAA-PRP-v30-8B.Q2_K.gguf](https://huggingface.co/RichardErkhov/jaspionjader_-_Kosmos-EVAA-PRP-v30-8B-gguf/blob/main/Kosmos-EVAA-PRP-v30-8B.Q2_K.gguf) | Q2_K | 2.96GB |
| [Kosmos-EVAA-PRP-v30-8B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/jaspionjader_-_Kosmos-EVAA-PRP-v30-8B-gguf/blob/main/Kosmos-EVAA-PRP-v30-8B.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [Kosmos-EVAA-PRP-v30-8B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/jaspionjader_-_Kosmos-EVAA-PRP-v30-8B-gguf/blob/main/Kosmos-EVAA-PRP-v30-8B.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [Kosmos-EVAA-PRP-v30-8B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/jaspionjader_-_Kosmos-EVAA-PRP-v30-8B-gguf/blob/main/Kosmos-EVAA-PRP-v30-8B.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [Kosmos-EVAA-PRP-v30-8B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/jaspionjader_-_Kosmos-EVAA-PRP-v30-8B-gguf/blob/main/Kosmos-EVAA-PRP-v30-8B.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [Kosmos-EVAA-PRP-v30-8B.Q3_K.gguf](https://huggingface.co/RichardErkhov/jaspionjader_-_Kosmos-EVAA-PRP-v30-8B-gguf/blob/main/Kosmos-EVAA-PRP-v30-8B.Q3_K.gguf) | Q3_K | 3.74GB |
| [Kosmos-EVAA-PRP-v30-8B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/jaspionjader_-_Kosmos-EVAA-PRP-v30-8B-gguf/blob/main/Kosmos-EVAA-PRP-v30-8B.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [Kosmos-EVAA-PRP-v30-8B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/jaspionjader_-_Kosmos-EVAA-PRP-v30-8B-gguf/blob/main/Kosmos-EVAA-PRP-v30-8B.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [Kosmos-EVAA-PRP-v30-8B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/jaspionjader_-_Kosmos-EVAA-PRP-v30-8B-gguf/blob/main/Kosmos-EVAA-PRP-v30-8B.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [Kosmos-EVAA-PRP-v30-8B.Q4_0.gguf](https://huggingface.co/RichardErkhov/jaspionjader_-_Kosmos-EVAA-PRP-v30-8B-gguf/blob/main/Kosmos-EVAA-PRP-v30-8B.Q4_0.gguf) | Q4_0 | 4.34GB |
| [Kosmos-EVAA-PRP-v30-8B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/jaspionjader_-_Kosmos-EVAA-PRP-v30-8B-gguf/blob/main/Kosmos-EVAA-PRP-v30-8B.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [Kosmos-EVAA-PRP-v30-8B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/jaspionjader_-_Kosmos-EVAA-PRP-v30-8B-gguf/blob/main/Kosmos-EVAA-PRP-v30-8B.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [Kosmos-EVAA-PRP-v30-8B.Q4_K.gguf](https://huggingface.co/RichardErkhov/jaspionjader_-_Kosmos-EVAA-PRP-v30-8B-gguf/blob/main/Kosmos-EVAA-PRP-v30-8B.Q4_K.gguf) | Q4_K | 4.58GB |
| [Kosmos-EVAA-PRP-v30-8B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/jaspionjader_-_Kosmos-EVAA-PRP-v30-8B-gguf/blob/main/Kosmos-EVAA-PRP-v30-8B.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [Kosmos-EVAA-PRP-v30-8B.Q4_1.gguf](https://huggingface.co/RichardErkhov/jaspionjader_-_Kosmos-EVAA-PRP-v30-8B-gguf/blob/main/Kosmos-EVAA-PRP-v30-8B.Q4_1.gguf) | Q4_1 | 4.78GB |
| [Kosmos-EVAA-PRP-v30-8B.Q5_0.gguf](https://huggingface.co/RichardErkhov/jaspionjader_-_Kosmos-EVAA-PRP-v30-8B-gguf/blob/main/Kosmos-EVAA-PRP-v30-8B.Q5_0.gguf) | Q5_0 | 5.21GB |
| [Kosmos-EVAA-PRP-v30-8B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/jaspionjader_-_Kosmos-EVAA-PRP-v30-8B-gguf/blob/main/Kosmos-EVAA-PRP-v30-8B.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [Kosmos-EVAA-PRP-v30-8B.Q5_K.gguf](https://huggingface.co/RichardErkhov/jaspionjader_-_Kosmos-EVAA-PRP-v30-8B-gguf/blob/main/Kosmos-EVAA-PRP-v30-8B.Q5_K.gguf) | Q5_K | 5.34GB |
| [Kosmos-EVAA-PRP-v30-8B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/jaspionjader_-_Kosmos-EVAA-PRP-v30-8B-gguf/blob/main/Kosmos-EVAA-PRP-v30-8B.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [Kosmos-EVAA-PRP-v30-8B.Q5_1.gguf](https://huggingface.co/RichardErkhov/jaspionjader_-_Kosmos-EVAA-PRP-v30-8B-gguf/blob/main/Kosmos-EVAA-PRP-v30-8B.Q5_1.gguf) | Q5_1 | 5.65GB |
| [Kosmos-EVAA-PRP-v30-8B.Q6_K.gguf](https://huggingface.co/RichardErkhov/jaspionjader_-_Kosmos-EVAA-PRP-v30-8B-gguf/blob/main/Kosmos-EVAA-PRP-v30-8B.Q6_K.gguf) | Q6_K | 6.14GB |
| [Kosmos-EVAA-PRP-v30-8B.Q8_0.gguf](https://huggingface.co/RichardErkhov/jaspionjader_-_Kosmos-EVAA-PRP-v30-8B-gguf/blob/main/Kosmos-EVAA-PRP-v30-8B.Q8_0.gguf) | Q8_0 | 7.95GB |
Original model description:
---
base_model:
- jaspionjader/Kosmos-EVAA-PRP-v29-8B
- jaspionjader/Kosmos-EVAA-gamma-light-8B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
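For reference, SLERP interpolates between two weight tensors along the great circle between them instead of linearly; the per-filter `t` values in the config below control how far the result moves from the base model toward the other endpoint, varying by layer and by module (self_attn vs. mlp). A minimal sketch of the core operation (simplified single-vector form; mergekit applies it tensor-by-tensor):
```python
import numpy as np
def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between flattened weight tensors a and b."""
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))  # angle between tensors
    if omega < eps:  # nearly parallel: fall back to plain linear interpolation
        return (1.0 - t) * a + t * b
    so = np.sin(omega)
    return np.sin((1.0 - t) * omega) / so * a + np.sin(t * omega) / so * b
```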
### Models Merged
The following models were included in the merge:
* [jaspionjader/Kosmos-EVAA-PRP-v29-8B](https://huggingface.co/jaspionjader/Kosmos-EVAA-PRP-v29-8B)
* [jaspionjader/Kosmos-EVAA-gamma-light-8B](https://huggingface.co/jaspionjader/Kosmos-EVAA-gamma-light-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: jaspionjader/Kosmos-EVAA-PRP-v29-8B
layer_range:
- 0
- 32
- model: jaspionjader/Kosmos-EVAA-gamma-light-8B
layer_range:
- 0
- 32
merge_method: slerp
base_model: jaspionjader/Kosmos-EVAA-PRP-v29-8B
parameters:
t:
- filter: self_attn
value:
- 0
- 0.5
- 0.3
- 0.7
- 1
- filter: mlp
value:
- 1
- 0.5
- 0.7
- 0.3
- 0
- value: 0.5
dtype: bfloat16
```
|
Boese0601/SeedMorpher | Boese0601 | "2025-05-13T04:12:26Z" | 0 | 0 | null | [
"en",
"dataset:Boese0601/SeedMorph-Bench-Test",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | null | "2025-05-13T03:12:27Z" | ---
license: other
license_name: flux.1-dev-non-commercial-license
license_link: LICENSE
datasets:
- Boese0601/SeedMorph-Bench-Test
language:
- en
base_model:
- black-forest-labs/FLUX.1-dev
---
[](https://huggingface.co/datasets/Boese0601/SeedMorph-Bench-Test)
[](https://huggingface.co/datasets/Boese0601/SeedMorph-Bench-Train-Demo)
[](https://huggingface.co/Boese0601/SeedMorpher)
[](https://github.com/Boese0601/SeedMorph)
|
iTroned/self_iterative_v1_targeted_iteration_4 | iTroned | "2025-05-13T04:09:59Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-05-12T00:28:13Z" | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: self_iterative_v1_targeted_iteration_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/itroned-ntnu/huggingface/runs/83gwcv2r)
# self_iterative_v1_targeted_iteration_4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9568
- Accuracy Targeted: 0.7222
- F1 Macro Targeted: 0.6052
- F1 Weighted Targeted: 0.6816
- F1 Macro Total: 0.6052
- F1 Weighted Total: 0.6816
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 1337
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy Targeted | F1 Macro Targeted | F1 Weighted Targeted | F1 Macro Total | F1 Weighted Total |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|:-----------------:|:--------------------:|:--------------:|:-----------------:|
| 0.7396 | 1.0 | 7593 | 2.4368 | 0.7133 | 0.5897 | 0.6698 | 0.5897 | 0.6698 |
| 0.5688 | 2.0 | 15186 | 2.9568 | 0.7222 | 0.6052 | 0.6816 | 0.6052 | 0.6816 |
| 0.3648 | 3.0 | 22779 | 3.7836 | 0.6867 | 0.5577 | 0.6426 | 0.5577 | 0.6426 |
| 0.2504 | 4.0 | 30372 | 3.8976 | 0.7222 | 0.5846 | 0.6696 | 0.5846 | 0.6696 |
| 0.1778 | 5.0 | 37965 | 4.6549 | 0.6933 | 0.5596 | 0.6459 | 0.5596 | 0.6459 |
| 0.1079 | 6.0 | 45558 | 4.4240 | 0.6978 | 0.5597 | 0.6474 | 0.5597 | 0.6474 |
| 0.0732 | 7.0 | 53151 | 4.5696 | 0.6778 | 0.5847 | 0.6546 | 0.5847 | 0.6546 |
### Framework versions
- Transformers 4.50.2
- Pytorch 2.6.0+cu124
- Datasets 3.0.1
- Tokenizers 0.21.1
|
JOSESMOKE/tear_519 | JOSESMOKE | "2025-05-13T04:09:36Z" | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | "2025-05-13T03:09:56Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Jorgeis1/babyllama-10midk | Jorgeis1 | "2025-05-13T04:07:32Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-06T19:11:23Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
taposblood/tapos | taposblood | "2025-05-13T04:07:26Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-05-13T04:07:26Z" | ---
license: apache-2.0
---
|
Syldehayem/train_bert_distill_base_20 | Syldehayem | "2025-05-13T04:06:32Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2025-05-13T02:29:38Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
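Since the card leaves this section blank, here is a minimal feature-extraction sketch using the standard 🤗 transformers API (it assumes the checkpoint loads as a plain DistilBERT encoder, which the repo tags suggest but do not guarantee):

```python
from transformers import AutoTokenizer, AutoModel
import torch

tokenizer = AutoTokenizer.from_pretrained("Syldehayem/train_bert_distill_base_20")
model = AutoModel.from_pretrained("Syldehayem/train_bert_distill_base_20")

inputs = tokenizer("Hello, world!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Last-layer hidden states: (batch, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)
```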
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
August42/Campbell | August42 | "2025-05-13T04:04:39Z" | 0 | 0 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | "2025-05-13T04:04:39Z" | ---
license: bigscience-bloom-rail-1.0
---
|
Lupe43/Gilbert | Lupe43 | "2025-05-13T04:04:39Z" | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
] | null | "2025-05-13T04:04:39Z" | ---
license: bigcode-openrail-m
---
|
DSFDGSG/dsad | DSFDGSG | "2025-05-13T04:04:31Z" | 0 | 0 | null | [
"license:cdla-sharing-1.0",
"region:us"
] | null | "2025-05-13T04:04:31Z" | ---
license: cdla-sharing-1.0
---
|
xuan-luo/MTPQwen3-0.6B | xuan-luo | "2025-05-13T04:03:49Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mtpqwen3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | text-generation | "2025-05-13T04:02:26Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
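The card leaves this section blank; the sketch below is an assumption based on the repo tags (`custom_code` implies the checkpoint ships its own modeling code, hence `trust_remote_code=True`):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("xuan-luo/MTPQwen3-0.6B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("xuan-luo/MTPQwen3-0.6B", trust_remote_code=True)

messages = [{"role": "user", "content": "Briefly introduce yourself."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```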
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
blackloverbd4/black | blackloverbd4 | "2025-05-13T04:03:48Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-05-13T04:03:48Z" | ---
license: apache-2.0
---
|
Aby003/bert-finetuned-ner | Aby003 | "2025-05-13T04:01:19Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2025-05-12T05:56:41Z" | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9378727634194831
- name: Recall
type: recall
value: 0.9527095254123191
- name: F1
type: f1
value: 0.9452329270328937
- name: Accuracy
type: accuracy
value: 0.9871519397186084
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0572
- Precision: 0.9379
- Recall: 0.9527
- F1: 0.9452
- Accuracy: 0.9872
## Model description
More information needed
## Intended uses & limitations
More information needed
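The card includes no usage snippet; a minimal inference sketch with the token-classification pipeline (the label set follows CoNLL-2003: PER, ORG, LOC, and MISC):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Aby003/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-token predictions into entity spans
)
print(ner("Hugging Face is based in New York City."))
# Expected: an ORG span for "Hugging Face" and a LOC span for "New York City".
```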
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0757 | 1.0 | 1756 | 0.0604 | 0.9082 | 0.9391 | 0.9234 | 0.9833 |
| 0.0341 | 2.0 | 3512 | 0.0645 | 0.9305 | 0.9465 | 0.9384 | 0.9854 |
| 0.0212 | 3.0 | 5268 | 0.0572 | 0.9379 | 0.9527 | 0.9452 | 0.9872 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
connorblack/qwen3-4b-4bit-curated-lora-2-epoch | connorblack | "2025-05-13T03:59:20Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-05-13T03:21:08Z" | ---
base_model: unsloth/qwen3-4b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** connorblack
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-4b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
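A hedged loading sketch for this checkpoint (whether the repo holds a merged model or only LoRA adapter weights is not stated, so the Unsloth path below is an assumption):

```python
from unsloth import FastLanguageModel

# Assumption: the repo is loadable via FastLanguageModel; if it is a merged
# checkpoint, plain transformers loading would also work.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="connorblack/qwen3-4b-4bit-curated-lora-2-epoch",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable fast inference mode
```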
|
infojolnupur/jolnupur | infojolnupur | "2025-05-13T03:55:16Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-05-13T03:55:16Z" | ---
license: apache-2.0
---
|
ThomasTheMaker/Qwen3-1.7B-Alpaca-demo-en-Instruct | ThomasTheMaker | "2025-05-13T03:54:09Z" | 0 | 0 | null | [
"safetensors",
"license:apache-2.0",
"region:us"
] | null | "2025-05-13T03:51:25Z" | ---
license: apache-2.0
---
|
ma921/qwen2.5_r_dpo_golden-hh_noise10_epoch3_gamma2 | ma921 | "2025-05-13T03:52:39Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:ma921/qwen-2.5-sft-golden-hh",
"base_model:finetune:ma921/qwen-2.5-sft-golden-hh",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-13T03:51:04Z" | ---
library_name: transformers
license: apache-2.0
base_model: ma921/qwen-2.5-sft-golden-hh
tags:
- generated_from_trainer
model-index:
- name: qwen2.5_r_dpo_golden-hh_noise10_epoch3_gamma2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen2.5_r_dpo_golden-hh_noise10_epoch3_gamma2
This model is a fine-tuned version of [ma921/qwen-2.5-sft-golden-hh](https://huggingface.co/ma921/qwen-2.5-sft-golden-hh) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 64
- total_train_batch_size: 256
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
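The card does not say which trainer produced these numbers; below is a hedged sketch of how they would map onto TRL's `DPOConfig`. The "gamma2" in the repo name is not a standard `DPOConfig` field and is deliberately left out rather than guessed at.

```python
from trl import DPOConfig

# Sketch only: mapping the listed hyperparameters onto TRL's DPOConfig.
config = DPOConfig(
    output_dir="qwen2.5_r_dpo_golden-hh",
    learning_rate=1e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=64,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,  # "Native AMP" mixed precision
)
```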
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
nomeyslf/may_13_o2 | nomeyslf | "2025-05-13T03:51:16Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-13T03:45:29Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
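The card leaves this section blank; a minimal causal-LM sketch, assuming the checkpoint loads with the standard Phi-3 classes that the repo tags indicate:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("nomeyslf/may_13_o2")
model = AutoModelForCausalLM.from_pretrained("nomeyslf/may_13_o2")

prompt = "Explain LoRA in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```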
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
xuan1228/313706034_hw4 | xuan1228 | "2025-05-13T03:49:50Z" | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | "2025-05-12T17:38:01Z" | # Chinese Multiple-Choice QA Model (with Reasoning)
## Model Information
- Base model: Qwen/Qwen2.5-7B-Instruct
- Fine-tuning method: LoRA (Low-Rank Adaptation)
- Training data: Chinese multiple-choice QA (with reasoning traces)
- Training date: 2025-05-13
## Usage
This model has been fine-tuned to analyze Chinese multiple-choice questions, provide a reasoning process, and select the most appropriate answer (option A, B, C, or D).
## Training Parameters
- LoRA rank: 16
- LoRA alpha: 32
- Learning rate: 0.0003
- Epochs: 1