Update README.md
The JavaCoder models are 1B parameter models trained on 80+ programming languages.
- **Project Website:**
- **Paper:**
- **Point of Contact:**
- **Languages:** 80+ Programming languages
## Use

### Intended use

The model was trained on GitHub code. As such it is _not_ an instruction model, and commands like "Write a function that computes the square root." do not work well. However, by using the [Tech Assistant prompt](https://huggingface.co/datasets/bigcode/ta-prompt) you can turn it into a capable technical assistant.
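
A minimal sketch of that workflow, assuming you have saved the prompt text from the dataset locally as `ta_prompt.txt` and reusing the `tokenizer`, `model`, and `device` set up in the Generation snippet below (the `Human:`/`Assistant:` turn format is also an assumption, not something this card specifies):

```python
# Hypothetical sketch, not part of the original card: prepend the Tech
# Assistant prompt to a question. Assumes ta_prompt.txt holds the prompt
# text and that tokenizer/model/device are defined as in the Generation
# snippet below.
with open("ta_prompt.txt") as f:
    ta_prompt = f.read()

question = "Write a function that computes the square root."
prompt = f"{ta_prompt}\nHuman: {question}\nAssistant:"

inputs = tokenizer.encode(prompt, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0]))
```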
**Feel free to share your generations in the Community tab!**
### Generation
```python
# pip install -q transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "infosys/javacoder-1b"
device = "cuda"  # for GPU usage, or "cpu" for CPU usage

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

inputs = tokenizer.encode("public class HelloWorld {\n public static void main(String[] args) {", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
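With no arguments, `generate` falls back to the model's default generation settings, which usually cap the output at a short continuation. The standard `transformers` generation arguments apply if you want longer or sampled completions (the values below are illustrative, not tuned for this model):

```python
# Illustrative values only; all arguments are standard transformers
# generate() parameters, not settings recommended by this card.
outputs = model.generate(
    inputs,
    max_new_tokens=64,  # continue for up to 64 tokens beyond the prompt
    do_sample=True,     # sample instead of greedy decoding
    temperature=0.2,    # low temperature keeps code completions focused
    top_p=0.95,         # nucleus sampling cutoff
)
print(tokenizer.decode(outputs[0]))
```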
### Fill-in-the-middle

Fill-in-the-middle uses special tokens to identify the prefix, middle, and suffix parts of the input and output:

```python
input_text = "<fim_prefix>public class HelloWorld {\n public static void main(String[] args) {<fim_suffix>}\n}<fim_middle>"
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
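The generated middle appears after the `<fim_middle>` token in the decoded output. A hypothetical helper (not from this card) to splice it back between the original prefix and suffix, assuming the sentinel tokens survive decoding:

```python
# Hypothetical helper: rebuild the full source by inserting the generated
# middle between the original prefix and suffix. Assumes <fim_middle> is
# still present in the decoded text.
def splice_fim(decoded: str, prefix: str, suffix: str) -> str:
    middle = decoded.split("<fim_middle>")[-1]
    middle = middle.replace("<|endoftext|>", "")  # drop EOS if the tokenizer emits one
    return prefix + middle + suffix

prefix = "public class HelloWorld {\n public static void main(String[] args) {"
suffix = "}\n}"
print(splice_fim(tokenizer.decode(outputs[0]), prefix, suffix))
```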
|