Update README.md
README.md
Under Download Model, you can enter the model repo: infosys/NT-Java-1.1B-GGUF and
Then click Download.
## How to use with Ollama
### Building from `Modelfile`
Assuming that you have already downloaded GGUF files, here is how you can use them with [Ollama](https://ollama.com/):
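
For reference, an Ollama Modelfile is a plain-text build recipe; the `Modelfile_q4` fetched in step 1 essentially points Ollama at the GGUF weights. A minimal sketch of what such a file contains (the GGUF filename here is an assumption for illustration; the downloaded `Modelfile_q4` is the authoritative version):

```
# Illustrative Modelfile sketch; the filename is an assumption.
FROM ./Phi-3-mini-4k-instruct-q4.gguf
```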
1. **Get the Modelfile:**

   ```
   huggingface-cli download microsoft/Phi-3-mini-4k-instruct-gguf Modelfile_q4 --local-dir /path/to/your/local/dir
   ```

2. **Build the Ollama model:**

   Use the Ollama CLI to create your model with the following command:

   ```
   ollama create phi3 -f Modelfile_q4
   ```

3. **Run the *phi3* model:**

   Now you can run the Phi-3-Mini-4k-Instruct model with Ollama using the following command:

   ```
   ollama run phi3 "Your prompt here"
   ```

   Replace "Your prompt here" with the actual prompt you want to use for generating responses from the model.
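
The three CLI steps above can also be scripted. A minimal Python sketch using the standard library's `subprocess` (the repo ID and commands come from the walkthrough above; the local directory name is an assumption), which only attempts execution when both CLIs are actually installed:

```python
import shutil
import subprocess

MODEL_REPO = "microsoft/Phi-3-mini-4k-instruct-gguf"  # repo from step 1
LOCAL_DIR = "./phi3-gguf"  # assumption: any writable local directory

# The walkthrough's three commands, expressed as argument lists.
download_cmd = ["huggingface-cli", "download", MODEL_REPO,
                "Modelfile_q4", "--local-dir", LOCAL_DIR]
create_cmd = ["ollama", "create", "phi3", "-f", "Modelfile_q4"]
run_cmd = ["ollama", "run", "phi3", "Your prompt here"]

def run_walkthrough():
    """Run the three steps, skipping gracefully if a CLI is missing."""
    if not (shutil.which("huggingface-cli") and shutil.which("ollama")):
        return False  # one of the tools is not installed
    subprocess.run(download_cmd, check=True)
    # Run `ollama create` from the download directory so Modelfile_q4 resolves.
    subprocess.run(create_cmd, check=True, cwd=LOCAL_DIR)
    subprocess.run(run_cmd, check=True)
    return True
```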
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library: