Update README.md
README.md
CHANGED
@@ -47,10 +47,9 @@ widget:
 
 
 
-Given a goal and tools can
-What if it has a meagre 1.3b params/neurons akin to that of an owl ?
-
-Apparently it can.
+Given a goal and tools, can AI intelligently use the tools to reach the goal?
+What if it has a meagre 1.3b params/neurons akin to that of an owl? Can it follow instructions and plan to reach a goal?
+Apparently it can!
 Releasing `pip-code-bandit` and `pipflow`
 -- a model and a library to manage and run goal oriented agentic system.
 
@@ -61,17 +60,18 @@ Releasing `pip-code-bandit` and `pipflow`
 -- number of params ~ 1.3b [2.9 Gb GPU memory footprint]
 -- sequence length ~ 16.3k [Can go higher but will show performance degradation]
 -- license - apache 2.0
+-- instruction following , RL tuned.
 -- tasks:
-1. complex planning of sequential function calls
-2.
-3.
-4. code generation | goal
-5.
-6.
-7.
-8.
+1. complex planning(plan) of sequential function calls | a list of callables and goal
+2. corrected plan | feedback instructions with error
+3. function calling | doc or code and goal
+4. code generation | plan and goal
+5. code generation | goal
+6. doc generation | code
+7. code generation | doc
+8. file parsed to json | any raw data
+9. sql generation | schema, question, instructions and examples
 
--- instruction following , RL tuned.
 ```
 
 
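As a quick sanity check on the quoted GPU footprint in this hunk, a rough back-of-the-envelope calculation (an assumption on my part: the ~1.3b parameters held in bfloat16 at 2 bytes each, with the remainder attributed to runtime overhead):

```python
# Rough sanity check of the quoted ~2.9 GB GPU footprint (assumes bfloat16 weights).
n_params = 1.3e9          # ~1.3b parameters
bytes_per_param = 2       # bfloat16
weights_gb = n_params * bytes_per_param / 1e9
print(f"weights alone: ~{weights_gb:.1f} GB")  # ~2.6 GB; the quoted 2.9 GB includes runtime overhead

# After loading the model (see the Model Use hunk below), the exact count can be verified with:
# n_params = sum(p.numel() for p in model.parameters())
```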
@@ -93,7 +93,7 @@ complete open sourced - apache 2.0. License
 ### NOTE:
 
 
-If you wish to try this model without utilizing your GPU, we have hosted the model on our end. To execute the library using the hosted
+If you wish to try this model without utilizing your GPU, we have hosted the model on our end. To execute the library using the hosted model, initialize the generator as shown below:
 
 ```bash
 pip3 install git+https://github.com/PipableAI/pipflow.git
@@ -137,11 +137,11 @@ For detailed usage refer to the [colab_notebook](https://colab.research.google.c
 ### Model Use
 
 ```bash
-pip install transformers
-pip install accelerate
+pip install transformers accelerate torch
 ```
 
 ```python
+import torch
 from transformers import AutoModelForCausalLM, AutoTokenizer
 from accelerate import Accelerator
 model =AutoModelForCausalLM.from_pretrained("PipableAI/pip-code-bandit",torch_dtype=torch.bfloat16,device_map="auto")
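The updated snippet above stops at loading the model. For completeness, a minimal continuation using only standard `transformers` generation calls might look like the sketch below; the prompt string is a placeholder of my own, not the model's prescribed task format (the task-specific prompt templates are documented in the model card and `pipflow`):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PipableAI/pip-code-bandit"
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Placeholder prompt -- the task-specific prompt formats are defined by the model card,
# not invented here.
prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=256)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```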