pratham-pi committed on
Commit c677009 · verified · 1 Parent(s): fb070a5

Update README.md

Files changed (1)
  1. README.md +16 -16
README.md CHANGED
@@ -47,10 +47,9 @@ widget:
  ![image/png](https://cdn-uploads.huggingface.co/production/uploads/658d8095a2a6a6e0da8bb8a6/NuTFBTMAsPgFwMxCjdqFv.png)
 
 
- Given a goal and tools can Ai intelligently use the tools to reach the goal ?
- What if it has a meagre 1.3b params/neurons akin to that of an owl ?
- Can it follow instructions and plan to reach a goal ?
- Apparently it can.
+ Given a goal and tools, can AI intelligently use the tools to reach the goal?
+ What if it has a meagre 1.3b params/neurons akin to that of an owl? Can it follow instructions and plan to reach a goal?
+ Apparently it can!
  Releasing `pip-code-bandit` and `pipflow`
  -- a model and a library to manage and run goal oriented agentic system.
 
@@ -61,17 +60,18 @@ Releasing `pip-code-bandit` and `pipflow`
  -- number of params ~ 1.3b [2.9 Gb GPU memory footprint]
  -- sequence length ~ 16.3k [Can go higher but will show performance degradation]
  -- license - apache 2.0
+ -- instruction following , RL tuned.
  -- tasks:
- 1. complex planning of sequential function calls with right params to accomplish a goal | a list of callables
- 2. function calling | doc or code and goal
- 3. code generation | plan and goal
- 4. code generation | goal
- 5. doc generation | code
- 6. code generation | doc
- 7. file recreated in json | any raw data
- 8. corrected generation | new instruction with error
+ 1. complex planning(plan) of sequential function calls | a list of callables and goal
+ 2. corrected plan | feedback instructions with error
+ 3. function calling | doc or code and goal
+ 4. code generation | plan and goal
+ 5. code generation | goal
+ 6. doc generation | code
+ 7. code generation | doc
+ 8. file parsed to json | any raw data
+ 9. sql generation | schema, question, instructions and examples
 
- -- instruction following , RL tuned.
  ```
 
 
@@ -93,7 +93,7 @@ complete open sourced - apache 2.0. License
  ### NOTE:
 
 
- If you wish to try this model without utilizing your GPU, we have hosted the model on our end. To execute the library using the hosted playground model, initialize the generator as shown below:
+ If you wish to try this model without utilizing your GPU, we have hosted the model on our end. To execute the library using the hosted model, initialize the generator as shown below:
 
  ```bash
  pip3 install git+https://github.com/PipableAI/pipflow.git
@@ -137,11 +137,11 @@ For detailed usage refer to the [colab_notebook](https://colab.research.google.c
  ### Model Use
 
  ```bash
- pip install transformers
- pip install accelerate
+ pip install transformers accelerate torch
  ```
 
  ```python
+ import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer
  from accelerate import Accelerator
  model =AutoModelForCausalLM.from_pretrained("PipableAI/pip-code-bandit",torch_dtype=torch.bfloat16,device_map="auto")
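
The `### Model Use` hunk above ends at the model load, so the generation step is not visible in this diff. As a minimal sketch only (not part of the commit), the snippet could be continued roughly as follows; the prompt string is purely illustrative, since the README's actual prompt templates are defined outside the hunks shown here:

```python
# Sketch of continuing the README's Model Use snippet; assumes a plain text
# prompt is acceptable -- the model's documented prompt template is not shown
# in this diff and may differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "PipableAI/pip-code-bandit",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("PipableAI/pip-code-bandit")

# Illustrative goal only, not the documented template.
prompt = "Write a python function to download a file from a url."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=300)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```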