Transformers
GGUF
llama
doberst committed
Commit 1ce4968 · verified · 1 Parent(s): c0d3b45

Upload README.md

Files changed (1)
  1. README.md +15 -19
README.md CHANGED
@@ -6,24 +6,29 @@ license: apache-2.0

<!-- Provide a quick summary of what the model is/does. -->

- **slim-sentiment-tool** is part of the SLIM ("Structured Language Instruction Model") model series, providing a set of small, specialized decoder-based LLMs, fine-tuned for function-calling.
+ **slim-ner-tool** is part of the SLIM ("Structured Language Instruction Model") model series, providing a set of small, specialized decoder-based LLMs, fine-tuned for function-calling.

- slim-sentiment-tool is a 4_K_M quantized GGUF version of slim-sentiment-tool, providing a fast, small inference implementation.
+ slim-ner-tool is a 4_K_M quantized GGUF version of slim-ner, providing a small, fast inference implementation.

- Load in your favorite GGUF inference engine, or try with llmware as follows:
+ Load in your favorite GGUF inference engine (see details in config.json to set up the prompt template), or try with llmware as follows:

  from llmware.models import ModelCatalog

- sentiment_tool = ModelCatalog().load_model("llmware/slim-sentiment-tool")
- response = sentiment_tool.function_call(text_sample, params=["sentiment"], function="classify")
+ # to load the model and make a basic inference
+ ner_tool = ModelCatalog().load_model("slim-ner-tool")
+ response = ner_tool.function_call(text_sample)

- Slim models can also be loaded even more simply as part of LLMfx calls:
+ # this one line will download the model and run a series of tests
+ ModelCatalog().test_run("slim-ner-tool", verbose=True)
+
+
+ Slim models can also be loaded even more simply as part of a multi-model, multi-step LLMfx calls:

  from llmware.agents import LLMfx

  llm_fx = LLMfx()
- llm_fx.load_tool("sentiment")
- response = llm_fx.sentiment(text)
+ llm_fx.load_tool("ner")
+ response = llm_fx.named_entity_extraction(text)


  ### Model Description
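
For illustration only, the lines added in this hunk combine into roughly the following end-to-end snippet; the sample passage and the print statements are assumptions for demonstration and are not part of the commit:

    # illustrative sketch based on the added README lines above;
    # the sample text and print calls are assumptions, not from the commit
    from llmware.models import ModelCatalog
    from llmware.agents import LLMfx

    text_sample = ("NASA announced on Tuesday that astronaut Jessica Watkins "
                   "will join the next mission to the International Space Station.")

    # load the model directly and make a basic function call
    ner_tool = ModelCatalog().load_model("slim-ner-tool")
    response = ner_tool.function_call(text_sample)
    print("function_call output: ", response)

    # or use the agent-style LLMfx interface
    llm_fx = LLMfx()
    llm_fx.load_tool("ner")
    agent_response = llm_fx.named_entity_extraction(text_sample)
    print("agent output: ", agent_response)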
@@ -40,19 +45,10 @@ Slim models can also be loaded even more simply as part of LLMfx calls:

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

- The intended use of SLIM models is to re-imagine traditional 'hard-coded' classifiers through the use of function calls.
+ SLIM models provide a fast, flexible, intuitive way to integrate classifiers and structured function calls into RAG and LLM application workflows.

- Example:
+ Model instructions, details and test samples have been packaged into the config.json file in the repository, along with the GGUF file.

- text = "The stock market declined yesterday as investors worried increasingly about the slowing economy."
-
- model generation - {"sentiment": ["negative"]}
-
- keys = "sentiment"
-
- All of the SLIM models use a novel prompt instruction structured as follows:
-
- "<human> " + text + "<classify> " + keys + "</classify>" + "/n<bot>: "


  ## Model Card Contact
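
The removed lines in this hunk spell out the prompt structure that the earlier README documented explicitly. As a rough sketch, and assuming the "/n" in the original text was intended as a newline ("\n"), the prompt for the quoted sentiment example would be assembled like this:

    # sketch of the prompt structure described in the removed README lines;
    # assumes "/n" in the original was meant to be a newline ("\n")
    text = "The stock market declined yesterday as investors worried increasingly about the slowing economy."
    keys = "sentiment"

    prompt = "<human> " + text + "<classify> " + keys + "</classify>" + "\n<bot>: "
    print(prompt)

    # expected model generation, per the removed example: {"sentiment": ["negative"]}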