Dataset schema (column | type | observed range):
title | string | 1-300 chars
score | int64 | 0-8.54k
selftext | string | 0-40k chars
created | timestamp[ns] | 2023-04-01 04:30:41 - 2025-06-30 03:16:29 (some null)
url | string | 0-878 chars
author | string | 3-20 chars
domain | string | 0-82 chars
edited | timestamp[ns] | 1970-01-01 00:00:00 - 2025-06-26 17:30:18
gilded | int64 | 0-2
gildings | string | 7 classes
id | string | 7 chars
locked | bool | 2 classes
media | string | 646-1.8k chars (some null)
name | string | 10 chars
permalink | string | 33-82 chars
spoiler | bool | 2 classes
stickied | bool | 2 classes
thumbnail | string | 4-213 chars
ups | int64 | 0-8.54k
preview | string | 301-5.01k chars (some null)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Multi-bot chat with fictional characters
| 1 |
[removed]
| 2023-09-04T14:58:26 |
https://www.reddit.com/r/LocalLLaMA/comments/169u7vi/multibot_chat_with_fictional_characters/
|
justynasty
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
169u7vi
| false | null |
t3_169u7vi
|
/r/LocalLLaMA/comments/169u7vi/multibot_chat_with_fictional_characters/
| false | false | 1 | null |
|
Code interpreter/data analysis via open source LLM?
| 1 |
[removed]
| 2023-09-04T15:43:17 |
https://www.reddit.com/r/LocalLLaMA/comments/169vduc/code_interpreterdata_analysis_via_open_source_llm/
|
alvintsoiwc
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
169vduc
| false | null |
t3_169vduc
|
/r/LocalLLaMA/comments/169vduc/code_interpreterdata_analysis_via_open_source_llm/
| false | false |
self
| 1 | null |
Since a lot of people took interest in my project, I'm uploading a screenshot of Llama 2 Chat 13B running locally, using Google Search to provide answers.
| 1 | 2023-09-04T16:18:53 |
akbbiswas
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
169wbto
| false | null |
t3_169wbto
|
/r/LocalLLaMA/comments/169wbto/since_lot_of_people_took_interest_in_my_project/
| false | false | 1 |
|
|||
Building an agent using an open llm
| 1 |
Hi, I am trying to build a very simple agent as a POC, with the end goal of summarizing a document retrieved from the web (like a paper from arXiv). I have been using the ReAct format (I think?), where it looks like:
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
You have access to the following tools:
Search: useful for when you need to answer questions about current events. You should ask targeted questions.
Webpage: useful for when you need specifics after a search using the Source. Use the supplied source url.
You will receive a message from the human, then you should start a loop and do one of two things
Option 1: You use a tool to answer the question.
For this, you should use the following format:
===RESPONSE====
{
"Thought": "you should always think about what to do",
"Action": "the action to take, should be one of [Search]",
"Action Input": "the input to the action, to be sent to the tool"
}
After this, the human will respond with an observation, and you will continue until you are ready to answer.
Option 2: You respond to the human.
For this, you should use the following format:
===RESPONSE====
{
"Action": "Response To Human",
"Action Input": "your response to the human, summarizing what you did and what you learned"
}
So the model actually does search and can come up with an answer, but sometimes it gets stuck in a loop (probably because the search results are low quality). Is this the correct way to interact with the model for this purpose? I find that the coder models seem to follow the output format more closely; has anyone else experienced this?
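For reference, the loop described above can be sketched in plain Python with a stubbed model and search tool. Everything here is illustrative (the fake model, the fake search results, the function names are not from any specific framework); the point is the parse-act-observe cycle and the step cap that prevents the stuck-in-a-loop failure mode:

```python
import json

def fake_llm(history):
    """Stand-in for a real model call; emits the JSON action format above."""
    if not any("Observation" in m for m in history):
        return json.dumps({
            "Thought": "I should search for the paper first.",
            "Action": "Search",
            "Action Input": "arxiv attention is all you need summary",
        })
    return json.dumps({
        "Action": "Response To Human",
        "Action Input": "The paper introduces the Transformer architecture.",
    })

def fake_search(query):
    """Stand-in for a real search tool."""
    return "Top result: 'Attention Is All You Need' (Vaswani et al., 2017)."

def run_agent(question, max_steps=5):
    history = [f"Human: {question}"]
    # Cap iterations so a confused model can't loop forever on bad results.
    for _ in range(max_steps):
        step = json.loads(fake_llm(history))
        if step["Action"] == "Response To Human":
            return step["Action Input"]
        if step["Action"] == "Search":
            observation = fake_search(step["Action Input"])
            history.append(f"Observation: {observation}")
    return "Gave up after max_steps."

print(run_agent("Summarize the Transformer paper."))
```

The loop only terminates when the model emits the "Response To Human" action or the step budget runs out, which is exactly the two exits the prompt format implies.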
| 2023-09-04T16:46:07 |
https://www.reddit.com/r/LocalLLaMA/comments/169x1ud/building_an_agent_using_an_open_llm/
|
klop2031
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
169x1ud
| false | null |
t3_169x1ud
|
/r/LocalLLaMA/comments/169x1ud/building_an_agent_using_an_open_llm/
| false | false |
self
| 1 | null |
4x3060 testing
| 1 |
Tested a rig today on vast.ai with 4x3060.
I didn't have much time, so I haven't tuned the GPU split properly to measure max context, but it seems it's not spread evenly between GPUs...
In the EU you can get 4x3060 with a 24-month guarantee for $1300.
2x3090, used, with a 6-month guarantee (not official), costs $1900.
1x4090 new costs $2000.
Take my results with a grain of salt, but it seems it's only worth buying 3060s if:
you get two to run 13B models in 8-bit
you get four to run 34B models in 8-bit (this looks nice)
and 10 tps+ is enough for you
I don't think 4.5 tps on a 70B is worth it, since some people get that with a single 3090 plus CPU.
Having one 3090 gives so much better speed on smaller models...
I was hoping for a slightly better 70B result, so I will end up getting another 3090 without telling my wife :D
However! 34B in 8-bit is GOOD; if I couldn't afford another 3090, I would probably sell the first one and stack 3060s instead.
examples:
Lemur 70b, ctx max 512 (?)
exllama
gpu split 10,10,10,10
4.45 tok/s
codellama 13b gptq (wizardcoder python) ctx max 4k?
exllama
24.30 tokens/s,
3,3,3 gpu split allows for 16k context?
wizardcoder-python-34b-v1.0.Q4_K_M.gguf
llama.cpp
all layers not split config
8.79 tokens/s,
10,10,10,10
13.12 tokens/s,
34b q8
11.83 tokens/s,
llama2 13b q8
16.56 tokens/s - 19.31 tokens/s
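Putting the poster's quoted prices and speeds side by side, a rough tokens-per-second-per-dollar calculation (my arithmetic, not the poster's) supports the "34B q8 is the sweet spot" conclusion:

```python
# Rough value check from the numbers quoted above: (config, price in USD, tok/s).
configs = [
    ("4x3060, 34B q8", 1300, 11.83),
    ("4x3060, 70B",    1300, 4.45),
]
for name, price, tps in configs:
    per_kusd = tps / price * 1000  # throughput normalized per $1000 spent
    print(f"{name}: {per_kusd:.2f} tok/s per $1000")
# 34B q8 comes out to ~9.10 tok/s per $1000, vs ~3.42 for the 70B run.
```

This ignores power, resale value, and the single-3090 small-model speedup the poster mentions, so it is only a first-order comparison.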
| 2023-09-04T17:07:20 |
https://www.reddit.com/r/LocalLLaMA/comments/169xlzb/4x3060_testing/
|
kpodkanowicz
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
169xlzb
| false | null |
t3_169xlzb
|
/r/LocalLLaMA/comments/169xlzb/4x3060_testing/
| false | false |
self
| 1 | null |
llama2 GPU speedup
| 1 |
How can I use the GPU to run Llama 2 faster? Using the CPU is too slow when I want to do a lot of repetitive summarization over data. I have tried the llama-cpp-python library and done everything I can find to use my 4 GB NVIDIA T1200 GPU, but it is not working. Would ctransformers be a good option? Is it fast?
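For what it's worth, llama-cpp-python exposes GPU offload through the `n_gpu_layers` parameter. The sketch below shows the relevant knobs; the model path is hypothetical, and the library must be compiled with CUDA/cuBLAS support for the offload to have any effect:

```python
# Sketch of partial GPU offload with llama-cpp-python. With only 4 GB of
# VRAM, offloading a subset of a quantized 7B model's layers is realistic;
# raise n_gpu_layers until VRAM is nearly full, then back off.
gpu_config = {
    "model_path": "./llama-2-7b-chat.Q4_K_M.gguf",  # hypothetical local file
    "n_gpu_layers": 20,  # 0 = CPU only; higher = more layers in VRAM
    "n_ctx": 2048,
}
try:
    from llama_cpp import Llama
    llm = Llama(**gpu_config)  # only accelerates if built with CUDA support
except Exception:
    llm = None  # library missing or model file absent; the dict shows the knobs
```

If a pip-installed wheel was built CPU-only, `n_gpu_layers` is silently ignored, which matches the "not working" symptom described; reinstalling with the CUDA build flags is the usual fix.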
| 2023-09-04T17:34:50 |
https://www.reddit.com/r/LocalLLaMA/comments/169yc3f/llama2_gpu_speedup/
|
PhantomLord06
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
169yc3f
| false | null |
t3_169yc3f
|
/r/LocalLLaMA/comments/169yc3f/llama2_gpu_speedup/
| false | false |
self
| 1 | null |
Split model matrices
| 1 |
[removed]
| 2023-09-04T17:43:35 |
https://www.reddit.com/r/LocalLLaMA/comments/169ykh7/split_model_matrices/
|
FootballDoc
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
169ykh7
| false | null |
t3_169ykh7
|
/r/LocalLLaMA/comments/169ykh7/split_model_matrices/
| false | false |
self
| 1 | null |
Method to split model matrices
| 1 |
[removed]
| 2023-09-04T17:44:46 |
https://www.reddit.com/r/LocalLLaMA/comments/169ylj0/method_to_split_model_matrices/
|
FootballDoc
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
169ylj0
| false | null |
t3_169ylj0
|
/r/LocalLLaMA/comments/169ylj0/method_to_split_model_matrices/
| false | false |
self
| 1 | null |
🚀We trained a new 1.6B-parameter code model that reaches 32% HumanEval and is SOTA for its size
| 1 |
We’ve finished training a new code model, Refact LLM, which took us about a month. The main use case is blazing-fast code completion with fill-in-the-middle; additionally, the model can reply to chat prompts.
It has much better performance than all code models of similar size, and almost reaches the same HumanEval score as StarCoder while being 10x smaller.
Thanks to the small size, it can work with most modern GPUs, requiring just 3 GB of memory.
You can try self-hosting it in Refact [https://github.com/smallcloudai/refact/](https://github.com/smallcloudai/refact/) and get a local fast copilot alternative with decent suggestions.
Weights and model card: [https://huggingface.co/smallcloudai/Refact-1_6B-fim](https://huggingface.co/smallcloudai/Refact-1_6B-fim).
We would love to hear your feedback!
| 2023-09-04T17:48:01 |
https://www.reddit.com/r/LocalLLaMA/comments/169yonh/we_trained_a_new_16b_parameters_code_model_that/
|
kateklink
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
169yonh
| false | null |
t3_169yonh
|
/r/LocalLLaMA/comments/169yonh/we_trained_a_new_16b_parameters_code_model_that/
| false | false |
self
| 1 |
|
vllm reducing quality when loading local fine tuned llama2 model
| 1 |
Has anyone else encountered the issue that a model loaded with vllm generates low quality/gibberish output when using a local, fine tuned llama2 13B hf model?
Just using the standard inference method from the vllm blog:
```python
from vllm import LLM

prompts = ["Hello, my name is", "The capital of France is"]
llm = LLM(model="lmsys/vicuna-7b-v1.3")  # Create an LLM.
outputs = llm.generate(prompts)
```
| 2023-09-04T17:53:15 |
https://www.reddit.com/r/LocalLLaMA/comments/169ytis/vllm_reducing_quality_when_loading_local_fine/
|
DeanBlub
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
169ytis
| false | null |
t3_169ytis
|
/r/LocalLLaMA/comments/169ytis/vllm_reducing_quality_when_loading_local_fine/
| false | false |
self
| 1 | null |
Finetuning a llama 2 model using Claude or Claude instant dataset.
| 6 |
Wouldn't it be nice to finetune a LLaMA 2 model on a custom dataset crafted from Anthropic's Claude models, particularly Claude Instant or Claude 2? Most finetuned models are based on messages generated using GPT-4, so it would be interesting to see a LLaMA 2 model finetuned on a jailbroken/uncensored Claude dataset. This would be particularly useful for models finetuned mainly for roleplaying or creative writing tasks.
Aside from the intense filter, Anthropic's Claude models are known for their ability to describe actions in realistic detail and are well known for their creative writing abilities. Although I haven't tested GPT-4 in detail, from my limited experience I think the Claude models are still excellent at descriptive writing. It would be interesting to see how a model finetuned on a dataset made from Claude's responses, or even on a combined GPT-4 and Claude conversation dataset, would perform.
Here are examples that show the difference in writing quality between ChatGPT (gpt-3.5-turbo on Poe) and Claude Instant: [https://pastebin.com/569TFewE](https://pastebin.com/569TFewE), [https://pastebin.com/ffHS3fwH](https://pastebin.com/ffHS3fwH)
So, what do you guys think?
| 2023-09-04T20:34:59 |
https://www.reddit.com/r/LocalLLaMA/comments/16a37eq/finetuning_a_llama_2_model_using_claude_or_claude/
|
DannyBrownMz
|
self.LocalLLaMA
| 2023-09-04T20:53:42 | 0 |
{}
|
16a37eq
| false | null |
t3_16a37eq
|
/r/LocalLLaMA/comments/16a37eq/finetuning_a_llama_2_model_using_claude_or_claude/
| false | false |
self
| 6 | null |
Need Advice: I have 12 RTX 3070s and want to put them to work on an AI project… What are some ideas that could enhance my work/life/financial situation?
| 1 |
First time posting on here; I've been watching for a while and am very thankful for all of your contributions. They've helped me understand a lot of very complex ideas about this new world we live in.
So I have been exploring ideas and teaching myself Python basics such as installing packages and writing simple functions and definitions.
Now I want to push myself and set up a project where I can train a model on a dataset using my own hardware.
What would you do with all this computing power?
I've explored algo trading, personal assistants, and a few others, but it's extremely slow going as I have only basic knowledge.
Any help would be welcomed :)
| 2023-09-04T20:49:04 |
https://www.reddit.com/r/LocalLLaMA/comments/16a3l0l/need_advice_i_have_12_rtx3070s_and_want_to_put/
|
Overall-Split8048
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16a3l0l
| false | null |
t3_16a3l0l
|
/r/LocalLLaMA/comments/16a3l0l/need_advice_i_have_12_rtx3070s_and_want_to_put/
| false | false |
self
| 1 | null |
Docker won't work when trying to install Nous Hermes Llama 2 13B
| 1 |
[removed]
| 2023-09-04T22:20:42 |
https://www.reddit.com/r/LocalLLaMA/comments/16a61ca/docker_wont_work_when_trying_to_install_nous/
|
TheFarisaurusRex
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16a61ca
| false | null |
t3_16a61ca
|
/r/LocalLLaMA/comments/16a61ca/docker_wont_work_when_trying_to_install_nous/
| false | false |
self
| 1 |
|
Code Llama is Amazing!
| 1 |
phind-codellama-34b-v2.Q5_K_S.gguf works great, but I've actually only needed codellama-13b-oasst-sft-v10.Q4_K_S.gguf.
This is what I've been waiting for. I'm not going to say it's as good as ChatGPT 3.5, but for most of my purposes it is. I'm using KoboldCPP with a temperature setting of 0.1, and it passes all of my 20 or so tests, most of them somewhat boilerplate, but here is an example of one:
Prompt: c# show a win form app that in textBox1 merges duplicate lines while adding up totals of numbers shown on the far right after the last white space, make sure to comment the code:
Result:
Optional Pastebin for formatting: [https://pastebin.com/LTLs1XRD](https://pastebin.com/LTLs1XRD)
```csharp
// Split the text in textBox1 into individual lines
string[] lines = textBox1.Text.Split(new string[] { Environment.NewLine }, StringSplitOptions.None);

// Create a dictionary to store the line totals
Dictionary<string, int> lineTotals = new Dictionary<string, int>();

// Loop through each line
foreach (string line in lines)
{
    // Trim the line and split it into parts
    string[] parts = line.Trim().Split(' ');

    // Check if the first part of the line is already a key in the lineTotals dictionary
    if (!lineTotals.ContainsKey(parts[0]))
    {
        // If not, add the line to the dictionary with its corresponding total
        lineTotals.Add(parts[0], int.Parse(parts[parts.Length - 1]));
    }
    else
    {
        // If yes, update the existing value in the dictionary
        lineTotals[parts[0]] += int.Parse(parts[parts.Length - 1]);
    }
}

// Clear the contents of textBox1
textBox1.Text = "";

// Loop through each entry in the lineTotals dictionary
foreach (var item in lineTotals)
{
    // Format the entry as a string and append it to textBox1
    textBox1.Text += $"{item.Key} {item.Value}{Environment.NewLine}";
}
```

Let me know if you need further assistance!

```csharp
// Example usage:
textBox1.Text = @"
apple 5
banana 3
orange 2
apple 7
grape 4
banana 6
";
MergeDuplicateLinesWithTotal(textBox1);
// Output:
// apple 12
// banana 9
// orange 2
// grape 4
```
| 2023-09-04T23:08:02 |
https://www.reddit.com/r/LocalLLaMA/comments/16a78bf/code_llama_is_amazing/
|
ThePseudoMcCoy
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16a78bf
| false | null |
t3_16a78bf
|
/r/LocalLLaMA/comments/16a78bf/code_llama_is_amazing/
| false | false |
self
| 1 |
|
Automated Routing of Tasks to Optimal Models: A PR for Semantic-Kernel
| 1 |
The need for efficient model routing has been a [point of discussion](https://twitter.com/Altimor/status/1696221744024142210) in the community. Addressing this, I've submitted a [pull request](https://github.com/microsoft/semantic-kernel/pull/2323) to Semantic-Kernel that introduces an automated multi-model connector.
#### Key Features:
- **Automated Task Collection**: Gathers samples and tests them across multiple models.
- **Model Vetting**: Uses a primary connector (e.g., ChatGPT) to evaluate the performance of secondary models.
- **Dynamic Routing Table**: Updates the routing table based on performance and cost metrics.
This PR is part of a [larger initiative](https://github.com/microsoft/semantic-kernel/pull/2229) aimed at integrating OpenAI and Local llamas toolkits via a bridge between `semantic-kernel` and `oobabooga`.
The multi-connector currently exists as:
* [a Unit test series](https://github.com/microsoft/semantic-kernel/blob/70c0eafc9410c9bd5a444d815db868e5732dbcc8/dotnet/src/Connectors/Connectors.UnitTests/MultiConnector/TextCompletion/MultiConnectorTextCompletionTests.cs) that was a proof of concept where a primary connector capable of all 4 arithmetic operations would vet secondary connectors capable of only one each but faster and cheaper.
* [An integration test series](https://github.com/microsoft/semantic-kernel/blob/70c0eafc9410c9bd5a444d815db868e5732dbcc8/dotnet/src/IntegrationTests/Connectors/MultiConnector/MultiConnectorTests.cs) with settings made of plans to tests with test data and validation data of various complexities, and a list of real primary and secondary connectors comprising ChatGPT and local Llamas and a [Readme.md](https://github.com/microsoft/semantic-kernel/blob/70c0eafc9410c9bd5a444d815db868e5732dbcc8/dotnet/src/IntegrationTests/Connectors/MultiConnector/README.md) on how to set up the test environment.
Note that the plan + data format is highly customizable in variety and difficulty, and the integration tests also include a version where the plan is automatically generated by the primary connector, a role in which ChatGPT-4 has exhibited excellent capabilities for designing plans given a set of skills to be tested.
The question remains whether this should be integrated into Semantic-Kernel's core functionality. Regardless, I'm going to extend the code for optimal distributed mixture of experts, leveraging [Infer.Net](https://mbmlbook.com/LearningSkills.html) and Spark.net.
Your thoughts and contributions would be highly valuable as this directly impacts various technology stacks accommodated by Semantic-Kernel.
| 2023-09-04T23:40:53 |
https://www.reddit.com/r/LocalLLaMA/comments/16a80j4/automated_routing_of_tasks_to_optimal_models_a_pr/
|
Jessynoo
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16a80j4
| false | null |
t3_16a80j4
|
/r/LocalLLaMA/comments/16a80j4/automated_routing_of_tasks_to_optimal_models_a_pr/
| false | false |
self
| 1 |
|
Looking for a Llama model to run on Google Colab (either free or Pro)
| 1 |
Hello, as the title says, I'm looking for a version of Llama that can run on Colab, and I'm willing to pay for a higher-tier GPU. I'm trying a quantized version now, here: [https://github.com/MuhammadMoinFaisal/LargeLanguageModelsProjects/blob/main/Run%20Llama2%20Google%20Colab/Llama\_2.ipynb](https://github.com/MuhammadMoinFaisal/LargeLanguageModelsProjects/blob/main/Run%20Llama2%20Google%20Colab/Llama_2.ipynb)
It seems to be using too much memory and timing out my runtime. It does work on the A100 GPU, but that uses a ton of compute units, so I'd prefer to be able to use a smaller GPU. Any help is appreciated. Thanks!
| 2023-09-05T02:19:54 |
https://www.reddit.com/r/LocalLLaMA/comments/16abkaq/looking_for_llama_model_to_run_on_google_collab/
|
M1x1ma
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16abkaq
| false | null |
t3_16abkaq
|
/r/LocalLLaMA/comments/16abkaq/looking_for_llama_model_to_run_on_google_collab/
| false | false |
self
| 1 |
|
How to fine-tune an LLM for a chat bot that answers questions about a website's functionality?
| 1 |
Hello, so I’m fine-tuning an LLM to use as a chat bot for a website. My goal is to allow the bot to answer questions about the website’s functionality, e.g. how to change a password, what payment methods are available, how to download certain games, etc.
Currently, my dataset consists of a list of users’ intents (around 50 intents) along with corresponding responses for each intent. To get started, I wrote about 3-4 questions for each intent and obtain around 150-200 question answer pairs. Then, these pairs are used to fine-tune a LLM (I'm using a 13B model) in the following format:
`{system prompt}`
`Question: {question}`
`Answer: {answer}`
The result I obtained is not very good: even though the style of the responses is somewhat similar to what I want, the model often misunderstands the user's intent and generates responses that are unrelated to the question. Initially, I assumed this was due to a lack of data. However, while researching LLM fine-tuning, I saw this OpenAI post [https://community.openai.com/t/finetuning-for-domain-knowledge-and-questions/24817](https://community.openai.com/t/finetuning-for-domain-knowledge-and-questions/24817) which argued that fine-tuning cannot teach a model new knowledge; since the website in my use case requires specific knowledge of its functionality, the LLM cannot learn it effectively and thus fails to answer the questions.
I'm still a bit skeptical about the above post, however, so my questions are:
- If I increase the number of question-answer pairs (for example, by parsing the website's chat logs), should I expect the LLM to perform better, or will it still fail due to the lack of domain knowledge?
- I know that currently the best way to incorporate knowledge into an LLM is retrieval. However, I do not yet have a corpus to search over, so should I spend some time writing a manual for the website (if fine-tuning does not work), or is there a better way to create documentation for the website from chat logs?
- A final question which is slightly unrelated to my current use case: according to my research, there are approaches where the model is further pretrained on a large amount of unstructured domain data to acquire knowledge before being fine-tuned on instruction data (e.g. [https://www.stochastic.ai/blog/xfinance-vs-bloomberg-gpt](https://www.stochastic.ai/blog/xfinance-vs-bloomberg-gpt)), with good results. Does this approach only work when you have a large corpus of documents (500k in the link)? If we only have a few documents, e.g. 100-200 (each around 5-10 pages), is this method worth trying, or is retrieval still better?
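On the retrieval question: even without a document corpus, the existing intent/response pairs can themselves serve as the retrieval index. Below is a minimal bag-of-words cosine-similarity matcher as an illustration of the idea (the intent names and snippets are hypothetical, and a real system would use sentence embeddings rather than word counts):

```python
import math
import re
from collections import Counter

# Hypothetical intent snippets standing in for the ~50 intents described above.
intents = {
    "change_password": "how do I change or reset my account password",
    "payment_methods": "what payment methods credit card paypal are available",
    "download_games": "how to download and install games from the site",
}

def vectorize(text):
    """Bag-of-words vector: lowercase word counts."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def best_intent(question):
    q = vectorize(question)
    return max(intents, key=lambda name: cosine(q, vectorize(intents[name])))

print(best_intent("I forgot my password, how can I reset it?"))  # change_password
```

The matched intent's canned response (or the intent text itself, stuffed into the prompt) then grounds the LLM's answer, sidestepping the "fine-tuning can't add knowledge" problem described in the linked post.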
| 2023-09-05T03:49:53 |
https://www.reddit.com/r/LocalLLaMA/comments/16adfar/how_to_finetuning_llm_for_a_chat_bot_to_answer/
|
scienceotaku68
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16adfar
| false | null |
t3_16adfar
|
/r/LocalLLaMA/comments/16adfar/how_to_finetuning_llm_for_a_chat_bot_to_answer/
| false | false |
self
| 1 |
|
How can I use multiple computers to locally run LLMs with an API endpoint?
| 1 |
My plan is to have a few computers on the network work together to run one (or several) models locally, with an API endpoint to use from a chat interface. Any ideas on how I can do this?
| 2023-09-05T04:08:22 |
https://www.reddit.com/r/LocalLLaMA/comments/16adse5/how_can_i_use_multiple_computers_to_locally_run/
|
heyitsradin
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16adse5
| false | null |
t3_16adse5
|
/r/LocalLLaMA/comments/16adse5/how_can_i_use_multiple_computers_to_locally_run/
| false | false |
self
| 1 | null |
Efficient tokenization of timestamps?
| 1 |
Good golly, timestamps tokenize poorly. I'm experimenting with adding timestamps as prefixes to chat messages, and finding that they take roughly one token per character in some cases (such as below for the string `[9/4/23 11:51PM]`, produced by the strftime format `%-m/%-d/%y %-I:%M%p`).
When I toggle timestamps in my custom app, I go from 72 to 86 messages in context (that's individual messages, not message/response pairs). I guess that's not intolerably bad, but it's more than 10% of context being used for timestamps alone.
Does anyone have suggestions for how to cut this down to a smaller number of tokens?
I'm a bit surprised that numbers tokenize so poorly; at a minimum I'd expect the numbers 1-100 to have their own tokens, possibly also variants prepended with a space. If anyone can shed some machine-learning light on why such numbers don't have their own tokens in SentencePiece, I'd love to hear it!
Also, has anyone else tried adding timestamps and either stuck with it, given up, or landed on a better solution? (At one point I was injecting a timestamp periodically, prefixed with "Narrator:", which seemed to work somewhat, but sometimes the LLM would start parroting the narrator messages.)
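One hedged idea (mine, not an established best practice): collapse absolute timestamps into short relative offsets. The strings are shorter, and the vocabulary of distinct stamp forms shrinks dramatically, which should help the tokenizer reuse tokens:

```python
from datetime import datetime

def relative_stamp(ts, now):
    """Render a chat timestamp relative to 'now'. Far fewer distinct strings
    than absolute '[9/4/23 11:51PM]'-style stamps, so cheaper to tokenize."""
    mins = int((now - ts).total_seconds() // 60)
    if mins < 1:
        return "[now]"
    if mins < 60:
        return f"[{mins}m ago]"
    if mins < 24 * 60:
        return f"[{mins // 60}h ago]"
    return f"[{mins // (24 * 60)}d ago]"

now = datetime(2023, 9, 5, 0, 10)
print(relative_stamp(datetime(2023, 9, 4, 23, 51), now))  # [19m ago]
```

The trade-off is that relative stamps go stale as the conversation scrolls, so they would need to be re-rendered against the current time each time the context is rebuilt.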
| 2023-09-05T05:02:27 |
https://www.reddit.com/r/LocalLLaMA/comments/16aeu3b/efficient_tokenization_of_timestamps/
|
tronathan
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16aeu3b
| false | null |
t3_16aeu3b
|
/r/LocalLLaMA/comments/16aeu3b/efficient_tokenization_of_timestamps/
| false | false | 1 | null |
|
Deploying a fine-tuned LLaMA 2 model
| 1 |
I have been following this tutorial: https://www.philschmid.de/sagemaker-llama2-qlora, which details how to fine-tune a LLaMA model using AWS SageMaker. I have attempted to deploy it locally (retrieving the model from the S3 bucket onto my PC), but loading the model gave a bunch of common bitsandbytes errors related to CUDA and seemed like more of a hassle than it was worth. Plus, it's for a university project, and eventually I need it deployed so we can run inference through an API. My questions are:
- Am I able to deploy this model using an AWS SageMaker instance? If yes, what are the minimum requirements to run this model for inference?
- Following on from the above: since it's a university project I don't need to run inference 24/7, so would spot instances be a good choice to keep costs down?
To give some context on the project, the model is to simplify research papers syntactically and/or lexically. Any help in understanding is much appreciated!
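For the SageMaker deployment question, the rough shape of a deploy with the SageMaker Python SDK looks like the sketch below. Everything here is an assumption for illustration: the S3 path and IAM role name are hypothetical, the version strings are illustrative, and the instance type is only a guess at a reasonable GPU host, not a verified minimum:

```python
# Hypothetical sketch of deploying a fine-tuned model stored in S3 via the
# SageMaker Python SDK; requires the 'sagemaker' package and AWS credentials.
deploy_config = {
    "model_data": "s3://my-bucket/llama2-qlora/model.tar.gz",  # hypothetical path
    "instance_type": "ml.g5.2xlarge",  # 24 GB GPU; a guess, not a verified minimum
    "initial_instance_count": 1,
}
try:
    from sagemaker.huggingface import HuggingFaceModel
    model = HuggingFaceModel(
        model_data=deploy_config["model_data"],
        role="SageMakerExecutionRole",   # hypothetical IAM role name
        transformers_version="4.28",     # illustrative version pins
        pytorch_version="2.0",
        py_version="py310",
    )
    predictor = model.deploy(
        initial_instance_count=deploy_config["initial_instance_count"],
        instance_type=deploy_config["instance_type"],
    )
except Exception:
    predictor = None  # SDK or AWS access unavailable; the dict shows the shape
```

On the second question: SageMaker real-time endpoints bill while running, so for intermittent university use, tearing the endpoint down between sessions (or using an on-demand/serverless option) is usually what keeps costs low; AWS spot pricing applies to training jobs rather than standard real-time endpoints, which is worth verifying before planning around it.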
| 2023-09-05T06:16:20 |
https://www.reddit.com/r/LocalLLaMA/comments/16ag6ux/deploying_a_finetuned_llama_2_model/
|
arkeniwnl
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ag6ux
| false | null |
t3_16ag6ux
|
/r/LocalLLaMA/comments/16ag6ux/deploying_a_finetuned_llama_2_model/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'a1hmOnL77g0KEwQYz1ospL2mdtWnUUs4gvIEoj1uaZQ', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/oD1pkZuJzDIaMa8rqAkLr3IbeHQwt3g4JwPFjRIiQ7Y.jpg?width=108&crop=smart&auto=webp&s=51e5811563287b81728e9f1934f629170807ff4e', 'width': 108}, {'height': 133, 'url': 'https://external-preview.redd.it/oD1pkZuJzDIaMa8rqAkLr3IbeHQwt3g4JwPFjRIiQ7Y.jpg?width=216&crop=smart&auto=webp&s=ba8036d2cb784339eb7dc74dafa9bd86b11eb825', 'width': 216}, {'height': 197, 'url': 'https://external-preview.redd.it/oD1pkZuJzDIaMa8rqAkLr3IbeHQwt3g4JwPFjRIiQ7Y.jpg?width=320&crop=smart&auto=webp&s=67febf1b1913b4183f59a8d225f41a5c9d711b22', 'width': 320}, {'height': 394, 'url': 'https://external-preview.redd.it/oD1pkZuJzDIaMa8rqAkLr3IbeHQwt3g4JwPFjRIiQ7Y.jpg?width=640&crop=smart&auto=webp&s=595ab44cc93358ada686f27b200786051d1a8e88', 'width': 640}, {'height': 591, 'url': 'https://external-preview.redd.it/oD1pkZuJzDIaMa8rqAkLr3IbeHQwt3g4JwPFjRIiQ7Y.jpg?width=960&crop=smart&auto=webp&s=003dd992d245ac23320b2acb44824b5db7724e9b', 'width': 960}, {'height': 665, 'url': 'https://external-preview.redd.it/oD1pkZuJzDIaMa8rqAkLr3IbeHQwt3g4JwPFjRIiQ7Y.jpg?width=1080&crop=smart&auto=webp&s=5f4dec01663537808962f2e3884c81dec0389a58', 'width': 1080}], 'source': {'height': 1478, 'url': 'https://external-preview.redd.it/oD1pkZuJzDIaMa8rqAkLr3IbeHQwt3g4JwPFjRIiQ7Y.jpg?auto=webp&s=c97d35973fec2476f6250ac9942fde23937ecc15', 'width': 2400}, 'variants': {}}]}
|
RTX 3080 Ti vs 4090 for QLoRa
| 1 |
I am a laptop user and want to get a new company laptop with the most VRAM possible for model fine-tuning.
I can choose between the RTX 3080 Ti and the RTX 4090 (laptop variants). Both have 16GB VRAM (so enough for QLoRA on 13B?).
But what are the differences in terms of performance? What speedup can I expect from the 4090?
| 2023-09-05T07:01:24 |
https://www.reddit.com/r/LocalLLaMA/comments/16agzb9/rtx_3080_ti_vs_4090_for_qlora/
|
Koliham
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16agzb9
| false | null |
t3_16agzb9
|
/r/LocalLLaMA/comments/16agzb9/rtx_3080_ti_vs_4090_for_qlora/
| false | false |
self
| 1 | null |
Advice Getting Started w/ an AI Model - First: Which One?
| 1 |
I'm looking for some advice. I'm going to build an AI and train it, but I would love it to be pre-trained as a functional LLM first. First: what model should I use? What's most up to date? Where do I find that?
Here's my hardware: I have an old 1st-generation Intel i7 desktop computer that I want to convert into a dedicated AI machine. It has an old Nvidia GPU with just under a gig of VRAM. I was thinking of throwing Linux Mint on there, but maybe another distro works better.
There's so much info out there and it's so confusing I don't know where to start. I was imagining a Llama model might be good, but who has released the best ones so far? (Is Berkeley's Koala open-source to the public and any good? Are there others better these days?)
What's versatile for creating a very specifically talented AI, and able to be further trained?
Thank you for your help! I'm a bit overwhelmed with technical jargon and models... I'd love to just get started.
| 2023-09-05T08:23:38 |
https://www.reddit.com/r/LocalLLaMA/comments/16aid7o/advice_getting_started_w_an_ai_model_first_which/
|
FlowingFire
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16aid7o
| false | null |
t3_16aid7o
|
/r/LocalLLaMA/comments/16aid7o/advice_getting_started_w_an_ai_model_first_which/
| false | false |
self
| 1 | null |
Running Llama 2 on a SLURM script
| 1 |
I am currently having issues even getting Llama-2-7B to run on an HPC, even though I have access to enough GPUs. I don't believe there is an issue with my code, but there easily could be. I have followed a typical Hugging Face guide to get it running on transformers. However, it usually times out after a certain point. I know it shouldn't be taking this long to inference. I haven't had issues before running a quantized version in Colab. Is there anything I should check for?
| 2023-09-05T08:38:29 |
https://www.reddit.com/r/LocalLLaMA/comments/16ailwv/running_llama_2_on_a_slurm_script/
|
Fooples
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ailwv
| false | null |
t3_16ailwv
|
/r/LocalLLaMA/comments/16ailwv/running_llama_2_on_a_slurm_script/
| false | false |
self
| 1 | null |
Lora vs Embeddings (Vector DB?) Knowledge Training
| 1 |
Hi all,
So recently my company wanted to venture into LLMs, the use-case is a standard one, where we will inject company-specific knowledgebase and use it internally, it might possibly serve other headquarters across the globe as well. The plan is to try Llama 2, then Code-Llama.
I tried creating a very simple csv (only 8 rows of samples) with "question" and "answer" columns, then converted it to a single "text" column (with a few how/what/why samples), and proceeded with LoRA 4-bit and SFTTrainer. The results were good, but there were times where it hallucinated if the prompt wasn't direct, or when tested on a smaller model.
I just stumbled across keywords like embedding DBs, which seem related to the topic of "domain training". YouTube tutorials talk about "domain training", but most of them cover general fine-tuning. We had a colleague who did some research on vector DBs, but she left and we never really ventured into them. Our previous work was object detection & OCR based.
So our boss wants an LLM which has knowledge for different projects/team usages (countless Excels & PPTs that need to be processed into csv). Should I look into building a vector DB and linking it with Llama, or just train LoRAs for different "knowledge"? I've seen someone mention LangChain as well. There's also the "fine-tuning is not for knowledge" saying, which left me confused.
FYI, I'm currently using Llama-2-chat 70B (4x A40 GPU). Kinda new into this domain, and never touched stable diffusion in the past.
TL:DR: Can someone give me a direction regarding my use-case? Much appreciated.
| 2023-09-05T08:46:46 |
https://www.reddit.com/r/LocalLLaMA/comments/16aiqxo/lora_vs_embeddings_vector_db_knowledge_training/
|
cringelord000222
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16aiqxo
| false | null |
t3_16aiqxo
|
/r/LocalLLaMA/comments/16aiqxo/lora_vs_embeddings_vector_db_knowledge_training/
| false | false |
self
| 1 | null |
Estimated cost of a 13b model on a server with 24/7 workload/concurrent users
| 1 |
Hey,
I was wondering what would be the cheapest way to get MythoMax 13B running on a server with enough power to handle occasional concurrent API queries. The model would be running 24/7. What would be the way to go? Is there a solution that can scale with my needs if after some time I need more power? If you have an estimated cost I'd be happy to hear it; I didn't find a lot of information out there.

Thanks and have a nice day,
Rémi
| 2023-09-05T08:49:51 |
https://www.reddit.com/r/LocalLLaMA/comments/16aisrp/estimated_cost_of_a_13b_model_on_a_server_with/
|
Traditional_Hurry622
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16aisrp
| false | null |
t3_16aisrp
|
/r/LocalLLaMA/comments/16aisrp/estimated_cost_of_a_13b_model_on_a_server_with/
| false | false |
self
| 1 | null |
GitHub Copilot vs CodeLlama & Co for Code productivity
| 1 |
Hi everyone,
I am currently benchmarking the different LLMs for code productivity for my company and trying to find the best one in terms of cost / performance / latency / privacy.
The current challengers I see are in three brackets:
- GitHub Copilot
- Local models like CodeLlama & Co
- Other vendors for LLMs specialized in code
What do you think provides the best tradeoff between cost / perf / latency / privacy?
[View Poll](https://www.reddit.com/poll/16aivry)
| 2023-09-05T08:55:00 |
https://www.reddit.com/r/LocalLLaMA/comments/16aivry/github_copilot_vs_codellama_co_for_code/
|
Separate-Still3770
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16aivry
| false | null |
t3_16aivry
|
/r/LocalLLaMA/comments/16aivry/github_copilot_vs_codellama_co_for_code/
| false | false |
self
| 1 | null |
Has anyone finetuned LLAMA models using TPUs?
| 1 |
I have received access to resources on GCP for my research and I am curious if I can use them to fine-tune LLAMA models using Peft.
| 2023-09-05T08:55:24 |
https://www.reddit.com/r/LocalLLaMA/comments/16aiw18/has_anyone_finetuned_llama_models_using_tpus/
|
Marc_Marc_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16aiw18
| false | null |
t3_16aiw18
|
/r/LocalLLaMA/comments/16aiw18/has_anyone_finetuned_llama_models_using_tpus/
| false | false |
self
| 1 | null |
Feed LLM with local knowledge
| 1 |
Say I want to build an LLM for films and movie production. I have several kinds of data to feed in:
- a description of the movie production process
- some facts about "what is a good film" extracted from interviews
- some movie titles along with metadata (film director, rating, release year, ...)
In short, I have a lot of documents I want the LLM to ingest; some of them are structured (like the last) and some are raw.
Which way to go? I've read about retrieval-augmented LLMs and embeddings, and am kind of lost.
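A minimal sketch of the retrieval-augmented pattern people mention — here a toy bag-of-words similarity stands in for a real embedding model, and all names are mine:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words count vector. A real pipeline would
    # use a sentence-embedding model and a vector DB instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank every document by similarity to the query; keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Stuff the retrieved passages into the prompt the LLM will see.
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The structured metadata (director, rating, year) can simply be flattened into one sentence per title before indexing, so both raw and structured documents flow through the same retrieval step.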
| 2023-09-05T09:50:10 |
https://www.reddit.com/r/LocalLLaMA/comments/16ajtl8/feed_llm_with_local_knowledge/
|
front-equal
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ajtl8
| false | null |
t3_16ajtl8
|
/r/LocalLLaMA/comments/16ajtl8/feed_llm_with_local_knowledge/
| false | false |
self
| 1 | null |
Need Guidance on Exposing Local API for My LLM Model
| 1 |
Hi everyone, I'm working with Vicuna, a local language model (LLM), and I'm looking to expose it through an API from my local machine. This would allow me to interact with the LLM from various environments, including chatbots, agents, or platforms like FlowiseAI. Can anyone guide me or recommend a tutorial on how to achieve this? Your assistance would be greatly appreciated!
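A minimal way to do this with nothing but the standard library — the `generate` function below is a stub standing in for the actual Vicuna call, and the port is arbitrary:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def generate(prompt: str) -> str:
    # Stub: replace with your actual Vicuna call (e.g. via
    # llama-cpp-python or a text-generation-webui client).
    return f"Echo: {prompt}"

class LLMHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body, run the model, return JSON.
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        reply = generate(body.get("prompt", ""))
        payload = json.dumps({"completion": reply}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # silence per-request logging

def serve(port: int = 8000):
    HTTPServer(("127.0.0.1", port), LLMHandler).serve_forever()
```

That said, several backends (text-generation-webui, llama-cpp-python's built-in server) already expose an OpenAI-compatible HTTP API behind a flag, which is usually easier than rolling your own.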
| 2023-09-05T10:13:17 |
https://www.reddit.com/r/LocalLLaMA/comments/16ak8ip/need_guidance_on_exposing_local_api_for_my_llm/
|
Dave86ch
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ak8ip
| false | null |
t3_16ak8ip
|
/r/LocalLLaMA/comments/16ak8ip/need_guidance_on_exposing_local_api_for_my_llm/
| false | false |
self
| 1 | null |
Biggest possible LLama2 model that can run on Ryzen 3950X with 128GB RAM + 24GB VRAM
| 1 |
Given that I have a system with 128GB of RAM, a 16-core Ryzen 3950X, and an RTX 4090 with 24GB of VRAM, what's the largest language model in terms of billions of parameters that I can feasibly run on my machine? I previously tried running a 70B parameter model on CPU and it crashed.
Are there any optimizations or tricks to help run larger models successfully?
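For a rough sanity check, feasibility mostly comes down to weight size: a 70B model in fp16 needs ~140 GB (which would explain the CPU crash if that was an unquantized build), while a 4-bit GGUF quant fits comfortably in 128 GB with room for partial GPU offload. A back-of-the-envelope estimator — the bits-per-weight and overhead numbers are my rough assumptions, not exact figures:

```python
def est_ram_gb(n_params_b: float, bits_per_weight: float, overhead_gb: float = 4.0) -> float:
    """Very rough RAM footprint for a model: weights plus a flat
    allowance for KV cache and runtime buffers. Real usage varies
    with context length and backend."""
    weights_gb = n_params_b * 1e9 * bits_per_weight / 8 / 1e9
    return round(weights_gb + overhead_gb, 1)

# A 70B model at ~4.5 effective bits (Q4_K_M-style quant):
print(est_ram_gb(70, 4.5))   # ~43 GB, well within 128 GB of system RAM
```

So a quantized 70B via llama.cpp, with as many layers as fit offloaded to the 24 GB card, is a realistic target on this machine.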
| 2023-09-05T10:14:56 |
https://www.reddit.com/r/LocalLLaMA/comments/16ak9nh/biggest_possible_llama2_model_that_can_run_on/
|
No_Wing133
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ak9nh
| false | null |
t3_16ak9nh
|
/r/LocalLLaMA/comments/16ak9nh/biggest_possible_llama2_model_that_can_run_on/
| false | false |
default
| 1 | null |
I cancelled my ChatGPT monthly membership because I'm tired of the constant censorship and the quality getting worse and worse. Does anyone know an alternative that I can go to?
| 1 |
Like ChatGPT, I'm willing to pay about $20 a month, but I want a text-generation AI that:
Remembers more than 8000 tokens
Doesn't have as much censorship
Can help write stories that I like to make
Those are the only three things I'm asking for, but ChatGPT failed to hit even those three. It's super ridiculous. I've put myself on the waitlist for the API, but it obviously hasn't gone anywhere after several months.
This month was the last straw with how bad the updates are, so I've just quit using it. But where else can I go?
Do you guys know any models that have like 30k tokens of context?
| 2023-09-05T10:15:05 |
https://www.reddit.com/r/LocalLLaMA/comments/16ak9s2/i_cancelled_my_chatgpt_monthly_membership_because/
|
SerpentEmperor
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ak9s2
| false | null |
t3_16ak9s2
|
/r/LocalLLaMA/comments/16ak9s2/i_cancelled_my_chatgpt_monthly_membership_because/
| false | false |
self
| 1 | null |
Model stops responding after a few replies
| 1 |
These 13B 4-bit MythoMax and Vicuna models work well, but they stop replying after a few messages. I tried many settings but it doesn't help much. I don't know if the problem is the PC and memory or something else. I set temperature, top_p, top_k, etc. in different configurations. Has anyone had the same issue? Maybe the RAM is clogged up and it stops replying? I have an RTX 4070 and 16GB RAM.
| 2023-09-05T10:24:30 |
https://www.reddit.com/r/LocalLLaMA/comments/16akg99/model_stop_responding_after_few_reply/
|
LonleyPaladin
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16akg99
| false | null |
t3_16akg99
|
/r/LocalLLaMA/comments/16akg99/model_stop_responding_after_few_reply/
| false | false |
self
| 1 | null |
Can we run GPTQ models with CUDA in Windows?
| 1 |
[removed]
| 2023-09-05T11:17:05 |
https://www.reddit.com/r/LocalLLaMA/comments/16alf2m/can_we_run_gpqt_models_with_cuda_in_windows/
|
count_dracula14
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16alf2m
| false | null |
t3_16alf2m
|
/r/LocalLLaMA/comments/16alf2m/can_we_run_gpqt_models_with_cuda_in_windows/
| false | false |
self
| 1 | null |
LLM Obsession
| 1 |
I am obsessed with LLMs. Every day, I find myself repeatedly checking Hugging Face to see if there is a new GGUF model that fits my laptop's GPU and RAM, and I try it out. I have accumulated 3 terabytes of models, and I cannot bring myself to delete them. Is this a cause for concern, doctor?
| 2023-09-05T13:23:06 |
https://www.reddit.com/r/LocalLLaMA/comments/16ao5pe/llm_obsession/
|
GlobeTrekkerTV
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ao5pe
| false | null |
t3_16ao5pe
|
/r/LocalLLaMA/comments/16ao5pe/llm_obsession/
| false | false |
self
| 1 | null |
Advice needed: Macbook pro M1 max 64G for local models, Should I get it?
| 1 |
Hey guys,
I need your experience/thoughts about this. I am currently running local 7B models on my 16GB Intel Mac; they work fine with decent speed, and I can also run 13B, but fairly slowly. I now have an option to replace it with an M1 Max with 64GB and 32 cores; my aim is to be able to run larger models, or at least 13B at decent speed, on the go. It has to be a laptop because I move a lot and I need to be able to take it with me as much as possible. What are your takes on this? Do I go for the replacement?
| 2023-09-05T15:08:51 |
https://www.reddit.com/r/LocalLLaMA/comments/16aqs51/advice_needed_macbook_pro_m1_max_64g_for_local/
|
simplir
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16aqs51
| false | null |
t3_16aqs51
|
/r/LocalLLaMA/comments/16aqs51/advice_needed_macbook_pro_m1_max_64g_for_local/
| false | false |
self
| 1 | null |
Guide for beginners
| 1 |
I'm wondering if there's any step-by-step guide for beginners who want to start learning about LLMs, maybe by setting up and training LLaMA.
Any suggestions about where and how to start, the hardware needed or suggested, etc.?
Many thanks
| 2023-09-05T15:26:05 |
https://www.reddit.com/r/LocalLLaMA/comments/16ar7xs/guide_for_begineers/
|
TheHypersonic
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ar7xs
| false | null |
t3_16ar7xs
|
/r/LocalLLaMA/comments/16ar7xs/guide_for_begineers/
| false | false |
self
| 1 | null |
Errors loading Transformers, Llama models
| 1 |
Hi Everyone!
I'm trying to run these via oobabooga/texgen webui via WSL with a 3090 and 128GB RAM. I have double and triple-checked that all requirements have been updated, and that exllama has been installed, and continue to get these errors upon loading them.
Trying to load TheBloke_Wizard-Vicuna-30B-Uncensored-GPTQ
(model loader changes to Transformers, but I get the same result when using GPTQ. Other GPTQ models load just fine, go figure)
Output:

    Traceback (most recent call last):
      File "/home/~/ai/text-generation-webui/modules/ui_model_menu.py", line 185, in load_model_wrapper
        shared.model, shared.tokenizer = load_model(shared.model_name, loader)
      File "/home/~/ai/text-generation-webui/modules/models.py", line 79, in load_model
        output = load_func_map[loader](model_name)
      File "/home/~/ai/text-generation-webui/modules/models.py", line 224, in huggingface_loader
        model = LoaderClass.from_pretrained(checkpoint, **params)
      File "/home/~/anaconda3/envs/textgen/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 493, in from_pretrained
        return model_class.from_pretrained(
      File "/home/~/anaconda3/envs/textgen/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2629, in from_pretrained
        state_dict = load_state_dict(resolved_archive_file)
      File "/home/~/anaconda3/envs/textgen/lib/python3.10/site-packages/transformers/modeling_utils.py", line 447, in load_state_dict
        with safe_open(checkpoint_file, framework="pt") as f:
    safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer
Plenty of thanks in advance for any advice. I really just wish I could use these.
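Not authoritative, but in my experience that `MetadataIncompleteBuffer` error usually points at a truncated or corrupted download rather than a code problem. A quick integrity check based on the published safetensors layout (8-byte little-endian header length, then that many bytes of JSON) — treat as a sketch:

```python
import json
import struct
from pathlib import Path

def check_safetensors(path: str) -> bool:
    """Return True if the file's safetensors header looks intact.

    Layout: an 8-byte little-endian u64 giving the JSON header size,
    followed by that many bytes of JSON metadata, then tensor data.
    A truncated download fails one of these checks and typically
    surfaces as 'MetadataIncompleteBuffer'.
    """
    data = Path(path).read_bytes()
    if len(data) < 8:
        return False
    (header_len,) = struct.unpack("<Q", data[:8])
    if 8 + header_len > len(data):
        return False  # header claims more bytes than the file holds
    try:
        json.loads(data[8 : 8 + header_len])
    except ValueError:
        return False
    return True
```

Running this over each shard and re-downloading any that return False (e.g. with `huggingface-cli download`, which can resume) may be faster than debugging the loader.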
| 2023-09-05T15:28:05 |
https://www.reddit.com/r/LocalLLaMA/comments/16ar9sq/errors_loading_transformers_llama_models/
|
drycounty
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ar9sq
| false | null |
t3_16ar9sq
|
/r/LocalLLaMA/comments/16ar9sq/errors_loading_transformers_llama_models/
| false | false |
self
| 1 | null |
The concern about LLM knowledge base and the possible solution
| 1 | 2023-09-05T15:58:12 |
https://medium.com/@SymeCloud/the-concern-about-llm-knowledge-base-and-the-possible-solution-7c212d2e8f99
|
nalaginrut
|
medium.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16as1hn
| false | null |
t3_16as1hn
|
/r/LocalLLaMA/comments/16as1hn/the_concern_about_llm_knowledge_base_and_the/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'PFmYKNDQ_Zy5lFGaemVZ37fWFCR4-AtF6QzH77dbqbc', 'resolutions': [{'height': 96, 'url': 'https://external-preview.redd.it/e1oMH0Qvy4ZtNAfaieyqpXQmaCVpIcgVH8TxE2u7xDk.jpg?width=108&crop=smart&auto=webp&s=966c9b0c3e9f903364cd249b2f358b08cf127498', 'width': 108}, {'height': 193, 'url': 'https://external-preview.redd.it/e1oMH0Qvy4ZtNAfaieyqpXQmaCVpIcgVH8TxE2u7xDk.jpg?width=216&crop=smart&auto=webp&s=648c82d26b62065273deafad43cce9704730d45d', 'width': 216}, {'height': 286, 'url': 'https://external-preview.redd.it/e1oMH0Qvy4ZtNAfaieyqpXQmaCVpIcgVH8TxE2u7xDk.jpg?width=320&crop=smart&auto=webp&s=74ba53075b9959be4f08f6022f76bd17907d46b5', 'width': 320}], 'source': {'height': 454, 'url': 'https://external-preview.redd.it/e1oMH0Qvy4ZtNAfaieyqpXQmaCVpIcgVH8TxE2u7xDk.jpg?auto=webp&s=81bbb226f0395d8d9f3b950e51feb1694c0e71cf', 'width': 507}, 'variants': {}}]}
|
||
Reasonable Graphics card for LLM AND Gaming
| 1 |
I'm currently in the market of building my first PC in over a decade. (Mac User)
Primary reason is gaming; however, if possible I would love to also be able to run some larger LLMs locally. Can anyone recommend a consumer card that would be good for both?
I came across these two cards on Amazon - would it make sense to spend the extra $ on the more expensive version?
\- **$2,965**: ZOTAC Gaming GeForce RTX™ 3090 Trinity OC 24GB GDDR6X 384-bit 19.5 Gbps PCIE 4.0 ([https://www.amazon.ca/Graphics-IceStorm-Advanced-Lighting-ZT-A30900J-10P/dp/B08ZL6XD9H/ref=sr\_1\_2?crid=37QCG7OQYVZ4Z&keywords=3090&qid=1693929052&sprefix=3090%2Caps%2C124&sr=8-2&ufe=app\_do%3Aamzn1.fos.fe67de69-a579-4370-9bc8-5e38fc5a3bcc](https://www.amazon.ca/Graphics-IceStorm-Advanced-Lighting-ZT-A30900J-10P/dp/B08ZL6XD9H/ref=sr_1_2?crid=37QCG7OQYVZ4Z&keywords=3090&qid=1693929052&sprefix=3090%2Caps%2C124&sr=8-2&ufe=app_do%3Aamzn1.fos.fe67de69-a579-4370-9bc8-5e38fc5a3bcc))
\- **$1,799:** EVGA GeForce RTX 3090 FTW3 Ultra Gaming, 24GB GDDR6X, iCX3 ([https://www.amazon.ca/EVGA-GeForce-Technology-Backplate-24G-P5-3987-KR/dp/B0916ZWZ9S/ref=sr\_1\_1?crid=37QCG7OQYVZ4Z&keywords=3090&qid=1693929052&sprefix=3090%2Caps%2C124&sr=8-1&ufe=app\_do%3Aamzn1.fos.a4f9e18d-83e4-4dfa-94d2-717d5a20bf26](https://www.amazon.ca/EVGA-GeForce-Technology-Backplate-24G-P5-3987-KR/dp/B0916ZWZ9S/ref=sr_1_1?crid=37QCG7OQYVZ4Z&keywords=3090&qid=1693929052&sprefix=3090%2Caps%2C124&sr=8-1&ufe=app_do%3Aamzn1.fos.a4f9e18d-83e4-4dfa-94d2-717d5a20bf26))
​
Open to any suggestions :)
| 2023-09-05T16:00:05 |
https://www.reddit.com/r/LocalLLaMA/comments/16as386/reasonable_graphics_card_for_llm_and_gaming/
|
MattBurnes
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16as386
| false | null |
t3_16as386
|
/r/LocalLLaMA/comments/16as386/reasonable_graphics_card_for_llm_and_gaming/
| false | false |
self
| 1 | null |
I need some help or guidance on how to set up a locall LLM
| 1 |
Hello:
I would like to run LLaMA-2 70B on my computer, like this: [https://huggingface.co/chat](https://huggingface.co/chat).
But I do not know how to achieve it, and I also want it to be able to search both the Internet and my own files.
I have tried GPT4All and Faraday, but both miss my requirements for searching the Internet and my own files.
Could you guide me on how to do this? Perhaps there is some LangChain code already written, or some other GPT4All-alike that does all I wish.
Thank you.
| 2023-09-05T16:37:16 |
https://www.reddit.com/r/LocalLLaMA/comments/16at2gw/i_need_some_help_or_guidance_on_how_to_set_up_a/
|
SnooWoofers780
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16at2gw
| false | null |
t3_16at2gw
|
/r/LocalLLaMA/comments/16at2gw/i_need_some_help_or_guidance_on_how_to_set_up_a/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'O4__VvuTP1zjgNXHpYgGtbNlwm8CyL1iGZRclIV-cFg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=108&crop=smart&auto=webp&s=c5c01ca386f7a26e8afeb5073e51c35d0d581de7', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=216&crop=smart&auto=webp&s=0e915f82e672294c639c476433af5f1919265348', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=320&crop=smart&auto=webp&s=87643eb4a9654c3497efe7fce371db617f9ff816', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=640&crop=smart&auto=webp&s=20315fe6e900582303995761624ac0728d1703f9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=960&crop=smart&auto=webp&s=6d8bc7d3273f5290083f6668e10d5b513621bfa3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=1080&crop=smart&auto=webp&s=865cccb6b6df001aa14ef4fb2eb0f5902cb15904', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?auto=webp&s=03f4344525b6a013e0ac556cfc24b4a45d64f47e', 'width': 1200}, 'variants': {}}]}
|
In general, is training done on base models?
| 1 |
Take TheBloke/Carl-Llama-2-13B-GGUF for example.
The original model card says
> **Training:** Entire dataset was trained on Azure 4 x A100 80GB. For 3 epoch, training took 50 hours. DeepSpeed codebase was used for training purpose. This was trained on Llama-2 by Meta. GGML Quant models are converted by Kijana Mitchell. Extremely thankful to him.
Does that mean it was trained on base Llama?
| 2023-09-05T16:50:02 |
https://www.reddit.com/r/LocalLLaMA/comments/16atee3/in_general_is_training_done_on_base_models/
|
multiverse_fan
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16atee3
| false | null |
t3_16atee3
|
/r/LocalLLaMA/comments/16atee3/in_general_is_training_done_on_base_models/
| false | false |
self
| 1 | null |
GPTMe: A fancy CLI to interact with LLMs (GPT or Llama) in a Chat-style interface, with capabilities to execute code & commands on the local machine
| 1 | 2023-09-05T16:57:52 |
https://github.com/ErikBjare/gptme
|
ErikBjare
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16atlia
| false | null |
t3_16atlia
|
/r/LocalLLaMA/comments/16atlia/gptme_a_fancy_cli_to_interact_with_llms_gpt_or/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'pl9qRx-8qE8tbw-ndu0ENMqJ9cc_GhFLbUmG3_LAtgs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dnaVcvhIweTXxyEdOijnBUTV2XNlQwZ2qfBwQ1ERhVU.jpg?width=108&crop=smart&auto=webp&s=cdeb5d306a3b83803dd77147481b6047a1b66857', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dnaVcvhIweTXxyEdOijnBUTV2XNlQwZ2qfBwQ1ERhVU.jpg?width=216&crop=smart&auto=webp&s=66740c277a544ebcc05e21d1d3c98e1ff77ea076', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dnaVcvhIweTXxyEdOijnBUTV2XNlQwZ2qfBwQ1ERhVU.jpg?width=320&crop=smart&auto=webp&s=101815f3bf66e72ac231637b8d9612d26f2fb379', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dnaVcvhIweTXxyEdOijnBUTV2XNlQwZ2qfBwQ1ERhVU.jpg?width=640&crop=smart&auto=webp&s=4baae5c4c8e634c9c73557fd3b6e089a20d7f4ee', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dnaVcvhIweTXxyEdOijnBUTV2XNlQwZ2qfBwQ1ERhVU.jpg?width=960&crop=smart&auto=webp&s=0a1c69134b3b1770a2ea1ab035e1ca56809397fa', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dnaVcvhIweTXxyEdOijnBUTV2XNlQwZ2qfBwQ1ERhVU.jpg?width=1080&crop=smart&auto=webp&s=13afe67250ee41965a8a457bca0ac8295dae4b8e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/dnaVcvhIweTXxyEdOijnBUTV2XNlQwZ2qfBwQ1ERhVU.jpg?auto=webp&s=14f1fd429ec88f98811351acc51852c2cd6b2e3d', 'width': 1200}, 'variants': {}}]}
|
||
I'm convinced now that “personal LLMs” are going to be a huge thing
| 1 |
Hey all,
[I posted here](https://www.reddit.com/r/LocalLLaMA/comments/16954fj/what_do_you_use_your_local_llm_for/) over the weekend asking for feedback on how you all use your local deployments. The response rate and engagement was great, and it gave me a lot to chew on.
The way I see it, the users here are all early adopters of a trend that is only going to keep growing. In the past, privacy concerns have always been on the periphery of the internet. However, reflecting on this technology more, I am starting to be of the opinion that LLMs will mark a shift in attitude. This technology is going to become too personally integrated with our lives for each of us to feel comfortable letting someone else have complete control over it.
The most extreme example I can muster up is a world where personal LLMs are basically extensions or “grafts” onto our brains. Each individual will want to own and control that LLM for a whole list of reasons I can imagine. There will still be large appetite and use for the giga-brain 100T models, but personal LLMs will also be a huge thing, especially as they continue to mature towards current bleeding edge performance.
I'm going to start working on this. I have a modest amount of compute to deploy at this moment (8x A100s) and a bit of focus / determination. I am going to start by setting to work on replicating the work of [this recent paper](https://arxiv.org/abs/2306.11644). Hopefully I can count on some of you to be beta testers when I have a smol model ready.
Ultimately, I want to work on building the best local LLMs in the world, form-fit to consumer hardware. Basically, focus on making smaller models better rather than bigger models bigger.
​
Happy to hear any of your additional thoughts.
https://www.reddit.com/r/LocalLLaMA/comments/16au3ga/im_convinced_now_that_personal_llms_are_going_to/
|
docsoc1
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16au3ga
| false | null |
t3_16au3ga
|
/r/LocalLLaMA/comments/16au3ga/im_convinced_now_that_personal_llms_are_going_to/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]}
|
GPT4ALL pylama issues can't figure out!
| 1 |
I've been away from the AI world for the last few months. Was upset to find that my python program no longer works with the new quantized binary models. I had a look in the tutorial threads but can't put my finger on the issue.
This is my code:
def gpt4all(user_input):
    global conversation
    gptj = GPT4All(model_path="C:\\Users\\BF\\AppData\\Local\\nomicai\\GPT4All",
                   model_name="WizardLM-7B-uncensored.ggmlv3.q4_1.bin",
                   model_type="llama")
    conversation.append({"role": "user", "content": user_input})
    gpt4allcompletion = gptj.chat_completion(conversation)
    bot_response = gpt4allcompletion["choices"][0]["message"]["content"]
    conversation.append({"role": "assistant", "content": bot_response})
    send_to_gui(user_input, bot_response)
    total_tokens = gpt4allcompletion["usage"]["total_tokens"]
    print("Total Tokens Generated:", total_tokens)
but I keep getting this error:
Found model file.
llama.cpp: loading model from C:\Users\BF\AppData\Local\nomicai\GPT4All\WizardLM-7B-uncensored.ggmlv3.q4_1.bin
error loading model: unknown (magic, version) combination: 67676a74, 00000003; is this really a GGML file?
llama_init_from_file: failed to load model
LLAMA ERROR: failed to load model from C:\Users\BF\AppData\Local\nomicai\GPT4All\WizardLM-7B-uncensored.ggmlv3.q4_1.bin
LLAMA ERROR: prompt won't work with an unloaded model!
I saw some references to reinstalling old versions of pylama-cpp but none of this worked. I'm on Windows 10.
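For what it's worth, the magic `67676a74` in that error is ASCII "ggjt" — the old GGML v3 container — which newer llama.cpp-based loaders no longer accept, so the likely fix is downloading the model in the newer container format rather than changing the Python code. A small helper to check what container a file actually is (magic values taken from the llama.cpp family of formats; treat as a sketch):

```python
import struct
from pathlib import Path

def model_container(path: str) -> str:
    """Identify a llama.cpp-family model file by its 4-byte magic."""
    magic = Path(path).read_bytes()[:4]
    if magic == b"GGUF":
        return "gguf"  # current container format
    (value,) = struct.unpack("<I", magic)
    if value == 0x67676A74:  # "ggjt" read as little-endian u32
        return "ggml-v3 (legacy, needs conversion or an old loader)"
    return "unknown"
```

Running this over the `.bin` file should confirm whether the model predates the library version installed.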
| 2023-09-05T17:26:09 |
https://www.reddit.com/r/LocalLLaMA/comments/16auclq/gpt4all_pylama_issues_cant_figure_out/
|
darkbluetwilight
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16auclq
| false | null |
t3_16auclq
|
/r/LocalLLaMA/comments/16auclq/gpt4all_pylama_issues_cant_figure_out/
| false | false |
self
| 1 | null |
KOR with Llama
| 1 |
Hey guys, so I have to extract certain named entities, specifically from the medical domain, from passages. Problem is we have very few examples of some named entities, for example drug codes. So we decided to try KOR with Llama 2. Anyone have any experience with it? Because on my first try, the outputs seem dire. Any alternative suggestions?
| 2023-09-05T17:35:18 |
https://www.reddit.com/r/LocalLLaMA/comments/16aulek/kor_with_llama/
|
Tinjar12
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16aulek
| false | null |
t3_16aulek
|
/r/LocalLLaMA/comments/16aulek/kor_with_llama/
| false | false |
self
| 1 | null |
Multi part Model
| 1 |
I can't find one-part versions of any of the models, and I also can't find any explanation of how to combine multi-part models into one. What am I missing?
| 2023-09-05T17:56:42 |
https://www.reddit.com/r/LocalLLaMA/comments/16av5b8/multi_part_model/
|
EagleNoes
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16av5b8
| false | null |
t3_16av5b8
|
/r/LocalLLaMA/comments/16av5b8/multi_part_model/
| false | false |
self
| 1 | null |
LlaMA 13B Setup for custom requirement
| 1 |
A large part of my day-to-day work could be cut back if I were able to use LLaMA 13B (34B would be better, but the 13B model works fine for me, tested on Poe). I am seriously considering setting up a rig from scratch. What's the cheapest config that can handle the following:
1. Run LLaMA at 10-15 tokens per second
2. Accept context of up to 40,000 words
3. No limit on output or at least up to 10,000 words
Please specify the exact config, and I will check the prices locally here (India). If this is not possible, then tell me the max I can get (I don't want to compromise on context, as I will regularly need to churn through large documents).
| 2023-09-05T18:17:00 |
https://www.reddit.com/r/LocalLLaMA/comments/16avold/llama_13b_setup_for_custom_requirement/
|
mohityadavx
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16avold
| false | null |
t3_16avold
|
/r/LocalLLaMA/comments/16avold/llama_13b_setup_for_custom_requirement/
| false | false |
self
| 1 | null |
How would a 4090 laptop fare in the current state of LocalLLaMA?
| 1 |
Hello!
I've recently been given a chance to get a machine from my company to "explore applications of LLMs" in our office; the main goal is basically to have a small LLM that can write small and basic programs quickly.
Unfortunately, my boss insisted it be a laptop. So, I'm wondering if the top-of-the-line 4090 laptop GPU would serve me well?
It has 16GB vram and 9728 cuda cores.
What size of models and token generation speed am I looking at for this? Would they be able to fit my needs?
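For a rough sense of fit (a back-of-envelope sketch only — real usage adds KV cache and framework overhead on top of the weights), model weight size is roughly parameters × bits-per-weight / 8:

```python
# Rough rule of thumb only: actual VRAM use is higher once the KV cache
# and framework overhead are added on top of the weights.
def approx_weight_gb(params_billion, bits_per_weight):
    """Approximate size of the model weights alone, in GB."""
    return params_billion * bits_per_weight / 8

print(approx_weight_gb(13, 4))   # 6.5  -> a 4-bit 13B fits easily in 16 GB
print(approx_weight_gb(34, 4))   # 17.0 -> a 4-bit 34B would not, without offload
```

By that estimate, a quantized 13B leaves plenty of headroom for context on 16 GB, while 33/34B-class models would need CPU offloading.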
| 2023-09-05T18:51:47 |
https://www.reddit.com/r/LocalLLaMA/comments/16awkmo/how_would_a_4090_laptop_fair_in_the_current_state/
|
rexyuan
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16awkmo
| false | null |
t3_16awkmo
|
/r/LocalLLaMA/comments/16awkmo/how_would_a_4090_laptop_fair_in_the_current_state/
| false | false |
self
| 1 | null |
A couple of questions on using GGML/GGUF to maximise context length
| 1 |
Hello, firstly apologies if any of this has been asked before, I searched the subreddit and I couldn't find the answers I was after. I'm also a noob of 1 week, so I don't really understand how this technology works beyond a few FAQs I've seen floating around various subreddits.
I am currently running the GPTQ version of MythoMax L2 13b, with an RTX 3080 Ti (12GB) and 32GB RAM. My CPU is 10th generation i5, with 6 cores. The model loads perfectly fine and is usable at a context length of 2048, however when I turned up the context length to 4096 my card crashed when the conversation's context limit was reached. Scared of damaging my card, I turned the context limit down to 3072 and I've not had any problems. My guess is that I ran out of VRAM due to the increased context limit.
My plan is to use a GGML/GGUF model to unload some of the model into my RAM, leaving space for a longer context length.
My first question is, is there a conversion that can be done between context length and required VRAM, so that I know how much of the model to unload? (I.e. - does 4096 context length need 4096MB reserved?).
The same goes for GPU layers. Does half of the model's layers = half of its RAM size? (I.e. - If a model has 1000 layers, and it uses 12GB RAM, would assigning 500 layers to the GPU use 6GB of VRAM?) Also, where can I find how many layers a particular model has? [This guide](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md) implies that you kind of have to guess the layers incrementally, which I'm hoping is not the case.
My second question is, does this even make sense as an idea, or am I misunderstanding something?
Finally, I'm currently generating replies with a limit of 150, which take me around 30 - 50 seconds to generate (this isn't a problem for me). Is the performance hit going to make this significantly longer? Should I even bother with the above?
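For what it's worth, the rule-of-thumb estimate I've seen for context memory is not "4096 context = 4096 MB"; the KV cache is roughly 2 × layers × hidden_size × context_length × bytes-per-element. A sketch using the standard LLaMA-13B shape (40 layers, hidden size 5120) — treat the result as an estimate, not a guarantee:

```python
def kv_cache_bytes(n_layers, hidden_size, ctx_len, bytes_per_elem=2):
    # 2 tensors per layer (K and V), each ctx_len x hidden_size elements
    return 2 * n_layers * hidden_size * ctx_len * bytes_per_elem

# LLaMA-13B shape: 40 layers, hidden size 5120, fp16 (2-byte) cache
gib = kv_cache_bytes(40, 5120, 4096) / 1024**3
print(gib)  # 3.125 GiB just for a 4096-token cache
```

On the layer question: LLaMA's transformer layers are close to uniform in size, so offloading half the layers moves roughly half of the layer weights to the GPU. llama.cpp also prints the model's layer count in its load log, so you shouldn't have to guess blindly.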
| 2023-09-05T19:01:25 |
https://www.reddit.com/r/LocalLLaMA/comments/16awtiy/a_couple_of_questions_on_using_ggmlgguf_to/
|
EeyoresM8
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16awtiy
| false | null |
t3_16awtiy
|
/r/LocalLLaMA/comments/16awtiy/a_couple_of_questions_on_using_ggmlgguf_to/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '8xtjwwtcKerurGMhTFLFFArH0ZxmDESqopVHCKpKfxU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/l4xYfRJ9MJu2EVsIiQGGndv0eMRVGWcafH1AgtXUuXk.jpg?width=108&crop=smart&auto=webp&s=941c18c68662db8cebcab4705115e8b304d71510', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/l4xYfRJ9MJu2EVsIiQGGndv0eMRVGWcafH1AgtXUuXk.jpg?width=216&crop=smart&auto=webp&s=a5af6977535b44cbc9eaedafc82717fe404a88fa', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/l4xYfRJ9MJu2EVsIiQGGndv0eMRVGWcafH1AgtXUuXk.jpg?width=320&crop=smart&auto=webp&s=076f77b080ab63ff722867f555b6bff5405744e8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/l4xYfRJ9MJu2EVsIiQGGndv0eMRVGWcafH1AgtXUuXk.jpg?width=640&crop=smart&auto=webp&s=378faf360c7bde91f190b7dfcd326c415eb7351e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/l4xYfRJ9MJu2EVsIiQGGndv0eMRVGWcafH1AgtXUuXk.jpg?width=960&crop=smart&auto=webp&s=e140cef23f13fa463cceaf3e19ec393ad207c7f7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/l4xYfRJ9MJu2EVsIiQGGndv0eMRVGWcafH1AgtXUuXk.jpg?width=1080&crop=smart&auto=webp&s=7ab42ccd0b1b6aa4dbc69b230e8d73d03084e8f7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/l4xYfRJ9MJu2EVsIiQGGndv0eMRVGWcafH1AgtXUuXk.jpg?auto=webp&s=52d751d67300c7131b4920f1ec89ae388b6ec97e', 'width': 1200}, 'variants': {}}]}
|
A recursive transformer (just a wild idea)
| 1 |
I've read many times that a ton of the weights in typical models do not do much, and that models can benefit from more training, but there are problems with overfitting and catastrophic forgetting.
Also, there is that infamous "stack more layers" paradigm which, as far as I understand, few agree on HOW it actually works — but more layers means more parameters and more RAM to run the model.
Sooo... I know this may be completely crazy from an ML point of view, but can you take, say, a 13b model and, like, "loop" it two times during training and inference, and reap the benefits of something like a ~30b model while keeping the 13b memory footprint? Or 3/4/etc?
Yeah, you'd get half the tokens per second, but larger models are slower anyway, and you'd be able to fit a potentially "smarter" one into smaller VRAM, which is precious right now.
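For what it's worth, this idea does exist in the literature as parameter sharing across depth — Universal Transformers apply the same block recurrently, and ALBERT ties weights between layers. A framework-free toy sketch of the core trick (the shapes and the residual+tanh "block" here are purely illustrative, not any real architecture):

```python
import math
import random

def make_block(dim, seed=0):
    """One 'layer' with fixed random weights; the closure keeps them shared."""
    rng = random.Random(seed)
    W = [[rng.uniform(-0.1, 0.1) for _ in range(dim)] for _ in range(dim)]
    def block(x):
        y = [sum(x[j] * W[j][i] for j in range(dim)) for i in range(dim)]
        return [math.tanh(a + b) for a, b in zip(x, y)]  # residual + nonlinearity
    return block

block = make_block(8)
x = [1.0] * 8
for _ in range(2):   # "stack more layers" with no extra parameters:
    x = block(x)     # run the same block twice (~2x depth, 1x weight memory)
```

As in the post's intuition, depth doubles but the weight memory does not — the open question is how much "smarter" that actually makes the model.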
| 2023-09-05T19:06:42 |
https://www.reddit.com/r/LocalLLaMA/comments/16awyld/a_recursive_transformer_just_a_wild_idea/
|
BalorNG
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16awyld
| false | null |
t3_16awyld
|
/r/LocalLLaMA/comments/16awyld/a_recursive_transformer_just_a_wild_idea/
| false | false |
self
| 1 | null |
Affordable cloud to deploy a basic llm pipeline as API backend?
| 1 |
To narrow down the question: I'm particularly thinking of a quantized 13B model for a personal project that doesn't involve too many requests, so it doesn't matter if it's "slow". To put it another way, it doesn't need to serve many users concurrently; it's not a chatbot use case, it's more of a summarization pipeline.
Thanks in advance.
| 2023-09-05T19:17:12 |
https://www.reddit.com/r/LocalLLaMA/comments/16ax8cu/affordable_cloud_to_deploy_a_basic_llm_pipeline/
|
Responsible_Warning3
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ax8cu
| false | null |
t3_16ax8cu
|
/r/LocalLLaMA/comments/16ax8cu/affordable_cloud_to_deploy_a_basic_llm_pipeline/
| false | false |
self
| 1 | null |
Streamlit launches LLM Hackathon 🧠
| 1 |
Streamlit just launched its [latest hackathon](https://hubs.la/Q020TdDj0) focused on large language models and AI 🚀
Awesome opportunity to build a Streamlit app using LangChain, LlamaIndex, AssemblyAI, Weaviate, or Clarifai, and win cool prizes (AirPods, Yeti microphone, mechanical keyboard, to name a few) – plus, the first 250 folks to enter get a pair of Streamlit socks 🧦
More info on the hackathon [here](https://hubs.la/Q020TdDj0)
[Streamlit LLM Hackathon](https://preview.redd.it/pesr0j6uphmb1.jpg?width=1200&format=pjpg&auto=webp&s=2f6c29c193679108fa35d6f1a75ef8a9f2724a0c)
| 2023-09-05T19:43:34 |
https://www.reddit.com/r/LocalLLaMA/comments/16axxoh/streamlit_launches_llm_hackathon/
|
carolinedfrasca
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16axxoh
| false | null |
t3_16axxoh
|
/r/LocalLLaMA/comments/16axxoh/streamlit_launches_llm_hackathon/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'cOIsV8EFAdNgcu8QXWE034usjIzLh7xTucelMcLjcuQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/qVImSBBedqbXz2tzudizmjsay9lLj8_wx490VAJdUBc.jpg?width=108&crop=smart&auto=webp&s=d81e665e134b57fd868afc6922b40ef38653eb31', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/qVImSBBedqbXz2tzudizmjsay9lLj8_wx490VAJdUBc.jpg?width=216&crop=smart&auto=webp&s=641db623af5150dc50d8d33d5b5b146e7bfd969a', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/qVImSBBedqbXz2tzudizmjsay9lLj8_wx490VAJdUBc.jpg?width=320&crop=smart&auto=webp&s=7efc46706dfed1302fb7edb269ad1fa6af0e5832', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/qVImSBBedqbXz2tzudizmjsay9lLj8_wx490VAJdUBc.jpg?width=640&crop=smart&auto=webp&s=a48b5e082ec4a75fa758694a149274e4d8a3a1c4', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/qVImSBBedqbXz2tzudizmjsay9lLj8_wx490VAJdUBc.jpg?width=960&crop=smart&auto=webp&s=61b19c0a24117caab3673e84d74f079c197d44df', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/qVImSBBedqbXz2tzudizmjsay9lLj8_wx490VAJdUBc.jpg?width=1080&crop=smart&auto=webp&s=c3b575b83a199f9e117a184706d351f551988ffe', 'width': 1080}], 'source': {'height': 628, 'url': 'https://external-preview.redd.it/qVImSBBedqbXz2tzudizmjsay9lLj8_wx490VAJdUBc.jpg?auto=webp&s=3791c49f260f016e93a7ed5b1ce033b97d29cf9d', 'width': 1200}, 'variants': {}}]}
|
|
Streamlit launches LLM Hackathon 🧠
| 1 |
Streamlit just launched its latest hackathon focused on large language models and AI 🚀
Awesome opportunity to build a Streamlit app using LangChain, LlamaIndex, AssemblyAI, Weaviate, or Clarifai, and win cool prizes (AirPods, Yeti microphone, mechanical keyboard, to name a few) – plus, the first 250 folks to enter get a pair of Streamlit socks 🧦
More info on the hackathon [here](https://streamlit.io/community/llm-hackathon-2023)
[Streamlit LLM Hackathon](https://preview.redd.it/8f0ikwheuhmb1.jpg?width=1200&format=pjpg&auto=webp&s=b4b14c65f5e5380945bd9448daf01a494eaab41d)
| 2023-09-05T20:06:47 |
https://www.reddit.com/r/LocalLLaMA/comments/16ayk5z/streamlit_launches_llm_hackathon/
|
carolinedfrasca
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ayk5z
| false | null |
t3_16ayk5z
|
/r/LocalLLaMA/comments/16ayk5z/streamlit_launches_llm_hackathon/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'cOIsV8EFAdNgcu8QXWE034usjIzLh7xTucelMcLjcuQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/qVImSBBedqbXz2tzudizmjsay9lLj8_wx490VAJdUBc.jpg?width=108&crop=smart&auto=webp&s=d81e665e134b57fd868afc6922b40ef38653eb31', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/qVImSBBedqbXz2tzudizmjsay9lLj8_wx490VAJdUBc.jpg?width=216&crop=smart&auto=webp&s=641db623af5150dc50d8d33d5b5b146e7bfd969a', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/qVImSBBedqbXz2tzudizmjsay9lLj8_wx490VAJdUBc.jpg?width=320&crop=smart&auto=webp&s=7efc46706dfed1302fb7edb269ad1fa6af0e5832', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/qVImSBBedqbXz2tzudizmjsay9lLj8_wx490VAJdUBc.jpg?width=640&crop=smart&auto=webp&s=a48b5e082ec4a75fa758694a149274e4d8a3a1c4', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/qVImSBBedqbXz2tzudizmjsay9lLj8_wx490VAJdUBc.jpg?width=960&crop=smart&auto=webp&s=61b19c0a24117caab3673e84d74f079c197d44df', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/qVImSBBedqbXz2tzudizmjsay9lLj8_wx490VAJdUBc.jpg?width=1080&crop=smart&auto=webp&s=c3b575b83a199f9e117a184706d351f551988ffe', 'width': 1080}], 'source': {'height': 628, 'url': 'https://external-preview.redd.it/qVImSBBedqbXz2tzudizmjsay9lLj8_wx490VAJdUBc.jpg?auto=webp&s=3791c49f260f016e93a7ed5b1ce033b97d29cf9d', 'width': 1200}, 'variants': {}}]}
|
|
Completely new to this but does anyone know how to configure this part if I have 10GBs of VRAM (RTX 3080) and 32GBs of RAM? I'm completely clueless and any help would be fantastic thank you
| 1 | 2023-09-05T21:02:14 |
Affectionate-Grab864
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
16b02vs
| false | null |
t3_16b02vs
|
/r/LocalLLaMA/comments/16b02vs/completely_new_to_this_but_does_anyone_know_how/
| false | false | 1 |
{'enabled': True, 'images': [{'id': '_sX8c0sr6X-rqHutp6jn0eOzN5e3Bneo3LFRd3RhX8U', 'resolutions': [{'height': 89, 'url': 'https://preview.redd.it/0dso4rd64imb1.png?width=108&crop=smart&auto=webp&s=ea1b168a5e4290ee154a406f60d16742cc8eb8f7', 'width': 108}, {'height': 178, 'url': 'https://preview.redd.it/0dso4rd64imb1.png?width=216&crop=smart&auto=webp&s=0a4b774de497ae139c9499390d2f40e2a7cfe61a', 'width': 216}, {'height': 265, 'url': 'https://preview.redd.it/0dso4rd64imb1.png?width=320&crop=smart&auto=webp&s=7436a2c785a8342e7cbbaa334b07389543b92929', 'width': 320}, {'height': 530, 'url': 'https://preview.redd.it/0dso4rd64imb1.png?width=640&crop=smart&auto=webp&s=be2d98b64447e525c0ce70f7dfbb8cfe4f23b6e2', 'width': 640}], 'source': {'height': 791, 'url': 'https://preview.redd.it/0dso4rd64imb1.png?auto=webp&s=4ff56cd4d364a00174f36f8afb23eb6d7de927d6', 'width': 955}, 'variants': {}}]}
|
|||
Anyone know how to get current version koboldcpp to use GPU on an M2 Mac?
| 1 |
[removed]
| 2023-09-05T21:09:54 |
https://www.reddit.com/r/LocalLLaMA/comments/16b0ajj/anyone_know_how_to_get_current_version_koboldcpp/
|
Virtamancer
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16b0ajj
| false | null |
t3_16b0ajj
|
/r/LocalLLaMA/comments/16b0ajj/anyone_know_how_to_get_current_version_koboldcpp/
| false | false |
self
| 1 | null |
The more I write in the chat, the longer the robot thinks
| 1 |
Do I understand correctly that it is constantly re-reading the previous messages in the dialog? Is their length adjusted somewhere? I am using Llama-2 under the oobabooga WebUI.
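If it helps, the usual explanation (hedged — details depend on the front-end) is that chat UIs resend the entire conversation as the prompt on every turn, so prompt processing grows with the history until the context limit truncates it. A minimal illustration:

```python
history = []

def build_prompt(user_msg):
    # each turn appends to the history and resends ALL of it
    history.append(f"User: {user_msg}")
    return "\n".join(history) + "\nAssistant:"

lengths = [len(build_prompt(f"message {i}")) for i in range(3)]
# the prompt the model must (re)process keeps growing turn over turn
assert lengths[0] < lengths[1] < lengths[2]
```

In oobabooga, the context/truncation length setting under Parameters governs how much of that history is kept.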
| 2023-09-05T21:31:34 |
https://www.reddit.com/r/LocalLLaMA/comments/16b0whd/the_more_i_write_in_the_chat_the_longer_the_robot/
|
Hatred_grows
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16b0whd
| false | null |
t3_16b0whd
|
/r/LocalLLaMA/comments/16b0whd/the_more_i_write_in_the_chat_the_longer_the_robot/
| false | false |
self
| 1 | null |
Inside Meta's AI Drama: internal feuds over compute power, and a rumor on Llama 3
| 1 |
This exclusive comes from The Information, the same publication that has published other exclusive articles on Meta AI news like the formerly unknown releases of Llama 2 and Code Llama.
The [full article](https://www.theinformation.com/articles/inside-metas-ai-drama-internal-feuds-over-compute-power) is paywalled, but it provides some insights to what's going on behind the scenes with Meta and Llama and potentially confirms an earlier rumor about Llama 3.
**TLDR/biggest takeaways:**
* Llama 1 started development approximately in July 2022, months before ChatGPT's release.
* Meta AI has struggled with providing computing resources to its two major LLM development teams, OPT and Llama. The Llama team, based in Paris, believed smaller models would be more efficient and felt their work was overlooked compared to the North American OPT team that received more resources. This caused low morale and tensions.
* A week before Llama's release, Meta's FAIR (Fundamental AI Research) Paris chief left the company and the future of Llama was in doubt, but the OPT team wasn't faring any better. Roughly half of the 19 OPT paper authors had left Meta by this time, so FAIR leaders decided to bring together members of the competing LLM teams to focus on Llama 2. This process was complete by February, and the large model in development by the OPT team was abandoned.
* Meta has been dealing with companywide layoffs, and FAIR has been no exception to losing researchers, including by defection. More than half of the 14 authors of the original Llama research paper have left the company, and the entire FAIR team that worked on Llama 2 was moved over to a new team to work on Meta's AI technologies for its apps. As already known by Mark Zuckerberg's comments on Meta AI and confirmed by Joelle Pineau, head of FAIR, this team for proprietary apps is separate from open source development. However, this does not mean work on future Llama models has stopped.
**Related news for Llama 3** and some thoughts:
The article briefly mentioned a recent generative AI social hour at Meta's campus, where the company gave llama plushies to attendees. There was some speculation about whether this event happened or not based on tweets by industry insiders, but this article confirms it did. This lends credence to Jason Wei's Llama 3 rumor. Jason Wei is a researcher at OpenAI who recently had [this](https://twitter.com/agikoala/status/1695125016764157988) to say about the social hour and Llama 3:
>Overheard at a Meta GenAI social:
>
>"We have compute to train Llama 3 and 4. **The plan is for Llama-3 to be as good as GPT-4.**"
>
>"Wow, if Llama-3 is as good as GPT-4, will you guys still open source it?"
>
>"Yeah we will. Sorry alignment people."
I wouldn't tell anyone to hold their breath, but this news could be gamechanging if true. It brings into question a recent [quote](https://www.engadget.com/meta-could-soon-make-its-ai-model-available-for-commercial-projects-114021749.html) from Yann LeCunn:
>The competitive landscape of AI is going to completely change in the coming months, in the coming weeks maybe, when there will be open source platforms that are actually as good as the ones that are not," Yann LeCun, Meta's vice-president and Chief AI scientist, said at a July conference.
When he said this, many readers assumed this only meant Llama 2, but it now seems possible that he could have been referring to Llama 3 as well. 'In the coming weeks' would have referred to Llama 2, which released that month and we know is obviously not GPT-4, but 'in the coming months' could have been speculation or hints on the capabilities of a newer model, the Llama 3 model.
Since FAIR doesn't have to split compute between two competing LLM teams anymore, it can all go toward Llama. They have a pattern of training for close to half a year and releasing as soon as it's done or almost as soon as it's done. Llama 2 was from January to July and released in July, Code Llama was from January to July and released in August, and Llama 1 was presumably from July 2022 to January 2023 and released in February. Llama 3, with more compute dedicated to it, could plausibly have started training in July and release this year. Llama 4, if it happens, would be a hypothetical model for next year.
Take this rumor with a grain of salt, but I'm sure Meta knows this is their prime chance to entrench Llama in the LLM ecosystem. GPT-4 at home would undoubtedly be enough for most people until the goalposts shift again.
| 2023-09-05T21:43:56 |
https://www.reddit.com/r/LocalLLaMA/comments/16b18tj/inside_metas_ai_drama_internal_feuds_over_compute/
|
llamaShill
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16b18tj
| false | null |
t3_16b18tj
|
/r/LocalLLaMA/comments/16b18tj/inside_metas_ai_drama_internal_feuds_over_compute/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'LpaR91r08St8Yr52aV2kbsZKrvhW8kNpMnxey-dg2oY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/mR-vEFiBD85lbvnnnbksbGl519DXC9qrIbWto7j0S6E.jpg?width=108&crop=smart&auto=webp&s=a7e3fa55bf1e8216fb0765df3242f8463718129b', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/mR-vEFiBD85lbvnnnbksbGl519DXC9qrIbWto7j0S6E.jpg?width=216&crop=smart&auto=webp&s=aeaa4821f54cdfb11ea42280245c80565beeb281', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/mR-vEFiBD85lbvnnnbksbGl519DXC9qrIbWto7j0S6E.jpg?width=320&crop=smart&auto=webp&s=f6df88edd04a00b2608199ac7f752e5da864419d', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/mR-vEFiBD85lbvnnnbksbGl519DXC9qrIbWto7j0S6E.jpg?width=640&crop=smart&auto=webp&s=2d596e1c5b53ee85e48a60ce963746152c00e517', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/mR-vEFiBD85lbvnnnbksbGl519DXC9qrIbWto7j0S6E.jpg?width=960&crop=smart&auto=webp&s=9f9392100ce3e8d1e1aebfa4f91fd947639ac7aa', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/mR-vEFiBD85lbvnnnbksbGl519DXC9qrIbWto7j0S6E.jpg?width=1080&crop=smart&auto=webp&s=aeb34523a57e0ce42bb40a51259cab3e04ef7a28', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/mR-vEFiBD85lbvnnnbksbGl519DXC9qrIbWto7j0S6E.jpg?auto=webp&s=79bc9491fc2e6c42fb51e4351c9f1f53cbbcd7b6', 'width': 1200}, 'variants': {}}]}
|
I built a very simple TPS (tokens per second) visualizer so that you don't have to
| 1 |
Hi all,
A key question I've often come across whilst researching what hardware I should procure for my home server, is how many tokens per second a given GPU combination is able to generate for a given model.
The acceptability of said TPS always seems to be subjective, so I wanted to have an easy way to visualize this.
I wrote this very simple static app which accepts a TPS value, and prints random tokens of 2-4 characters, linearly over the course of a second.
[https://tokens-per-second-visualizer.tiiny.site](https://tokens-per-second-visualizer.tiiny.site)
I've been using it to determine what TPS I'd be happy with, so I thought I'd share in case it would be helpful for you as well.
This is the full source code if you're interested in extending its functionality.
```html
<!DOCTYPE html>
<html>
<head>
<title>Token Per Second Visualizer</title>
<link
href="https://cdn.jsdelivr.net/npm/[email protected]/dist/css/bootstrap.min.css"
rel="stylesheet">
</head>
<body>
<div class="container my-4">
<h1 class="text-center">Token Per Second Visualizer</h1>
<div class="my-4">
<input type="number" id="tps" class="form-control"
placeholder="Enter tokens per second">
<button class="btn btn-primary my-2"
onclick="start()">Prompt</button>
</div>
<textarea id="output" class="form-control" rows="10"
readonly></textarea>
</div>
<script>
  let interval;

  // Emit `tps` random 2-4 character tokens, spread linearly over one second.
  function emitTokens(tps, output) {
    for (let i = 0; i < tps; i++) {
      setTimeout(() => {
        const tokenLength = Math.floor(Math.random() * 3) + 2;
        const token = Array.from({ length: tokenLength }, () =>
          String.fromCharCode(97 + Math.floor(Math.random() * 26))).join('');
        output.value += token + ' ';
      }, (1000 / tps) * i);
    }
  }

  function start() {
    clearInterval(interval);
    const tps = parseInt(document.getElementById('tps').value, 10);
    const output = document.getElementById('output');
    output.value = '';
    emitTokens(tps, output);                                     // first burst immediately
    interval = setInterval(() => emitTokens(tps, output), 1000); // then once per second
  }
</script>
</body>
</html>
```
| 2023-09-05T21:59:32 |
https://www.reddit.com/r/LocalLLaMA/comments/16b1nvp/i_built_a_very_simple_tps_tokens_per_second/
|
iBifteki
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16b1nvp
| false | null |
t3_16b1nvp
|
/r/LocalLLaMA/comments/16b1nvp/i_built_a_very_simple_tps_tokens_per_second/
| false | false |
self
| 1 | null |
[Request] Instructions on how to convert models to the GGUF format.
| 1 |
I need to process large prompts, and there's almost no support for large-context models. I haven't found any GUI frontend that supports more than 2K context, and llama.cpp requires the GGUF format, of which there are very few models, almost none for large contexts.
I tried using the provided conversion tool but it doesn't work. I know /u/fun_tangerine_1086 has been doing conversions [here](https://www.reddit.com/r/LocalLLaMA/comments/15zvxta/converting_some_models_to_gguf_formats_from/), but I need to be able to do them myself and not depend on someone else's goodwill and time.
If anyone can explain what the process is, that'd be awesome.
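Not authoritative, but the flow I understand for a local clone of llama.cpp is its convert.py script followed by the quantize binary; the script names, flags, and paths below are assumptions from memory, so check --help in your checkout. This sketch only builds the commands (uncomment the loop to actually run them from the llama.cpp directory):

```python
import subprocess  # only needed if you uncomment the run loop below

def convert_cmd(hf_model_dir, outfile):
    # step 1 (assumed): HF/PyTorch checkpoint -> f16 GGUF via llama.cpp's convert.py
    return ["python", "convert.py", hf_model_dir,
            "--outtype", "f16", "--outfile", outfile]

def quantize_cmd(f16_gguf, out_gguf, qtype="q4_K_M"):
    # step 2 (assumed): f16 GGUF -> quantized GGUF via the quantize binary
    return ["./quantize", f16_gguf, out_gguf, qtype]

cmds = [convert_cmd("path/to/hf-model", "model-f16.gguf"),
        quantize_cmd("model-f16.gguf", "model-q4_K_M.gguf")]
# for cmd in cmds:
#     subprocess.run(cmd, check=True)
```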
| 2023-09-05T22:40:05 |
https://www.reddit.com/r/LocalLLaMA/comments/16b2qou/request_instructions_on_how_to_convert_models_to/
|
jungle
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16b2qou
| false | null |
t3_16b2qou
|
/r/LocalLLaMA/comments/16b2qou/request_instructions_on_how_to_convert_models_to/
| false | false |
self
| 1 | null |
Good low-key video Jerry Kaplan, Why Generative AI is Truly Revolutionary
| 1 |
[https://www.youtube.com/watch?v=h7AgTzWMhUc](https://www.youtube.com/watch?v=h7AgTzWMhUc)
ChatGPT summary:
* Jerry Kaplan, a technology expert with a Ph.D. in computer science and a background in entrepreneurship, discusses generative AI in a YouTube conversation.
* Generative AI represents advanced artificial general intelligence (AGI) capable of human-like intelligence and creativity.
* Recent breakthroughs in natural language processing, increased computing power, and the Transformer architecture contribute to the advancement of generative AI.
* Generative AI can continuously learn, curate data, and solve complex problems in various domains.
* Concerns about AI's downsides in politics and the environment are acknowledged, but the transformative potential of AI is highlighted.
* Kaplan discusses existential risks associated with AI, emphasizing the unlikely sudden rise of a new form of life.
* The impact of AI on labor markets is explored, emphasizing historical patterns of job displacement and increased productivity.
* AI's potential to complement human capabilities, especially in writing and creativity, is noted.
* The question of machine consciousness and its differences from human consciousness is discussed, particularly regarding the perception of time.
* Challenges related to AI, including deep fakes and the need for authentication, are addressed.
* AI's role in creating a protective shield against misinformation is considered.
* Kaplan envisions a future where machines handle most work, allowing humans to focus on purpose, meaning, and relationships.
* The discussion concludes with mention of Jerry Kaplan's upcoming book, "Generative Artificial Intelligence: What Everyone Needs to Know."
| 2023-09-06T00:50:59 |
https://www.reddit.com/r/LocalLLaMA/comments/16b5xay/good_lowkey_video_jerry_kaplan_why_generative_ai/
|
FPham
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16b5xay
| false | null |
t3_16b5xay
|
/r/LocalLLaMA/comments/16b5xay/good_lowkey_video_jerry_kaplan_why_generative_ai/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '5ibR4gyccUh40J0K6twf60h-aMSWtSm_YrTtqCyNgyw', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/smTj6ha57uAKiBWPm-kKz3E8qKTck10dFpaTIWCC5nY.jpg?width=108&crop=smart&auto=webp&s=9e8924dd9a11ea782d460a5521960c8056d50d15', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/smTj6ha57uAKiBWPm-kKz3E8qKTck10dFpaTIWCC5nY.jpg?width=216&crop=smart&auto=webp&s=9f0f164bf2512ad590f506d5cfb9b8bb30177fb3', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/smTj6ha57uAKiBWPm-kKz3E8qKTck10dFpaTIWCC5nY.jpg?width=320&crop=smart&auto=webp&s=46a8966232dcbda4e5026fec9a849f259d271c6e', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/smTj6ha57uAKiBWPm-kKz3E8qKTck10dFpaTIWCC5nY.jpg?auto=webp&s=912498ccbf8fdc1ed1567ed91a6a94c083dc219d', 'width': 480}, 'variants': {}}]}
|
llama.cpp/Koboldcpp hipBLAS compile under Windows
| 1 |
Has anyone been able to compile llama.cpp or koboldcpp with hipBLAS enabled under Windows? I did get llama-cpp-python working with hipBLAS (ROCm) under Linux and it works great, but I have not been able to get it to compile on Windows. Under Linux at least, llama.cpp even splits the VRAM usage pretty much equally across dual cards. I have a pair of W6800s with 32GB each on my Win 10 machine and want to see how they perform. The only things I have changed are the location of clang and clang++ in the Makefile, along with adding an env variable pointing to the root folder that ROCm is installed in. I'm really not sure that anyone has actually compiled it for Windows at all, so please let me know. Here is the error that I get:
clang: error: unsupported option '-fPIC' for target 'x86_64-pc-windows-msvc'
(the same error is printed several times, interleaved from parallel clang/clang++ invocations)
Pretty sure I just need to update the compile options.
| 2023-09-06T01:27:12 |
https://www.reddit.com/r/LocalLLaMA/comments/16b6r73/llamacppkoboldcpp_hipblas_compile_under_windows/
|
ccbadd
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16b6r73
| false | null |
t3_16b6r73
|
/r/LocalLLaMA/comments/16b6r73/llamacppkoboldcpp_hipblas_compile_under_windows/
| false | false |
self
| 1 | null |
Eval mmlu result against various infer methods (HF_Causal, VLLM, AutoGPTQ, AutoGPTQ-exllama)
| 1 |
I modified declare-lab's instruct-eval scripts to add support for vLLM and AutoGPTQ (and the new AutoGPTQ supports ExLlama now), and tested the MMLU results. I also added support for fastllm (which can accelerate ChatGLM2-6B).
The code is here [https://github.com/declare-lab/instruct-eval](https://github.com/declare-lab/instruct-eval) — I'd welcome reports of any errors in that code.
All GPTQ models are 4bit_32g_actor, quantized with wikitext2; all tests were run on CUDA 11.7, Ubuntu 18.04, on a V100 GPU.
The results are below; FP16 was run using HF's causal LM with model.half().
[mmlu score](https://preview.redd.it/mpwhykutmjmb1.png?width=1172&format=png&auto=webp&s=328e2bf75a9b43e433a42e07870cb31d126aa613)
Fastllm results, which are better than the original for ChatGLM2 but have some problems for Qwen:
https://preview.redd.it/o2ummgcgnjmb1.png?width=643&format=png&auto=webp&s=2afd70c83ac636fc3f9c0852f4f151253f4d13c0
| 2023-09-06T02:12:07 |
https://www.reddit.com/r/LocalLLaMA/comments/16b7qr1/eval_mmlu_result_against_various_infer_methods_hf/
|
bash99Ben
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16b7qr1
| false | null |
t3_16b7qr1
|
/r/LocalLLaMA/comments/16b7qr1/eval_mmlu_result_against_various_infer_methods_hf/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'evpneWxD5awodnxHnoDwaty2jWBMEu9gPDFPExqpN6s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pNvFkZHQgmonjE72eRiWdFT17dYj30GAq16lFaaNDxI.jpg?width=108&crop=smart&auto=webp&s=57d7328218559ecfa1273e7865a5151292dbb450', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/pNvFkZHQgmonjE72eRiWdFT17dYj30GAq16lFaaNDxI.jpg?width=216&crop=smart&auto=webp&s=0a525fedf7c6822ba15f19c2a09747f88b22b442', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/pNvFkZHQgmonjE72eRiWdFT17dYj30GAq16lFaaNDxI.jpg?width=320&crop=smart&auto=webp&s=342a22c27abc3a720b4e2081842b8b5b0d652626', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/pNvFkZHQgmonjE72eRiWdFT17dYj30GAq16lFaaNDxI.jpg?width=640&crop=smart&auto=webp&s=f3ba884c0d90536a04fa452c57f0f1acc896a81e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/pNvFkZHQgmonjE72eRiWdFT17dYj30GAq16lFaaNDxI.jpg?width=960&crop=smart&auto=webp&s=4a0993cdd96eea9c217a4ea34ebc80b8c153f095', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/pNvFkZHQgmonjE72eRiWdFT17dYj30GAq16lFaaNDxI.jpg?width=1080&crop=smart&auto=webp&s=2573782625888480bd56bc08119c778de3b01dc9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/pNvFkZHQgmonjE72eRiWdFT17dYj30GAq16lFaaNDxI.jpg?auto=webp&s=3a66a13e1ecf0fa867b91d8eb90fb448d8db6d22', 'width': 1200}, 'variants': {}}]}
|
|
Inconsistency of carbon footprint in LLaMA 1 paper
| 1 |
When I was reading the LLaMA 1 paper, it mentioned their power consumption and carbon emitted:
This means that developing these models would have cost around 2,638 MWh under our assumptions, and a total emission of 1,015 tCO2eq.
However, Table 15 does not state the same information:
https://preview.redd.it/1b23zuy2pjmb1.png?width=2014&format=png&auto=webp&s=e3fd7eebecd9ffc887a1fdf95bd20aefe903e97f
The combined power consumption and carbon emitted for all llama models in the table are 777 MWh and 300 tCO2eq, respectively. This clearly does not align with the previous information.
Could I have misunderstood something?
| 2023-09-06T02:26:53 |
https://www.reddit.com/r/LocalLLaMA/comments/16b829c/inconsistency_of_carbon_footprint_in_llama_1_paper/
|
secsilm
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16b829c
| false | null |
t3_16b829c
|
/r/LocalLLaMA/comments/16b829c/inconsistency_of_carbon_footprint_in_llama_1_paper/
| false | false | 1 | null |
|
Can I replace ChatGPT with local models yet?
| 1 |
I'm not particularly literate on the topic of LLM metrics, so I'm here because I'm wondering if there are any local ChatGPT alternatives I can set up today that could largely substitute for either GPT-3.5 or GPT-4. Note that I'm not talking about just LLaMA, I'm open to anything really.
| 2023-09-06T02:29:35 |
https://www.reddit.com/r/LocalLLaMA/comments/16b84b6/can_i_replace_chatgpt_with_local_models_yet/
|
88sSSSs88
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16b84b6
| false | null |
t3_16b84b6
|
/r/LocalLLaMA/comments/16b84b6/can_i_replace_chatgpt_with_local_models_yet/
| false | false |
self
| 1 | null |
SQL analysis
| 1 |
What's the best open-source LLM for building a text-to-SQL query builder, analyzing the SQL, and also querying documents? I heard that Llama 2 is not so good at coding. Also, can I use two different LLMs for different endpoints in my API? For example, GPT-3.5 for generating code and Llama 2 for querying documents? Thanks
| 2023-09-06T02:32:44 |
https://www.reddit.com/r/LocalLLaMA/comments/16b86ry/sql_analysis/
|
laveriaroha
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16b86ry
| false | null |
t3_16b86ry
|
/r/LocalLLaMA/comments/16b86ry/sql_analysis/
| false | false |
self
| 1 | null |
Can or does anyone use LLMs to provide feedback on fiction you write?
| 1 |
Also, can it be trusted to be honest and objective, or will it always say "good job"?
| 2023-09-06T02:55:41 |
https://www.reddit.com/r/LocalLLaMA/comments/16b8o9a/can_or_does_anyone_use_llms_to_provide_feedback/
|
poet3991
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16b8o9a
| false | null |
t3_16b8o9a
|
/r/LocalLLaMA/comments/16b8o9a/can_or_does_anyone_use_llms_to_provide_feedback/
| false | false |
self
| 1 | null |
Is it better to continue pretrain LLM for a domain then do instruction fine-tune or do instruction fine-tune using question-answer pairs for that domain immediately?
| 1 |
Let's say for example, I want to create a chat bot that answer questions about cars. Should I first continue train the LLM on a corpus containing documents about cars then do instruction fine-tuning? Or I can do instruction fine-tuning on question answer pairs about cars immediately?
I think that the first method requires less instruction data than the second method and instruction data are harder to collect so the first method should be preferred in most cases. However, in cases where I don't have a domain corpus but have a large amount question answer pairs in that domain, can instruction fine-tuning on this data achieve similar/better result?
| 2023-09-06T04:41:04 |
https://www.reddit.com/r/LocalLLaMA/comments/16barbk/is_it_better_to_continue_pretrain_llm_for_a/
|
scienceotaku68
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16barbk
| false | null |
t3_16barbk
|
/r/LocalLLaMA/comments/16barbk/is_it_better_to_continue_pretrain_llm_for_a/
| false | false |
self
| 1 | null |
ChatGPT-4 has gotten better for me over time
| 1 |
First off, I know this is r/LocalLLaMA, but I see a lot of comments about ChatGPT worsening, whether it's due to OpenAI "making it worse", quantization hurting performance, etc.
But for me, it's gotten better at translations, coding tasks, math questions, and really just anything I throw at it. No hate towards opensource or Llama as I want it to succeed just as much as everyone else, but none of the Llama 2 models come close to being good enough to drop paying $20 a month and likely pay even more to locally run a good model at acceptable speeds or rent from a cloud service. That's even ignoring the features like "Custom instructions" which works 100% of the time for me as well.
I'd love for the people who say it's worsened to hop on playground and test the March model to see if it really has worsened. I'd also love it if people specified their prompts and what version of ChatGPT they used. ChatGPT-3.5 hasn't really gotten worse for me either, but it's been pretty terrible since the beginning so that's not saying much.
Sorry, again not trying to hate on open-source models or shill ClosedAI, but it makes my eye twitch a little every time I see this comment and I have to wonder if users are using ChatGPT wrong or I'm doing something wrong to have Llama not perform as well as I'd hoped.
| 2023-09-06T04:46:42 |
https://www.reddit.com/r/LocalLLaMA/comments/16bav14/chatgpt4_has_gotten_better_for_me_over_time/
|
JuicyStandoffishMan
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16bav14
| false | null |
t3_16bav14
|
/r/LocalLLaMA/comments/16bav14/chatgpt4_has_gotten_better_for_me_over_time/
| false | false |
self
| 1 | null |
Someone managed to connect local LLM's to the Internet
| 1 |
[removed]
| 2023-09-06T05:21:41 |
https://www.reddit.com/r/LocalLLaMA/comments/16bbin2/someone_managed_to_connect_local_llms_to_the/
|
No_Marionberry312
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16bbin2
| false | null |
t3_16bbin2
|
/r/LocalLLaMA/comments/16bbin2/someone_managed_to_connect_local_llms_to_the/
| false | false | 1 | null |
|
How does Llama-2 perform in sentiment analysis?
| 1 |
Hey guys, if you have explored using Llama 2 for sentiment analysis, I just wanted to hear your experience of how Llama 2 performs at this task.
I have tried using GPT and it’s pretty accurate.
If Llama-2 isn’t all that good in sentiment analysis, which other open LLM would you recommend?
Thanks heaps!
| 2023-09-06T06:30:25 |
https://www.reddit.com/r/LocalLLaMA/comments/16bcpn1/how_does_llama2_perform_in_sentiment_analysis/
|
--leockl--
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16bcpn1
| false | null |
t3_16bcpn1
|
/r/LocalLLaMA/comments/16bcpn1/how_does_llama2_perform_in_sentiment_analysis/
| false | false |
self
| 1 | null |
How can I load a local model and play with it?
| 1 |
[removed]
| 2023-09-06T07:26:31 |
https://www.reddit.com/r/LocalLLaMA/comments/16bdmmh/how_can_i_load_a_local_model_and_play_with_it/
|
william_luckybob
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16bdmmh
| false |
{'oembed': {'author_name': 'James Briggs', 'author_url': 'https://www.youtube.com/@jamesbriggs', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/6iHVJyX2e50?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen title="Llama 2 in LangChain — FIRST Open Source Conversational Agent!"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/6iHVJyX2e50/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Llama 2 in LangChain — FIRST Open Source Conversational Agent!', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
|
t3_16bdmmh
|
/r/LocalLLaMA/comments/16bdmmh/how_can_i_load_a_local_model_and_play_with_it/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'YciGqF6lm8xszzTBA-e2GH9xMEAXy2SADgKjOWd7OLc', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/xIj5386wZ9s0e27dzMbFn2VI0pjC8-_FXNvxuxVIeAo.jpg?width=108&crop=smart&auto=webp&s=1cbdfc0fb58ac0aac17289e6064ca16f7ddc52b2', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/xIj5386wZ9s0e27dzMbFn2VI0pjC8-_FXNvxuxVIeAo.jpg?width=216&crop=smart&auto=webp&s=5eb3e500c45d81a472235172ef66c15739a87d54', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/xIj5386wZ9s0e27dzMbFn2VI0pjC8-_FXNvxuxVIeAo.jpg?width=320&crop=smart&auto=webp&s=23ed3e66a6bf198685d24a3ef0beffcce665bb8e', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/xIj5386wZ9s0e27dzMbFn2VI0pjC8-_FXNvxuxVIeAo.jpg?auto=webp&s=7cfb901bdda9da6de6f6d072e5fed4b069aa7430', 'width': 480}, 'variants': {}}]}
|
|
Which framework would you use to build a RAG using llama-2?
| 1 |
[removed]
| 2023-09-06T07:33:23 |
https://www.reddit.com/r/LocalLLaMA/comments/16bdqs0/which_framework_would_you_use_to_build_a_rag/
|
front-equal
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16bdqs0
| false | null |
t3_16bdqs0
|
/r/LocalLLaMA/comments/16bdqs0/which_framework_would_you_use_to_build_a_rag/
| false | false |
self
| 1 | null |
qlora fine-tuned codellama 34b model
| 1 |
I always thought CodeLlama was pretrained on 1/4 of the original Llama 2 34B, so maybe it can be further fine-tuned for more than just coding. So I took the `codellama/CodeLlama-34b-hf` base model and fine-tuned it on the `timdettmers/openassistant-guanaco` dataset.
Surprisingly, it has become a multilingual chat model that I can now talk to in Chinese :p.
Here is the model weight if you want to try it out:
[https://huggingface.co/mzbac/CodeLlama-34b-guanaco-gptq](https://huggingface.co/mzbac/CodeLlama-34b-guanaco-gptq)
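For reference, the openassistant-guanaco dataset uses a simple `### Human:` / `### Assistant:` turn format; a minimal sketch of building a chat prompt in that style (the helper name is my own, not from the model card):

```python
def build_guanaco_prompt(turns):
    """Format alternating (user, assistant) turns in the
    ### Human: / ### Assistant: style used by openassistant-guanaco.
    Pass None as the last assistant reply to leave an open tag for
    the model to continue from."""
    parts = []
    for user, assistant in turns:
        parts.append(f"### Human: {user}")
        if assistant is not None:
            parts.append(f"### Assistant: {assistant}")
    if turns and turns[-1][1] is None:
        parts.append("### Assistant:")
    return "\n".join(parts)

# e.g. a single open turn, in Chinese since the model turned multilingual:
prompt = build_guanaco_prompt([("你好,你会说中文吗?", None)])
```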
| 2023-09-06T07:54:23 |
https://www.reddit.com/r/LocalLLaMA/comments/16be2vc/qlora_finetuned_codellama_34b_model/
|
mzbacd
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16be2vc
| false | null |
t3_16be2vc
|
/r/LocalLLaMA/comments/16be2vc/qlora_finetuned_codellama_34b_model/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'WdMUdOa1Y0Not51DEOCF2_l9TPzadfYPoCr9OD_hTb0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3MeU-ZZ8c5QqQHGzpGk-uCJS0PIuyTdUIKs_1qYMUnc.jpg?width=108&crop=smart&auto=webp&s=d6617d92a951b966e84a74b0debc3ba3259ba1ad', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/3MeU-ZZ8c5QqQHGzpGk-uCJS0PIuyTdUIKs_1qYMUnc.jpg?width=216&crop=smart&auto=webp&s=b05b0313aaf3fb2cdf6ca417bc5317d50adab2f5', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/3MeU-ZZ8c5QqQHGzpGk-uCJS0PIuyTdUIKs_1qYMUnc.jpg?width=320&crop=smart&auto=webp&s=4da8bdd26ab28ae20f7981f0da35d6bb14b3cc15', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/3MeU-ZZ8c5QqQHGzpGk-uCJS0PIuyTdUIKs_1qYMUnc.jpg?width=640&crop=smart&auto=webp&s=07ee48e34fa4124976d91f9413bc57e93601ea35', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/3MeU-ZZ8c5QqQHGzpGk-uCJS0PIuyTdUIKs_1qYMUnc.jpg?width=960&crop=smart&auto=webp&s=ba5374d514062114e532292ee39a2eb9f3f1b955', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/3MeU-ZZ8c5QqQHGzpGk-uCJS0PIuyTdUIKs_1qYMUnc.jpg?width=1080&crop=smart&auto=webp&s=58577a86e335242828f28f5da7bfae8490287014', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/3MeU-ZZ8c5QqQHGzpGk-uCJS0PIuyTdUIKs_1qYMUnc.jpg?auto=webp&s=62df925d9bafa5051a6e422810f1c8594db3c3e4', 'width': 1200}, 'variants': {}}]}
|
Any good learning resources for fine-tuning, quantization, and LoRA?
| 1 |
Hey all, I have been working with GPT-3 for about 2 years now. I have lots of experience using it in all sorts of Python programs, but my experience is mostly on the surface level; I have only fine-tuned a couple of times, and that was using the OpenAI API. I'd like to take the next step and learn about running models locally, quantization, LoRA and all of that good stuff :P
I'm hoping for any particularly good resources where I can learn about these things, ideally without having to dig through super technical papers. I like super practical stuff best :)
Any pointers here would be awesome!
| 2023-09-06T08:23:31 |
https://www.reddit.com/r/LocalLLaMA/comments/16bekej/any_good_learning_resources_for_finetuning/
|
Chance_Confection_37
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16bekej
| false | null |
t3_16bekej
|
/r/LocalLLaMA/comments/16bekej/any_good_learning_resources_for_finetuning/
| false | false |
self
| 1 | null |
Understanding Retrieval Augmented Generation (RAG)
| 1 |
[removed]
| 2023-09-06T08:41:34 |
https://www.reddit.com/r/LocalLLaMA/comments/16beuqa/understanding_retrieval_augmented_generation_rag/
|
Floatbot_Inc
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16beuqa
| false | null |
t3_16beuqa
|
/r/LocalLLaMA/comments/16beuqa/understanding_retrieval_augmented_generation_rag/
| false | false |
self
| 1 | null |
Weird loss curves when training for the LLM science exam - anyone seen anything similar?
| 1 |
I just finished reading this very interesting post from [fast.ai](https://fast.ai): [https://www.fast.ai/posts/2023-09-04-learning-jumps/](https://www.fast.ai/posts/2023-09-04-learning-jumps/)
They're training an LLM to answer science questions for a Kaggle competition; the dataset is 6,500 question-answer pairs. These are some of the weirdest loss curves I've ever seen, and the hypothesis they arrive at is that the LLM is simply remembering things from the training set really quickly.
I know this community does quite a lot of fine-tuning. Has anyone seen loss curves shaped like these in their own models? I'd be very interested to know if it happens in other areas aside from question answering.
Better yet, anyone got any more ideas for how to alleviate the problem? The post covers all the things I would have thought of (data augmentation, more dropout, etc.). The idea of stochastic depth looks interesting: basically only using a random subset of the network's layers in each mini-batch (I think).
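For what it's worth, the stochastic-depth idea is easy to sketch in miniature. This is a pure-Python toy of my own, not the paper's exact formulation; real implementations operate on residual blocks of a deep network:

```python
import random

def residual_net(x, layers, drop_prob=0.0, training=True):
    """Toy residual network: each layer is a function whose output is
    added to its input. With stochastic depth, a whole layer is skipped
    with probability drop_prob during training; at inference time every
    layer runs, but its contribution is scaled by the survival
    probability to keep expectations consistent."""
    for layer in layers:
        if training and random.random() < drop_prob:
            continue                      # skip the whole residual block
        scale = 1.0 if training else (1.0 - drop_prob)
        x = x + scale * layer(x)
    return x
```

So in any given mini-batch the network is effectively shallower, which is the regularizing effect the post is hinting at.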
​
​
| 2023-09-06T09:17:40 |
https://www.reddit.com/r/LocalLLaMA/comments/16bferl/weird_loss_curves_when_training_for_the_llm/
|
heisenbork4
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16bferl
| false | null |
t3_16bferl
|
/r/LocalLLaMA/comments/16bferl/weird_loss_curves_when_training_for_the_llm/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'E25NtJMen0qHD0SG4k9ae4JyjU3zsRRlY0XGd5ykMMk', 'resolutions': [{'height': 49, 'url': 'https://external-preview.redd.it/d9ocJ8clzyJgAzJNpbT8rl6Ck4fuuuGgtyVtLxqCEsw.jpg?width=108&crop=smart&auto=webp&s=3ff6cefde0542398c2dc99595a9f5b29e30a90d5', 'width': 108}, {'height': 99, 'url': 'https://external-preview.redd.it/d9ocJ8clzyJgAzJNpbT8rl6Ck4fuuuGgtyVtLxqCEsw.jpg?width=216&crop=smart&auto=webp&s=cb13e3415ea351711e1612b69f492a80f4872944', 'width': 216}, {'height': 146, 'url': 'https://external-preview.redd.it/d9ocJ8clzyJgAzJNpbT8rl6Ck4fuuuGgtyVtLxqCEsw.jpg?width=320&crop=smart&auto=webp&s=135f33c32f4f11927218be83aca80e29b7357a3f', 'width': 320}, {'height': 293, 'url': 'https://external-preview.redd.it/d9ocJ8clzyJgAzJNpbT8rl6Ck4fuuuGgtyVtLxqCEsw.jpg?width=640&crop=smart&auto=webp&s=f18b421777d7433a740db39b7bcae22251473aef', 'width': 640}, {'height': 440, 'url': 'https://external-preview.redd.it/d9ocJ8clzyJgAzJNpbT8rl6Ck4fuuuGgtyVtLxqCEsw.jpg?width=960&crop=smart&auto=webp&s=a4c6cfb06de2b709d05c2151265d0044a1d1ee75', 'width': 960}, {'height': 495, 'url': 'https://external-preview.redd.it/d9ocJ8clzyJgAzJNpbT8rl6Ck4fuuuGgtyVtLxqCEsw.jpg?width=1080&crop=smart&auto=webp&s=09f444b48c22df8a51296656babfad69bc079d11', 'width': 1080}], 'source': {'height': 505, 'url': 'https://external-preview.redd.it/d9ocJ8clzyJgAzJNpbT8rl6Ck4fuuuGgtyVtLxqCEsw.jpg?auto=webp&s=042d0d343c351af196b9ca51f35a66bdf3a0ef0e', 'width': 1100}, 'variants': {}}]}
|
Looking for Affordable Alternatives to ChatGPT 4 API - Best LLM APIs Today?
| 1 |
I've been using ChatGPT 4 for a while now, and it's been a fantastic tool for various tasks especially through its API.
So, I'm on the hunt for LLM (Large Language Model) APIs that can offer a similar experience to ChatGPT 4 but at a more affordable price point for a quite big project.
​
Thank you!
| 2023-09-06T09:31:23 |
https://www.reddit.com/r/LocalLLaMA/comments/16bfms3/looking_for_affordable_alternatives_to_chatgpt_4/
|
pacmanpill
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16bfms3
| false | null |
t3_16bfms3
|
/r/LocalLLaMA/comments/16bfms3/looking_for_affordable_alternatives_to_chatgpt_4/
| false | false |
self
| 1 | null |
Was Joi from Blade Runner 2049 a local LLM?
| 1 |
Inspired by another discussion on personal LLMs, I was reminded of that movie. I know that when it came out in 2017, LLMs did not yet exist but as I think back on it, Joi fits the mold of a local LLM as we have them today perfectly and the movie seems to be oddly prophetic (some info from the Wiki rather than the movie).
\- She was initially released in 2029, a reasonable date for more wide adoption of LLMs
\- She was installed in K's apartment and could be transferred onto a mobile device. She showed no sign of being hosted online though she could access online services. She suggested deleting her from the main console to avoid giving away information. Clearly local.
\- Joi's primary role was to communicate with other entities, human, replicant or otherwise, in a manner representing the user's mate or companion. She recorded data "memories" as she ran, allowing her "personality" to change over time. Tell me that's not a waifu in Silly Tavern with Extras enabled.
\- Replicants despised Joi, probably resenting being put on the same level as her. As the prostitute tells her: "I've been inside you. Not so much there as you think." It implies they think that she's not sentient in the same way they are.
\- She's specifically spelled Joi, not Joy. JOI is an acronym for Jerk-off Instructions, something that nobody ever mentions but I find it hard to believe that the writers didn't do this on purpose. Not being corporeal, that's all she can do for K after all without resorting to outside help. LLMs are certainly used in that way today.
\- While sometimes insightful, her responses to K all seem a bit "canned" and too compliant, something I've found that current LLMs also do. We also learn that her default "character card", so to speak, is what prompted her to call K "Joe" rather than an inspiration she'd had.
Honestly, short of the holo tech, I'd say we're 90% of the way to replicating (pun not intended) Joi. We can already use SD in conjunction with the LLM to create images and it's very feasible to have a customizable full motion avatar for an LLM comparable to what's in the movie within a fairly short time.
​
| 2023-09-06T09:43:20 |
https://www.reddit.com/r/LocalLLaMA/comments/16bfu1n/was_joi_from_blade_runner_2049_a_local_llm/
|
Herr_Drosselmeyer
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16bfu1n
| false | null |
t3_16bfu1n
|
/r/LocalLLaMA/comments/16bfu1n/was_joi_from_blade_runner_2049_a_local_llm/
| false | false |
self
| 1 | null |
What's the current best way to deal with memory management?
| 1 |
With long conversations or large documents to parse, what tools are currently being used to help manage memory, fetch&format relevant information, etc... so that you have the important information within the context limits.
I've heard that Microsoft is working on a GitHub Copilot for Docs which is supposed to be able to parse through large documents but I'm curious what's out there now and what methods people use for things like handling very long conversations or stories.
| 2023-09-06T10:16:36 |
https://www.reddit.com/r/LocalLLaMA/comments/16bgeoh/whats_the_current_best_way_to_deal_with_memory/
|
Sixhaunt
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16bgeoh
| false | null |
t3_16bgeoh
|
/r/LocalLLaMA/comments/16bgeoh/whats_the_current_best_way_to_deal_with_memory/
| false | false |
self
| 1 | null |
Why aren't the original llama2 weights more widely available yet?
| 1 |
[removed]
| 2023-09-06T10:35:42 |
https://www.reddit.com/r/LocalLLaMA/comments/16bgrek/why_arent_the_original_llama2_weights_more_widely/
|
AskingForMyMumWhoHDL
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16bgrek
| false | null |
t3_16bgrek
|
/r/LocalLLaMA/comments/16bgrek/why_arent_the_original_llama2_weights_more_widely/
| false | false |
self
| 1 | null |
RAG assisted language translation
| 1 |
I'm seeing more posts suggesting vector DBs/RAG as a more reliable way of increasing knowledge in an LLM, versus LoRA/QLoRA training. I'm curious whether a use case could be implemented with, say, domain-specific language translations in a vector DB that could inform new translations. Could a model look for best-case examples in a vector DB for unusual language usage, jargon, or technical terms? Interested if anyone has tested this.
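One way to test the idea: keep a translation memory of (source, target) pairs, retrieve the closest matches to the new sentence, and feed them to the model as few-shot examples. A minimal sketch, where word-overlap similarity stands in for a real embedding search and the language names are placeholders:

```python
def retrieve_examples(query, translation_memory, k=2):
    """Rank stored (source, target) pairs by word overlap with the
    query and return the top-k as few-shot examples. A real system
    would use embedding similarity from a vector DB instead."""
    def overlap(a, b):
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / max(len(wa | wb), 1)
    ranked = sorted(translation_memory,
                    key=lambda pair: overlap(query, pair[0]),
                    reverse=True)
    return ranked[:k]

def build_translation_prompt(query, examples):
    """Prepend the retrieved pairs as few-shot demonstrations."""
    shots = "\n".join(f"English: {s}\nFrench: {t}" for s, t in examples)
    return f"{shots}\nEnglish: {query}\nFrench:"
```

The retrieved pairs would carry the domain jargon, so even a general model sees the preferred terminology before translating.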
| 2023-09-06T11:00:54 |
https://www.reddit.com/r/LocalLLaMA/comments/16bh6y9/rag_assisted_language_translation/
|
satyaloka93
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16bh6y9
| false | null |
t3_16bh6y9
|
/r/LocalLLaMA/comments/16bh6y9/rag_assisted_language_translation/
| false | false |
self
| 1 | null |
What is your favorite 7B model and why ?
| 1 |
I don't use 7B models much. What do you suggest, and why?
| 2023-09-06T11:16:29 |
https://www.reddit.com/r/LocalLLaMA/comments/16bhhsh/what_is_your_favorite_7b_model_and_why/
|
GG9242
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16bhhsh
| false | null |
t3_16bhhsh
|
/r/LocalLLaMA/comments/16bhhsh/what_is_your_favorite_7b_model_and_why/
| false | false |
self
| 1 | null |
Any benchmarks that prove out GPT4 is getting dumber?
| 1 |
It seems to be widely discussed that performance is dropping for ChatGPT and GPT-4, but is anyone tracking this systematically to prove the hypothesis true or false?
| 2023-09-06T11:52:52 |
https://www.reddit.com/r/LocalLLaMA/comments/16bi7bs/any_benchmarks_that_prove_out_gpt4_is_getting/
|
drivenkey
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16bi7bs
| false | null |
t3_16bi7bs
|
/r/LocalLLaMA/comments/16bi7bs/any_benchmarks_that_prove_out_gpt4_is_getting/
| false | false |
self
| 1 | null |
Best open source model for Function Calling?
| 1 |
Hello all:
Which open-source (fine-tuned) model do you believe is best at "function calling" in its responses, as defined by OpenAI (see [https://openai.com/blog/function-calling-and-other-api-updates](https://openai.com/blog/function-calling-and-other-api-updates))? Essentially, it should be proficient in generating a response to a prompt in the form of well-structured JSON or YAML that can seamlessly feed into another function. (My first idea would be to use WizardCoder or CodeLlama, but I don't really want code-generation capability as such; I want to ensure that the outputs are well-formed.)
Alternatively, do you know of an existing dataset that I could use to further fine-tune to a model to support the function calling capabilities?
Is there any established test for checking response well-formedness compliance?
Thanks
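On the "established test" question: the simplest check is mechanical — try to parse the model's reply as JSON and verify the expected keys are present. A hedged sketch (the helper name is my own):

```python
import json

def extract_json(model_output, required_keys=()):
    """Pull the first {...} block out of a model reply, check that it
    parses as JSON and contains the required keys. Returns the dict,
    or raises ValueError so the caller can retry the generation."""
    start, end = model_output.find("{"), model_output.rfind("}")
    if start == -1 or end <= start:
        raise ValueError("no JSON object found in output")
    obj = json.loads(model_output[start:end + 1])
    missing = [k for k in required_keys if k not in obj]
    if missing:
        raise ValueError(f"missing keys: {missing}")
    return obj
```

Running a validator like this over a batch of prompts gives a crude compliance rate: the fraction of replies that parse and carry the required fields.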
| 2023-09-06T12:10:03 |
https://www.reddit.com/r/LocalLLaMA/comments/16bik9d/best_open_source_model_for_function_calling/
|
devjamc
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16bik9d
| false | null |
t3_16bik9d
|
/r/LocalLLaMA/comments/16bik9d/best_open_source_model_for_function_calling/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '4m2xcMb0X4JK25JBnZkXXd2NcS-Ea_rQDjiPK_cOk14', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/XhPkftsGlIY3gF04BOpT59G5cnN-Gs97zz9vAY0-SyY.jpg?width=108&crop=smart&auto=webp&s=cea26890127ad7fafd99e5d141248ece4a877494', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/XhPkftsGlIY3gF04BOpT59G5cnN-Gs97zz9vAY0-SyY.jpg?width=216&crop=smart&auto=webp&s=8b79ce172d14c0b69c58a8604ac08040df9a246a', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/XhPkftsGlIY3gF04BOpT59G5cnN-Gs97zz9vAY0-SyY.jpg?width=320&crop=smart&auto=webp&s=2ff55addd25e4a6a8df6ace581f97632a51ed7f1', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/XhPkftsGlIY3gF04BOpT59G5cnN-Gs97zz9vAY0-SyY.jpg?width=640&crop=smart&auto=webp&s=46338a4c061c273ccaa8b7502c76c25c4748ed31', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/XhPkftsGlIY3gF04BOpT59G5cnN-Gs97zz9vAY0-SyY.jpg?width=960&crop=smart&auto=webp&s=2fac8523f69ab5343caddeda389c2c8fe0a3fa1e', 'width': 960}], 'source': {'height': 1000, 'url': 'https://external-preview.redd.it/XhPkftsGlIY3gF04BOpT59G5cnN-Gs97zz9vAY0-SyY.jpg?auto=webp&s=6502832bc9ff07e3d4c8054b3c3c331fd13bfdf9', 'width': 1000}, 'variants': {}}]}
|
Falcon180B: authors open source a new 180B version!
| 1 |
Today, Technology Innovation Institute (Authors of Falcon 40B and Falcon 7B) announced a new version of Falcon:
- 180 Billion parameters
- Trained on 3.5 trillion tokens
- Available for research and commercial usage
- Claims similar performance to Bard, slightly below gpt4
Announcement: https://falconllm.tii.ae/falcon-models.html
HF model:
https://huggingface.co/tiiuae/falcon-180B
Note: This is by far the largest open-source modern (released in 2023) LLM, both in terms of parameter count and dataset size.
| 2023-09-06T12:46:38 |
https://www.reddit.com/r/LocalLLaMA/comments/16bjdmd/falcon180b_authors_open_source_a_new_180b_version/
|
Amgadoz
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16bjdmd
| false | null |
t3_16bjdmd
|
/r/LocalLLaMA/comments/16bjdmd/falcon180b_authors_open_source_a_new_180b_version/
| false | false |
self
| 1 | null |
Falcon180B - The Hugging Face Recap Edition
| 1 |
Hi all!
I'm the Chief Llama Officer at HF and I'm here to talk about a different topic... falcons!
Falcon 180B is out, and here is what you might want to know about it!
* The model has 180 billion parameters (duh)
* Trained on 3.5 trillion tokens (almost 2x Llama 2!)
* Trained on 7 million GPU hours (4x Llama 2)
* Both a base and a chat model have been released
* Quality is on par with PaLM-2 Large, above Llama 2 and GPT-3.5 across 13 benchmarks, making this one of the best models publicly known
* License has restrictive conditions -> one cannot provide API services to 3rd parties
* Chat model has super simple format
​
System: Add an optional system prompt here
User: This is the user input
Falcon: This is what the model generates
User: This might be a second turn input
Falcon: and so on
* You need many GPUs for training. With GPTQ/int4 for inference, you need 8 A100s
* Support in transformers (with an existing release, 4.33, which happened today)
* Integrations with bitsandbytes, PEFT, and GPTQ.
* RoPE scaling
* Speculative decoding support
* 4-bit and 8-bit show almost no degradation. Performance across benchmarks remains stable
​
Demo to try it out: [https://huggingface.co/spaces/tiiuae/falcon-180b-demo](https://huggingface.co/spaces/tiiuae/falcon-180b-demo)
Blog post: [https://huggingface.co/blog/falcon-180b](https://huggingface.co/blog/falcon-180b)
​
Enjoy and have a huggastic day!
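The chat format above is simple enough to assemble by hand; a minimal sketch of building a prompt string in that shape (the function name is my own):

```python
def falcon_chat_prompt(turns, system=None):
    """Assemble a Falcon-180B-chat prompt: an optional System line,
    then alternating User:/Falcon: turns. Pass None as the last
    Falcon reply to leave an open tag for the model to complete."""
    lines = []
    if system:
        lines.append(f"System: {system}")
    for user, falcon in turns:
        lines.append(f"User: {user}")
        lines.append(f"Falcon: {falcon}" if falcon is not None else "Falcon:")
    return "\n".join(lines)
```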
| 2023-09-06T13:11:19 |
https://www.reddit.com/r/LocalLLaMA/comments/16bjxut/falcon180b_the_hugging_face_recap_edition/
|
hackerllama
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16bjxut
| false | null |
t3_16bjxut
|
/r/LocalLLaMA/comments/16bjxut/falcon180b_the_hugging_face_recap_edition/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'VnPfz4T_PBvE0aZgbbxKHMpFvaTgKkhfcvRwLUnRubE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=108&crop=smart&auto=webp&s=782b98cf2b42e53ba1106df3e6981f32d2b7c645', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=216&crop=smart&auto=webp&s=004403bb96a0bdf43721038ac58efd0e44c314e5', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=320&crop=smart&auto=webp&s=3fb8f9e84ca4c69daab2536e4627a53cef02d091', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=640&crop=smart&auto=webp&s=a1b6a6f03f7260dec2bf8a1aa2b5db4ad8948691', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=960&crop=smart&auto=webp&s=52e1a32bfa50aa184426188f1100f4422c8da21e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=1080&crop=smart&auto=webp&s=ec39e04a5d28782130f45ac04d4e3158e7dff049', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?auto=webp&s=460d01f934e88241cae9a6f4bacfcd527322b97d', 'width': 1200}, 'variants': {}}]}
|
I got a 64GB Sys-Ram, 4GB Nvidia GPU rig with an i5 3570k, what’s the biggest model I can run?
| 1 |
ChatGPT is only spitting out about 5t/s for my paid account. Hoping I can get something to run with a similar t/s output with this setup.
| 2023-09-06T13:15:05 |
https://www.reddit.com/r/LocalLLaMA/comments/16bk0t6/i_got_a_64gb_sysram_4gb_nvidia_gpu_rig_with_an_i5/
|
JodyShackelford
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16bk0t6
| false | null |
t3_16bk0t6
|
/r/LocalLLaMA/comments/16bk0t6/i_got_a_64gb_sysram_4gb_nvidia_gpu_rig_with_an_i5/
| false | false |
self
| 1 | null |
How to ask LLaMA2 to return short message?
| 1 |
I am building a chat application with LLaMA 2. It works really well. The problem is that I want it to return short messages, but it doesn't seem to follow my instruction.
Here is what I tried:
\- Use a system prompt to tell it to write short messages.
\- Use a max-token limit.
But it still replies with a very long message sometimes.
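When the system prompt and max-token limit aren't respected, a blunt but reliable fallback is post-processing: cut the reply down after generation. A hedged sketch with a naive sentence splitter:

```python
import re

def truncate_reply(text, max_sentences=2):
    """Keep only the first few sentences of the model's reply.
    The regex splitter is naive (it trips on abbreviations etc.),
    but it is often good enough for a chat UI."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return " ".join(sentences[:max_sentences])
```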
| 2023-09-06T13:54:14 |
https://www.reddit.com/r/LocalLLaMA/comments/16bkxsy/how_to_ask_llama2_to_return_short_message/
|
davidtranjs
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16bkxsy
| false | null |
t3_16bkxsy
|
/r/LocalLLaMA/comments/16bkxsy/how_to_ask_llama2_to_return_short_message/
| false | false |
self
| 1 | null |
Is it possible to develop AI in the way of interactive games now?
| 1 |
There used to be interactive games like the e-chicken (Tamagotchi-style virtual pets) where you could raise pets. Is it possible with today's technology to raise real AIs in a similarly interactive way?
| 2023-09-06T14:10:33 |
https://www.reddit.com/r/LocalLLaMA/comments/16blbqo/is_it_possible_to_develop_ai_in_the_way_of/
|
q8019222
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16blbqo
| false | null |
t3_16blbqo
|
/r/LocalLLaMA/comments/16blbqo/is_it_possible_to_develop_ai_in_the_way_of/
| false | false |
self
| 1 | null |
Small request - how would you recreate goblin.tools locally?
| 1 |
So the [goblin.tools](https://goblin.tools) website has been popping up for me on TikTok, but I'd like to have something running locally on a PC if possible. I'm mostly interested in the various to-do tools to help with my notes and my day, though I'm guessing most features and tabs would be a single text prompt away once things get going.
Which model would you use? Up to 13B if possible, ideally 7B, but I can run 13B on CPU.
Would you use ooba? Do I need SillyTavern? That old-ish SillyTavern proxy? Which character card would be suited for this? Hopefully SFW and still helpful. Prompts? Settings? CFG? Built-in prompts or templates? Seems like there's a lot to choose from; hopefully GGUF helps a bit when I update to it.
There are also phone apps, if you want to read the reviews for more insight.
​
Also, I think Notion has similar options now, though I haven't looked too deeply into it; goblin.tools seemed like the right mix of simplicity and features that I'd like to replicate.
| 2023-09-06T14:18:49 |
https://www.reddit.com/r/LocalLLaMA/comments/16bliqc/small_request_how_would_you_recreate_goblintools/
|
ethereal_intellect
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16bliqc
| false | null |
t3_16bliqc
|
/r/LocalLLaMA/comments/16bliqc/small_request_how_would_you_recreate_goblintools/
| false | false |
self
| 1 | null |
How to size local machine
| 1 |
How does one go about sizing a new machine built for a local LLM? If I want to just run an instance of CodeLlama-34b what are the parameters I have to take into account?
| 2023-09-06T14:55:35 |
https://www.reddit.com/r/LocalLLaMA/comments/16bmek9/how_to_size_local_machine/
|
jrdubbleu
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16bmek9
| false | null |
t3_16bmek9
|
/r/LocalLLaMA/comments/16bmek9/how_to_size_local_machine/
| false | false |
self
| 1 | null |
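A quick back-of-envelope for the sizing question above: weight memory scales with parameter count times bytes per weight, plus overhead for KV cache, activations, and CUDA context. A minimal sketch — the 1.2× overhead factor is a rough assumption here, not a measured value:

```python
# Rough VRAM estimate for serving a model at a given quantization.
# The overhead factor (KV cache, activations, CUDA context) is an
# assumed 1.2x here, not a measured number.

def vram_gb(params_billion: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# CodeLlama-34B at common precisions:
for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{vram_gb(34, bits):.0f} GB")
```

By this estimate a 4-bit quant of a 34B model fits comfortably on a single 48 GB A6000, while full fp16 needs multiple cards.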
Not using torrents for distributing models is a huge wasted opportunity
| 1 |
I could be missing something, but isn't it a bit obvious? Models are massive, mostly static, and downloaded by a bunch of peers all over the place; fast download speeds are a big nice-to-have.
I figure we mostly end up using the models hosted by Hugging Face because it's convenient. But it does feel like a centralization point that not only isn't required but also makes our experience a bit worse.
For example, downloading Falcon-180b... that can't be easy on hf servers.
| 2023-09-06T15:16:26 |
https://www.reddit.com/r/LocalLLaMA/comments/16bmx0p/not_using_torrents_for_distributing_models_is_a/
|
Agusx1211
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16bmx0p
| false | null |
t3_16bmx0p
|
/r/LocalLLaMA/comments/16bmx0p/not_using_torrents_for_distributing_models_is_a/
| false | false |
self
| 1 | null |
Pygmalion 2 (7B & 13B) and Mythalion 13B released!
| 1 |
Pygmalion 2 is the successor of the original Pygmalion models used for RP, while Mythalion is a merge between Pygmalion 2 and MythoMax. They are supposed to be good at uncensored chat/role play (haven't tried yet).
Blog post (including suggested generation parameters for SillyTavern)
* [https://pygmalionai.github.io/blog/posts/introducing_pygmalion_2/](https://pygmalionai.github.io/blog/posts/introducing_pygmalion_2/)
Quantized 13B by TheBloke:
* [https://huggingface.co/TheBloke/Pygmalion-2-13B-GPTQ](https://huggingface.co/TheBloke/Pygmalion-2-13B-GPTQ)
* [https://huggingface.co/TheBloke/Mythalion-13B-GPTQ](https://huggingface.co/TheBloke/Mythalion-13B-GPTQ)
​
| 2023-09-06T15:49:46 |
https://www.reddit.com/r/LocalLLaMA/comments/16bnr9j/pygmalion_2_7b_13b_and_mythalion_13b_released/
|
whtne047htnb
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16bnr9j
| false | null |
t3_16bnr9j
|
/r/LocalLLaMA/comments/16bnr9j/pygmalion_2_7b_13b_and_mythalion_13b_released/
| false | false |
self
| 1 | null |
Do you lose gpu performance with motored CPUs
| 1 |
If you are running two RTX 3090s on a dual-CPU system, do you lose performance since each GPU is on different PCIe lanes? I'm guessing yes.
| 2023-09-06T16:35:08 |
https://www.reddit.com/r/LocalLLaMA/comments/16boxm6/do_you_lose_gpu_performance_with_motored_cpus/
|
Ergosyn
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16boxm6
| false | null |
t3_16boxm6
|
/r/LocalLLaMA/comments/16boxm6/do_you_lose_gpu_performance_with_motored_cpus/
| false | false |
self
| 1 | null |
Improve model load time - seems to be running single CPU
| 1 |
[removed]
| 2023-09-06T16:49:23 |
https://www.reddit.com/r/LocalLLaMA/comments/16bpahn/improve_model_load_time_seems_to_be_running/
|
JamesHutchisonReal
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16bpahn
| false | null |
t3_16bpahn
|
/r/LocalLLaMA/comments/16bpahn/improve_model_load_time_seems_to_be_running/
| false | false |
self
| 1 | null |
TheBloke blessed us with numerous LLM versions on HuggingFace. I had troubles sifting through them and finding ones of personal interest, so I created an online tool to specifically filter and explore a user's LLMs.
| 1 |
[removed]
| 2023-09-06T16:55:12 |
https://www.reddit.com/r/LocalLLaMA/comments/16bpfqp/thebloke_blessed_us_with_numerous_llm_versions_on/
|
Jack_5515
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16bpfqp
| false | null |
t3_16bpfqp
|
/r/LocalLLaMA/comments/16bpfqp/thebloke_blessed_us_with_numerous_llm_versions_on/
| false | false | 1 | null |
|
Extending vocabulary size + fine-tuning.
| 1 |
My application requires generating quite large XML-like files (~50k tokens on average). With the [latest advances](https://www.reddit.com/r/LocalLLaMA/comments/166jik4/128k_context_llama_2_finetunes_using_yarn/) in positional encodings, I don't think context length is a problem any longer, unlike VRAM. The files contain a lot of XML-like tags and I think my application would benefit from extending the vocabulary by introducing several new tokens (as discussed in [this thread](https://www.reddit.com/r/learnmachinelearning/comments/13ak7jk/llm_custom_dictionary/) and [this issue](https://github.com/huggingface/tokenizers/issues/247)). I'm convinced that fine-tuning would be necessary after the extension. The problem is I don't have enough VRAM to fine-tune the full model (7B). Is there a way to somehow use LoRA to train the additional embeddings, or is this idea completely wrong?
| 2023-09-06T16:57:22 |
https://www.reddit.com/r/LocalLLaMA/comments/16bphnm/extending_vocabulary_size_finetuning/
|
AddictionLord
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16bphnm
| false | null |
t3_16bphnm
|
/r/LocalLLaMA/comments/16bphnm/extending_vocabulary_size_finetuning/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '2qUj1ted_KyxXLm9DALvPHzMOsmQdkzlE_98zI830pE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Df7vGXspi9tnVzyL1aKo7WbvqHR2BCNS6E4imeSG1yE.jpg?width=108&crop=smart&auto=webp&s=30a6228199d8c9a2e0026414d26f683a436ceb6d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Df7vGXspi9tnVzyL1aKo7WbvqHR2BCNS6E4imeSG1yE.jpg?width=216&crop=smart&auto=webp&s=dcb32867dc0d6e59b4a9f21781da23fc0e7634a1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Df7vGXspi9tnVzyL1aKo7WbvqHR2BCNS6E4imeSG1yE.jpg?width=320&crop=smart&auto=webp&s=714de4e7766a6aa95cf87b76de17c46a2def539a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Df7vGXspi9tnVzyL1aKo7WbvqHR2BCNS6E4imeSG1yE.jpg?width=640&crop=smart&auto=webp&s=129b29e0d03ea47779e9e152f57ca6a86a67c7f7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Df7vGXspi9tnVzyL1aKo7WbvqHR2BCNS6E4imeSG1yE.jpg?width=960&crop=smart&auto=webp&s=f0e5967aa665f1d94a2c8dde09a9056529eddfe1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Df7vGXspi9tnVzyL1aKo7WbvqHR2BCNS6E4imeSG1yE.jpg?width=1080&crop=smart&auto=webp&s=0226d9896c07f85a66b0ea31c169f4d0d86fd51b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Df7vGXspi9tnVzyL1aKo7WbvqHR2BCNS6E4imeSG1yE.jpg?auto=webp&s=ef05f7f282f612760c9eb608e0aef402cc89935a', 'width': 1200}, 'variants': {}}]}
|
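For the vocabulary-extension question above, the usual path with HF Transformers is `tokenizer.add_tokens(...)` followed by `model.resize_token_embeddings(...)`. A toy stdlib-only sketch of the bookkeeping involved — the mean-initialization of new rows is a common heuristic, shown on plain Python lists rather than real model weights:

```python
import random

# Toy sketch of extending an embedding table for new special tokens.
# Real code would call tokenizer.add_tokens(...) and
# model.resize_token_embeddings(...); initializing new rows at the mean
# of existing embeddings is a common heuristic, illustrated here.

random.seed(0)
vocab = {"<s>": 0, "</s>": 1, "hello": 2}
dim = 4
emb = [[random.gauss(0, 0.02) for _ in range(dim)] for _ in vocab]

new_tokens = ["<tag>", "</tag>"]
mean_vec = [sum(col) / len(emb) for col in zip(*emb)]  # column-wise mean of existing rows
for tok in new_tokens:
    vocab[tok] = len(vocab)       # assign the next free token id
    emb.append(list(mean_vec))    # initialize the new row at the mean embedding

print(f"vocab grew to {len(vocab)} tokens; embedding table now {len(emb)}x{dim}")
```

Whether the new embedding rows can then be trained via LoRA depends on the training framework exposing the embedding layer as a tunable module; that part is framework-specific and not shown here.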
Fine-tuning Learning rate
| 1 |
After reading the fast.ai [post](https://www.fast.ai/posts/2023-09-04-learning-jumps/), I'm wondering whether learning rate decay and schedulers are even necessary for fine-tuning LLMs, especially when training for only 1 or 2 epochs. Would training the LLM on all the samples with a small constant learning rate be better? What does everyone else think about this?
| 2023-09-06T17:30:34 |
https://www.reddit.com/r/LocalLLaMA/comments/16bqcox/finetuning_learning_rate/
|
LiquidGunay
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16bqcox
| false | null |
t3_16bqcox
|
/r/LocalLLaMA/comments/16bqcox/finetuning_learning_rate/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'MhwgXe_bJK90p9Vop2eb4fIws4VyXTiDdefn4XduJdc', 'resolutions': [{'height': 69, 'url': 'https://external-preview.redd.it/irXR2ZlGamnQWGjyw-Z4j4NArIwOeZtKlA382m0molc.jpg?width=108&crop=smart&auto=webp&s=fa1ef3d0c583487d4d0a0ccbfa2db28d51f2716f', 'width': 108}, {'height': 138, 'url': 'https://external-preview.redd.it/irXR2ZlGamnQWGjyw-Z4j4NArIwOeZtKlA382m0molc.jpg?width=216&crop=smart&auto=webp&s=fa649d747716c0fbbe62c5af36ccd8214d8a2da5', 'width': 216}, {'height': 205, 'url': 'https://external-preview.redd.it/irXR2ZlGamnQWGjyw-Z4j4NArIwOeZtKlA382m0molc.jpg?width=320&crop=smart&auto=webp&s=4117d22220e8693ee10a3c0c8516f49950f3b4d5', 'width': 320}, {'height': 410, 'url': 'https://external-preview.redd.it/irXR2ZlGamnQWGjyw-Z4j4NArIwOeZtKlA382m0molc.jpg?width=640&crop=smart&auto=webp&s=93db5897d854cd726c3ed568d120d4f98d149862', 'width': 640}, {'height': 615, 'url': 'https://external-preview.redd.it/irXR2ZlGamnQWGjyw-Z4j4NArIwOeZtKlA382m0molc.jpg?width=960&crop=smart&auto=webp&s=268e5f221d37dc9175fe5d70a2a7fe201103d6fa', 'width': 960}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/irXR2ZlGamnQWGjyw-Z4j4NArIwOeZtKlA382m0molc.jpg?auto=webp&s=508466db495aa4e4b4a64c39db803aa72f874425', 'width': 1010}, 'variants': {}}]}
|
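For the scheduler question above, it can help to see how a cosine decay compares with a constant rate across a short run. A small stdlib-only sketch — the base rate and step count are arbitrary illustration values, not recommendations:

```python
import math

def cosine_lr(step: int, total_steps: int, base_lr: float, min_lr: float = 0.0) -> float:
    """Standard cosine decay from base_lr at step 0 down to min_lr at total_steps."""
    progress = step / total_steps
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * progress))

base = 2e-5   # arbitrary example rate
total = 1000  # arbitrary example step count
for step in (0, 250, 500, 750, 1000):
    print(f"step {step:4d}: cosine {cosine_lr(step, total, base):.2e} vs constant {base:.2e}")
```

Printing the two side by side makes the trade-off concrete: with a constant rate the final steps still take full-size updates, while cosine decay anneals them toward zero.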
Idea about model optimization
| 1 |
Hi, I had an idea for reducing model sizes: separate parts of an existing model out as LoRA modules, quantize parts of the network individually, rebuild "better" smaller components, and reduce the bit width where the smaller model may not need full precision, while also adding new training data during interaction.
So... something between pruning, extracting parts of the model for faster usage and smaller size, quantization, and fine-tuning, all applied to a model that already exists and is capable of "doing things". What do you think about this?
| 2023-09-06T17:31:09 |
https://www.reddit.com/r/LocalLLaMA/comments/16bqda4/idea_about_model_optimization/
|
Deep-View-2411
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16bqda4
| false | null |
t3_16bqda4
|
/r/LocalLLaMA/comments/16bqda4/idea_about_model_optimization/
| false | false |
self
| 1 | null |
Specs for codellama-34B
| 1 |
Probably a noob question but I need help lol. So I’m supposed to request a dedicated server from my company for codellama-34B. The instruct model has been working really well for what we need on our current server with 4x RTX A6000 but we might not be able to spend as much on the new one. Anyone know what I should request and what the tokens/sec would be roughly?
| 2023-09-06T17:50:39 |
https://www.reddit.com/r/LocalLLaMA/comments/16bqvk3/specs_for_codellama34b/
|
burnerIhrdlyknowher
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16bqvk3
| false | null |
t3_16bqvk3
|
/r/LocalLLaMA/comments/16bqvk3/specs_for_codellama34b/
| false | false |
self
| 1 | null |