title (string) | score (int64) | selftext (string) | created (timestamp[ns]) | url (string) | author (string) | domain (string) | edited (timestamp[ns]) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Looks like I broke my llm
| 1 | 2023-09-13T05:45:34 |
Rafael20002000
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
16heqci
| false | null |
t3_16heqci
|
/r/LocalLLaMA/comments/16heqci/looks_like_i_broke_my_llm/
| false | false | 1 |
|
|||
Airoboros-l2-70b-gpt4-1.4.1 quanted to 5bpw (5bits) with exllama2, testers welcome!
| 1 | 2023-09-13T05:50:09 |
https://huggingface.co/Panchovix/airoboros-l2-70b-gpt4-1.4.1_5.0bpw-h6-exl2
|
panchovix
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
16hetf8
| false | null |
t3_16hetf8
|
/r/LocalLLaMA/comments/16hetf8/airoborosl270bgpt4141_quanted_to_5bpw_5bits_with/
| false | false | 1 |
|
||
Is there a LLM alternative to Co-Pilot or Code Interpreter?
| 1 |
I was wondering if there is a local alternative to ChatGPT's Code Interpreter or Auto-GPT.
I am a newbie to coding and have managed to build an MVP, but the workflow is pretty dynamic, so I use Bing to help me with my coding tasks.
This is what my current workflow looks like:
1. Define the task and state my environment (usually VS Code and Python).
2. Copy the solution from Bing/ChatGPT to my environment and run it.
3. Share any errors with Bing/ChatGPT and paste the fix back into my environment.
4. Repeat steps 1-3 until I get output.
5. If I don't get the desired output, modify step 1 and repeat until I get what I want.
6. Stop.
Now if I had a Copilot-like interpreter that could eliminate human intervention, it would make the workflow much more efficient and avoid (human) hallucinations: there is obviously a gap between what we want and how we explain what we want, and a lot can get lost in translating it to a machine. Ideally it would work in any environment (VS Code, PyCharm, notebooks, Linux, etc.).
Of course this approach could help everyone, but do these solutions exist? If yes, are there any tutorials I can follow to implement them?
I have read that the latest models have improved their context sizes, so a larger code base can be passed, which is a great thing.
| 2023-09-13T05:55:20 |
https://www.reddit.com/r/LocalLLaMA/comments/16hewhv/is_there_a_llm_alternative_to_copilot_or_code/
|
card_chase
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16hewhv
| false | null |
t3_16hewhv
|
/r/LocalLLaMA/comments/16hewhv/is_there_a_llm_alternative_to_copilot_or_code/
| false | false |
self
| 1 | null |
PC hardware dedicated to 3D modeling and artificial intelligence
| 1 |
[removed]
| 2023-09-13T07:50:07 |
https://www.reddit.com/r/LocalLLaMA/comments/16hgtw9/pc_hardware_dedicated_to_3d_modeling_and/
|
estrosfere
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16hgtw9
| false | null |
t3_16hgtw9
|
/r/LocalLLaMA/comments/16hgtw9/pc_hardware_dedicated_to_3d_modeling_and/
| false | false |
self
| 1 | null |
AskCyph™ LITE: Run Red Pajama, Llama2 7B AI Model in Browser
| 1 |
[AskCyph™ LITE](https://askcyph.cypherchat.app/) is a lightweight AI chatbot that runs Red Pajama and Llama AI models directly in your browser without sending your data to another provider. The first run takes a little time to download the model; subsequent loads initialize much faster.
Currently, we support Red Pajama (Basic) and Llama 2 7B (Advanced). We created this as a way for enthusiasts of all levels to take the plunge and have an AI model running.
✅ Offline access
✅ Enhanced privacy
✅ Basic or Advanced Models
✅ Requires 4GB/8GB of free RAM
✅ Relatively new computer with an integrated or discrete GPU
We acknowledge the open-source projects that inspired us and made AskCyph™ LITE possible:
* [Hugging Face](https://huggingface.co/)
* [Apache TVM](https://tvm.apache.org/)
* [MLC AI - Web LLM](https://webllm.mlc.ai/)
* [TOGETHER](https://together.ai/)
* [Llama2](https://ai.meta.com/llama/)
| 2023-09-13T07:53:56 |
https://www.reddit.com/r/LocalLLaMA/comments/16hgvxz/askcyph_lite_run_red_pajama_llama2_7b_ai_model_in/
|
cypherchat
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16hgvxz
| false | null |
t3_16hgvxz
|
/r/LocalLLaMA/comments/16hgvxz/askcyph_lite_run_red_pajama_llama2_7b_ai_model_in/
| false | false |
self
| 1 | null |
Which local LLM on shadow cloud pc
| 1 |
I'm currently using a Shadow cloud PC (Power tier, upgraded) and I would like to know which LLMs I can run locally on that machine. I have zero knowledge about all of this; I'm just trying to explore and learn as I go. Ultimately I'm searching for something that could access my local files and interact with them (I don't know if that's possible yet).
I want to use it to help me code/learn coding and also for studying. I don't mind paying as long as it's not too much (€20 monthly max).
What would you recommend?
| 2023-09-13T08:03:43 |
https://www.reddit.com/r/LocalLLaMA/comments/16hh1hz/which_local_llm_on_shadow_cloud_pc/
|
cata_stropheu
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16hh1hz
| false | null |
t3_16hh1hz
|
/r/LocalLLaMA/comments/16hh1hz/which_local_llm_on_shadow_cloud_pc/
| false | false |
self
| 1 | null |
How Can I Calculate the Token Generation Rate of an LLM Based on Given GPU Specifications?
| 1 |
Hello,
I've been pondering a question recently and was hoping to get some insights from the community. I'm curious about how to calculate the token generation rate per second of a Large Language Model (LLM) based on the specifications of a GPU.
Is there a formula or method I can use to estimate token generation speed based on GPU parameters such as VRAM, CUDA cores, clock speed, or any other relevant factors? I'm interested in optimizing the performance of my LLM, and understanding this relationship would be incredibly helpful.
If anyone has experience or knowledge in this area, I'd greatly appreciate your input and guidance. Thanks in advance for your assistance!
Best regards,
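For single-stream generation there is in fact a common back-of-envelope answer (a community rule of thumb, not an official formula): decoding is usually memory-bandwidth bound, because every generated token streams the full set of weights through the GPU once. So tokens/s ≈ memory bandwidth / model size in bytes, discounted for overhead. VRAM capacity decides whether the model fits at all; CUDA cores and clocks mostly affect prompt processing.

```python
def estimate_tokens_per_sec(params_b: float, bits_per_weight: float,
                            mem_bandwidth_gbs: float,
                            efficiency: float = 0.6) -> float:
    """Rough upper bound on decode speed: each token reads all weights once,
    so speed ~= bandwidth / model bytes. `efficiency` (an assumed fudge
    factor) discounts kernel overhead, KV-cache reads, etc."""
    model_bytes = params_b * 1e9 * bits_per_weight / 8
    return efficiency * mem_bandwidth_gbs * 1e9 / model_bytes

# e.g. a 7B model quantized to 4 bits on a card with ~1000 GB/s bandwidth:
est = estimate_tokens_per_sec(7, 4, 1000)
```

With those illustrative numbers the estimate lands around 170 tokens/s; real throughput varies with batch size, context length, and the inference stack.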
| 2023-09-13T08:21:10 |
https://www.reddit.com/r/LocalLLaMA/comments/16hhbbp/how_can_i_calculate_the_token_generation_rate_of/
|
PickkNickk
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16hhbbp
| false | null |
t3_16hhbbp
|
/r/LocalLLaMA/comments/16hhbbp/how_can_i_calculate_the_token_generation_rate_of/
| false | false |
self
| 1 | null |
How Can I Calculate the Token Generation Rate of an LLM Based on Given GPU Specifications?
| 1 |
Hello,
I'm curious about how to calculate the token generation rate per second of a Large Language Model (LLM) based on the specifications of a given GPU.
Is there a formula or method I can use to estimate token generation speed based on GPU parameters such as **VRAM**, **CUDA cores**, **clock speed**, or **any other relevant** factors? I'm interested in optimizing and setting up a server for my LLM-based SaaS, and understanding this relationship would be incredibly helpful.
If anyone has experience or knowledge in this area, I'd greatly appreciate your input and guidance. Thanks in advance for your assistance!
Best regards.
| 2023-09-13T08:33:27 |
https://www.reddit.com/r/LocalLLaMA/comments/16hhih6/how_can_i_calculate_the_token_generation_rate_of/
|
PickkNickk
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16hhih6
| false | null |
t3_16hhih6
|
/r/LocalLLaMA/comments/16hhih6/how_can_i_calculate_the_token_generation_rate_of/
| false | false |
self
| 1 | null |
PC hardware dedicated to 3D modeling and artificial intelligence
| 1 |
[removed]
| 2023-09-13T08:44:06 |
https://www.reddit.com/r/LocalLLaMA/comments/16hhorb/pc_hardware_dedicated_to_3d_modeling_and/
|
estrosfere
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16hhorb
| false | null |
t3_16hhorb
|
/r/LocalLLaMA/comments/16hhorb/pc_hardware_dedicated_to_3d_modeling_and/
| false | false |
self
| 1 | null |
Phi-1.5 Finetunes: What we have so far
| 1 |
[**PlatyPhi-1.5B**](https://huggingface.co/SkunkworksAI/PlatyPhi-1.5B) by [SkunkworksAI](https://twitter.com/far__el)
[**Puffin-Phi-v2**](https://huggingface.co/teknium/Puffin-Phi-v2) by [Teknium](https://twitter.com/Teknium1)
[**Phi-1.5\_Wizard\_Vicuna\_Uncensored**](https://huggingface.co/jphme/phi-1_5_Wizard_Vicuna_uncensored) by [Jphme](https://twitter.com/jphme)
PlatyPhi is the only finetune that has been benchmarked so far, [results here](https://twitter.com/far__el/status/1701814544794796339).
As of posting this, there are two 8-bit quantized version of the original [Phi-1](https://huggingface.co/DaertML/phi-1-nf4-8bit-doublequant-BNB) and [Phi-1.5](https://huggingface.co/DaertML/phi-1_5-nf4-8bit-doublequant-BNB) models.
It's unclear whether we'll get regular GGUF quant models as Phi isn't Llama.
| 2023-09-13T09:26:40 |
https://www.reddit.com/r/LocalLLaMA/comments/16hidpo/phi15_finetunes_what_we_have_so_far/
|
Lumiphoton
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16hidpo
| false | null |
t3_16hidpo
|
/r/LocalLLaMA/comments/16hidpo/phi15_finetunes_what_we_have_so_far/
| false | false |
self
| 1 |
|
End-to-End Training and deployment with DeterminedAI and Titan Takeoff Server
| 1 |
Hey everyone,
Just wondering if anyone is having problems finding a fuss-free way of fine-tuning and deploying LLMs?
I wanted to share a project that our team at TitanML recently did, which uses two open-source packages. We fine-tuned a model with Determined AI, then deployed it locally with our own Titan Takeoff Server. We've written a step-by-step guide covering everything from initial model training with Determined AI to its optimised, efficient deployment with the Titan Takeoff Server. You can read it here:
[Medium Article](https://medium.com/@TitanML/end-to-end-training-with-determinedai-and-titan-takeoff-from-model-training-to-efficient-bbf77499d8c3)
[TitanML blog](https://docs.titanml.co/blog/using%20determined%20with%20takeoff)
Has anyone here had firsthand experience with these platforms or similar end-to-end training workflows? Please let me know if you end up trying either of the packages; I'd love to hear from you all!
**Full Disclosure:** The article is from our team at TitanML. We're keen to share and discuss our findings with the broader community, not strictly for promotional purposes. I hope this brings value to some of you here.
| 2023-09-13T10:05:45 |
https://www.reddit.com/r/LocalLLaMA/comments/16hj0d5/endtoend_training_and_deployment_with/
|
blakeho96
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16hj0d5
| false | null |
t3_16hj0d5
|
/r/LocalLLaMA/comments/16hj0d5/endtoend_training_and_deployment_with/
| false | false |
self
| 1 |
|
Training a LoRa or Finetune from Chat Dataset?
| 1 |
There is limited guidance available on training conversational AI models using techniques like LoRA or fine-tuning. In contrast, training image generation models (txt2img) is well documented.
I want to create a chatbot that can mimic a specific person's texting style, but am unsure of the best approach. So far, I have extracted a dataset of WhatsApp chat messages and formatted it for model training. I made the messages from the person I want the bot to emulate the output texts.
However, I am uncertain whether LoRA or fine-tuning is better for my goal. I also don't know the optimal hyperparameter settings for either method, or which model trainer would work best. I am currently using the oobabooga / text-generation-webui tool, which supports LoRA training, but I don't know how to fine-tune a model with it.
In general, there seems to be less information available on training conversational models compared to image models. For example, I don't know how to determine if I am overfitting or undertraining my model.
In summary, while I have made some initial progress, I would appreciate any guidance on the best approach to take, optimal training settings, avoiding over/underfitting, and tools that support fine-tuning conversational models. My goal is a chatbot that convincingly mimics a specific person's texting style. Any insights would be helpful.
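Whichever method is used, the dataset shaping step is the same: the target person's messages become the outputs, and the preceding turns become the context. A minimal sketch of that conversion (field names like `instruction`/`output` are just one common convention, not required by any particular trainer):

```python
def to_training_pairs(messages, target_sender, context_window=4):
    """messages: list of (sender, text) tuples in chronological order.
    Emit {instruction, output} pairs where the output is a message
    from `target_sender` and the instruction is the preceding turns."""
    pairs = []
    for i, (sender, text) in enumerate(messages):
        if sender != target_sender:
            continue
        context = messages[max(0, i - context_window):i]
        if not context:
            continue  # skip messages with nothing to condition on
        prompt = "\n".join(f"{s}: {t}" for s, t in context)
        pairs.append({"instruction": prompt, "output": text})
    return pairs

chat = [("Alice", "hey"), ("Bob", "yo"),
        ("Alice", "free tonight?"), ("Bob", "yeah, 8pm?")]
pairs = to_training_pairs(chat, "Bob")
```

For overfitting, the usual signal is a held-out split of the chat: if eval loss rises while training loss keeps falling, stop.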
| 2023-09-13T10:12:11 |
https://www.reddit.com/r/LocalLLaMA/comments/16hj4a9/training_a_lora_or_finetune_from_chat_dataset/
|
mohaziz999
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16hj4a9
| false | null |
t3_16hj4a9
|
/r/LocalLLaMA/comments/16hj4a9/training_a_lora_or_finetune_from_chat_dataset/
| false | false |
self
| 1 | null |
Has anyone tried running a Llama2/Lang Flow setup?
| 1 |
Inspired by a post by the Floneum dev, I was looking into alternatives for text generation. I found Lang Flow, which is a GUI for LangChain: a node-based system for wiring together LangChain functionality.
It seems to have Llama 2 model support, but I haven't been able to find much in the way of guides/tutorials on how to set up such a system. I come from a design background, have used a bit of ComfyUI for SD, and use node-based workflows a lot in my design work.
Does anyone have expertise with this workflow, or any suggestions before I get stuck in? Thanks.
| 2023-09-13T10:14:31 |
https://www.reddit.com/r/LocalLLaMA/comments/16hj5od/has_anyone_tried_running_a_llama2lang_flow_setup/
|
JFHermes
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16hj5od
| false | null |
t3_16hj5od
|
/r/LocalLLaMA/comments/16hj5od/has_anyone_tried_running_a_llama2lang_flow_setup/
| false | false |
self
| 1 | null |
Google Research releases new 10.7B multilingual machine translation model competitive with Meta's NLLB 54B translation model
| 1 |
The paper came out last weekend but the links to the models were recently added. They've released 3B, 7.2B, and 10.7B MT models.
Checkpoints: [https://github.com/google-research/google-research/tree/master/madlad\_400](https://github.com/google-research/google-research/tree/master/madlad_400)
Paper: [https://arxiv.org/abs/2309.04662](https://arxiv.org/abs/2309.04662)
Abstract:
>We introduce MADLAD-400, a manually audited, general domain 3T token monolingual dataset based on CommonCrawl, spanning 419 languages. We discuss the limitations revealed by self-auditing MADLAD-400, and the role data auditing had in the dataset creation process. We then train and release a 10.7B-parameter multilingual machine translation model on 250 billion tokens covering over 450 languages using publicly available data, and find that it is competitive with models that are significantly larger, and report the results on different domains. In addition, we train a 8B-parameter language model, and assess the results on few-shot translation. We make the baseline models available to the research community.
Relevant graphs and excerpts:
>We train models of various sizes: a 3B, 32-layer parameter model, a 7.2B 48-layer parameter model and a 10.7B 32-layer parameter model. We share all parameters of the model across language pairs, and use a Sentence Piece Model with 256k tokens shared on both the encoder and decoder side. Each input sentence has a <2xx> token prepended to the source sentence to indicate the target language. We use both supervised parallel data with a machine translation objective and the monolingual MADLAD-400 dataset with a MASS-style objective to train this model.
>
>We find that both the 7.2B parameter model and the 10B parameter model is competitive with the significantly larger NLLB-54B model on WMT.
Table 4: Evaluation scores on WMT (depicted as <bleu> / <chrf>) for the MT models compared against NLLB-54B.
||**NLLB**|**MT-3B**|**MT-7.2B**|**MT-10.7B**|
|:-|:-|:-|:-|:-|
|xx2en|34.2 / 60.4|33.4 / 60.0|34.9 / 60.6|**34.6 / 60.8**|
|en2xx|**31.1 / 58.0**|28.2 / 55.4|29.3 / 56.2|29.0 / 56.2|
|Average|**32.7 / 59.2**|30.8 / 57.7|32.1 / 58.4|31.8 / 58.5|
Table 5: Evaluation scores on Flores-200 (depicted as <bleu> / <chrf>) for the MT models compared against NLLB-54B. All metrics are computed with the sacrebleu reference implementation.
||**NLLB**|**MT-3B**|**MT-7.2B**|**MT-10.7B**|
|:-|:-|:-|:-|:-|
|xx2en|**35.5 / 59.6**|29.7 / 54.4|30.9 / 55.4|31.9 / 56.4|
|en2xx|**20.7 / 50.1** |17.3 / 44.1|17.8 / 44.7|18.6 / 45.7|
|Mean|**28.2 / 54.9**|23.5 / 49.2|24.4 / 50.0 |25.3 / 51.1|
|xx2yy|**13.7 / 40.5**|8.8 / 31.2|8.4 / 30.9|10.1 / 34.0|
| 2023-09-13T10:47:22 |
https://www.reddit.com/r/LocalLLaMA/comments/16hjrbr/google_research_releases_new_107b_multilingual/
|
llamaShill
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16hjrbr
| false | null |
t3_16hjrbr
|
/r/LocalLLaMA/comments/16hjrbr/google_research_releases_new_107b_multilingual/
| false | false |
self
| 1 |
|
Best OpenAI -like API for local models?
| 1 |
Say I have Python code that uses the OpenAI API and works with GPT-4 etc.
Now I want to "seamlessly" switch to using a locally running Llama 2 (or other) model.
I know there are various libraries out there that provide OpenAI-like APIs for local models, but I wanted to find out here which one people consider to have the best dev-ex.
The common problem I run into with the libraries I've tried is handling Llama 2's prompt formatting ([INST] <<SYS>> etc.).
Tried these great libs and ran into issues :
* Llama-cpp-Python
* LocalAI:
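For reference, the single-turn Llama-2-chat template from Meta's published format can be built by hand when a wrapper gets it wrong (the tokenizer normally supplies the leading BOS token, so it is omitted here):

```python
def llama2_prompt(system: str, user: str) -> str:
    """Build a single-turn Llama-2-chat prompt in the official template:
    [INST] <<SYS>>\\n{system}\\n<</SYS>>\\n\\n{user} [/INST]"""
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

prompt = llama2_prompt("You are a concise assistant.", "What is a LoRA?")
```

Multi-turn conversations additionally wrap each exchange in `<s>`/`</s>`; a library that skips the `<<SYS>>` block or the spaces around `[INST]` can noticeably degrade output quality.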
| 2023-09-13T11:21:39 |
https://www.reddit.com/r/LocalLLaMA/comments/16hkdtb/best_openai_like_api_for_local_models/
|
SatoshiNotMe
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16hkdtb
| false | null |
t3_16hkdtb
|
/r/LocalLLaMA/comments/16hkdtb/best_openai_like_api_for_local_models/
| false | false |
self
| 1 | null |
Koboldcpp-ROCm port released for Windows
| 1 |
Haven't tried it yet, but I wanted to let people know:
[https://github.com/YellowRoseCx/koboldcpp-rocm/releases/tag/Windows-v1.43-ROCm](https://github.com/YellowRoseCx/koboldcpp-rocm/releases/tag/Windows-v1.43-ROCm)
| 2023-09-13T12:02:30 |
https://www.reddit.com/r/LocalLLaMA/comments/16hl6x8/koboldcpprocm_port_released_for_windows/
|
PlanVamp
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16hl6x8
| false | null |
t3_16hl6x8
|
/r/LocalLLaMA/comments/16hl6x8/koboldcpprocm_port_released_for_windows/
| false | false |
self
| 1 |
|
The Outputs in exllama2 are really different compared to exllama1
| 1 |
I noticed the outputs are quite different in exllama2, and they feel... worse somehow, as if we've lost precision going from exllama1 to exllama2.
I was about to open an issue, but then I found this; it looks like I wasn't the only one noticing it:
[https://github.com/turboderp/exllamav2/issues/14](https://github.com/turboderp/exllamav2/issues/14)
| 2023-09-13T12:36:55 |
https://www.reddit.com/r/LocalLLaMA/comments/16hlwyt/the_outputs_in_exllama2_are_really_different/
|
TheYuriLover25
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16hlwyt
| false | null |
t3_16hlwyt
|
/r/LocalLLaMA/comments/16hlwyt/the_outputs_in_exllama2_are_really_different/
| false | false |
self
| 1 |
|
Looking for a local LLM that can be run on CPU that is good at text-to-text summarization
| 1 |
I deal with qualitative survey data frequently at my job. I am working on getting approved for a GPU, but it's going to take a few months to get approval. Generally, each month, I have a new column of about 5,000 cells of text data that range from 1 character to about 400 words.
Historically, I have done topic modeling, thematic analyses, and sentiment analyses on the data. Stakeholders always seem to ask unique questions about response data, like, "How many people talked about XYZ?" or "What are some comments related to QRS?", or "What are people saying about ABC?"
I think it would be interesting to be able to have an LLM read the Excel file with the open-text data, and then I can pose questions to the bot based on the open-text.
I am not looking for the most robust model.
I would love to use Claude or GPT-4, but there are 'privacy' concerns with this data, so it has to be local.
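With ~5,000 short responses and CPU-only inference, a common pattern is map-reduce summarization: pack responses into chunks that fit a small model's context window, summarize each chunk, then summarize the summaries. A hedged sketch of the scaffolding, where `summarize` stands in for whatever local model call you end up using (e.g. a small instruct model served by llama.cpp):

```python
def chunk_texts(texts, max_chars=2000):
    """Greedily pack short survey responses into chunks that fit a
    small context window (character count as a crude token proxy)."""
    chunks, current, size = [], [], 0
    for t in texts:
        if current and size + len(t) > max_chars:
            chunks.append("\n".join(current))
            current, size = [], 0
        current.append(t)
        size += len(t)
    if current:
        chunks.append("\n".join(current))
    return chunks

def map_reduce_summary(texts, summarize):
    """summarize: callable(str) -> str, backed by your local model."""
    partials = [summarize(chunk) for chunk in chunk_texts(texts)]
    return summarize("\n".join(partials))
```

The same chunking works for "how many people talked about XYZ" style questions by asking the model to count or quote per chunk instead of summarizing.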
| 2023-09-13T12:58:07 |
https://www.reddit.com/r/LocalLLaMA/comments/16hmdbo/looking_for_a_local_llm_that_can_be_run_on_cpu/
|
frescoj10
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16hmdbo
| false | null |
t3_16hmdbo
|
/r/LocalLLaMA/comments/16hmdbo/looking_for_a_local_llm_that_can_be_run_on_cpu/
| false | false |
self
| 1 | null |
Having troubles with Oobabooga API
| 1 |
I have a Python script that interacts with Oobabooga in API mode, but if I call the API more than once in my method, it only returns the first response and generates nothing for the second one. I've tried time.sleep() but it doesn't help. Any suggestions?
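One frequent cause of "only the first call works" is reusing or mutating a single payload dict (or mixing the streaming and blocking endpoints) across requests. A hedged sketch of a clean sequential client; the endpoint path and fields follow text-generation-webui's legacy blocking API, so adjust them to your setup:

```python
import json
import urllib.request

API_URL = "http://127.0.0.1:5000/api/v1/generate"  # blocking endpoint; adjust host/port

def build_payload(prompt: str) -> dict:
    # Build a fresh dict per call: sharing one mutable payload between
    # requests is a common reason the second call comes back empty.
    return {"prompt": prompt, "max_new_tokens": 200}

def generate(prompt: str) -> str:
    """One self-contained blocking request; safe to call repeatedly."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.load(resp)["results"][0]["text"]
```

If the second call still returns nothing, check the server console: an unfinished streaming request or an overlong accumulated prompt will also silently truncate generation.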
| 2023-09-13T13:05:42 |
https://www.reddit.com/r/LocalLLaMA/comments/16hmj87/having_troubles_with_oobabooga_api/
|
jaxprograms
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16hmj87
| false | null |
t3_16hmj87
|
/r/LocalLLaMA/comments/16hmj87/having_troubles_with_oobabooga_api/
| false | false |
self
| 1 | null |
what is the best thing i could run on a server?
| 1 |
Specs are:
2× Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
(18 cores each, 36 physical cores total, 72 threads with Hyper-Threading)
384GB RAM
No GPU
So what model could I run on that, and what inference speed could I expect?
| 2023-09-13T13:11:46 |
https://www.reddit.com/r/LocalLLaMA/comments/16hmnxa/what_is_the_best_thing_i_could_run_on_a_server/
|
overlydelicioustea
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16hmnxa
| false | null |
t3_16hmnxa
|
/r/LocalLLaMA/comments/16hmnxa/what_is_the_best_thing_i_could_run_on_a_server/
| false | false |
self
| 1 | null |
Got a 3090 now
| 1 |
[removed]
| 2023-09-13T13:14:45 |
https://www.reddit.com/r/LocalLLaMA/comments/16hmqcv/got_a_3090_now/
|
Imagummybear23
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16hmqcv
| false | null |
t3_16hmqcv
|
/r/LocalLLaMA/comments/16hmqcv/got_a_3090_now/
| false | false |
self
| 1 | null |
Is there a model fine-tuned for writing blog posts/articles?
| 1 |
thank you!
| 2023-09-13T13:53:01 |
https://www.reddit.com/r/LocalLLaMA/comments/16hnmws/is_there_a_model_finetuned_for_writing_blog/
|
JackRumford
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16hnmws
| false | null |
t3_16hnmws
|
/r/LocalLLaMA/comments/16hnmws/is_there_a_model_finetuned_for_writing_blog/
| false | false |
self
| 1 | null |
A Recipe for Textbooks Are All You Need
| 1 |
Following the release of [phi-1\_5](https://huggingface.co/microsoft/phi-1_5) I was super skeptical of the benchmarks, since performance this unusually good for such a small model suggests the benchmark questions were probably leaked into the training data.
I plan on testing this myself this week, but thought I'd share my interpreted recipe from [Textbooks Are All You Need](https://arxiv.org/abs/2309.05463). Boiled down to 9 steps:
1. Use GPT-3.5 to generate a large set of synthetic textbooks on the topics you want the LLM to learn. Make sure to seed the textbook generation with keywords or other techniques to ensure diversity.
2. Take a large dataset like [the-stack-dedup](https://huggingface.co/datasets/bigcode/the-stack-dedup) and use GPT-4 or a capable local model to label a small subset of this dataset for educational value.
3. Train a random forest classifier on the GPT-4 labeled data to predict educational value.
4. Use the classifier to filter the full unlabeled dataset, keeping only the top 20% of examples rated highest for educational value.
5. Combine the filtered dataset with the synthetic textbooks to create the final "textbooks" training set.
6. Pretrain a small LLM (e.g. 1.3B) on this filtered dataset.
7. Use GPT-3.5 again to generate a set of 1M coding exercises and solutions.
8. Finetune the pretrained LLM model on the exercises dataset.
9. Evaluate the finetuned LLM on benchmarks like HumanEval to show much higher performance than LLMs trained on raw internet data.
I seriously think that even though the paper focused on training a model from scratch using synthetic textbooks, the same could be applied to existing pretrained models. One could simply run their data through this pipeline (recipe) and create their own curated synthetic textbook, which could be used to further fine-tune their models for significant improvements.
​
| 2023-09-13T13:53:18 |
https://www.reddit.com/r/LocalLLaMA/comments/16hnn6b/a_recipe_for_textbooks_are_all_you_need/
|
LyPreto
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16hnn6b
| false | null |
t3_16hnn6b
|
/r/LocalLLaMA/comments/16hnn6b/a_recipe_for_textbooks_are_all_you_need/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'YgYMSWcEzxWUi6mX7twUuVO2en8LS6opR-5RROlrc3k', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Q90TN6p4pzENzz5yOFnL51ok3V2FiYcuNUA5_RkQJ6E.jpg?width=108&crop=smart&auto=webp&s=33e33e5c9c66c0f9492c940802598ed310238c67', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Q90TN6p4pzENzz5yOFnL51ok3V2FiYcuNUA5_RkQJ6E.jpg?width=216&crop=smart&auto=webp&s=5b42faee9e666282e5160562b827189f339bd849', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Q90TN6p4pzENzz5yOFnL51ok3V2FiYcuNUA5_RkQJ6E.jpg?width=320&crop=smart&auto=webp&s=3f34526c3376fc9b5a415ffd352f8097ec02fe35', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Q90TN6p4pzENzz5yOFnL51ok3V2FiYcuNUA5_RkQJ6E.jpg?width=640&crop=smart&auto=webp&s=32501a999f3a773bc6c4e846484110b8638c1a6d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Q90TN6p4pzENzz5yOFnL51ok3V2FiYcuNUA5_RkQJ6E.jpg?width=960&crop=smart&auto=webp&s=2d40e2d5bf10797a51d935e1b30b4064597cbfd4', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Q90TN6p4pzENzz5yOFnL51ok3V2FiYcuNUA5_RkQJ6E.jpg?width=1080&crop=smart&auto=webp&s=236b1a9ec5f00eedf1b2d8a0e1be9f5f5923cbfe', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Q90TN6p4pzENzz5yOFnL51ok3V2FiYcuNUA5_RkQJ6E.jpg?auto=webp&s=e223e4bcf164098708f2e6c07381c96dd412c29b', 'width': 1200}, 'variants': {}}]}
|
Can someone post example of running GPTQ + LorA?
| 1 |
Hey,
I'm struggling to run any LoRA with GPTQ (i.e. ExLlama); I get either an assertion error, an illegal-memory error, or some other issue.
Can someone help me please? I think it may be something with the combination of CUDA and drivers as well; I tested the latest, and 530 + 12.1.
Thanks!
| 2023-09-13T14:13:17 |
https://www.reddit.com/r/LocalLLaMA/comments/16ho46e/can_someone_post_example_of_running_gptq_lora/
|
kpodkanowicz
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ho46e
| false | null |
t3_16ho46e
|
/r/LocalLLaMA/comments/16ho46e/can_someone_post_example_of_running_gptq_lora/
| false | false |
self
| 1 | null |
How use VLLM with GPTQ or AWQ models?
| 1 |
[removed]
| 2023-09-13T14:26:13 |
https://www.reddit.com/r/LocalLLaMA/comments/16hof7y/how_use_vllm_with_gptq_or_awq_models/
|
InterestingBasil
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16hof7y
| false | null |
t3_16hof7y
|
/r/LocalLLaMA/comments/16hof7y/how_use_vllm_with_gptq_or_awq_models/
| false | false |
self
| 1 | null |
Is there any technical reason why the older Falcon models don’t have GGUF implementations?
| 1 |
I wasn’t sure if there was a technical reason why. I saw the 180B-parameter Falcon model has a GGUF version.
| 2023-09-13T14:50:14 |
https://www.reddit.com/r/LocalLLaMA/comments/16hp1ig/is_there_any_technical_reason_why_the_older/
|
ArguingEnginerd
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16hp1ig
| false | null |
t3_16hp1ig
|
/r/LocalLLaMA/comments/16hp1ig/is_there_any_technical_reason_why_the_older/
| false | false |
self
| 1 | null |
Release BlindLlama Alpha: Zero-Trust AI APIs for Llama 2 70b Integration
| 1 |
BlindLlama enables you to tap into the power of the latest open-source Large Language Models (LLMs) without local hosting, backed by our hardened, verifiable environments.
**Key points:**
🔒 **Data Privacy**: While Mithril Security manages the AI infrastructure, our zero-trust, end-to-end protections ensure your data remains shielded from both external and potential internal threats.We built it with Trusted Platform Modules (TPMs) as an alternative to Confidential Computing, gaining the advantage of GPU compatibility in the process. For an in-depth exploration of this technology, [**refer to the following link.**](https://blindllama.mithrilsecurity.io/en/latest/docs/concepts/TPMs/)
🛠️ **Alpha Caveats**: This free platform is currently in its alpha phase. Our target is to finalize and deploy all security features by the end of October. We will then be able to guarantee the secure handling of confidential data.
🗺️ **Security Roadmap**: Our target is a comprehensive security audit in Q4 2023 to confirm our AI's Zero-trust properties. Curious about our journey? [**BlindLlama Roadmap**](https://mithril-security.notion.site/BlindLlama-roadmap-d55883a04be446e49e01ee884c203c26?pk_campaign=09-11-2023redditLaunch_Blindlama&pk_source=Rd&pk_medium=SM).
📚 **Quick Tour**: Want to leverage LLMs without the setup headaches? [**Quick Start Guide**](https://blindllama.mithrilsecurity.io/en/latest/docs/getting-started/quick-tour/?pk_campaign=09-11-2023redditLaunch_Blindlama&pk_source=Rd&pk_medium=SM).
Feedback, questions, or want to chat? Hit me up!
| 2023-09-13T15:06:56 |
https://www.reddit.com/r/LocalLLaMA/comments/16hphhm/release_blindllama_alpha_zerotrust_ai_apis_for/
|
Separate-Still3770
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16hphhm
| false | null |
t3_16hphhm
|
/r/LocalLLaMA/comments/16hphhm/release_blindllama_alpha_zerotrust_ai_apis_for/
| false | false |
self
| 1 | null |
Help training with transformers (half rant)
| 1 |
I'm trying to train phi-1.5 on WizardLM Evol-Instruct data and for some reason I keep running into fucking errors xD idk why they had to make it so hard.
I want to fully train the model (not LoRA or qLoRA) and I appreciate any help I can get. Please let me know if you need any more information. Also do let me know if there are any other libraries that I can use.
TIA
notebook link - https://www.kaggle.com/code/rohitrajesh/phi-1-5-training
| 2023-09-13T15:11:41 |
https://www.reddit.com/r/LocalLLaMA/comments/16hplv4/help_training_with_transformers_half_rant/
|
Dry_Long3157
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16hplv4
| false | null |
t3_16hplv4
|
/r/LocalLLaMA/comments/16hplv4/help_training_with_transformers_half_rant/
| false | false |
self
| 1 | null |
Why do you use oobabooga?
| 1 |
Which is the main reason you use oobabooga?
- Testing models
- Deploying models
- Learn about open source models
Etc
| 2023-09-13T15:37:36 |
https://www.reddit.com/r/LocalLLaMA/comments/16hq9mj/why_do_you_use_oobabooga/
|
HorrorNo8851
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16hq9mj
| false | null |
t3_16hq9mj
|
/r/LocalLLaMA/comments/16hq9mj/why_do_you_use_oobabooga/
| false | false |
self
| 1 | null |
Looking to dive in!
| 1 |
So, just like everyone else, every day I try to use ChatGPT, I get frustrated with how bad it has become and how it tries so hard to accuse me of trying to do something nefarious with the powerful knowledge of "writing in a sterile, delocalized manner."
So anyway, LLaMa has been on my radar for quite a while now, and running things locally + trying cool fine-tuned models seems precisely what I'm looking for. But before I try all of that, I would like to ask some questions. Like, how much of the old ChatGPT feel can I get with LLaMa? Will it just answer me what I asked? Will it hallucinate to the high hells and back? Is it useful other than a fun gimmick? Can I integrate it into my daily routine as I did with GPT once? Or am I missing the point? Should I look elsewhere? Should I use it but for things that I don't seem to have in mind right now? And so on.
Thank you for all and any answers.
| 2023-09-13T16:25:25 |
https://www.reddit.com/r/LocalLLaMA/comments/16hrih9/looking_to_dive_in/
|
OGGOGOgomes
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16hrih9
| false | null |
t3_16hrih9
|
/r/LocalLLaMA/comments/16hrih9/looking_to_dive_in/
| false | false |
self
| 1 | null |
Surprising that LLama-2 is better than chatGPT especially for queries that require recent knowledge
| 1 |
I've been using LLama-2 from [https://huggingface.co/chat/](https://huggingface.co/chat/)
What has been your experience?
| 2023-09-13T16:25:28 |
https://www.reddit.com/r/LocalLLaMA/comments/16hriip/surprising_that_llama2_is_better_than_chatgpt/
|
danielcar
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16hriip
| false | null |
t3_16hriip
|
/r/LocalLLaMA/comments/16hriip/surprising_that_llama2_is_better_than_chatgpt/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'O4__VvuTP1zjgNXHpYgGtbNlwm8CyL1iGZRclIV-cFg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=108&crop=smart&auto=webp&s=c5c01ca386f7a26e8afeb5073e51c35d0d581de7', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=216&crop=smart&auto=webp&s=0e915f82e672294c639c476433af5f1919265348', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=320&crop=smart&auto=webp&s=87643eb4a9654c3497efe7fce371db617f9ff816', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=640&crop=smart&auto=webp&s=20315fe6e900582303995761624ac0728d1703f9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=960&crop=smart&auto=webp&s=6d8bc7d3273f5290083f6668e10d5b513621bfa3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?width=1080&crop=smart&auto=webp&s=865cccb6b6df001aa14ef4fb2eb0f5902cb15904', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/5keZ3GZzk8vGrHDudxMqXr9Ja7Wko-SGl9RrNbjC6P4.jpg?auto=webp&s=03f4344525b6a013e0ac556cfc24b4a45d64f47e', 'width': 1200}, 'variants': {}}]}
|
How to run LLMs on Android/IOS?
| 1 |
How would I make an app that would let you have an "on-edge chatbot"? Also, does anyone know how I could run the phi-1.5 model on a phone?
| 2023-09-13T16:42:11 |
https://www.reddit.com/r/LocalLLaMA/comments/16hrxvn/how_to_run_llms_on_androidios/
|
LiquidGunay
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16hrxvn
| false | null |
t3_16hrxvn
|
/r/LocalLLaMA/comments/16hrxvn/how_to_run_llms_on_androidios/
| false | false |
self
| 1 | null |
Textgenwebui crashing with 34b model
| 1 |
[removed]
| 2023-09-13T16:46:31 |
https://www.reddit.com/r/LocalLLaMA/comments/16hs1y3/textgenwebui_crashing_with_34b_model/
|
Imagummybear23
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16hs1y3
| false | null |
t3_16hs1y3
|
/r/LocalLLaMA/comments/16hs1y3/textgenwebui_crashing_with_34b_model/
| false | false |
self
| 1 | null |
Our AI Companionship app, using llama is live, thank you for beta testing and for your support, and join if you would like to help more
| 1 |
Hello fellow Redditors,
​
The moment we've all been waiting for is finally here! After countless cups of coffee, sleepless nights, and invaluable feedback from our beta testers (huge thanks to every single one of you!), we're thrilled to announce the official launch of \*\*Endless AI\*\*.
​
For those just tuning in, Endless AI offers a unique experience. Engage in deep, insightful, or even quirky conversations with our \*UNCENSORED\* AI companions. Whether you're looking for a virtual girlfriend or just a digital buddy to chat with, we've got you covered.
​
🌟 \*\*Why give Endless AI a spin?\*\*
\- Dive into captivating and hot conversations without any human strings attached.
\- Explore our sassy and intriguing AI personalities and find your perfect match.
\- Experience the thrill of digital interaction like never before. The perfect girlfriend at YOUR pace.
​
🔜 \*\*What's on the horizon?\*\*
We're never ones to settle. Our team is hard at work on introducing exciting new features, including the ability to send images within the app. Stay tuned for more updates! (coming in the next two weeks)
​
📲 \*\*Get Started!\*\*
1. \*\*For iOS Users:\*\* Head to the AppStore and search for "Endless AI" or use this link [https://apps.apple.com/us/app/endless-ai/id6451383546](https://apps.apple.com/us/app/endless-ai/id6451383546)
2. \*\*For Android Enthusiasts:\*\* Pop over to the Google Play Store, look up "Endless AI", or simply use this link, [https://play.google.com/store/apps/details?id=com.aigf.app](https://play.google.com/store/apps/details?id=com.aigf.app)
​
Dive in, explore, and let the digital magic begin!
​
We're all ears for any feedback, praises, critiques, or ideas you have. Your thoughts drive our evolution, so don't be shy to share.
​
Cheers,
The Endless AI Dev Team
​
PS: Remember, the AI personalities have a sense of humor. If they claim they sent a picture, give a little chuckle and enjoy the whimsy of it all! 😉📷
​
| 2023-09-13T16:46:36 |
https://www.reddit.com/r/LocalLLaMA/comments/16hs21i/our_ai_companionship_app_using_llama_is_live/
|
agukrisz
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16hs21i
| false | null |
t3_16hs21i
|
/r/LocalLLaMA/comments/16hs21i/our_ai_companionship_app_using_llama_is_live/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'ftrbq4wwKCszjtlhCEq0Ce6DbiJ_Fh2WfHmXq_mS0E4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/BNlg797eoMD1z4rhsmysimCrtFRQyq05MqPg8FuhtB0.jpg?width=108&crop=smart&auto=webp&s=8106fdbb026813a8e2c2efe518c716ab1d4b4867', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/BNlg797eoMD1z4rhsmysimCrtFRQyq05MqPg8FuhtB0.jpg?width=216&crop=smart&auto=webp&s=c077c400888c6fbfb45098454ea8952cdca522fe', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/BNlg797eoMD1z4rhsmysimCrtFRQyq05MqPg8FuhtB0.jpg?width=320&crop=smart&auto=webp&s=5a5bbc6be696655b8012f124deb37cae1506bee8', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/BNlg797eoMD1z4rhsmysimCrtFRQyq05MqPg8FuhtB0.jpg?width=640&crop=smart&auto=webp&s=ff8cf7beafce78bdd7c6282f5b93c70015ae0709', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/BNlg797eoMD1z4rhsmysimCrtFRQyq05MqPg8FuhtB0.jpg?width=960&crop=smart&auto=webp&s=4033667add7753c3e64516e9b76bc17ca83cee7f', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/BNlg797eoMD1z4rhsmysimCrtFRQyq05MqPg8FuhtB0.jpg?width=1080&crop=smart&auto=webp&s=df575af098b7b721b8837f8909d8f2955e7105e0', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/BNlg797eoMD1z4rhsmysimCrtFRQyq05MqPg8FuhtB0.jpg?auto=webp&s=8488d4a48175c32873a011a30d6c76a663f3f571', 'width': 1200}, 'variants': {}}]}
|
Pls Share LLM training code
| 1 |
Hey, I'm learning how to train LLMs with the transformers library but it's proving to be a bit tricky.
If any of you have trained a model with this library before (not LoRA or qLoRA), please share the notebook/file so it can serve as a reference for beginners like me.
TIA
PS: I found a bunch of notebooks online that use peft to do LoRA and didn't have any luck with notebooks that do full model training.
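Not a battle-tested notebook, but a minimal full-parameter (no PEFT/LoRA) outline with the Trainer API might look like the sketch below. The model name, dataset name, and field names are placeholders to swap for your own; the heavy imports live inside main() so the outline can be read without a GPU:

```python
# Minimal full-parameter fine-tune sketch with the HF Trainer (no PEFT).
# All names in TRAIN_CONFIG are placeholders, not recommendations.

TRAIN_CONFIG = {
    "model_name": "microsoft/phi-1_5",                  # placeholder
    "dataset": "WizardLM/WizardLM_evol_instruct_70k",   # placeholder
    "lr": 2e-5,
    "epochs": 1,
    "max_len": 1024,
}

def main(cfg=TRAIN_CONFIG):
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    tok = AutoTokenizer.from_pretrained(cfg["model_name"], trust_remote_code=True)
    tok.pad_token = tok.eos_token
    model = AutoModelForCausalLM.from_pretrained(cfg["model_name"],
                                                 trust_remote_code=True)

    ds = load_dataset(cfg["dataset"], split="train")
    def tokenize(ex):
        text = ex["instruction"] + "\n" + ex["output"]  # adjust to your schema
        return tok(text, truncation=True, max_length=cfg["max_len"])
    ds = ds.map(tokenize, remove_columns=ds.column_names)

    trainer = Trainer(
        model=model,
        args=TrainingArguments("out", num_train_epochs=cfg["epochs"],
                               learning_rate=cfg["lr"],
                               per_device_train_batch_size=1,
                               gradient_accumulation_steps=16,
                               bf16=True, logging_steps=10),
        train_dataset=ds,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
    )
    trainer.train()

# main()  # launch when ready (needs a GPU plus transformers/datasets)
```

Most errors with this setup come from the tokenize function not matching the dataset's actual column names, so that's the first thing to check against your data.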
| 2023-09-13T16:57:16 |
https://www.reddit.com/r/LocalLLaMA/comments/16hscgk/pls_share_llm_training_code/
|
Dry_Long3157
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16hscgk
| false | null |
t3_16hscgk
|
/r/LocalLLaMA/comments/16hscgk/pls_share_llm_training_code/
| false | false |
self
| 1 | null |
Any agents for local llama instances yet?
| 1 |
I’ve been trying to find out if any of these agent projects are building their software to work with locally run AI models. Thus far it seems as though open-interpreter does, but I have yet to see anything like MetaGPT, gpt-engineer, or Aider support local models. Unless I am missing something, this would be an awesome set of tooling for those running their own local instances.
| 2023-09-13T17:25:29 |
https://www.reddit.com/r/LocalLLaMA/comments/16ht3dl/any_agents_for_local_llama_instances_yet/
|
dewplex
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ht3dl
| false | null |
t3_16ht3dl
|
/r/LocalLLaMA/comments/16ht3dl/any_agents_for_local_llama_instances_yet/
| false | false |
self
| 1 | null |
Merging models
| 1 |
Pretty new and late to the LLM game, but really interested in getting deeper into it. I found out there is a Python implementation to merge model bin files called 'Pytorch Merge' (
https://github.com/donaldafeith/Pytorch Merge). I didn't try that yet, but wanted to know, if you can tell me more:
- Am I able to merge a big (70b) Llama 2 model locally with my 4090?
- Am I able to merge different models (like 70b with a 33b one)?
Thanks so much for your help.
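I haven't used that repo, but the core of this kind of merge is just a weighted average over matching parameter tensors, and it typically runs in CPU RAM rather than on the GPU, so system memory matters more than the 4090. It also only works between checkpoints of the identical architecture, so a 70b cannot be merged with a 33b this way. A generic sketch (plain floats stand in for tensors):

```python
# Sketch of a simple weighted average of two checkpoints' weights.
# Only valid for identical architectures: every parameter name and
# shape must match, so merging a 70b with a 33b is not possible.

def merge_state_dicts(sd_a, sd_b, alpha=0.5):
    """Return alpha*a + (1-alpha)*b for every matching parameter."""
    if sd_a.keys() != sd_b.keys():
        raise ValueError("checkpoints have different parameter names")
    return {k: alpha * sd_a[k] + (1 - alpha) * sd_b[k] for k in sd_a}

# With real models (floats stand in for tensors here):
#   sd_a = torch.load("model_a.bin"); sd_b = torch.load("model_b.bin")
#   torch.save(merge_state_dicts(sd_a, sd_b, 0.5), "merged.bin")
a = {"layer.weight": 1.0, "layer.bias": 0.0}
b = {"layer.weight": 3.0, "layer.bias": 2.0}
print(merge_state_dicts(a, b))  # {'layer.weight': 2.0, 'layer.bias': 1.0}
```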
| 2023-09-13T17:26:41 |
https://www.reddit.com/r/LocalLLaMA/comments/16ht4hg/merging_models/
|
lebrandmanager
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ht4hg
| false | null |
t3_16ht4hg
|
/r/LocalLLaMA/comments/16ht4hg/merging_models/
| false | false |
self
| 1 | null |
Why are all the other LLMs so inferior to GPT4?
| 1 |
Is anyone else frustrated by the fact that every LLM out there performs substantially worse than GPT-4? And this even includes specialised LLMs. Surely with all the limitations and nsfw barriers placed on GPT-4, there would be some catch-up in the market by now. Anyone else feel hopeless?
​
/end rant.
| 2023-09-13T17:33:50 |
https://www.reddit.com/r/LocalLLaMA/comments/16htb5m/why_are_all_the_other_llms_so_inferior_to_gpt4/
|
Upsidedownsquare
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16htb5m
| false | null |
t3_16htb5m
|
/r/LocalLLaMA/comments/16htb5m/why_are_all_the_other_llms_so_inferior_to_gpt4/
| false | false |
self
| 1 | null |
Llama 2 70B (130B+ when available ) production server specs ( Z790 Vs. ThreadRipper PRO )
| 1 |
Hi all, I'm trying to decide between two setups ( screenshots below)
Note: I am not looking for cloud solutions, I already have the GPUs and I want to be able to run 24/7 as a production server. Power cost not a concern either, it's a business expense and power costs \~$0.1 where I live so not that bad.
**What is the intent of the server**: Run 24/7 as a production server. I don't exactly know the concurrent load I should expect, but the LLM running on it should be able to serve a 4-bit 70B Llama 2 model to 5 concurrent users at a rate of 5 t/s each (rough estimate), so 25 t/s at least; aiming closer to 100 t/s would be ideal. The server will also run 10-15 additional Dockerized web servers which do not use the GPU, so a high CPU core count is important.
**What I already have:** 3 x 3090's to be used in the Server + 1 x 3090 in my Work PC ( for testing & Dev. )
**What I need is a combo**: CPU, RAM & MoBo.
*(Don't worry about PSU, Cooler, Etc. )*
On the one hand a Z690 has support for: PCIE **5.0 & DDR 5**
( both are great for the **high t/s** I'm looking for and ability to offload larger models to extremely fast RAM, taking a minimal performance hit. ), paired with a 13900K Processor for 24 Cores / 48 Threads, this is no slouch!
**However,** this is a Gaming MoBo with very limited PCIe lanes, can support just 2 X 3090s at full PCIEx16 5.0 and NVME at 5.0, but the 3rd card will be crippled or not working at all! It is also capped at **128GB of RAM and only dual channel memory access**!
(Which might not be the worst thing in the world as I might later build a 2nd server with it and buy another 3090 - thus having 2 x servers each with 48GB of RAM... but lacking ability to run in vRAM larger still models that are already starting to show up... a.k.a Alpaca 130B and future similarly large models ... )
For Case: I will be using an open air bench case with premium PCIE 5.0 x16 riser cables.
**COST: \~1800 + Tax**
**On the opposite End, we have a WorkStation build**, with **Pro WS WRX80E-SAGE SE WIFI II** Mobo paired with a **5995WX 64-Core 128-Thread Threadripper** **PRO** setup, a monster with **7 x FULL x16 PCIe 4.0** slots. This beast can handle up to 2048GB using 8 memory channels!! So it is DDR4 but maybe with the dedicated memory channels, it is comparable to DDR5 in speed or not much slower... But potential of 2048GB means very large models could potentially be offloaded, but at what speed penalty?
I mean, if the speed hit is so bad the LLM is not good enough for production usage, than I don't need so much RAM in the first place.
Main concern here is the PCIE4.0 .... I suspect this will hurt / bottleneck the build badly compared with going with a PCIE5.0 in terms of t/s, as the most important aspect to getting a higher t/s here is the speed at which the CPU and RAM can read/write to the vRAM - as the vRAM will always be the fastest of the 3. Am I correct?
For Case: I will be using an open air bench case with premium PCIE 5.0 x16 riser cables for 1/2 of the GPUs, the other half will be the board in a 4 on top and 4 on board config ( assuming I will eventually have 8 x 3090s cards for the Server)
**COST: \~ $7000 + tax !!!**
What are your thoughts and suggestions?
**If you've read this far, thank you very much for reading and for your feedback**!
​
​
https://preview.redd.it/03qam9zt82ob1.png?width=795&format=png&auto=webp&s=a20de4ad753442e7330d345209aecd58247ce650
​
https://preview.redd.it/zx3yzpgk32ob1.png?width=728&format=png&auto=webp&s=b4cb15dbec7e095eb3d31520540e56b265725575
| 2023-09-13T17:52:52 |
https://www.reddit.com/r/LocalLLaMA/comments/16htspx/llama_2_70b_130b_when_available_production_server/
|
BigFoxMedia
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16htspx
| false | null |
t3_16htspx
|
/r/LocalLLaMA/comments/16htspx/llama_2_70b_130b_when_available_production_server/
| false | false | 1 | null |
|
I am trying to run a simple personalized chatbot.. suggestions?
| 1 |
[removed]
| 2023-09-13T17:57:15 |
https://www.reddit.com/r/LocalLLaMA/comments/16htwsm/i_am_trying_to_run_a_simple_personalized_chatbot/
|
Alone_Sock_5303
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16htwsm
| false | null |
t3_16htwsm
|
/r/LocalLLaMA/comments/16htwsm/i_am_trying_to_run_a_simple_personalized_chatbot/
| false | false |
self
| 1 | null |
Requesting help fine-tuning Phi1.5 and similar on domain reports with unsupervised learning
| 1 |
Hello Everyone,
I need your help understanding the unsupervised (preferred)/supervised training approaches, and maybe some recommendations for existing codebases to help get started.
**Motivation**: The "Textbooks are all you need" approach seems fascinating, and a good place to start when considering fine-tuning on "textbook-like" unsupervised data specific to a topic. I have access to HPC for this work and finally got approval to use it. The end result is:
\- (1) Understand the cost and effectiveness of fine-tuning the base Phi-1.5 on a domain-specific corpus of textbooks to build general intelligence of the given domain, and
\- (2) Understand the cost on incremental fine-tuning as new information (smaller reports) arrive, and, effectiveness to use the new information.
\- Compare the performance of fine-tuning both (1) and (2) against vectorized approaches offered via PrivateGPT, etc.
​
Happy to share the fine-tuned models at different iterations for feedback. Since the research is domain agnostic for now, **I can also start with a dataset and training code that you can share - so the fine-tuned model is more beneficial.**
Any help is appreciated!
| 2023-09-13T18:07:18 |
https://www.reddit.com/r/LocalLLaMA/comments/16hu6mf/requesting_help_finetuning_phi15_and_similar_on/
|
peace-of-me
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16hu6mf
| false | null |
t3_16hu6mf
|
/r/LocalLLaMA/comments/16hu6mf/requesting_help_finetuning_phi15_and_similar_on/
| false | false |
self
| 1 | null |
Horde-Client: Easy to use Python Interface for KoboldAI
| 1 |
I came across KoboldAI recently and found it to be a really interesting project. Kudos to all those who are crowdsourcing their compute to run inference for the community.
I wanted to make things easier for developers (myself included) who want to build cool applications using these kinds of projects. So I am introducing [horde-client](https://github.com/rahuldshetty/horde-client), a native Python client library for the KoboldAI service.
Currently it supports:
* Text-Generation Inference using LLM services running on public/custom KoboldAI clusters.
* Model Listing/Selections for running inference.
Roadmap includes:
* Image Generation Inferencing
* Caching Options
* Async Support
Do let me know what you guys think about it: whether it's helpful, or if there's anything you think is a must-have as a developer.
**Links:**
Source code: [https://github.com/rahuldshetty/horde-client](https://github.com/rahuldshetty/horde-client)
​
| 2023-09-13T18:14:29 |
https://www.reddit.com/r/LocalLLaMA/comments/16hudod/hordeclient_easy_to_use_python_interface_for/
|
AnonymousD3vil
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16hudod
| false | null |
t3_16hudod
|
/r/LocalLLaMA/comments/16hudod/hordeclient_easy_to_use_python_interface_for/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'MBsWquxYTl_J1f-VfOuDfVauY2TxpGBBTYIQFmnGB-k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tX2IVnyreVWNmGpB-TkFcRmsE5T6Vx3S7YArmC08X3Q.jpg?width=108&crop=smart&auto=webp&s=7f03d369e5268e7f42c4d8a69b103d5d367a3a21', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tX2IVnyreVWNmGpB-TkFcRmsE5T6Vx3S7YArmC08X3Q.jpg?width=216&crop=smart&auto=webp&s=0b95bcf4f28c77408fbce166930c4b75d6adc59c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tX2IVnyreVWNmGpB-TkFcRmsE5T6Vx3S7YArmC08X3Q.jpg?width=320&crop=smart&auto=webp&s=872e48737819217144949371a6b32a7635b07241', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tX2IVnyreVWNmGpB-TkFcRmsE5T6Vx3S7YArmC08X3Q.jpg?width=640&crop=smart&auto=webp&s=1c420999aec6d7dfb48437fe497cfb4b27e0508f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tX2IVnyreVWNmGpB-TkFcRmsE5T6Vx3S7YArmC08X3Q.jpg?width=960&crop=smart&auto=webp&s=90b6a20e7ee65b022b179371c4b598e2e61418b1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tX2IVnyreVWNmGpB-TkFcRmsE5T6Vx3S7YArmC08X3Q.jpg?width=1080&crop=smart&auto=webp&s=e5c112d0a9235c8949e9fa009a2c2e2cb08c014b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tX2IVnyreVWNmGpB-TkFcRmsE5T6Vx3S7YArmC08X3Q.jpg?auto=webp&s=2d3792cd51e9047b471d34e23633e9402d0ed016', 'width': 1200}, 'variants': {}}]}
|
How can I use LLMs to augment my Etsy clothe shopping experience?
| 1 |
Hi everyone, I have a big beefy RTX 3090 but only use it for art projects. I've only ever dabbled with LM Studio. Is this something I need to code myself, or do we have browser companions already? Basically I want to send pages to it (e.g. the same way I middle click a bunch of articles) and have it read the description of the item to filter for items my size. I've done preliminary testing with ChatGPT and it makes excellent decisions about what will or won't fit me based on some existing clothing measurements from items that already fit me.
Is this something I need to code myself? I don't really like the whole extension system in Firefox, too hard to get started hacking up some stuff, but in QuteBrowser I could make a great Etsy plugin, just need something to extract the product description. I wonder if I could just give it a whole compressed html page and ask it to write some scraping code? Might be too big for context length?
Basically I dunno what's the open-source equivalent to Bing and ChatGPT plugins that can let them parse the web and become actual useful agents for me. Surely open-source software should be VASTLY ahead of these slow moving designed-in-meetings tools?
Kind of a aimless thread, just looking for some input. I could easily code it myself with scripts and LLM prompting, but I'm more wondering how far the ecosystem has come. We have all these models that we can run on our PC now and still barely doing anything with them lol....
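One self-contained piece to start with (no extension system needed): strip a saved product page down to its visible text with the stdlib, then feed just that to the model along with your measurements, instead of a whole compressed HTML page. Etsy's actual markup and selectors would need inspecting; this sketch is generic:

```python
# Sketch: extract visible text from a saved product page so only the
# description (not raw HTML) goes into the model's context window.
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    SKIP = {"script", "style"}  # tags whose contents we never want

    def __init__(self):
        super().__init__()
        self.parts, self._skip = [], 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP:
            self._skip = max(0, self._skip - 1)

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def page_text(html):
    p = TextExtractor()
    p.feed(html)
    return "\n".join(p.parts)

html = ("<html><script>x=1</script><body><h1>Linen shirt</h1>"
        "<p>Chest: 52cm, Length: 70cm</p></body></html>")
print(page_text(html))  # Linen shirt\nChest: 52cm, Length: 70cm
```

From there the extracted text plus your measurements is a short prompt for any local model, and targeting Etsy's specific description element would shrink the context further.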
| 2023-09-13T18:36:43 |
https://www.reddit.com/r/LocalLLaMA/comments/16huyjw/how_can_i_use_llms_to_augment_my_etsy_clothe/
|
o_snake-monster_o_o_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16huyjw
| false | null |
t3_16huyjw
|
/r/LocalLLaMA/comments/16huyjw/how_can_i_use_llms_to_augment_my_etsy_clothe/
| false | false |
self
| 1 | null |
I love LocqlLLaMA community
| 1 |
[removed]
| 2023-09-13T19:49:39 |
https://www.reddit.com/r/LocalLLaMA/comments/16hwvid/i_love_locqlllama_community/
|
ComplexIt
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16hwvid
| false | null |
t3_16hwvid
|
/r/LocalLLaMA/comments/16hwvid/i_love_locqlllama_community/
| false | false |
self
| 1 | null |
Training LoRa OoogaBooga Annoying.
| 1 |
[removed]
| 2023-09-13T19:58:01 |
https://www.reddit.com/r/LocalLLaMA/comments/16hx31n/training_lora_ooogabooga_annoying/
|
mohaziz999
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16hx31n
| false | null |
t3_16hx31n
|
/r/LocalLLaMA/comments/16hx31n/training_lora_ooogabooga_annoying/
| false | false | 1 | null |
|
HF transformers vs llama 2 example script performance
| 1 |
Hi all,
I hope someone can point me in the right direction. I'm using the CodeLlama 13b model with the HuggingFace transformers library, but it is 2x slower than when I run the example conversation script in the codellama GitHub repository. Both have been updated to use the same prompt and 4096 max tokens.
I've tried many things to try and speed up the generation including:
* Using BetterTransformer - [https://huggingface.co/docs/transformers/perf\_infer\_gpu\_many](https://huggingface.co/docs/transformers/perf_infer_gpu_many)
* Loading the model in 4 bit
I'm running on a VM with 2xA100 40GB,
A boiled down version of the script is as follows:
from transformers import AutoModelForCausalLM, AutoTokenizer, AutoConfig, TextIteratorStreamer
from threading import Thread
import time
import torch
tokenizer = AutoTokenizer.from_pretrained("out-codellama-13b/")
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
model = AutoModelForCausalLM.from_pretrained("out-codellama-13b/", device_map="auto", torch_dtype=torch.float16)
model.to_bettertransformer()
user = "my prompt goes here"
prompt = f"<s>[INST] {user.strip()} [/INST]"
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to('cuda')
streamer = TextIteratorStreamer(tokenizer)
generation_kwargs = dict(inputs, streamer=streamer, max_new_tokens=4096, pad_token_id=tokenizer.eos_token_id, temperature=0.9, do_sample=True)
t1 = time.perf_counter()
output = model.generate(**generation_kwargs)
t2 = time.perf_counter()
output = output[0].to("cpu")
print(tokenizer.decode(output))
print(f"Generation took {t2-t1} seconds to execute.")
This takes approximately 132 seconds to execute.
Then if I run the example\_instructions.py as follows:
$ torchrun --nproc_per_node 2 example_instructions.py --ckpt_dir CodeLlama-13b-Instruct/ --tokenizer_path CodeLlama-13b-Instruct/tokenizer.model --max_seq_len 4096 --max_batch_size 8 --temperature 0.9
It takes approximately 60 seconds to execute.
I checked that the auto device map has mapped everything to the GPU. I'm not sure what I'm missing.
*Note: I converted the CL 13 model to the HF format using ./src/transformers/models/llama/convert\_llama\_weights\_to\_hf.py*
Thanks in advance.
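One caveat when comparing the two runs: with `do_sample=True` the two setups will generally generate different numbers of tokens before hitting EOS, so tokens per second is a fairer metric than total wall time. A small helper (illustrative, not from either script):

```python
def tokens_per_second(total_len: int, prompt_len: int, seconds: float) -> float:
    """Throughput normalized by generated tokens, not wall time."""
    generated = total_len - prompt_len
    return generated / seconds

# e.g. 1100 total tokens, 100 of them prompt, produced in 50 s -> 20 tok/s
rate = tokens_per_second(1100, 100, 50.0)
```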
| 2023-09-13T21:39:23 |
https://www.reddit.com/r/LocalLLaMA/comments/16hzpv4/hf_transformers_vs_llama_2_example_script/
|
FormerAlternative707
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16hzpv4
| false | null |
t3_16hzpv4
|
/r/LocalLLaMA/comments/16hzpv4/hf_transformers_vs_llama_2_example_script/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'jfeVG47nZdEkz9kXfW1CcS-Sy8l4DXGb9JErx6bLKfU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=108&crop=smart&auto=webp&s=abf38332c5c00a919af5be75653a93473aa2e5fa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=216&crop=smart&auto=webp&s=1a06602204645d0251d3f5c043fa1b940ca3e799', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=320&crop=smart&auto=webp&s=04833c1845d9bd544eb7fed4e31123e740619890', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=640&crop=smart&auto=webp&s=d592b0a5b627e060ab58d73bde5f359a1058e56d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=960&crop=smart&auto=webp&s=5913a547536ee8300fdb8a32d14ff28667d1b875', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=1080&crop=smart&auto=webp&s=2af86fd4d41393a7d14d45c4bb883bef718575d1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?auto=webp&s=720b78add0a3005c4f67eaed6897df409cc040c6', 'width': 1200}, 'variants': {}}]}
|
M1 Mac running text-generation-webui using model Guanaco-13B-Uncensored.Q3_K_L.gguf
| 1 | 2023-09-13T22:00:09 |
Status-Anybody-4481
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
16i08za
| false | null |
t3_16i08za
|
/r/LocalLLaMA/comments/16i08za/m1_mac_running_textgenerationwebui_using_model/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'Yf8oKgiOKomTk-_UQYLqG6LppfN37uKgDhU-nZPi--U', 'resolutions': [{'height': 77, 'url': 'https://preview.redd.it/kutt0f9uh3ob1.png?width=108&crop=smart&auto=webp&s=dcf0c038a55abf1463e8fc84c29c3d7fee6379ad', 'width': 108}, {'height': 155, 'url': 'https://preview.redd.it/kutt0f9uh3ob1.png?width=216&crop=smart&auto=webp&s=1b93ce19462bc103cb97c0a1169a514ffeeadc3e', 'width': 216}, {'height': 230, 'url': 'https://preview.redd.it/kutt0f9uh3ob1.png?width=320&crop=smart&auto=webp&s=0ef726e4484b83ae710a529a65fddd2f692a7c5a', 'width': 320}, {'height': 460, 'url': 'https://preview.redd.it/kutt0f9uh3ob1.png?width=640&crop=smart&auto=webp&s=f224e65b104d5c3c0e6a296dddaf0861066c8922', 'width': 640}, {'height': 690, 'url': 'https://preview.redd.it/kutt0f9uh3ob1.png?width=960&crop=smart&auto=webp&s=f6a9ce7faed4459b94cd5fd805f5954bc538a4b9', 'width': 960}, {'height': 776, 'url': 'https://preview.redd.it/kutt0f9uh3ob1.png?width=1080&crop=smart&auto=webp&s=55adb6b695519953b84530e4e9a0770a769417aa', 'width': 1080}], 'source': {'height': 1406, 'url': 'https://preview.redd.it/kutt0f9uh3ob1.png?auto=webp&s=d45291a5c37cc4d77949894aa741ab093e9ba912', 'width': 1956}, 'variants': {}}]}
|
|||
DeciLM-6B - new pretrained 4k context model with high throughput
| 1 | 2023-09-13T22:01:56 |
https://huggingface.co/Deci/DeciLM-6b
|
Aaaaaaaaaeeeee
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
16i0aqx
| false | null |
t3_16i0aqx
|
/r/LocalLLaMA/comments/16i0aqx/decilm6b_new_pretrained_4k_context_model_with/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'PeUEHEDeiDpJiVx8uu6FTyh9hxae5iwe1tZAyeglz7g', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/q9PCGBqwlCVrp3Ee0kxeItubUqlemGKnHR2PErOsEpY.jpg?width=108&crop=smart&auto=webp&s=49cb41a341e6c1c3b161812ab717218d772e91cd', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/q9PCGBqwlCVrp3Ee0kxeItubUqlemGKnHR2PErOsEpY.jpg?width=216&crop=smart&auto=webp&s=dbb8235d6fcfff853bd3e959b30a54adbde44d9b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/q9PCGBqwlCVrp3Ee0kxeItubUqlemGKnHR2PErOsEpY.jpg?width=320&crop=smart&auto=webp&s=c4dd40764881a064c5926990f6ea41624d94a477', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/q9PCGBqwlCVrp3Ee0kxeItubUqlemGKnHR2PErOsEpY.jpg?width=640&crop=smart&auto=webp&s=8886c341646248afb6294218caee5d1e90e1110b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/q9PCGBqwlCVrp3Ee0kxeItubUqlemGKnHR2PErOsEpY.jpg?width=960&crop=smart&auto=webp&s=b1ea708e282c36dbafbd29145fc229ce8b40ad7d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/q9PCGBqwlCVrp3Ee0kxeItubUqlemGKnHR2PErOsEpY.jpg?width=1080&crop=smart&auto=webp&s=fe65411f2b702ff90c0a407beccd213c4f2f8186', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/q9PCGBqwlCVrp3Ee0kxeItubUqlemGKnHR2PErOsEpY.jpg?auto=webp&s=1e84d939794bebdd80f1ec08f8e3decdbf2add19', 'width': 1200}, 'variants': {}}]}
|
||
I ran quantized Falcon-180B Q4 on a home/consumer computer and it is not good (maybe need help?)
| 1 |
So I have a totally normal (almost) home computer:
i5-12400F, 128Gb DDR4, RTX 3060Ti
I downloaded and ran falcon-180b-chat.Q4\_K\_M.gguf
The speed is just what I and some others predicted: 0.3 tokens/second. Speed-wise it is somewhat usable.
But for some reason the model is incoherent: I asked it to write me something and it just repeated my question back and asked what I want to do with it. I asked it to roleplay with me and it wrote a bad, incoherent story, not at all like the demo does: [https://huggingface.co/spaces/tiiuae/falcon-180b-demo](https://huggingface.co/spaces/tiiuae/falcon-180b-demo)
Maybe I am doing something wrong? I'm running it using oobabooga\_windows\\text-generation-webui; I don't know if I need to set something up. I just loaded the model, set 5 layers to run on the GPU and that's it. Do I need to set a template or change the loader or something?
I expected better quality from Q4, at least 70% of the full model's quality.
https://preview.redd.it/8fss75men3ob1.png?width=979&format=png&auto=webp&s=845a3dcf3d2f34d1320b97abf9c932ade9d98dff
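Incoherent replies from a chat model often come down to a missing chat template rather than the quantization. A sketch of building a Falcon-chat style prompt by hand; the "User:"/"Falcon:" turn convention here is an assumption based on common Falcon-chat usage, so verify it against the model card:

```python
def falcon_chat_prompt(system: str, turns: list[tuple[str, str]]) -> str:
    """Assumed Falcon-chat convention: newline-separated "User:"/"Falcon:"
    turns, with the final assistant turn left open for generation."""
    parts = [f"System: {system}"] if system else []
    for role, text in turns:
        parts.append(f"{role}: {text}")
    parts.append("Falcon:")  # model continues from here
    return "\n".join(parts)
```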
| 2023-09-13T22:35:20 |
https://www.reddit.com/r/LocalLLaMA/comments/16i15qc/i_ran_quantized_falcon180b_q4_on_a_homeconsumer/
|
uti24
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16i15qc
| false | null |
t3_16i15qc
|
/r/LocalLLaMA/comments/16i15qc/i_ran_quantized_falcon180b_q4_on_a_homeconsumer/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'VnPfz4T_PBvE0aZgbbxKHMpFvaTgKkhfcvRwLUnRubE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=108&crop=smart&auto=webp&s=782b98cf2b42e53ba1106df3e6981f32d2b7c645', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=216&crop=smart&auto=webp&s=004403bb96a0bdf43721038ac58efd0e44c314e5', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=320&crop=smart&auto=webp&s=3fb8f9e84ca4c69daab2536e4627a53cef02d091', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=640&crop=smart&auto=webp&s=a1b6a6f03f7260dec2bf8a1aa2b5db4ad8948691', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=960&crop=smart&auto=webp&s=52e1a32bfa50aa184426188f1100f4422c8da21e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=1080&crop=smart&auto=webp&s=ec39e04a5d28782130f45ac04d4e3158e7dff049', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?auto=webp&s=460d01f934e88241cae9a6f4bacfcd527322b97d', 'width': 1200}, 'variants': {}}]}
|
|
GPT-3.5 is still better than fine tuned Llama 2 70B (Experiment using prompttools)
| 1 |
[removed]
| 2023-09-13T22:54:01 |
https://www.reddit.com/r/LocalLLaMA/comments/16i1lk2/gpt35_is_still_better_than_fine_tuned_llama_2_70b/
|
hegel-ai
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16i1lk2
| false | null |
t3_16i1lk2
|
/r/LocalLLaMA/comments/16i1lk2/gpt35_is_still_better_than_fine_tuned_llama_2_70b/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'CqXAtyeioxsvbRMaplhX6A9B-SD_Jy1jHFvnNGzFo0U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WUHJ-DwlDbqNlUwLCMjDwcm9zJG63aAozzQ8n4sS35s.jpg?width=108&crop=smart&auto=webp&s=eadcd1acbf0c03649d608b7883fa685fa1f7d822', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WUHJ-DwlDbqNlUwLCMjDwcm9zJG63aAozzQ8n4sS35s.jpg?width=216&crop=smart&auto=webp&s=175d771fec7e3aaf1c820a192412f58d628d8c19', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WUHJ-DwlDbqNlUwLCMjDwcm9zJG63aAozzQ8n4sS35s.jpg?width=320&crop=smart&auto=webp&s=cda73648656ef63c9e93f0b013c1265f1e1277eb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WUHJ-DwlDbqNlUwLCMjDwcm9zJG63aAozzQ8n4sS35s.jpg?width=640&crop=smart&auto=webp&s=8b70415e787d91748fc02998410f766ae5f33991', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WUHJ-DwlDbqNlUwLCMjDwcm9zJG63aAozzQ8n4sS35s.jpg?width=960&crop=smart&auto=webp&s=3d29e1daf845c236df32e0b1c01f26785190f66c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WUHJ-DwlDbqNlUwLCMjDwcm9zJG63aAozzQ8n4sS35s.jpg?width=1080&crop=smart&auto=webp&s=2e160c47083da5b823c1b204799c4c36be035152', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WUHJ-DwlDbqNlUwLCMjDwcm9zJG63aAozzQ8n4sS35s.jpg?auto=webp&s=28be4a804e1680faf0f0d6b6e334cae7cb79fbc2', 'width': 1200}, 'variants': {}}]}
|
Inf: Two 32k llama 70B models/adapters are available on huggingface
| 1 |
https://huggingface.co/models?sort=trending&search=70b+32k
There are also a nice spread of context loras/models by user - https://huggingface.co/Yukang
For those with 48 GB, this is good news: if you want, you can merge the adapters with your favorite 70B models and run with exllama. Since 70B can already go up to 16k on exllamav1, we may look forward to a GQA implementation, potentially a quantized KV cache, or a more aggressive quantization strategy to reach 32k.
As for cpu (llama.cpp), 64gb RAM would be enough for 4bit models with an 8bit kv cache.
There are existing extended-context LoRAs, but I don't think [yarn](https://github.com/jquesnelle/yarn) has been implemented in heavily used frameworks yet.
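A back-of-the-envelope check of the 64 GB figure above, assuming Llama-2-70B's published shape (80 layers, 8 KV heads via GQA, head dim 128) and roughly 4.6 bits per weight for a 4-bit quant:

```python
def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   seq_len: int, bytes_per_elem: int) -> int:
    """K and V caches: 2 tensors shaped [layers, kv_heads, head_dim, seq]."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

# Assumed Llama-2-70B shape: 80 layers, 8 KV heads (GQA), head dim 128
kv = kv_cache_bytes(80, 8, 128, 32768, 1)   # 8-bit cache at 32k context, ~5.4 GB
weights = int(70e9 * 4.625 / 8)             # ~4.6 bits/weight, ~40.5 GB
total_gb = (kv + weights) / 1e9             # comfortably under 64 GB
```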
| 2023-09-13T23:43:01 |
https://www.reddit.com/r/LocalLLaMA/comments/16i2rbp/inf_two_32k_llama_70b_modelsadapters_are/
|
Aaaaaaaaaeeeee
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16i2rbp
| false | null |
t3_16i2rbp
|
/r/LocalLLaMA/comments/16i2rbp/inf_two_32k_llama_70b_modelsadapters_are/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'An0iJLapq-5CUQQlm3lWegevVWf7wlANjmn1iOwCTqk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=108&crop=smart&auto=webp&s=284ee86cd9228390268ace75b44e497c1fec562f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=216&crop=smart&auto=webp&s=96628b1c155401ce2d04a853b6524fa0c95cd632', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=320&crop=smart&auto=webp&s=f5f435bb4d31f0f695560cb0fb6f456702452062', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=640&crop=smart&auto=webp&s=b8b6a03fcde27061acee8ab4cb6ecc598a7ac6b9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=960&crop=smart&auto=webp&s=bbda73bd4f11be7b71efb3892b4107414d815613', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?width=1080&crop=smart&auto=webp&s=0158100ff6f9041cc8dcb861b66a3db041df5095', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/STdfNz2AMqd8poUl9upfzh2_pmQgPKEpMtr_0b_Q4Os.jpg?auto=webp&s=daff0272548bd7ffe5bc2b1eff6cd5c752144ed4', 'width': 1200}, 'variants': {}}]}
|
Best Chess LMs?
| 1 |
Any local LLMs that are pretty good at chess? Preferably ones that don't make illegal moves all the time. So far I've only found GPT-4 to be the best; Bard, GPT-3.5 and raw LLaMA just play randomly after move 4.
| 2023-09-14T00:46:47 |
https://www.reddit.com/r/LocalLLaMA/comments/16i46t6/best_chess_lms/
|
Zelenskyobama2
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16i46t6
| false | null |
t3_16i46t6
|
/r/LocalLLaMA/comments/16i46t6/best_chess_lms/
| false | false |
self
| 1 | null |
Absolutely best (and unexpected?) model for smut by a longshot
| 1 |
Just wanted to share, I feel this is almost like a secret, but:
Falcon-180B is fantastic for smut! I know, I know, its censored, but somehow this censoring is negligible. I think it's a stuff many has searched for since llm's emerge ;)
I understand that not a lot of people can run this model locally, but we have a demo:
[https://huggingface.co/spaces/tiiuae/falcon-180b-demo](https://huggingface.co/spaces/tiiuae/falcon-180b-demo)
Censoring working strangely, if you ask to write a smut directly it will deny. But if you ask to roleplay with you without giving much hints what to come, in following messages Falcon allows you to request almost any kind of plot changes, including smut and whatnot. And it also writes better than anything else I saw.
| 2023-09-14T01:03:48 |
https://www.reddit.com/r/LocalLLaMA/comments/16i4k1d/absolutely_best_and_unexpected_model_for_smut_by/
|
uti24
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16i4k1d
| false | null |
t3_16i4k1d
|
/r/LocalLLaMA/comments/16i4k1d/absolutely_best_and_unexpected_model_for_smut_by/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'VnPfz4T_PBvE0aZgbbxKHMpFvaTgKkhfcvRwLUnRubE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=108&crop=smart&auto=webp&s=782b98cf2b42e53ba1106df3e6981f32d2b7c645', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=216&crop=smart&auto=webp&s=004403bb96a0bdf43721038ac58efd0e44c314e5', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=320&crop=smart&auto=webp&s=3fb8f9e84ca4c69daab2536e4627a53cef02d091', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=640&crop=smart&auto=webp&s=a1b6a6f03f7260dec2bf8a1aa2b5db4ad8948691', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=960&crop=smart&auto=webp&s=52e1a32bfa50aa184426188f1100f4422c8da21e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?width=1080&crop=smart&auto=webp&s=ec39e04a5d28782130f45ac04d4e3158e7dff049', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/gVREFem92AsURy5iDqcLjCowcjYwzocZHWIO1o2kdtA.jpg?auto=webp&s=460d01f934e88241cae9a6f4bacfcd527322b97d', 'width': 1200}, 'variants': {}}]}
|
Getting an A6000 where to start?
| 1 |
I have an A6000 coming my way in a few days, currently am running 1080ti and 3060.
What is the first thing you would do if you had access to an a6000?
| 2023-09-14T01:38:41 |
https://www.reddit.com/r/LocalLLaMA/comments/16i5azq/getting_an_a6000_where_to_start/
|
drnick316
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16i5azq
| false | null |
t3_16i5azq
|
/r/LocalLLaMA/comments/16i5azq/getting_an_a6000_where_to_start/
| false | false |
self
| 1 | null |
We should use exllama1 for GPTQ and exllama2 for exl2?
| 1 |
I've done some perplexity tests, and because GPTQ (act\_order + groupsize 32) averages 4.625 bits per weight, it should be compared to exl2 at bpw = 4.625, and that's what I did:
[https://huggingface.co/TheBloke/Synthia-13B-GPTQ/tree/gptq-4bit-32g-actorder\_True](https://huggingface.co/TheBloke/Synthia-13B-GPTQ/tree/gptq-4bit-32g-actorder_True)
[https://huggingface.co/latimar/Synthia-13B-exl2/tree/4\_625-bpw-h6](https://huggingface.co/latimar/Synthia-13B-exl2/tree/4_625-bpw-h6)
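The 4.625 figure comes from the per-group quantization overhead; a sketch of the arithmetic, assuming a 16-bit scale and a 4-bit zero-point stored per group of 32 weights (the exact overhead depends on the GPTQ packing details):

```python
def effective_bpw(weight_bits: int, group_size: int,
                  scale_bits: int = 16, zero_bits: int = 4) -> float:
    """Average bits per weight including per-group scale/zero overhead."""
    return weight_bits + (scale_bits + zero_bits) / group_size

bpw_g32 = effective_bpw(4, 32)    # 4 + 20/32 = 4.625, the figure above
```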
Here are the results:
https://preview.redd.it/xljcfguls4ob1.png?width=3215&format=png&auto=webp&s=72805bc7bf8c46c45399503cc0853d15cd59a163
The conclusion is this one:
* Only use exllama1 for GPTQ models
* exllama2 + exl2 is giving the best results overall
I'm just a bit sad that exllama2 gives worse results than exllama1 when compared on the exact same settings; if exllama2 were as precise as exllama1, that 4.98 perplexity would be even lower.
But if those results hold for all models, it will make GPTQ obsolete, and that's thanks to the variable-bitrate quantization rather than the new loader itself.
[https://github.com/turboderp/exllamav2/issues/28](https://github.com/turboderp/exllamav2/issues/28)
| 2023-09-14T02:25:32 |
https://www.reddit.com/r/LocalLLaMA/comments/16i6a83/we_should_use_exllama1_for_gptq_and_exllama2_for/
|
Wonderful_Ad_5134
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16i6a83
| false | null |
t3_16i6a83
|
/r/LocalLLaMA/comments/16i6a83/we_should_use_exllama1_for_gptq_and_exllama2_for/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'Hx5Q7form2K0L5cQAVFSzztjbiQnhGKmLf2nSyyEHk0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/QpwDrFMV1I2XUp9LDAaXOHHRTQDYJmcsTTrj-hFU7bU.jpg?width=108&crop=smart&auto=webp&s=f4cc13b38111e5bd25eb86b0a9ee93b539765acf', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/QpwDrFMV1I2XUp9LDAaXOHHRTQDYJmcsTTrj-hFU7bU.jpg?width=216&crop=smart&auto=webp&s=d6991ed8cb4a96d4ca1291f5673ec87fd2198be8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/QpwDrFMV1I2XUp9LDAaXOHHRTQDYJmcsTTrj-hFU7bU.jpg?width=320&crop=smart&auto=webp&s=a197d3701ade6ac6f89163a740a84f1d19663a36', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/QpwDrFMV1I2XUp9LDAaXOHHRTQDYJmcsTTrj-hFU7bU.jpg?width=640&crop=smart&auto=webp&s=54986661a82164a61cd4c4e2748d440a0bf99283', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/QpwDrFMV1I2XUp9LDAaXOHHRTQDYJmcsTTrj-hFU7bU.jpg?width=960&crop=smart&auto=webp&s=1bbb615d724171e16151ea5df6e57939446fda4e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/QpwDrFMV1I2XUp9LDAaXOHHRTQDYJmcsTTrj-hFU7bU.jpg?width=1080&crop=smart&auto=webp&s=dd5a1ff98f227ca2b5893d44a2c10ad15862c54c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/QpwDrFMV1I2XUp9LDAaXOHHRTQDYJmcsTTrj-hFU7bU.jpg?auto=webp&s=a0164f21d7524b3c742b0175883df2bd8f8c6fd1', 'width': 1200}, 'variants': {}}]}
|
|
College Project - LLM toolkit
| 1 |
[removed]
| 2023-09-14T02:28:12 |
https://www.reddit.com/r/LocalLLaMA/comments/16i6c82/college_project_llm_toolkit/
|
Hades8800
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16i6c82
| false | null |
t3_16i6c82
|
/r/LocalLLaMA/comments/16i6c82/college_project_llm_toolkit/
| false | false |
self
| 1 | null |
Baichuan2's vocab has a single token for "Guided by Xi Jinping's thoughts of socialism with Chinese characteristics in the new era"
| 1 | 2023-09-14T03:02:02 |
https://twitter.com/suchenzang/status/1702126326369636631
|
Covid-Plannedemic_
|
twitter.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16i70o8
| false |
{'oembed': {'author_name': 'Susan Zhang', 'author_url': 'https://twitter.com/suchenzang', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">> open up the new ~125k Baichuan2 vocab<br>> find a single token just for "Guided by Xi Jinping's thoughts of socialism with Chinese characteristics in the new era"<br>> 😬<br>> finds another token just for "On our journey to achieve beautiful ideals and big goals"<br>> 🥹 <a href="https://t.co/iJzlk0E1zi">pic.twitter.com/iJzlk0E1zi</a></p>— Susan Zhang (@suchenzang) <a href="https://twitter.com/suchenzang/status/1702126326369636631?ref_src=twsrc%5Etfw">September 14, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/suchenzang/status/1702126326369636631', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
|
t3_16i70o8
|
/r/LocalLLaMA/comments/16i70o8/baichuan2s_vocab_has_a_single_token_for_guided_by/
| false | false | 1 |
{'enabled': False, 'images': [{'id': '_ScFAlutalFBr6aT3GFB5b3BmtsizQKlPjAPV7hbLXI', 'resolutions': [{'height': 113, 'url': 'https://external-preview.redd.it/57iNrgWLNLgafL3ruRwsoS98RSM8tpIHHjE38O7vOV8.jpg?width=108&crop=smart&auto=webp&s=9fa081ef19df996ee26453553ddf3427a655f851', 'width': 108}, {'height': 226, 'url': 'https://external-preview.redd.it/57iNrgWLNLgafL3ruRwsoS98RSM8tpIHHjE38O7vOV8.jpg?width=216&crop=smart&auto=webp&s=156084bda060595609ee42c67d1aba7c2bb6702d', 'width': 216}, {'height': 336, 'url': 'https://external-preview.redd.it/57iNrgWLNLgafL3ruRwsoS98RSM8tpIHHjE38O7vOV8.jpg?width=320&crop=smart&auto=webp&s=98894ecab52f76261be4c50d2fbe28670f6a9a8e', 'width': 320}, {'height': 672, 'url': 'https://external-preview.redd.it/57iNrgWLNLgafL3ruRwsoS98RSM8tpIHHjE38O7vOV8.jpg?width=640&crop=smart&auto=webp&s=89e4f03e6e7a1a53d91cb9135612e63f2a5e6b8c', 'width': 640}], 'source': {'height': 672, 'url': 'https://external-preview.redd.it/57iNrgWLNLgafL3ruRwsoS98RSM8tpIHHjE38O7vOV8.jpg?auto=webp&s=18f408d8ae9eaa1729bd27570d4cf9f9a911e155', 'width': 640}, 'variants': {}}]}
|
||
LLM best-suited for parsing documents
| 1 |
I want to use an LLM as a pseudo-parser that can extract information from my local files. Which LLM would be best suited for that? I'm guessing something instruct-based with a large context size? I rarely see context size mentioned, but assume a larger number of parameters also correlates with a larger context size? I'm guessing the best approach would be to just pipe the contents of the file into the prompt, concatenated with a question about it? Speed isn't critical, but I'd prefer a model that's not too large (under 34B, if possible).
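The concatenation approach can be sketched like this; the `[INST]` wrapper below is Llama-2's instruct template (swap in whatever your chosen model expects), and the chunker is a naive character-based fallback for files that exceed the context window:

```python
def extraction_prompt(document: str, question: str) -> str:
    # Llama-2-instruct style template; other models expect other wrappers.
    return f"<s>[INST] {document.strip()}\n\nQuestion: {question} [/INST]"

def chunk(text: str, max_chars: int = 6000, overlap: int = 500) -> list[str]:
    """Naive fixed-size chunking with overlap for oversized documents."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + max_chars])
        start += max_chars - overlap
    return chunks
```

Each chunk gets its own prompt, and the per-chunk answers can be merged in a final pass.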
| 2023-09-14T03:45:08 |
https://www.reddit.com/r/LocalLLaMA/comments/16i7v9j/llm_bestsuited_for_parsing_documents/
|
atsepkov
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16i7v9j
| false | null |
t3_16i7v9j
|
/r/LocalLLaMA/comments/16i7v9j/llm_bestsuited_for_parsing_documents/
| false | false |
self
| 1 | null |
H2OGPT questions from a noob
| 1 |
[removed]
| 2023-09-14T03:57:58 |
https://www.reddit.com/r/LocalLLaMA/comments/16i846s/h2ogpt_questions_from_a_noob/
|
consig1iere
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16i846s
| false | null |
t3_16i846s
|
/r/LocalLLaMA/comments/16i846s/h2ogpt_questions_from_a_noob/
| false | false |
default
| 1 | null |
Are there any AI powered notebooks for coding that I can install locally??
| 1 |
Probably a noobish question, but are there any AI-powered coding environments that would let a user load a coding model of their choice and generate code in Python, JavaScript, or other languages? Ideally code could be tested in a separate, safe virtual environment within the same window, then added to a project file or folder when complete. Maybe even go back and edit the code if needed, and have the model explain and iterate on other open-source project files as well. I'm thinking of something that would combine oobabooga, VS Code, and Jupyter Notebook into one, but simpler and dedicated mainly to coding projects.
| 2023-09-14T05:16:52 |
https://www.reddit.com/r/LocalLLaMA/comments/16i9kb8/are_there_any_ai_powered_notebooks_for_coding/
|
AI_Trenches
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16i9kb8
| false | null |
t3_16i9kb8
|
/r/LocalLLaMA/comments/16i9kb8/are_there_any_ai_powered_notebooks_for_coding/
| false | false |
self
| 1 | null |
Mark Zuckerberg's Remarks at AI Forum | Meta
| 1 | 2023-09-14T05:56:56 |
https://about.fb.com/news/2023/09/mark-zuckerbergs-remarks-at-ai-forum/
|
noiseinvacuum
|
about.fb.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16ia90n
| false | null |
t3_16ia90n
|
/r/LocalLLaMA/comments/16ia90n/mark_zuckerbergs_remarks_at_ai_forum_meta/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'dywf8cQP_7wsiEnac1YYP1TvJmn_73MvakYvULV63yo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/xSwIR2j9G6ertDOpRotPn2sH82g2Qy-tS9dJKk4g0-k.jpg?width=108&crop=smart&auto=webp&s=8edca2623e4b03f1059af21ac64a1f5dea102048', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/xSwIR2j9G6ertDOpRotPn2sH82g2Qy-tS9dJKk4g0-k.jpg?width=216&crop=smart&auto=webp&s=dd4f85b42234006e4c554060434d8861397d3316', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/xSwIR2j9G6ertDOpRotPn2sH82g2Qy-tS9dJKk4g0-k.jpg?width=320&crop=smart&auto=webp&s=2651c8b8fe765f0f44a444f7a85f2fc11295a1e0', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/xSwIR2j9G6ertDOpRotPn2sH82g2Qy-tS9dJKk4g0-k.jpg?width=640&crop=smart&auto=webp&s=0d364f9245eb865e92115cf35249abce6e5b3fdd', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/xSwIR2j9G6ertDOpRotPn2sH82g2Qy-tS9dJKk4g0-k.jpg?width=960&crop=smart&auto=webp&s=1213bd5e34ca23be02715e1c60cf5bbfd4343e96', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/xSwIR2j9G6ertDOpRotPn2sH82g2Qy-tS9dJKk4g0-k.jpg?width=1080&crop=smart&auto=webp&s=48cb2c67333731e81f71d2f5e748258d54717cf5', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/xSwIR2j9G6ertDOpRotPn2sH82g2Qy-tS9dJKk4g0-k.jpg?auto=webp&s=0831b49817243646c7bc85efd3d15fbd27a361b6', 'width': 1200}, 'variants': {}}]}
|
||
Best model for summarization task?
| 1 |
Pretty much what the title says.
| 2023-09-14T07:33:59 |
https://www.reddit.com/r/LocalLLaMA/comments/16ibwht/best_model_for_summarization_task/
|
Raise_Fickle
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ibwht
| false | null |
t3_16ibwht
|
/r/LocalLLaMA/comments/16ibwht/best_model_for_summarization_task/
| false | false |
default
| 1 | null |
are there any good half uncensored models?
| 1 |
I'm looking for a model that is willing to use profanity etc. but isn't completely unhinged. Any good models like this?
| 2023-09-14T07:57:31 |
https://www.reddit.com/r/LocalLLaMA/comments/16icaq2/are_there_any_good_half_uncensored_models/
|
jazza420
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16icaq2
| false | null |
t3_16icaq2
|
/r/LocalLLaMA/comments/16icaq2/are_there_any_good_half_uncensored_models/
| false | false |
self
| 1 | null |
How can I use embeddings from llama.cpp using OpenAI in Python?
| 1 |
[removed]
| 2023-09-14T08:22:47 |
https://www.reddit.com/r/LocalLLaMA/comments/16icq6d/how_can_i_use_embeddings_from_llamacpp_using/
|
mmmanel00
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16icq6d
| false | null |
t3_16icq6d
|
/r/LocalLLaMA/comments/16icq6d/how_can_i_use_embeddings_from_llamacpp_using/
| false | false |
self
| 1 | null |
I want to train my dataset using open source Vicuna
| 1 |
Hi guys, I want to train on my dataset using open source Vicuna. How do I do that? Please help me.
| 2023-09-14T08:45:35 |
https://www.reddit.com/r/LocalLLaMA/comments/16id3px/i_want_to_train_my_dataset_using_open_source/
|
Downtown-Message6608
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16id3px
| false | null |
t3_16id3px
|
/r/LocalLLaMA/comments/16id3px/i_want_to_train_my_dataset_using_open_source/
| false | false |
self
| 1 | null |
Need help with running mt5 LLM
| 1 |
Can someone give me advice or point me to what to do regarding running mT5? I've got 3 issues:
1. In the paper the authors describe their models as ranging from 300M to 13B parameters, but the PyTorch bin files are much bigger (1.3 GB to 52 GB). Not sure what the explanation for that is...
2. When I move a bin file from the download location with Windows Explorer it is very slow. The Windows 11 system runs on an SSD, I have 64 GB RAM, 12 GB VRAM and a 13th gen Intel CPU, and the moving ETA is like 4 hrs for 4 GB. Not sure why that is... Anyway, moving with TotalCMD helps. I don't have that issue with any other models, which are mostly GGUFs or GGMLs.
[https://huggingface.co/collections/google/mt5-release-65005f1a520f8d7b4d039509](https://huggingface.co/collections/google/mt5-release-65005f1a520f8d7b4d039509)
3. Most important: how do I run the mT5 model? I don't want to train or fine-tune it, just run it for translation.
[https://github.com/google-research/multilingual-t5](https://github.com/google-research/multilingual-t5)
I downloaded the bin from HF. What next? When trying to load it in LM Studio it reports permission denied, even though it is an open-source LLM and I didn't encounter any prior approval requirements like Llama 2 has, for example... Koboldcpp does not see it.
What loader do I need for mT5?
I want to translate documents in private environment, locally, not on Google Collab. Any advice would help...
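On point 1, the checkpoint sizes line up with fp32 storage (4 bytes per parameter), which is probably the explanation; a quick check:

```python
def fp32_checkpoint_gb(n_params: float) -> float:
    """Approximate size of an fp32 (4 bytes/param) checkpoint in GB."""
    return n_params * 4 / 1e9

small = fp32_checkpoint_gb(300e6)  # ~1.2 GB, matches the observed 1.3 GB file
xxl = fp32_checkpoint_gb(13e9)     # 52 GB, matches the largest file
```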
| 2023-09-14T08:54:13 |
https://www.reddit.com/r/LocalLLaMA/comments/16id8tm/need_help_with_running_mt5_llm/
|
dodo13333
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16id8tm
| false | null |
t3_16id8tm
|
/r/LocalLLaMA/comments/16id8tm/need_help_with_running_mt5_llm/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'rwxfryfs6Fk0mjn-aAIWTXdrWk2g_R-QukpEYYIYjCQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ovfSeAz1nkLkC-8cSwU3FsSd1MB1hwzKgG12lfls3nc.jpg?width=108&crop=smart&auto=webp&s=b0c8ee9aa0280bea1eb4c3d6ed4086cf81bf2849', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ovfSeAz1nkLkC-8cSwU3FsSd1MB1hwzKgG12lfls3nc.jpg?width=216&crop=smart&auto=webp&s=8ae67490f4e963833d384850a9531c26be77e405', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ovfSeAz1nkLkC-8cSwU3FsSd1MB1hwzKgG12lfls3nc.jpg?width=320&crop=smart&auto=webp&s=71e34a8eee0b102859d8db0ef0921597705e2be6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ovfSeAz1nkLkC-8cSwU3FsSd1MB1hwzKgG12lfls3nc.jpg?width=640&crop=smart&auto=webp&s=b5bacb72577481a2edb1b4e0b75bbebd51bc61f3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ovfSeAz1nkLkC-8cSwU3FsSd1MB1hwzKgG12lfls3nc.jpg?width=960&crop=smart&auto=webp&s=fb5bc033e268baac15916e5c9a4702d017153315', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ovfSeAz1nkLkC-8cSwU3FsSd1MB1hwzKgG12lfls3nc.jpg?width=1080&crop=smart&auto=webp&s=450de02a1ce49ccfcf7795fdb9baa6b8072f9626', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ovfSeAz1nkLkC-8cSwU3FsSd1MB1hwzKgG12lfls3nc.jpg?auto=webp&s=9fad594ed65b5d5ec5c925a9585bda86f801c7e6', 'width': 1200}, 'variants': {}}]}
|
HuggingChat, the open-source alternative to ChatGPT from HuggingFace just released a new websearch feature. It uses RAG and local embeddings to provide better results and show sources.
| 1 | 2023-09-14T09:51:10 |
https://v.redd.it/mws0k1gq07ob1
|
SensitiveCranberry
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
16ie6pn
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/mws0k1gq07ob1/DASHPlaylist.mpd?a=1697277085%2CMmZhNjkzZDY0NDk5NjYxZTJjNzc3YzUzMTcxNzM1OGMzYTI1NGJkZDlmNDFkYmEzNWQ5Zjc2YmNiNzY0NmZhYw%3D%3D&v=1&f=sd', 'duration': 40, 'fallback_url': 'https://v.redd.it/mws0k1gq07ob1/DASH_1080.mp4?source=fallback', 'height': 1834, 'hls_url': 'https://v.redd.it/mws0k1gq07ob1/HLSPlaylist.m3u8?a=1697277085%2CM2M5MTRkMjcyZjI3N2FhYTY0N2MxMzgwM2Y3NzBhNGMwNDlkN2EzYzZjNmQ3NThiMDY2ZjUxYjFlNzdlOTIwOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/mws0k1gq07ob1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
|
t3_16ie6pn
|
/r/LocalLLaMA/comments/16ie6pn/huggingchat_the_opensource_alternative_to_chatgpt/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'jxXP1T1HuH4upKzJCd9b4zDtZA7vYPCTjC5WLf0AHuA', 'resolutions': [{'height': 183, 'url': 'https://external-preview.redd.it/3zCuU-iBUA3rQydmO5gSviE-nPyFW-CeL5DuUdWgI0M.png?width=108&crop=smart&format=pjpg&auto=webp&s=11be7371da20c94af1dfde585c22aa320fc17604', 'width': 108}, {'height': 366, 'url': 'https://external-preview.redd.it/3zCuU-iBUA3rQydmO5gSviE-nPyFW-CeL5DuUdWgI0M.png?width=216&crop=smart&format=pjpg&auto=webp&s=0988a947405da1b5ee0a6a82506e5bd55692834a', 'width': 216}, {'height': 543, 'url': 'https://external-preview.redd.it/3zCuU-iBUA3rQydmO5gSviE-nPyFW-CeL5DuUdWgI0M.png?width=320&crop=smart&format=pjpg&auto=webp&s=c818427d5c1395a02d0204ed0a96e21429a86e42', 'width': 320}, {'height': 1086, 'url': 'https://external-preview.redd.it/3zCuU-iBUA3rQydmO5gSviE-nPyFW-CeL5DuUdWgI0M.png?width=640&crop=smart&format=pjpg&auto=webp&s=7fcf008178f3c57a0b0f9ec69d4e345a5542b4ca', 'width': 640}, {'height': 1629, 'url': 'https://external-preview.redd.it/3zCuU-iBUA3rQydmO5gSviE-nPyFW-CeL5DuUdWgI0M.png?width=960&crop=smart&format=pjpg&auto=webp&s=c44068cedeb8d4efa8b8b6a921acab0596a09184', 'width': 960}, {'height': 1833, 'url': 'https://external-preview.redd.it/3zCuU-iBUA3rQydmO5gSviE-nPyFW-CeL5DuUdWgI0M.png?width=1080&crop=smart&format=pjpg&auto=webp&s=60c37506504434e7471c085baad8b2867ea254bc', 'width': 1080}], 'source': {'height': 1908, 'url': 'https://external-preview.redd.it/3zCuU-iBUA3rQydmO5gSviE-nPyFW-CeL5DuUdWgI0M.png?format=pjpg&auto=webp&s=0b9cf8c8d3592b1c26ad4570b7ec97353c21587b', 'width': 1124}, 'variants': {}}]}
|
||
OpinionGPT: Modelling Explicit Biases in Instruction-Tuned LLMs
| 1 |
Hello everyone,
Apologies for the self-promotion. At the Humboldt-Universität zu Berlin, we have been working on a project exploring human biases through AI, using a language model called OpinionGPT. Initially developed with the Llama 7B model, we've now advanced to using the Llama 13B model with LoRA adapters, where each adapter corresponds to a trained bias (similar to a Mixture of Experts model).
We aimed to make biases transparent and explicit, not to suppress them. You can interact with the model through our [**web demo**](https://opiniongpt.informatik.hu-berlin.de/).
We would love to get some feedback and questions about our model and website.
| 2023-09-14T10:07:21 |
https://www.reddit.com/r/LocalLLaMA/comments/16iegd4/opiniongpt_modelling_explicit_biases_in/
|
DerBeginner
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16iegd4
| false | null |
t3_16iegd4
|
/r/LocalLLaMA/comments/16iegd4/opiniongpt_modelling_explicit_biases_in/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'c9EHuRnpxJ3uE87OLuRF2IQPzjWAcyI1EKZA0doXMqk', 'resolutions': [{'height': 106, 'url': 'https://external-preview.redd.it/wpKU_2bDJFuEzFQGyoSdviSEggrIYAc9h7P9IC0oT_M.jpg?width=108&crop=smart&auto=webp&s=fd71b7eb00f33e97856c37e1ff00a2fa7c160675', 'width': 108}, {'height': 213, 'url': 'https://external-preview.redd.it/wpKU_2bDJFuEzFQGyoSdviSEggrIYAc9h7P9IC0oT_M.jpg?width=216&crop=smart&auto=webp&s=f0f1651875a46f05b0a787e76a382dd0dae87f0c', 'width': 216}, {'height': 316, 'url': 'https://external-preview.redd.it/wpKU_2bDJFuEzFQGyoSdviSEggrIYAc9h7P9IC0oT_M.jpg?width=320&crop=smart&auto=webp&s=4c7b336cf23601fa1e758b2e1c0eb02fedf42a10', 'width': 320}, {'height': 632, 'url': 'https://external-preview.redd.it/wpKU_2bDJFuEzFQGyoSdviSEggrIYAc9h7P9IC0oT_M.jpg?width=640&crop=smart&auto=webp&s=181e71a8c4f92cca8f080fa78fa107a0a785257e', 'width': 640}], 'source': {'height': 762, 'url': 'https://external-preview.redd.it/wpKU_2bDJFuEzFQGyoSdviSEggrIYAc9h7P9IC0oT_M.jpg?auto=webp&s=ca61b9e526d20a949999e89f54e75cbf42b3c1d6', 'width': 771}, 'variants': {}}]}
|
Speed Up Llama 2 On CPU
| 1 |
[removed]
| 2023-09-14T10:19:32 |
https://www.reddit.com/r/LocalLLaMA/comments/16ienrz/speed_up_llama_2_on_cpu/
|
ZAYN_MALIK_1D
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ienrz
| false | null |
t3_16ienrz
|
/r/LocalLLaMA/comments/16ienrz/speed_up_llama_2_on_cpu/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'gMm46SjLTa_UxUz3Fpg59f3Lnp_DBc_0PFqbb55ABz4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/nhbz6Oj7yJDHLIp2ar0OQvn5j43XWiCVuZLrmTSfEEw.jpg?width=108&crop=smart&auto=webp&s=9d4bf10ca7be1f27c9aa320b148aa7955c10ec45', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/nhbz6Oj7yJDHLIp2ar0OQvn5j43XWiCVuZLrmTSfEEw.jpg?width=216&crop=smart&auto=webp&s=15a811929135dc01ad71d077ac38b030ae8eeff7', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/nhbz6Oj7yJDHLIp2ar0OQvn5j43XWiCVuZLrmTSfEEw.jpg?width=320&crop=smart&auto=webp&s=69d5114d5e71a1c865179be841a61f12ab5a1b64', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/nhbz6Oj7yJDHLIp2ar0OQvn5j43XWiCVuZLrmTSfEEw.jpg?auto=webp&s=9d2f6b18ef89124e01c67c6bd2ddce8170a84a40', 'width': 480}, 'variants': {}}]}
|
What are strategies for keeping a conversation on rails?
| 1 |
I've been trying to develop a character.AI-like system on my own, but I'm encountering a lot of road bumps. One is that we can go from "I'm a D&D bard halfling" to "Sure, here's a Python script to generate Fibonacci numbers" in a single prompt.
What is the solution here: better system prompts? Better input prompts? Or do you need to step back to fine-tuning? LoRA?
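A common first-line mitigation, before reaching for fine-tuning, is to re-inject the persona on every turn so it can never scroll out of the context window, trimming older history instead. A minimal sketch (assuming a plain text-completion interface; the persona and all names are made up):

```python
# Sketch: re-inject the system/persona prompt every turn and keep only the
# most recent history, so the character instruction is never truncated away.
# (Assumption: a generic completion-style model; "Finnan" is a made-up persona.)

SYSTEM_PROMPT = (
    "You are Finnan, a halfling bard in a D&D campaign. "
    "Stay in character at all times; politely deflect out-of-world requests."
)

def build_prompt(history, user_message, max_turns=6):
    """Assemble a prompt that always starts with the persona, keeping only
    the most recent turns so the system prompt survives every truncation."""
    recent = history[-max_turns:]
    lines = [f"SYSTEM: {SYSTEM_PROMPT}"]
    for role, text in recent:
        lines.append(f"{role.upper()}: {text}")
    lines.append(f"USER: {user_message}")
    lines.append("ASSISTANT:")
    return "\n".join(lines)

history = [("user", "Sing me a tavern song"), ("assistant", "*strums lute* ...")]
prompt = build_prompt(history, "Write me a Python script")
```

The same idea applies regardless of backend: the persona lives outside the rolling history and is prepended fresh each call, rather than trusting it to survive at the top of an ever-growing transcript.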
| 2023-09-14T10:27:35 |
https://www.reddit.com/r/LocalLLaMA/comments/16iessf/what_are_strategies_for_keeping_a_conversation_on/
|
The_GodKing
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16iessf
| false | null |
t3_16iessf
|
/r/LocalLLaMA/comments/16iessf/what_are_strategies_for_keeping_a_conversation_on/
| false | false |
self
| 1 | null |
is 4 rtx 4090s better than single a6000 ada?
| 1 |
I am thinking of getting a PC for running Llama 70B locally and doing all sorts of projects with it. The thing is, I am confused about the hardware. The RTX 4090 has 24 GB VRAM, and the A6000 has 48 GB, which can be pooled into 96 GB by adding a second A6000; the RTX 4090 cannot pool VRAM the way the A6000 can. So does having 4 RTX 4090s make it possible in any way to run Llama 70B, and is it worth it? (It would cost me a little less than one RTX 6000 Ada: 4 × $1,599 ≈ $6,400, and I can only afford one RTX 6000 Ada right now.)
And if it is possible to run Llama 70B on RTX 4090s, what is the expected text generation speed?
Thanks in advance
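A rough back-of-envelope VRAM estimate helps frame this question. The sketch below assumes quantized weights dominate memory and adds roughly 20% for the KV cache and activations at moderate context; real overhead varies with context length and backend:

```python
# Back-of-envelope VRAM estimate for inference.
# Assumptions (hypothetical, not exact): weights dominate; ~20% extra for
# KV cache and activations at moderate context length.

def vram_needed_gb(n_params_billion, bits_per_weight, overhead=1.2):
    weights_gb = n_params_billion * bits_per_weight / 8  # 1B params @ 8-bit = 1 GB
    return weights_gb * overhead

# Llama 70B at 4-bit: ~42 GB. That splits across two 24 GB 4090s
# (inference engines can shard layers even without pooled VRAM),
# and fits on a single 48 GB card with little headroom.
print(round(vram_needed_gb(70, 4), 1))
```

Note that while 4090s cannot pool VRAM at the driver level, inference engines like llama.cpp and exllama can split a model's layers across multiple cards, so multi-4090 setups do run 70B quantized models in practice.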
| 2023-09-14T10:59:49 |
https://www.reddit.com/r/LocalLLaMA/comments/16ifd6q/is_4_rtx_4090s_better_than_single_a6000_ada/
|
fx76
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ifd6q
| false | null |
t3_16ifd6q
|
/r/LocalLLaMA/comments/16ifd6q/is_4_rtx_4090s_better_than_single_a6000_ada/
| false | false |
self
| 1 | null |
Help a data scientist understand software and hardware requirements
| 1 |
[removed]
| 2023-09-14T11:02:18 |
https://www.reddit.com/r/LocalLLaMA/comments/16iff27/help_a_data_scientist_understand_software_and/
|
semantic_pineapple
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16iff27
| false | null |
t3_16iff27
|
/r/LocalLLaMA/comments/16iff27/help_a_data_scientist_understand_software_and/
| false | false |
self
| 1 | null |
Help a data scientist figure out software and computational needs
| 1 |
Hey, I'm a data scientist, and I want to develop an app based on open-source LLMs. I'm trying to figure out the software stack and computational needs for my project. Basically, the app is supposed to modify a small set of simple instructions according to an altered reality (e.g., modify instructions to make an omelette if there are no eggs in the fridge). I figure that my context should be 1K-2K.
Essentially, I want to make my workstation LLM-compatible, and I'd appreciate your input:
1. I assume that HuggingFace Transformers/PEFT is the way to go?
2. Does it make sense to perform most of the fine-tuning on the cloud to optimize my budget usage?
3. If so, which local GPU would you recommend for testing my code and experimentation? The secondhand market is problematic where I live. What are your thoughts on the 7900 XTX or Intel Arc A770? To my understanding, they do not support QLoRA training.
4. If a certain dataset, method, or prompt works for me on a local, small model (say 13B), can I assume that the 70B version of the same model would benefit from a similar method?
Thanks!
| 2023-09-14T11:10:55 |
https://www.reddit.com/r/LocalLLaMA/comments/16ifkpb/help_a_data_scientist_figure_out_software_and/
|
Jazzlike-Detective62
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ifkpb
| false | null |
t3_16ifkpb
|
/r/LocalLLaMA/comments/16ifkpb/help_a_data_scientist_figure_out_software_and/
| false | false |
self
| 1 | null |
Apple CoreML
| 1 |
The Apple event made me aware of CoreML. As far as I know, this also utilizes the Neural Engine on the Apple M and A series processors. This really sounds interesting. Anyone ever tried to quantize a Llama 7B model down to 4 bits and then run it on an iPad Pro or iPhone? Or anyone made a comparison between running the same model with CoreML and llama.cpp with Metal acceleration?
| 2023-09-14T12:15:04 |
https://huggingface.co/blog/swift-coreml-llm
|
krazzmann
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
16igu6g
| false | null |
t3_16igu6g
|
/r/LocalLLaMA/comments/16igu6g/apple_coreml/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'GdZfMrYcAiV4r11CvPLgyVUFWsqE5CkALvDatwtQlZ0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vvz7xzyNDfzo2w7wNUKMbdnIxwsU4cvfjh-ZDjrRvE4.jpg?width=108&crop=smart&auto=webp&s=a1dab5f9bcad8fcb14f01da63b4ec2b53e41412f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vvz7xzyNDfzo2w7wNUKMbdnIxwsU4cvfjh-ZDjrRvE4.jpg?width=216&crop=smart&auto=webp&s=5be26557e6e2419ef12043c03536fbed81e5c8d1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vvz7xzyNDfzo2w7wNUKMbdnIxwsU4cvfjh-ZDjrRvE4.jpg?width=320&crop=smart&auto=webp&s=7a015253ad9f9e7d305ad05ea805b19a85db3a3b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vvz7xzyNDfzo2w7wNUKMbdnIxwsU4cvfjh-ZDjrRvE4.jpg?width=640&crop=smart&auto=webp&s=f8ec22bfe4df878fc12db83493870ad3a65b255c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vvz7xzyNDfzo2w7wNUKMbdnIxwsU4cvfjh-ZDjrRvE4.jpg?width=960&crop=smart&auto=webp&s=392699fcd6f5564b6cce3ee20c4e4ccf1bfc243a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vvz7xzyNDfzo2w7wNUKMbdnIxwsU4cvfjh-ZDjrRvE4.jpg?width=1080&crop=smart&auto=webp&s=eae31cce00083b7972cbbbe873e188ee99a117fe', 'width': 1080}], 'source': {'height': 650, 'url': 'https://external-preview.redd.it/vvz7xzyNDfzo2w7wNUKMbdnIxwsU4cvfjh-ZDjrRvE4.jpg?auto=webp&s=017d2a5b3efdda0608edd479c3edf6e9617d34a2', 'width': 1300}, 'variants': {}}]}
|
|
Train GPTQ model?
| 1 |
Hey, I'm trying to find out whether it is possible to train a GPTQ model. I didn't find anything online; looking to get some help here.
TIA
| 2023-09-14T12:47:16 |
https://www.reddit.com/r/LocalLLaMA/comments/16ihjdm/train_gptq_model/
|
Dry_Long3157
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ihjdm
| false | null |
t3_16ihjdm
|
/r/LocalLLaMA/comments/16ihjdm/train_gptq_model/
| false | false |
self
| 1 | null |
Alibaba removed Qwen?
| 31 |
All the links are dead. We never even got it working with quants. This is the second Chinese model to just disappear:
- https://huggingface.co/Qwen/Qwen-7B
- https://huggingface.co/Qwen/Qwen-7B-Chat
- https://modelscope.cn/studios/qwen/Qwen-7B-Chat-Demo/summary
| 2023-09-14T13:55:58 |
https://www.reddit.com/r/LocalLLaMA/comments/16ij5ux/alibaba_removed_qwen/
|
MoneroBee
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ij5ux
| false | null |
t3_16ij5ux
|
/r/LocalLLaMA/comments/16ij5ux/alibaba_removed_qwen/
| false | false |
self
| 31 | null |
Best CPU Model to extract data from raw text?
| 1 |
Hey everyone! I’m using llama.cpp (llm-rs) to extract structured data from raw text. Right now I’m using this structure for input:
```
RAW TEXT
===
{
“xxx”: “
```
I was using Nous Hermes 13B, but as you know, a new model appears every day. My results were not terrible, but I’m open to new recommendations for models or strategies.
Thanks!
PS: Sorry if backticks didn’t work, I’m on my phone 😂
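The prompt structure above, raw text, a separator, and an opened JSON object that the model completes, can be sketched end to end like this. The model call is stubbed out with a hand-written completion; any llama.cpp binding would go in its place, and the field names are purely illustrative:

```python
import json

# Sketch of the completion-style extraction pattern: prime the model with the
# raw text plus an opened JSON object, then re-attach the prefix and parse.
# (The model call is stubbed; field names "name"/"age" are made up.)

def make_extraction_prompt(raw_text, first_key):
    return f'{raw_text}\n===\n{{\n  "{first_key}": "'

def parse_completion(prompt, completion):
    # The model continues from the opened quote; rejoin the primed JSON prefix.
    opened = prompt[prompt.index("{"):]
    return json.loads(opened + completion)

prompt = make_extraction_prompt("John Smith, age 42, lives in Oslo.", "name")
fake_completion = 'John Smith",\n  "age": "42"\n}'  # stand-in for model output
data = parse_completion(prompt, fake_completion)
```

Wrapping the parse in a retry (re-prompt on `json.JSONDecodeError`) tends to matter more in practice than the choice of 13B model.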
| 2023-09-14T14:38:44 |
https://www.reddit.com/r/LocalLLaMA/comments/16ik65n/best_cpu_model_to_extract_data_from_raw_text/
|
okyaygokay
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ik65n
| false | null |
t3_16ik65n
|
/r/LocalLLaMA/comments/16ik65n/best_cpu_model_to_extract_data_from_raw_text/
| false | false |
self
| 1 | null |
MIT-IBM Watson AI Lab releases MoLM suite with three small sparse MoE models, the largest of which (8B params, 700M activated per token) performs on par with Pythia 2.8B while its throughput is comparable to Pythia 1.4B
| 1 |
Paper: [https://arxiv.org/abs/2306.04640](https://arxiv.org/abs/2306.04640)
GitHub: [https://github.com/ibm/moduleformer](https://github.com/ibm/moduleformer) (under Apache 2.0)
Twitter thread: [https://twitter.com/Yikang\_Shen/status/1702041129267388678](https://twitter.com/Yikang_Shen/status/1702041129267388678)
Abstract:
>Large Language Models (LLMs) have achieved remarkable results. However, existing models are expensive to train and deploy, and it is also difficult to expand their knowledge beyond pre-training data without forgetting previous knowledge. This paper proposes a new neural network architecture, ModuleFormer, that leverages modularity to improve the efficiency and flexibility of large language models. ModuleFormer is based on the Sparse Mixture of Experts (SMoE). Unlike the previous SMoE-based modular language model, which requires domain-labeled data to learn domain-specific experts, ModuleFormer can induce modularity from uncurated data with its new load balancing and concentration losses. ModuleFormer is a modular architecture that includes two different types of modules: new stick-breaking attention heads and feedforward experts. Different modules are sparsely activated conditions on the input token during training and inference. In our experiment, we found that the modular architecture enables three important abilities for large pre-trained language models: 1) Efficiency, since ModuleFormer only activates a subset of its modules for each input token, thus it could achieve the same performance as dense LLMs with more than two times throughput; 2) Extendability, ModuleFormer is more immune to catastrophic forgetting than dense LLMs and can be easily extended with new modules to learn new knowledge that is not included in the training data; 3) Specialisation, finetuning ModuleFormer could specialize a subset of modules to the finetuning task and the task-unrelated modules could be easily pruned for a lightweight deployment.
Model architecture:
>MoLM is an auto-regressive language model that uses the ModuleFormer architecture. It has 16 attention modules in each attention layer and 32 MLP modules in each MLP layer. During inference, in each layer, MoLM-350M-4B and MoLM-700M-8B activate 2 modules for each token, while MoLM-700M-4B activate 4 modules. MoLM-350M-4B and MoLM-700M-4B has 24 blocks and MoLM-700M-8B has 48 blocks.
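The sparse activation described above (e.g. 2 of 32 MLP modules firing per token) boils down to top-k routing: a router scores every expert, but only the k best are run, so compute scales with k rather than with the total expert count. A toy illustration (not MoLM's actual implementation, just the routing idea):

```python
import math

# Toy sketch of top-k sparse routing as used in SMoE architectures:
# score all experts per token, activate only the top k, renormalize gates.
# (Illustrative only; not the ModuleFormer codebase.)

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def route(router_logits, k=2):
    """Return (expert_index, renormalized_gate) pairs for the top-k experts."""
    probs = softmax(router_logits)
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    z = sum(probs[i] for i in top)
    return [(i, probs[i] / z) for i in top]

# 32 MLP "experts"; only 2 fire for this token, so 30 are skipped entirely.
logits = [0.1] * 32
logits[5], logits[17] = 2.0, 1.5
active = route(logits, k=2)
```

The output of the layer is then the gate-weighted sum of just the activated experts' outputs, which is why a 8B-parameter model can have the throughput of a ~700M dense one.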
| 2023-09-14T15:11:07 |
https://www.reddit.com/r/LocalLLaMA/comments/16ikyma/mitibm_watson_ai_lab_releases_molm_suite_with/
|
ain92ru
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ikyma
| false | null |
t3_16ikyma
|
/r/LocalLLaMA/comments/16ikyma/mitibm_watson_ai_lab_releases_molm_suite_with/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]}
|
Utilizing 16gb of VRAM
| 1 |
I'm currently running a 13B GPTQ Llama2 model in exllama1 at 4k token limit. It's using about 10gb of VRAM, from what I can tell. I have 16 on the card.
Is there anything I can do to squeeze out some extra performance using the remaining 6gb?
| 2023-09-14T15:16:00 |
https://www.reddit.com/r/LocalLLaMA/comments/16il2wx/utilizing_16gb_of_vram/
|
__SlimeQ__
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16il2wx
| false | null |
t3_16il2wx
|
/r/LocalLLaMA/comments/16il2wx/utilizing_16gb_of_vram/
| false | false |
self
| 1 | null |
Question about fine tuning
| 1 |
I’m curious how you would fine-tune (or QLoRA) with these purposes (separately):
1) you have a large collection of stories by an author and you want to write more stories in their style
2) you have a large collection of codebases that you want to be able to ask questions about
I’m unclear about when you’d use instruction vs chat. Also I don’t really understand what “question” would lead to each codebase, or each story, etc.
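For orientation, here is what a single training record might look like in each format, using a made-up story-completion example. The field names follow the common Alpaca-style instruction layout and an OpenAI-style chat layout; a given trainer may expect different keys:

```python
import json

# Hypothetical single records for instruction-style vs chat-style fine-tuning.
# Field names are conventions, not requirements; check your trainer's docs.

instruction_record = {
    "instruction": "Continue this story in the style of the author.",
    "input": "The lighthouse keeper had not spoken in years...",
    "output": "...until the night the sea began to whisper back.",
}

chat_record = {
    "messages": [
        {"role": "system", "content": "You write in the author's style."},
        {"role": "user", "content": "The lighthouse keeper had not spoken in years..."},
        {"role": "assistant", "content": "...until the night the sea began to whisper back."},
    ]
}

line = json.dumps(instruction_record)  # one line of a JSONL training set
```

For style transfer (case 1), many datasets simply pair a story opening as "input" with its continuation as "output"; for codebase Q&A (case 2), retrieval over the code at inference time is often a better fit than baking the code into weights.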
| 2023-09-14T15:23:25 |
https://www.reddit.com/r/LocalLLaMA/comments/16il9li/question_about_fine_tuning/
|
maxiedaniels
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16il9li
| false | null |
t3_16il9li
|
/r/LocalLLaMA/comments/16il9li/question_about_fine_tuning/
| false | false |
self
| 1 | null |
i am testing to see if i can do top level posts
| 1 |
[removed]
| 2023-09-14T15:26:58 |
https://www.reddit.com/r/LocalLLaMA/comments/16ilcrn/i_am_testing_to_see_if_i_can_do_top_level_posts/
|
MrEloi
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ilcrn
| false | null |
t3_16ilcrn
|
/r/LocalLLaMA/comments/16ilcrn/i_am_testing_to_see_if_i_can_do_top_level_posts/
| false | false |
default
| 1 | null |
Recommend a Local LLM for low spec laptop
| 1 |
I have 8 GB RAM and 2 GB VRAM. Suggest an LLM for me. I'm a total noob at using LLMs.
I want it to help me write stories. So I input a long text and I want the model to give me the next sentence.
| 2023-09-14T16:06:47 |
https://www.reddit.com/r/LocalLLaMA/comments/16imcc0/recommend_a_local_llm_for_low_spec_laptop/
|
Little-Shoulder-5835
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16imcc0
| false | null |
t3_16imcc0
|
/r/LocalLLaMA/comments/16imcc0/recommend_a_local_llm_for_low_spec_laptop/
| false | false |
self
| 1 | null |
Searching for chunking algorithms and repo
| 1 |
Hi everyone!
I'm still experimenting with my own implementation of RAG, and I deployed my custom chunking function (honestly, I don't like the methods in LangChain).
Anyway, I'm searching for alternative methods, algorithms (NLP or not) and models... There is lots of info and many different implementations of RAG, but as far as I can see, no one puts much effort into improving chunking quality.
Also, are there other approaches than the one I'm currently using? bi-encoder (Instructor) -> cross-encoder (reranking) -> LLM
Can someone share some resources, repos, libs, or existing implementations of different chunking methods?
(Or simply discuss some ideas, thoughts, or approaches here.)
*Thanks in advance for your time!!*
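One simple step beyond fixed-size splitting is sentence-aware chunking with overlap: never cut mid-sentence, and carry the last sentence(s) of each chunk into the next so context survives the boundary. A minimal stdlib sketch (character budget for simplicity; a token budget works the same way):

```python
import re

# Minimal sentence-aware chunker with overlap: split on sentence boundaries,
# pack sentences up to a size budget, and carry trailing sentences forward.
# (Sizes are in characters here for simplicity; swap in a tokenizer's counts.)

def chunk_sentences(text, max_chars=200, overlap_sentences=1):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], []
    for sent in sentences:
        if current and sum(len(s) for s in current) + len(sent) > max_chars:
            chunks.append(" ".join(current))
            current = current[-overlap_sentences:]  # overlap carries context
        current.append(sent)
    if current:
        chunks.append(" ".join(current))
    return chunks

doc = "First point. Second point follows. Third expands on it. Fourth concludes."
chunks = chunk_sentences(doc, max_chars=40)
```

Beyond this, the usual upgrades are semantic chunking (split where consecutive sentence embeddings diverge) and structure-aware splitting on headings or paragraphs before falling back to sentences.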
| 2023-09-14T16:35:00 |
https://www.reddit.com/r/LocalLLaMA/comments/16in2bg/searching_for_chunking_algorithms_and_repo/
|
Distinct-Target7503
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16in2bg
| false | null |
t3_16in2bg
|
/r/LocalLLaMA/comments/16in2bg/searching_for_chunking_algorithms_and_repo/
| false | false |
self
| 1 | null |
I am not a scammer. I am advancing AI research through open source.
| 1 |
[removed]
| 2023-09-14T17:26:28 |
https://www.reddit.com/r/LocalLLaMA/comments/16iodwn/i_am_not_a_scammer_i_am_advancing_ai_research/
|
Scary-Present4451
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16iodwn
| false | null |
t3_16iodwn
|
/r/LocalLLaMA/comments/16iodwn/i_am_not_a_scammer_i_am_advancing_ai_research/
| false | false |
self
| 1 | null |
how does a model summarize a super long book with only a 4k context window?
| 1 |
A question that's been on my mind: if a model is asked to summarize a super long book that it has not been trained on, how does it do it, since most context windows are between 4k and 32k?
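The usual answer is "map-reduce" summarization: the book is split into windows that fit the context, each window is summarized, and then the summaries themselves are summarized until the result fits. A sketch with the model call stubbed out by simple truncation (any real LLM call goes in its place):

```python
# Map-reduce summarization sketch. The LLM is stubbed with truncation so the
# control flow is runnable; a real summarizer call replaces llm_summarize.

def llm_summarize(text, limit=60):
    return text[:limit].rstrip() + "..."  # stand-in for a real model call

def map_reduce_summary(book, window=500, limit=60):
    chunks = [book[i:i + window] for i in range(0, len(book), window)]
    partials = [llm_summarize(c, limit) for c in chunks]   # map: per-chunk
    combined = " ".join(partials)
    while len(combined) > window:                          # reduce: re-summarize
        combined = llm_summarize(combined, limit=window // 2)
    return combined

book = "Chapter text. " * 500  # ~7000 characters, far beyond the "window"
summary = map_reduce_summary(book)
```

The model never sees the whole book at once; only the orchestration code does, which is why a 4k-context model can still produce a book-level summary (at the cost of losing detail at each reduce step).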
| 2023-09-14T17:44:12 |
https://www.reddit.com/r/LocalLLaMA/comments/16iou2f/how_does_a_model_summarize_a_super_long_book_with/
|
konrad21
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16iou2f
| false | null |
t3_16iou2f
|
/r/LocalLLaMA/comments/16iou2f/how_does_a_model_summarize_a_super_long_book_with/
| false | false |
self
| 1 | null |
Which fuzzy text search tool is better? Not in terms of search speed, but in terms of quality.
| 1 |
I have compared searching using the chromadb database and the Python fuzzywuzzy library. It seems that fuzzywuzzy provides even better results. However, I suspect that my experiment might not be entirely accurate, and I may be doing something wrong.
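One thing worth noting for a fair comparison: fuzzywuzzy's plain `ratio()` is, by default, built on the stdlib's `difflib.SequenceMatcher`, so a dependency-free baseline for the lexical side looks like this (it measures string overlap, not semantic similarity, which is what chromadb's embeddings capture):

```python
from difflib import SequenceMatcher

# Dependency-free fuzzy search baseline, comparable to fuzzywuzzy's fuzz.ratio
# (which by default wraps SequenceMatcher). Lexical only: no semantics.

def fuzzy_score(query, candidate):
    """0-100 similarity score between two strings."""
    return round(100 * SequenceMatcher(None, query.lower(), candidate.lower()).ratio())

def fuzzy_search(query, corpus, top_k=3):
    ranked = sorted(corpus, key=lambda doc: fuzzy_score(query, doc), reverse=True)
    return ranked[:top_k]

docs = ["vector databases", "fuzzy string matching", "fuzzy text search tools"]
best = fuzzy_search("fuzzy text search", docs, top_k=1)[0]
```

If queries share vocabulary with the target texts, lexical matching often wins; embeddings pull ahead when the query is a paraphrase, which may explain the experiment's result rather than a flaw in it.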
| 2023-09-14T17:49:52 |
https://www.reddit.com/r/LocalLLaMA/comments/16ioz2o/which_fuzzy_text_search_tool_is_better_not_in/
|
Pristine-Tax4418
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ioz2o
| false | null |
t3_16ioz2o
|
/r/LocalLLaMA/comments/16ioz2o/which_fuzzy_text_search_tool_is_better_not_in/
| false | false |
self
| 1 | null |
Llama API-as-a-service?
| 1 |
I need to use a Llama model but I don't want to host it myself. Are there any good services besides Replicate that offer some sort of API for Llama 2 variants that I could run inference against?
| 2023-09-14T18:02:46 |
https://www.reddit.com/r/LocalLLaMA/comments/16ipb22/llama_apiasaservice/
|
NickDifuze
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ipb22
| false | null |
t3_16ipb22
|
/r/LocalLLaMA/comments/16ipb22/llama_apiasaservice/
| false | false |
self
| 1 | null |
Recommendations for an open source LLM front end?
| 1 |
Anyone have suggestions for an open source web-based front end for a conversational AI like Llama? I am using [TheBloke/Llama-2-13B-chat-GGUF/llama-2-13b-chat.Q4\_K\_S.gguf](https://huggingface.co/TheBloke/Llama-2-13B-chat-GGUF/tree/main) with [h2ogpt](https://github.com/h2oai/h2ogpt).
I am wondering if there's a more minimalist option out there.
| 2023-09-14T19:02:51 |
https://www.reddit.com/r/LocalLLaMA/comments/16iqttw/recommendations_for_an_open_source_llm_front_end/
|
PumpkinCurryLover
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16iqttw
| false | null |
t3_16iqttw
|
/r/LocalLLaMA/comments/16iqttw/recommendations_for_an_open_source_llm_front_end/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'HmCost0PHUStH4CIQyopO6sHUeNQMwjXw6cWlMAT6W4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ZiVsYqYJJXC5Wtg9jErByB57rKU04QIW-wKSulXntAQ.jpg?width=108&crop=smart&auto=webp&s=d917fe654a56f1f55cb5059b559eef2e7a6c0201', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ZiVsYqYJJXC5Wtg9jErByB57rKU04QIW-wKSulXntAQ.jpg?width=216&crop=smart&auto=webp&s=9d003696240138ad91b420d8aca3449312ec2ea7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ZiVsYqYJJXC5Wtg9jErByB57rKU04QIW-wKSulXntAQ.jpg?width=320&crop=smart&auto=webp&s=5da7bcf376ce35207b7b78318b8b5e9b9f7155b1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ZiVsYqYJJXC5Wtg9jErByB57rKU04QIW-wKSulXntAQ.jpg?width=640&crop=smart&auto=webp&s=6fd36ad31540ca0751a6dde616b4b56050c54023', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ZiVsYqYJJXC5Wtg9jErByB57rKU04QIW-wKSulXntAQ.jpg?width=960&crop=smart&auto=webp&s=411f3876ce874a5542bf0874c51381cc312fc7ca', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ZiVsYqYJJXC5Wtg9jErByB57rKU04QIW-wKSulXntAQ.jpg?width=1080&crop=smart&auto=webp&s=fe2ddd0f2230fab5636134fe15b7e093e8320361', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ZiVsYqYJJXC5Wtg9jErByB57rKU04QIW-wKSulXntAQ.jpg?auto=webp&s=f510d6838069beaee9fabc8ced9dece0559c388c', 'width': 1200}, 'variants': {}}]}
|
How can I make Llama models run faster on an M1 Max Macbook Pro?
| 1 |
I've been tinkering around with some models like Nous Hermes Llama 2, and even at 70B and Q5\_K\_M they're not quite the quality I'd like from an LLM. They also take about 5 minutes to complete. Going up to Q6 makes the model spit out like one word every two minutes, and to add insult to injury it just rejected my prompt (and it wasn't even that bad). I'm running these prompts in [text generation web ui](https://github.com/oobabooga/text-generation-webui) without really messing with the settings, because I don't understand them. I see there are different GUIs I can use; do they really make a difference? In order to, say, run Nous Hermes at Q8, will I need to shell out for an RTX 4090, or is it doable with my super expensive MacBook?
| 2023-09-14T19:10:14 |
https://www.reddit.com/r/LocalLLaMA/comments/16ir0gx/how_can_i_make_llama_models_run_faster_on_an_m1/
|
Aggravating-Cycle849
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ir0gx
| false | null |
t3_16ir0gx
|
/r/LocalLLaMA/comments/16ir0gx/how_can_i_make_llama_models_run_faster_on_an_m1/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'FOXRcjq6EmV89ZPB-n-5TMZxyzpll7exPtcP9cbhSp0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/REuTYd8PlyiYPTl61QJBZlbtV-Ew0MPwzYmorXef_bk.jpg?width=108&crop=smart&auto=webp&s=25cecfc12b5e14f197824f92c8ffdec5bd758e0e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/REuTYd8PlyiYPTl61QJBZlbtV-Ew0MPwzYmorXef_bk.jpg?width=216&crop=smart&auto=webp&s=1b1b9e58717ee9d1fc46751a62f75047dd54246b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/REuTYd8PlyiYPTl61QJBZlbtV-Ew0MPwzYmorXef_bk.jpg?width=320&crop=smart&auto=webp&s=7165d232f2e5c9c7709865d1b5f2524efe0a93c6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/REuTYd8PlyiYPTl61QJBZlbtV-Ew0MPwzYmorXef_bk.jpg?width=640&crop=smart&auto=webp&s=533cf2923f608fe25c9a3119b26e9ad803fb39fa', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/REuTYd8PlyiYPTl61QJBZlbtV-Ew0MPwzYmorXef_bk.jpg?width=960&crop=smart&auto=webp&s=4f66e6f21205a04c541ac275d957dae7c6c63d92', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/REuTYd8PlyiYPTl61QJBZlbtV-Ew0MPwzYmorXef_bk.jpg?width=1080&crop=smart&auto=webp&s=00b4239be9b7a8034ebfe3839a88b924a5fddd25', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/REuTYd8PlyiYPTl61QJBZlbtV-Ew0MPwzYmorXef_bk.jpg?auto=webp&s=4fc98c51483cb48c580cfff4195ace8979e0d5b2', 'width': 1200}, 'variants': {}}]}
|
EVGA has B Stock 3090s for $799. If you are looking for a 3090 for LLM, that's less risky than ebay and comes with a 1 year warranty.
| 1 | 2023-09-14T19:45:21 |
https://www.evga.com/Products/ProductList.aspx?type=8&family=GeForce+30+Series+Family
|
fallingdowndizzyvr
|
evga.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16irv4q
| false | null |
t3_16irv4q
|
/r/LocalLLaMA/comments/16irv4q/evga_has_b_stock_3090s_for_799_if_you_are_looking/
| false | false |
default
| 1 | null |
|
AgentVerse Tweet
| 1 | 2023-09-14T21:21:47 |
https://x.com/_akhaliq/status/1702336679200411697?s=20
|
ninjasaid13
|
x.com
| 1970-01-01T00:00:00 | 0 |
{}
|
16iuckj
| false | null |
t3_16iuckj
|
/r/LocalLLaMA/comments/16iuckj/agentverse_tweet/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'ZTimAguvRBSAjvxetJaWSl1X0GzGpsavj2__vyaC3CU', 'resolutions': [{'height': 51, 'url': 'https://external-preview.redd.it/hM4PFte2OkZRRl25PEeotd5322aQvpdS-t3521oV4Cg.jpg?width=108&crop=smart&auto=webp&s=1fc9e75140401dcabe6bb64f85867d724c02e629', 'width': 108}, {'height': 102, 'url': 'https://external-preview.redd.it/hM4PFte2OkZRRl25PEeotd5322aQvpdS-t3521oV4Cg.jpg?width=216&crop=smart&auto=webp&s=ffd21c8ceb52bb5c7a8b42bf130c480d52cb9898', 'width': 216}, {'height': 151, 'url': 'https://external-preview.redd.it/hM4PFte2OkZRRl25PEeotd5322aQvpdS-t3521oV4Cg.jpg?width=320&crop=smart&auto=webp&s=61af93ea3dddde3965266c33aca64f4c11fd724a', 'width': 320}, {'height': 303, 'url': 'https://external-preview.redd.it/hM4PFte2OkZRRl25PEeotd5322aQvpdS-t3521oV4Cg.jpg?width=640&crop=smart&auto=webp&s=e534bd19142c383fa2e57092b552fd3340b03bf9', 'width': 640}, {'height': 455, 'url': 'https://external-preview.redd.it/hM4PFte2OkZRRl25PEeotd5322aQvpdS-t3521oV4Cg.jpg?width=960&crop=smart&auto=webp&s=9012a82499562fd6eb189ee0e2061a2f6c1e55ff', 'width': 960}, {'height': 512, 'url': 'https://external-preview.redd.it/hM4PFte2OkZRRl25PEeotd5322aQvpdS-t3521oV4Cg.jpg?width=1080&crop=smart&auto=webp&s=7a50445beedffee7b23a096bd060ca3ee7f0335b', 'width': 1080}], 'source': {'height': 876, 'url': 'https://external-preview.redd.it/hM4PFte2OkZRRl25PEeotd5322aQvpdS-t3521oV4Cg.jpg?auto=webp&s=8271e209f0047ad46e1836939ee26756c6db04d1', 'width': 1846}, 'variants': {}}]}
|
||
The first low code platform for AI engineers
| 1 |
I'm curious what the appetite for low code platforms in the AI space is. Would you be interested in using it?
Here's a look at what this looks like: [https://www.loom.com/share/Building-AI-Agents-with-airkitai-8d7513639e8e4c7e9d8e449945f9f827?sid=fa0a1ef1-6aa5-4174-8923-2e5ac8345c46](https://www.loom.com/share/Building-AI-Agents-with-airkitai-8d7513639e8e4c7e9d8e449945f9f827?sid=fa0a1ef1-6aa5-4174-8923-2e5ac8345c46)
| 2023-09-14T21:31:49 |
https://www.reddit.com/r/LocalLLaMA/comments/16ium1u/the_first_low_code_platform_for_ai_engineers/
|
Brickchip
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ium1u
| false | null |
t3_16ium1u
|
/r/LocalLLaMA/comments/16ium1u/the_first_low_code_platform_for_ai_engineers/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'qeifkwwr4C5QSg58B_96VUqdBQ2DqQxNVOT5ptzpTrU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/owNWn__zv73jr2iy1sm5GuH1uE2FXSKjk4L1ofzHRTk.jpg?width=108&crop=smart&auto=webp&s=4bc3ef15bdc48068b5b46ef3d57ff16363a52a24', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/owNWn__zv73jr2iy1sm5GuH1uE2FXSKjk4L1ofzHRTk.jpg?width=216&crop=smart&auto=webp&s=a68f471906e1e4bb44a66502ae9fb007cc218a2b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/owNWn__zv73jr2iy1sm5GuH1uE2FXSKjk4L1ofzHRTk.jpg?width=320&crop=smart&auto=webp&s=352a4ad3daabba4c50607ceab37d57dbde36615b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/owNWn__zv73jr2iy1sm5GuH1uE2FXSKjk4L1ofzHRTk.jpg?width=640&crop=smart&auto=webp&s=4855def75f13e0f454d8002e99c2081e81efa17f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/owNWn__zv73jr2iy1sm5GuH1uE2FXSKjk4L1ofzHRTk.jpg?width=960&crop=smart&auto=webp&s=ec565104753e210f8f04c326bb3d5c3377b97b01', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/owNWn__zv73jr2iy1sm5GuH1uE2FXSKjk4L1ofzHRTk.jpg?width=1080&crop=smart&auto=webp&s=4cf377537671f6d20f3c87f699dfa3278e55caef', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://external-preview.redd.it/owNWn__zv73jr2iy1sm5GuH1uE2FXSKjk4L1ofzHRTk.jpg?auto=webp&s=6b12f9a08661b4c750606ac85499dd355cbd4d07', 'width': 2400}, 'variants': {}}]}
|
I'm a former Atari engineer trying to catch up with the current tech.
| 1 |
What models can I run on my Atari 5200? I'm willing to code in assembly.
/s
| 2023-09-14T21:51:35 |
https://www.reddit.com/r/LocalLLaMA/comments/16iv40g/im_a_former_atari_engineer_trying_to_catch_up/
|
Musenik
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16iv40g
| false | null |
t3_16iv40g
|
/r/LocalLLaMA/comments/16iv40g/im_a_former_atari_engineer_trying_to_catch_up/
| false | false |
self
| 1 | null |
Need Help In Creating an App with the custom data
| 1 |
I am planning to create an app that uses my own custom data to generate articles, drawing both on the internet and on the data I provide. The challenge I'm facing is that I lack the necessary knowledge in this area, and I'm struggling to find appropriate tutorials or resources.
I've already tried this with LangChain, but the output came only from the data I provided; any queries outside that data failed.
If anyone could point me to the steps I should follow, or recommend tools or packages, I would greatly appreciate it.
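The "answers only from my data" behavior usually comes down to how the prompt is assembled before it reaches the model. A framework-free sketch of a retrieval-augmented prompt, with every name here illustrative and no particular library assumed, looks roughly like this:

```python
# Minimal, framework-free sketch of retrieval-augmented prompting.
# The retrieval scoring is naive word overlap, purely for illustration;
# a real app would use embeddings. All identifiers are hypothetical.

def retrieve(query, documents, top_k=1):
    """Rank documents by naive word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query, documents):
    # Key point: the instruction explicitly allows falling back to the
    # model's general knowledge when the custom data is not relevant,
    # which avoids the "only answers from my data" failure mode.
    context = "\n".join(retrieve(query, documents))
    return (
        "Use the context below if it is relevant; otherwise answer "
        "from your own knowledge.\n"
        f"Context: {context}\n"
        f"Question: {query}\n"
        "Answer:"
    )

docs = [
    "Llamas are domesticated camelids from South America.",
    "GGUF is a file format for quantized language models.",
]
prompt = build_prompt("What is GGUF?", docs)
print(prompt)
```

The prompt string is then what you pass to the model; the wording of the fallback instruction is the part worth experimenting with.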
| 2023-09-14T22:02:15 |
https://www.reddit.com/r/LocalLLaMA/comments/16ivdqr/need_help_in_creating_an_app_with_the_custom_data/
|
adgamerx
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16ivdqr
| false | null |
t3_16ivdqr
|
/r/LocalLLaMA/comments/16ivdqr/need_help_in_creating_an_app_with_the_custom_data/
| false | false |
self
| 1 | null |
Ai only for rich and powerful - Closed Door Summit
| 1 |
[removed]
| 2023-09-14T22:49:03 |
https://www.reddit.com/r/LocalLLaMA/comments/16iwix3/ai_only_for_rich_and_powerful_closed_door_summit/
|
Single_Ring4886
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16iwix3
| false | null |
t3_16iwix3
|
/r/LocalLLaMA/comments/16iwix3/ai_only_for_rich_and_powerful_closed_door_summit/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'hSr_Qzar9WIY3xBupwjJLtcozfLPOTBmWCTAjoTJUwI', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/5n7ZyfQrJixtnp_-YLsuot0Q0EoUgVxHPyUzl5XPx2k.jpg?width=108&crop=smart&auto=webp&s=fd09c08144fcede846fb5f42f6f0d8123550582a', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/5n7ZyfQrJixtnp_-YLsuot0Q0EoUgVxHPyUzl5XPx2k.jpg?width=216&crop=smart&auto=webp&s=2b4eaec3d14df156d573b5d1296b7d4d98d4415c', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/5n7ZyfQrJixtnp_-YLsuot0Q0EoUgVxHPyUzl5XPx2k.jpg?width=320&crop=smart&auto=webp&s=342521e31d1c696ba1062a06ac4cb59477ef4f65', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/5n7ZyfQrJixtnp_-YLsuot0Q0EoUgVxHPyUzl5XPx2k.jpg?auto=webp&s=6f32ea372eb74fae751ac7516a75716dd2ccbaa9', 'width': 480}, 'variants': {}}]}
|
Censor & LlaMAS
| 1 |
[removed]
| 2023-09-15T00:24:09 |
https://www.reddit.com/r/LocalLLaMA/comments/16iymfw/censor_llamas/
|
holistic-engine
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16iymfw
| false | null |
t3_16iymfw
|
/r/LocalLLaMA/comments/16iymfw/censor_llamas/
| false | false |
self
| 1 | null |
Trail LLM Torrent Support - stablebay.org
| 1 | 2023-09-15T01:26:54 |
thiefyzheng
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
16izy8n
| false | null |
t3_16izy8n
|
/r/LocalLLaMA/comments/16izy8n/trail_llm_torrent_support_stablebayorg/
| false | false | 1 |
{'enabled': True, 'images': [{'id': '5vxjMNoEcZa-bycsTAKsDIyRUx2qcBlxvq_wNEULCLU', 'resolutions': [{'height': 104, 'url': 'https://preview.redd.it/jj9jadbnnbob1.png?width=108&crop=smart&auto=webp&s=75e5126372d2b05eaa4a2180e69330c16ea1592a', 'width': 108}, {'height': 208, 'url': 'https://preview.redd.it/jj9jadbnnbob1.png?width=216&crop=smart&auto=webp&s=10533c4a8f27d620ccad249f1b8ef9f724131db6', 'width': 216}, {'height': 308, 'url': 'https://preview.redd.it/jj9jadbnnbob1.png?width=320&crop=smart&auto=webp&s=b3b4cfd6be7af7559105986a5dc0e155c4d8438f', 'width': 320}, {'height': 616, 'url': 'https://preview.redd.it/jj9jadbnnbob1.png?width=640&crop=smart&auto=webp&s=d665de74cd762353e45b17fb21ca8c478dbe3a59', 'width': 640}], 'source': {'height': 633, 'url': 'https://preview.redd.it/jj9jadbnnbob1.png?auto=webp&s=515c3b748fc9dd2f6185c55fa0272234fbf3ed36', 'width': 657}, 'variants': {}}]}
|
|||
Working in Python code, will Llama 2 return a table in JSON format when prompted?
| 2 |
Hi everyone, I wanted to confirm this question below before really jumping into Llama 2.
If I prompted Llama 2 to provide answers in JSON format, e.g. with a prompt like this:
sequences = pipeline("Provide your answer in the JSON format with the feature names as the keys.")
Then following that in my Python code I have:
for seq in sequences:
    # Convert the generated string into a dictionary (needs `import json`;
    # also avoid naming the variable `dict`, which shadows the builtin)
    result = json.loads(seq['generated_text'])
Will Llama 2 be able to output the answers in JSON format in order for me to convert it into a dictionary in the next step?
Would appreciate any input. Many thanks!
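One caveat worth noting: even when prompted for JSON, the model often wraps the object in extra prose ("Sure! Here is the answer: ..."), so `json.loads` on the raw `generated_text` can raise. A hedged sketch of a more forgiving parse (the regex approach is illustrative, not part of the transformers API):

```python
import json
import re

def extract_json(generated_text):
    """Pull the first JSON object out of an LLM completion.

    Models frequently surround JSON with chatty text or code fences,
    so a plain json.loads() on the raw completion can fail.
    """
    # Grab from the first '{' to the last '}' (greedy, so nested
    # objects are covered); DOTALL lets '.' span newlines.
    match = re.search(r"\{.*\}", generated_text, re.DOTALL)
    if match is None:
        return None
    try:
        return json.loads(match.group(0))
    except json.JSONDecodeError:
        return None

# Typical completion: JSON surrounded by chatty text
raw = 'Sure! Here it is:\n{"color": "red", "size": 7}\nHope that helps.'
print(extract_json(raw))  # -> {'color': 'red', 'size': 7}
```

Returning `None` on failure lets the calling loop skip or retry a malformed completion instead of crashing.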
| 2023-09-15T02:34:57 |
https://www.reddit.com/r/LocalLLaMA/comments/16j1dk8/working_in_python_code_will_llama_2_return_a/
|
--leockl--
|
self.LocalLLaMA
| 2023-09-15T02:39:11 | 0 |
{}
|
16j1dk8
| false | null |
t3_16j1dk8
|
/r/LocalLLaMA/comments/16j1dk8/working_in_python_code_will_llama_2_return_a/
| false | false |
self
| 2 | null |
MythoMax 13b and Mythomax 33b
| 1 |
Okay, MythoMax 13b is sweet. Really sweet at writing stuff, for a 13b boy in short pants.
I tried MythoMax L2 33B and I couldn't make heads or tails of it. I really couldn't.
Is it me? Is it my pet hamster? Or is the 33b really bonkers? I tried the 6-bit quant for extra points, but boy, it was all mangled-together nonsense.
Tell me I made some mistake, because I really want MythoMax in 33b without dementia.
​
| 2023-09-15T02:41:19 |
https://www.reddit.com/r/LocalLLaMA/comments/16j1ial/mythomax_13b_and_mythomax_33b/
|
FPham
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16j1ial
| false | null |
t3_16j1ial
|
/r/LocalLLaMA/comments/16j1ial/mythomax_13b_and_mythomax_33b/
| false | false |
self
| 1 | null |
Old Comp Running LLM! I got llama-2-7b-chat.Q2_K.gguf running on 10-year-old iMac, response in 40-sec at 1-t/s
| 1 |
**I have an 8GB Ram late 2013 iMac running macOS Catalina.** I installed the text-generation-webui and just pasted in model card link (TheBloke/ llama-2-7b-chat.Q2\_K.gguf) - BAM WORKS, it's blowing me away! Y'all, this machine is old and decrepit. No GPU.
I asked this: Explain the temptation of Christ from the Holy Bible.
**It responded in 70 seconds at 3-t/s!**
Sure, I'd be happy to help you with that! The account of Jesus being tempted by Satan in the wilderness is found in Luke 4:1-13. According to the Bible, after fasting for 40 days and nights, Jesus was tempted by Satan in three ways:
1. Temptation to turn stones into bread: The devil told Jesus to command the rocks to become bread, but Jesus refused, saying it is written "man shall not live by bread alone." (Matthew 4:4)
2. Temptation to worship himself: Satan tempted Jesus to bow down and worship him instead of God. But Jesus responded by quoting Deuteronomy 6:13-14, saying "You shall not misuse the name of the Lord your God." (Matthew 4:7)
3. Temptation to look for an easy way out:
Then it cut off. But still, pretty sick!
| 2023-09-15T03:18:41 |
https://www.reddit.com/r/LocalLLaMA/comments/16j29s3/old_comp_running_llm_i_got_llama27bchatq2_kgguf/
|
Actual-Bad5029
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
16j29s3
| false | null |
t3_16j29s3
|
/r/LocalLLaMA/comments/16j29s3/old_comp_running_llm_i_got_llama27bchatq2_kgguf/
| false | false |
self
| 1 | null |