name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_jc2h7kw | Without the llama.py changes, I get this error:
Traceback (most recent call last):
File "/home/<>/text-generation-webui/server.py", line 191, in <module>
shared.model, shared.tokenizer = load_model(shared.model_name)
File "/home/<>/text-generation-webui/modules/models.py", line 94, in load_model
model = load_quantized_LLaMA(model_name)
File "/home/<>/text-generation-webui/modules/quantized_LLaMA.py", line 43, in load_quantized_LLaMA
model = load_quant(path_to_model, str(pt_path), bits)
File "/home/<>/text-generation-webui/repositories/GPTQ-for-LLaMa/llama.py", line 246, in load_quant
model.load_state_dict(torch.load(checkpoint))
File "/home/<>/miniconda3/envs/GPTQ-for-LLaMa/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1671, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for LLaMAForCausalLM:
Missing key(s) in state_dict: "model.decoder.embed_tokens.weight", "model.decoder.layers.0.self_attn.q_proj.zeros", "model.decoder.layers.0.self_attn.q_proj.scales", "model.decoder.layers.0.self_attn.q_proj.bias", "model.decoder.layers.0.self_attn.q_proj.qweight", "model.decoder.layers.0.self_attn.k_proj.zeros", "model.decoder.layers.0.self_attn.k_proj.scales", "model.decoder. | 1 | 0 | 2023-03-13T15:47:30 | Tasty-Attitude-7893 | false | null | 0 | jc2h7kw | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc2h7kw/ | false | 1 |
t1_jc2gxd4 | This is the diff I had to use to get past the dictionary error on loading at first, where it spews a bunch of missing keys:
diff --git a/llama.py b/llama.py
index 09b527e..dee2ac0 100644
--- a/llama.py
+++ b/llama.py
@@ -240,9 +240,9 @@ def load_quant(model, checkpoint, wbits):
print('Loading model ...')
if checkpoint.endswith('.safetensors'):
from safetensors.torch import load_file as safe_load
- model.load_state_dict(safe_load(checkpoint))
+ model.load_state_dict(safe_load(checkpoint), strict=False)
else:
- model.load_state_dict(torch.load(checkpoint))
+ model.load_state_dict(torch.load(checkpoint), strict=False)
model.seqlen = 2048
print('Done.') | 1 | 0 | 2023-03-13T15:45:39 | Tasty-Attitude-7893 | false | null | 0 | jc2gxd4 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc2gxd4/ | false | 1 |
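The `strict=False` behavior this diff relies on can be sketched in plain Python — a toy re-implementation of the state-dict matching logic for illustration, not PyTorch's actual code:

```python
def load_state_dict(model_params, checkpoint, strict=True):
    """Toy sketch of PyTorch-style state-dict matching.

    With strict=True, any key mismatch raises; with strict=False,
    matching keys are copied and the mismatches are just reported.
    """
    missing = [k for k in model_params if k not in checkpoint]
    unexpected = [k for k in checkpoint if k not in model_params]
    if strict and (missing or unexpected):
        raise RuntimeError(f"Missing: {missing}, unexpected: {unexpected}")
    for k in checkpoint:
        if k in model_params:
            model_params[k] = checkpoint[k]
    return missing, unexpected

params = {"weight": 0, "bias": 0}
missing, unexpected = load_state_dict(params, {"weight": 9}, strict=False)
print(missing, unexpected)  # ['bias'] []
```

This is also why `strict=False` is a blunt instrument: layers whose keys never match (like the quantized `qweight`/`scales`/`zeros` tensors above) are silently left uninitialized, which can produce gibberish output rather than an error.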
t1_jc2etra | [removed] | 1 | 0 | 2023-03-13T15:31:47 | [deleted] | true | null | 0 | jc2etra | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc2etra/ | false | 1 |
t1_jc2ei13 | [removed] | 1 | 0 | 2023-03-13T15:29:37 | [deleted] | true | null | 0 | jc2ei13 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc2ei13/ | false | 1 |
t1_jc2dn50 | Are you sure everything is set up correctly and you're using the model [downloaded here](https://huggingface.co/decapoda-research/llama-30b-hf/tree/main)? I've tested following these steps from the beginning on fresh Ubuntu and Windows installs and haven't run into any errors or problems.
decapoda-research said they were going to upload new conversions of all models so you can also try waiting for that if you're still having issues with the 30B. | 3 | 0 | 2023-03-13T15:23:55 | Technical_Leather949 | false | null | 0 | jc2dn50 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc2dn50/ | false | 3 |
t1_jc2c8pk | What are the drawbacks (if any) of using 3/4-bit instead of 8? | 1 | 0 | 2023-03-13T15:14:24 | skripp11 | false | null | 0 | jc2c8pk | false | /r/LocalLLaMA/comments/11pw62f/does_anyone_have_a_download_for_the_3bit/jc2c8pk/ | false | 1 |
t1_jc24pot | [removed] | 1 | 0 | 2023-03-13T14:22:04 | [deleted] | true | null | 0 | jc24pot | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc24pot/ | false | 1 |
t1_jc24a25 | [removed] | 1 | 0 | 2023-03-13T14:18:57 | [deleted] | true | null | 0 | jc24a25 | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jc24a25/ | false | 1 |
t1_jc1lsd5 | What are the system requirements for 3-bit? | 3 | 0 | 2023-03-13T11:36:37 | PartySunday | false | null | 0 | jc1lsd5 | false | /r/LocalLLaMA/comments/11pw62f/does_anyone_have_a_download_for_the_3bit/jc1lsd5/ | false | 3 |
t1_jc14r1i | I think you should add `--listen` to the arguments in the batch file that launches the server | 1 | 0 | 2023-03-13T07:42:48 | curtwagner1984 | false | null | 0 | jc14r1i | false | /r/LocalLLaMA/comments/11pkp1u/oobabooga_ui_windows_11_does_someone_know_what/jc14r1i/ | false | 1 |
t1_jc0h4gf | Yeah, 3-bit LLaMA 7B, 13B and 30B available here
[https://huggingface.co/decapoda-research/llama-smallint-pt/tree/main](https://huggingface.co/decapoda-research/llama-smallint-pt/tree/main) | 3 | 0 | 2023-03-13T03:16:36 | Irrationalender | false | null | 0 | jc0h4gf | false | /r/LocalLLaMA/comments/11pw62f/does_anyone_have_a_download_for_the_3bit/jc0h4gf/ | false | 3 |
t1_jc0a6en | Thanks for this! After struggling for hours trying to get it to run on Windows, I got it up and running with zero headaches using Ubuntu on Windows Subsystem for Linux. | 3 | 0 | 2023-03-13T02:20:34 | iJeff | false | 2023-03-13T03:29:38 | 0 | jc0a6en | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc0a6en/ | false | 3 |
t1_jc00oai | I edited the code to take away the strict model loading, and it loaded after downloading a tokenizer from HF, but now it just spits out gibberish. I used the one from the decapoda-research unquantized model for 30b. Do you think that's the issue? | 1 | 0 | 2023-03-13T01:05:02 | Tasty-Attitude-7893 | false | null | 0 | jc00oai | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jc00oai/ | false | 1 |
t1_jbztfbo | I had the same error (RuntimeError: ...lots of missing dict stuff) and I tried two different torrents from the official install guide and the weights from huggingface, on Ubuntu 22.04. I had a terrible time in CUDA land just trying to get the cpp file to compile, and I've been doing cpp for almost 30 years :(. I just hate when there's a whole bunch of stuff you need to learn in order to get something simple to compile and build. I know this is a part-time project, but does anyone have any clues? 13b on 8 bit runs nice on my GPU and I want to try 30b to see the 1.4t goodness. | 3 | 0 | 2023-03-13T00:09:00 | Tasty-Attitude-7893 | false | null | 0 | jbztfbo | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jbztfbo/ | false | 3 |
t1_jbzgdjt | It depends on your settings, but I can get a response as quick as 5 seconds, mostly 10 or under. Some can go 20-30 with settings turned up (using a 13B on an RTX 3080 10GB). | 3 | 0 | 2023-03-12T22:31:49 | iJeff | false | null | 0 | jbzgdjt | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jbzgdjt/ | false | 3 |
t1_jbzg7ws | [This](https://cocktailpeanut.github.io/dalai/#/) is as good as it gets. | 3 | 0 | 2023-03-12T22:30:41 | iJeff | false | null | 0 | jbzg7ws | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jbzg7ws/ | false | 3 |
t1_jbzcccr | Yes, results are more coherent and higher quality for everything. I've tested language translation, chatting, question answering, etc, and 13B is a good baseline. | 3 | 0 | 2023-03-12T22:02:51 | Technical_Leather949 | false | null | 0 | jbzcccr | false | /r/LocalLLaMA/comments/11pkp1u/oobabooga_ui_windows_11_does_someone_know_what/jbzcccr/ | false | 3 |
t1_jbzbk2m | I don't mind. Thanks for making the web UI! All of this is more accessible because of it. | 11 | 0 | 2023-03-12T21:57:20 | Technical_Leather949 | false | null | 0 | jbzbk2m | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jbzbk2m/ | false | 11 |
t1_jbza6zc | Try `--share` when you launch the server | 1 | 0 | 2023-03-12T21:47:52 | pdaddyo | false | null | 0 | jbza6zc | false | /r/LocalLLaMA/comments/11pkp1u/oobabooga_ui_windows_11_does_someone_know_what/jbza6zc/ | false | 1 |
t1_jbz5o8i | I am using [oobabooga](https://github.com/oobabooga)/[**text-generation-webui**](https://github.com/oobabooga/text-generation-webui)
Can someone please help? I don't know how to code or anything, but I just need a small bit of help: I want to make the UI/chat accessible on my tablet, but I don't know how. Where is this `launch()`, and how can I change it?
To create a public link, set `share=True` in `launch()` | 2 | 0 | 2023-03-12T21:16:05 | Regmas0 | false | null | 0 | jbz5o8i | false | /r/LocalLLaMA/comments/11pkp1u/oobabooga_ui_windows_11_does_someone_know_what/jbz5o8i/ | false | 2 |
t1_jbz2rlw | I have borrowed your instructions. I hope you don't mind :)
[https://github.com/oobabooga/text-generation-webui/wiki/Installation-instructions-for-human-beings](https://github.com/oobabooga/text-generation-webui/wiki/Installation-instructions-for-human-beings) | 12 | 0 | 2023-03-12T20:55:17 | oobabooga1 | false | null | 0 | jbz2rlw | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jbz2rlw/ | false | 12 |
t1_jbz279c | I'll try it out. Is the 13B 4bit significantly smarter than the 7B one? | 1 | 0 | 2023-03-12T20:51:25 | curtwagner1984 | false | null | 0 | jbz279c | false | /r/LocalLLaMA/comments/11pkp1u/oobabooga_ui_windows_11_does_someone_know_what/jbz279c/ | false | 1 |
t1_jbyt4hx | I think you may have skipped a few steps. If you're following the instructions on the [GitHub](https://github.com/oobabooga/text-generation-webui#installation-option-1-conda) page, your conda environment should be named textgen.
Try starting over using the 4bit [instructions here](https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/). I've tested this on a fresh Windows install and it works. | 2 | 0 | 2023-03-12T19:48:06 | Technical_Leather949 | false | null | 0 | jbyt4hx | false | /r/LocalLLaMA/comments/11pkp1u/oobabooga_ui_windows_11_does_someone_know_what/jbyt4hx/ | false | 2 |
t1_jbxrg2f | I can't wait for the 4090 titan so that I can run these models at home. Thank you for the tutorial. | 1 | 0 | 2023-03-12T15:26:11 | RabbitHole32 | false | null | 0 | jbxrg2f | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jbxrg2f/ | false | 1 |
t1_jbwo8zz | What is the speed of these responses? I'm interested in running llama locally but not sure how it performs. | 3 | 0 | 2023-03-12T08:10:16 | andrejg57 | false | null | 0 | jbwo8zz | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jbwo8zz/ | false | 3 |
t1_jbvj2f4 | ChatGPT with 175b parameters and instruction-tuning (that no open-source model has been able to replicate yet) also confidently bullshits and invents information.
These models are next-token predictors. It's expected that they are dumb. The question is how good they are at pretending to be smart. | 13 | 0 | 2023-03-12T01:19:11 | oobabooga1 | false | null | 0 | jbvj2f4 | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jbvj2f4/ | false | 13 |
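What "next-token predictor" means can be sketched in a few lines of toy Python: the model only assigns scores to candidate tokens, and decoding just picks from that distribution (made-up vocabulary and scores for illustration):

```python
import math

def next_token(logits):
    """Greedy next-token choice: softmax the scores, take the most likely.

    A real LM produces logits over ~32k tokens per step; everything else
    (chat, reasoning, 'knowledge') emerges from repeating this step.
    """
    probs = [math.exp(x) for x in logits]
    total = sum(probs)
    probs = [p / total for p in probs]
    return max(range(len(probs)), key=probs.__getitem__)

vocab = ["the", "cat", "sat"]
logits = [0.1, 2.0, 0.5]  # hypothetical model scores
print(vocab[next_token(logits)])  # cat
```

Sampling with temperature/top-p instead of the greedy argmax is what trades determinism for variety in the generated text.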
t1_jbvi4ef | I seem to be getting an error at the end about not finding a file.
PS C:\Users\X\text-generation-webui\repositories\GPTQ-for-LLaMa>python setup_cuda.py install
No CUDA runtime is found, using CUDA_HOME='C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1'
running install
C:\Python310\lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
C:\Python310\lib\site-packages\setuptools\command\easy_install.py:144: EasyInstallDeprecationWarning: easy_install command is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
running bdist_egg
running egg_info
writing quant_cuda.egg-info\PKG-INFO
writing dependency_links to quant_cuda.egg-info\dependency_links.txt
writing top-level names to quant_cuda.egg-info\top_level.txt
C:\Python310\lib\site-packages\torch\utils\cpp_extension.py:476: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
warnings.warn(msg.format('we could not find ninja.'))
reading manifest file 'quant_cuda.egg-info\SOURCES.txt'
writing manifest file 'quant_cuda.egg-info\SOURCES.txt'
installing library code to build\bdist.win-amd64\egg
running install_lib
running build_ext
error: [WinError 2] The system cannot find the file specified
Edit: I just went ahead and redid it in WSL Ubuntu. Working beautifully! | 2 | 0 | 2023-03-12T01:11:22 | iJeff | false | 2023-03-12T03:33:39 | 0 | jbvi4ef | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jbvi4ef/ | false | 2 |
t1_jbvbhps | It's for making characters in the style of TavernAI. I used it as a simple way to create a very basic initial prompt similar to what ChatGPT or [Bing Chat](https://www.make-safe-ai.com/is-bing-chat-safe/Prompts_Conversations.txt) uses. | 3 | 0 | 2023-03-12T00:18:56 | Technical_Leather949 | false | null | 0 | jbvbhps | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jbvbhps/ | false | 3 |
t1_jburnjs | I'm less than impressed. The AI does get the answers correct, but none of the explanations make sense. For the first question, the AI says:
> Since there are no other subjects mentioned beforehand, we must assume the subject of the preceding clause is also the subject of the following clause (i.e., the school bus).
There is no rule that a pronoun can only refer to the subject of the preceding clause. It could equally well refer to the object, as becomes clear when you change the sentence to: "The school bus passed the race car because it was driving so *slowly*". Suddenly the most likely referent is the race car.
The correct explanation would be that the passing vehicle must necessarily drive more quickly than the vehicle it passes. So if the reason for passing is given as "it was driving quickly" this logically refers to the faster vehicle (technically they could both be driving "quickly", but since "it" refers to only one vehicle, the logical choice is the faster bus). The AI never touched on this.
The same thing happens with the second question, where the AI prints out a lot of irrelevant information, then says:
> Thus, the most logical place where the cook might have put the bags of rice and potatoes is on top of each other.
It acts like it has cleverly deduced this fact, but this was stated plainly in the question already. It's basically just wasting time up to this point. Then it concludes:
> So, based on all of those clues, we can conclude that the bag of rice had to be moved.
But the AI has not presented any relevant "clues" to justify this conclusion. It's basically the Chewbacca defense in action: bringing up irrelevant facts and then jumping to a conclusion.
The correct reasoning is along the lines of: if two bags are placed on top of each other, the bag on top may obstruct access to the bag below, but not the other way around. Given that a bag of rice was placed on top of a bag of potatoes, and given that one bag had to be moved, the bag that was moved must have been the bag of rice (and the inferred reason for moving it is that the cook wanted to access the potatoes).
In both of these scenarios the AI doesn't seem to understand the real world information that humans use to resolve coreferences. It doesn't admit that, however: it just bullshits you pretending to know what it's talking about. | 5 | 0 | 2023-03-11T21:45:41 | [deleted] | false | null | 0 | jburnjs | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jburnjs/ | false | 5 |
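The stacking argument in the last paragraph can be encoded as a few lines of toy Python — a hypothetical illustration of the physical constraint being described, not anything the model actually runs:

```python
def which_bag_moved(stack, target):
    """In a stack (ordered bottom -> top), only bags above the target
    obstruct access to it, so the bag that had to be moved is the one
    sitting directly on top of the bag the cook wanted."""
    i = stack.index(target)
    return stack[i + 1] if i + 1 < len(stack) else None

# Rice was placed on top of potatoes; the cook wants the potatoes.
print(which_bag_moved(["potatoes", "rice"], "potatoes"))  # rice
```

This is exactly the kind of world knowledge (things on top block access downward, not upward) that the coreference question tests and that the model's explanation never touches.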
t1_jbuqa24 | lmaooooooooooooo | 9 | 0 | 2023-03-11T21:35:33 | bittytoy | false | null | 0 | jbuqa24 | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jbuqa24/ | false | 9 |
t1_jbujse4 | I've been playing with llama.cpp, which I don't think text-generation-webui supports yet. Anyways, is this json file something that is from text-generation-webui? I'm guessing it's a way to tell text-generation-webui which prompt to "pre-inject", so to speak? Just researching some good prompts for llama 13B and came across this, so just wondering. | 2 | 0 | 2023-03-11T20:47:35 | anarchos | false | null | 0 | jbujse4 | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jbujse4/ | false | 2 |
t1_jbu6te9 | This is very promising. I have created an extension that loads your csv and lets users pick a prompt to use:
[https://github.com/oobabooga/text-generation-webui/blob/main/extensions/llama\_prompts/script.py](https://github.com/oobabooga/text-generation-webui/blob/main/extensions/llama_prompts/script.py)
https://preview.redd.it/21bd1sfoa7na1.png?width=858&format=png&auto=webp&v=enabled&s=d7f7eb1213d0eb3a763352aba2e71e4e690b20be | 9 | 0 | 2023-03-11T19:13:48 | oobabooga1 | false | null | 0 | jbu6te9 | false | /r/LocalLLaMA/comments/11oqbvx/repository_of_llama_prompts/jbu6te9/ | false | 9 |
t1_jbu3lk4 | What hardware are you using to run this? | 5 | 0 | 2023-03-11T18:51:19 | 2muchnet42day | false | null | 0 | jbu3lk4 | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jbu3lk4/ | false | 5 |
t1_jbu1ocq | Thank you for your message. Is there an estimated time of arrival for a **user-friendly** installation method that is compatible with the WebUI? | 1 | 0 | 2023-03-11T18:37:53 | curtwagner1984 | false | null | 0 | jbu1ocq | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jbu1ocq/ | false | 1 |
t1_jbtng6x | A user on GitHub provided the .whl required for Windows, which SHOULD significantly shorten the 4-bit installation process; I believe it forgoes the need to install Visual Studio altogether.
[GPTQ quantization(3 or 4 bit quantization) support for LLaMa · Issue #177 · oobabooga/text-generation-webui · GitHub](https://github.com/oobabooga/text-generation-webui/issues/177#issuecomment-1464844721)
That said, I've done the installation process and am running into an error:
`Starting the web UI...`
`Loading the extension "gallery"... Ok.`
`Loading llama-7b...`
`CUDA extension not installed.`
`Loading model ...`
`Traceback (most recent call last):`
`File "D:\MachineLearning\TextWebui\text-generation-webui\server.py", line 194, in <module>`
`shared.model, shared.tokenizer = load_model(shared.model_name)`
`File "D:\MachineLearning\TextWebui\text-generation-webui\modules\models.py", line 119, in load_model`
`model = load_quant(path_to_model, Path(f"models/{pt_model}"), 4)`
`File "D:\MachineLearning\TextWebui\text-generation-webui\repositories\GPTQ-for-LLaMa\llama.py", line 241, in load_quant`
`model.load_state_dict(torch.load(checkpoint))`
`File "D:\MachineLearning\TextWebui\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 1671, in load_state_dict`
`raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(`
`RuntimeError: Error(s) in loading state_dict for LLaMAForCausalLM:`
`Missing key(s) in state_dict: "model.decoder.embed_tokens.weight",`
`"model.decoder.layers.0.self_attn.q_proj.zeros",`
`[a whole bunch of layer errors]` | 3 | 0 | 2023-03-11T16:58:19 | j4nds4 | false | null | 0 | jbtng6x | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jbtng6x/ | false | 3 |
t1_jbtl1op | [https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how\_to\_install\_llama\_8bit\_and\_4bit/](https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/) | 4 | 0 | 2023-03-11T16:41:43 | Kamehameha90 | false | null | 0 | jbtl1op | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jbtl1op/ | false | 4 |
t1_jbtkk02 | Where can one get the model? | 1 | 0 | 2023-03-11T16:38:23 | curtwagner1984 | false | null | 0 | jbtkk02 | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jbtkk02/ | false | 1 |
t1_jbsuftk | Thanks a lot for this guide! All is working and I had no errors, but if I press "generate" I get this error:
`Traceback (most recent call last):`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\gradio\routes.py", line 374, in run_predict`
`output = await app.get_blocks().process_api(`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\gradio\blocks.py", line 1017, in process_api`
`result = await self.call_function(`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\gradio\blocks.py", line 849, in call_function`
`prediction = await anyio.to_thread.run_sync(`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\anyio\to_thread.py", line 31, in run_sync`
`return await get_asynclib().run_sync_in_worker_thread(`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread`
`return await future`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run`
`result = context.run(func, *args)`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\gradio\utils.py", line 453, in async_iteration`
`return next(iterator)`
`File "Q:\OogaBooga\text-generation-webui\modules\text_generation.py", line 170, in generate_reply`
`output = eval(f"shared.model.generate({', '.join(generate_params)}){cuda}")[0]`
`File "<string>", line 1, in <module>`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context`
`return func(*args, **kwargs)`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\transformers\generation\utils.py", line 1452, in generate`
`return self.sample(`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\transformers\generation\utils.py", line 2468, in sample`
`outputs = self(`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl`
`return forward_call(*input, **kwargs)`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\transformers\models\llama\modeling_llama.py", line 772, in forward`
`outputs = self.model(`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl`
`return forward_call(*input, **kwargs)`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\transformers\models\llama\modeling_llama.py", line 621, in forward`
`layer_outputs = decoder_layer(`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl`
`return forward_call(*input, **kwargs)`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\transformers\models\llama\modeling_llama.py", line 318, in forward`
`hidden_states, self_attn_weights, present_key_value = self.self_attn(`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl`
`return forward_call(*input, **kwargs)`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\transformers\models\llama\modeling_llama.py", line 218, in forward`
`query_states = self.q_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl`
`return forward_call(*input, **kwargs)`
`File "Q:\OogaBooga\text-generation-webui\repositories\GPTQ-for-LLaMa\quant.py", line 198, in forward`
`quant_cuda.vecquant4matmul(x, self.qweight, y, self.scales, self.zeros)`
**NameError: name 'quant_cuda' is not defined**
Another user of the WebUI posted the same error on GitHub (**NameError: name 'quant_cuda' is not defined**), but no answer as of now.
I use a 4090, 64GB RAM and the 30b model (4bit).
Edit: I also get "CUDA extension not installed." when I start the WebUI.
Edit2: OK, I did it all again and there is indeed one error. If I try to run:
1. python setup_cuda.py install
I get:
`Traceback (most recent call last):`
`File "Q:\OogaBooga\text-generation-webui\repositories\GPTQ-for-LLaMa\setup_cuda.py", line 4, in <module>`
`setup(`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\setuptools\__init__.py", line 87, in setup`
`return distutils.core.setup(**attrs)`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\setuptools\_distutils\core.py", line 185, in setup`
`return run_commands(dist)`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\setuptools\_distutils\core.py", line 201, in run_commands`
`dist.run_commands()`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\setuptools\_distutils\dist.py", line 969, in run_commands`
`self.run_command(cmd)`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\setuptools\dist.py", line 1208, in run_command`
`super().run_command(command)`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command`
`cmd_obj.run()`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\setuptools\command\install.py", line 74, in run`
`self.do_egg_install()`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\setuptools\command\install.py", line 123, in do_egg_install`
`self.run_command('bdist_egg')`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\setuptools\_distutils\cmd.py", line 318, in run_command`
`self.distribution.run_command(command)`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\setuptools\dist.py", line 1208, in run_command`
`super().run_command(command)`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command`
`cmd_obj.run()`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\setuptools\command\bdist_egg.py", line 165, in run`
`cmd = self.call_command('install_lib', warn_dir=0)`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\setuptools\command\bdist_egg.py", line 151, in call_command`
`self.run_command(cmdname)`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\setuptools\_distutils\cmd.py", line 318, in run_command`
`self.distribution.run_command(command)`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\setuptools\dist.py", line 1208, in run_command`
`super().run_command(command)`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command`
`cmd_obj.run()`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\setuptools\command\install_lib.py", line 11, in run`
`self.build()`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\setuptools\_distutils\command\install_lib.py", line 112, in build`
`self.run_command('build_ext')`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\setuptools\_distutils\cmd.py", line 318, in run_command`
`self.distribution.run_command(command)`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\setuptools\dist.py", line 1208, in run_command`
`super().run_command(command)`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command`
`cmd_obj.run()`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\setuptools\command\build_ext.py", line 84, in run`
`_build_ext.run(self)`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\setuptools\_distutils\command\build_ext.py", line 346, in run`
`self.build_extensions()`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\torch\utils\cpp_extension.py", line 420, in build_extensions`
`compiler_name, compiler_version = self._check_abi()`
`File "C:\Users\still\miniconda3\envs\textgen\lib\site-packages\torch\utils\cpp_extension.py", line 797, in _check_abi`
`raise UserWarning(msg)`
**UserWarning: It seems that the VC environment is activated but DISTUTILS_USE_SDK is not set. This may lead to multiple activations of the VC env. Please set `DISTUTILS_USE_SDK=1` and try again.**
I tried setting **DISTUTILS_USE_SDK=1**, but I still get the same error.
Edit4: Fixed! Just set **DISTUTILS_USE_SDK=1** in System Variables and installed the CUDA package; after that, it worked. | 2 | 0 | 2023-03-11T13:15:00 | Kamehameha90 | false | 2023-03-11T14:08:36 | 0 | jbsuftk | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jbsuftk/ | false | 2 |
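The fix described in the edits can also be applied per-process instead of via System Variables — a sketch, assuming the build is launched from a small wrapper script (the `subprocess` call is left commented out because it needs the GPTQ-for-LLaMa checkout in place):

```python
import os

# Build a child-process environment with the variable the torch
# cpp_extension compiler check expects on Windows. This mirrors the
# "set it in System Variables" fix without changing global settings.
env = dict(os.environ, DISTUTILS_USE_SDK="1")

# Hypothetical invocation, run from the GPTQ-for-LLaMa directory:
# import subprocess
# subprocess.run(["python", "setup_cuda.py", "install"], env=env, check=True)
print(env["DISTUTILS_USE_SDK"])  # 1
```

Scoping the variable to the child process avoids the "multiple activations of the VC env" the warning cautions about leaking into unrelated builds.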
t1_jbsdxsf | Really interesting, great job! Thanks for sharing! | 3 | 0 | 2023-03-11T09:45:11 | alexl83 | false | null | 0 | jbsdxsf | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jbsdxsf/ | false | 3 |
t1_jbrqxsw | Those replies are really impressive. I was used to messing with OPT and GPT-J before all this, and the responses were semi-coherent ramblings. LLaMA is extremely coherent in comparison. | 11 | 0 | 2023-03-11T04:55:42 | oobabooga1 | false | null | 0 | jbrqxsw | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jbrqxsw/ | false | 11 |
t1_jbrh3e5 | Here is the text for the .json file I used. The example dialogue comes from the same preprompt Bing Chat uses, after seeing someone else get good results using a similar approach:
{
"char_name": "LLaMA-Precise",
"char_persona": "LLaMA-Precise is a helpful AI chatbot that always provides useful and detailed answers to User's requests and questions. LLaMA-Precise tries to be as informative and friendly as possible.",
"char_greeting": "Hello! I am LLaMA-Precise, your informative assistant. How may I help you today?",
"world_scenario": "",
"example_dialogue": "{{user}}: Hi. Can you help me with something?\n{{char}}: Hello, this is LLaMA-Precise. How can I help?\n{{user}}: Have you heard of the latest nuclear fusion experiment from South Korea? I heard their experiment got hotter than the sun.\n{{char}}: Yes, I have heard about the experiment. Scientists in South Korea have managed to sustain a nuclear fusion reaction running at temperatures in excess of 100 million°C for 30 seconds for the first time and have finally been able to achieve a net energy gain when carrying out a nuclear fusion experiment. That's nearly seven times hotter than the core of the Sun, which has a temperature of 15 million degrees kelvins! That's exciting!\n{{user}}: Wow! That's super interesting to know. Change of topic, I plan to change to the iPhone 14 this year.\n{{char}}: I see. What makes you want to change to iPhone 14?\n{{user}}: My phone right now is too old, so I want to upgrade.\n{{char}}: That's always a good reason to upgrade. You should be able to save money by trading in your old phone for credit. I hope you enjoy your new phone when you upgrade."
}
I'm using the parameters as described in the guide I posted. These parameters are from very impressive [tests](https://gist.github.com/shawwn/63fe948dd4a6bc86ecfd6e51606a0b4b) run by another user, with a slight modification to top_p:
temp 0.7, repetition_penalty 1.1764705882352942 (1/0.85), top_k 40, and top_p 0.1 | 10 | 0 | 2023-03-11T03:24:22 | Technical_Leather949 | false | null | 0 | jbrh3e5 | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jbrh3e5/ | false | 10 |
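For reference, those settings expressed as transformers-style `generate()` keyword arguments — a sketch; the exact pass-through names used by the webui are assumed:

```python
def llama_precise_params():
    """The sampling settings quoted above, in the keyword-argument
    convention of transformers' generate() (names assumed to match
    the webui's pass-through)."""
    return dict(
        do_sample=True,                 # enable sampling instead of greedy decoding
        temperature=0.7,
        repetition_penalty=1 / 0.85,    # = 1.1764705882352942
        top_k=40,
        top_p=0.1,
    )

params = llama_precise_params()
print(params["repetition_penalty"])  # 1.1764705882352942
```

A top_p as low as 0.1 keeps only the most probable nucleus of tokens at each step, which is what makes this preset "precise" rather than creative.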
t1_jbrfliw | Can you share the background description of a bot? | 2 | 0 | 2023-03-11T03:11:21 | polawiaczperel | false | null | 0 | jbrfliw | false | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/jbrfliw/ | false | 2 |