Update README.md
This model is a fine-tuned version of [facebook/opt-2.7b](https://huggingface.co/facebook/opt-2.7b) on approximately 80k of my own WhatsApp/text messages. Please use responsibly :)

Test it out on Google Colab [here](https://colab.research.google.com/gist/pszemraj/26a69775c9d012051396ab5ae980f5c1/example-text-gen-pszemraj-opt-peter-2-7b.ipynb)!
## Model description
- Explores how OPT performs in dialogue/conversational applications
> The base model has a custom license which propagates to this one. Most importantly, it cannot be used commercially. Read more here: [facebook/opt-2.7b](https://huggingface.co/facebook/opt-2.7b)
- The model is probably too large to use via the hosted inference API here. Run it in Python with GPU/CPU RAM > 12 GB (Colab notebook linked above).
- Alternatively, you can message [a bot on Telegram](http://t.me/GPTPeter_bot), where I test LLMs for dialogue generation.
- **Any statements or claims made by this model do not reflect actual claims or statements by me.** Keep in mind it is a _fine-tuned_ version of the model on my data, so content from pre-training is also present in outputs.
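
The notes above can be sketched with the `transformers` pipeline API. This is a minimal, illustrative sketch, not the author's setup: the model id is assumed from the Colab notebook URL, the generation parameters are arbitrary, and loading requires > 12 GB of RAM as noted above.

```python
# Minimal sketch: loading this model with the Hugging Face `transformers`
# text-generation pipeline. Model id and generation settings are assumptions.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="pszemraj/opt-peter-2.7b",  # assumed model id for this repo
)

prompt = "How was your day?\n"
out = generator(prompt, max_new_tokens=64, do_sample=True, top_p=0.95)
print(out[0]["generated_text"])
```

Sampling (`do_sample=True` with `top_p`) tends to suit open-ended dialogue better than greedy decoding, but tune these to taste.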