mlabonne committed · verified
Commit 81716d6 · 1 Parent(s): 050bde4

Update README.md

Files changed (1)
  1. README.md +4 -0
README.md CHANGED
@@ -197,6 +197,10 @@ for output in outputs:
  print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
  ```

+ ### 3. llama.cpp
+
+ You can run LFM2 with llama.cpp using its [GGUF checkpoint](https://huggingface.co/LiquidAI/LFM2-2.6B-Exp-GGUF). Find more information in the model card.
+
  ## 🔧 How to fine-tune LFM2

  We recommend fine-tuning LFM2 models on your use cases to maximize performance.
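The new llama.cpp section above points to the GGUF checkpoint without a snippet. As an illustration only, a minimal sketch using the llama-cpp-python bindings (a swapped-in alternative to the llama.cpp CLI, not part of this commit) could look like the following; the quantization filename and generation settings are assumptions, so check the GGUF repo's model card for the actual file names:

```python
# Minimal sketch: loading an LFM2 GGUF checkpoint via llama-cpp-python
# instead of the llama.cpp CLI. The quant filename pattern is an assumption
# about the repo's contents -- verify against the model card.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="LiquidAI/LFM2-2.6B-Exp-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quantization; pick a file actually listed in the repo
    n_ctx=4096,               # context window; adjust to your memory budget
    verbose=False,
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me a one-sentence summary of LFM2."}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```

The same file can also be run with the llama.cpp CLI directly (e.g. `llama-cli -m <downloaded .gguf file>`); the model card linked in the diff has the recommended settings.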