🚀 Full model (merged LoRA) - compatible with standard transformers
README.md
base_model: unsloth/gemma-2-2b-it-bnb-4bit
model_type: gemma
pipeline_tag: text-generation
library_name: peft
---

# 🏠 Gemma Smart Lamp Assistant (French)

**A complete AI model for controlling smart lamps with French-language commands**

## 🚀 Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "TomSft15/gemma-3-smart-lamp-assistant-fr"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float32,  # for CPU / Raspberry Pi
    device_map="auto"           # for GPU
)

# Control the lamp
def control_lamp(instruction):
    prompt = f"<bos><start_of_turn>user\n{instruction}<end_of_turn>\n<start_of_turn>model\n"
    inputs = tokenizer(prompt, return_tensors="pt")
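    # The generation and decoding lines are not shown in this diff; the block
    # below is an assumed sketch of that elided step, not the original code.
    inputs = inputs.to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
    response = tokenizer.decode(outputs[0], skip_special_tokens=False)

    # Pull out the model's turn between the Gemma chat markers.
    start_marker = "<start_of_turn>model\n"
    start_idx = response.find(start_marker)
    if start_idx != -1:
        start_idx += len(start_marker)
        end_idx = response.find("<end_of_turn>", start_idx)
        if end_idx == -1:
            end_idx = len(response)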
        return response[start_idx:end_idx].strip()
    return response

# Examples (French commands in, French confirmations out)
print(control_lamp("Allume la lampe"))   # "J'ai allumé la lampe."
print(control_lamp("Couleur rouge"))     # "La lampe est maintenant rouge."
print(control_lamp("Baisse à 50%"))      # "La luminosité est à 50%."
```

## 📊 Performance

- **Model**: Gemma 2 2B + LoRA fine-tuning
- **Accuracy**: >90% on basic commands
- **Compatible**: CPU and GPU (a 4-bit GPU loading sketch follows this list)
- **Size**: ~1.8 GB (full model)
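
If GPU memory is limited, the merged model can typically be reloaded in 4-bit through bitsandbytes instead of the float32 path shown above. This is a minimal, untested sketch; the quantization settings below are assumptions, not part of the card, and it needs a CUDA GPU with the `bitsandbytes` package installed.

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Assumed settings (not from the model card): 4-bit NF4 quantization.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")

model_4bit = AutoModelForCausalLM.from_pretrained(
    "TomSft15/gemma-3-smart-lamp-assistant-fr",
    quantization_config=bnb_config,
    device_map="auto",
)
```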

## 🎯 Supported commands

- **On/off**: "Allume", "Éteins", "On", "Off"
- **Colors**: "Rouge", "Bleu", "Vert", "Jaune", "Blanc"
- **Brightness**: "Plus fort", "Baisse", "50%", "Maximum" (see the glue-code sketch after this list)
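
How the assistant's confirmations get turned into a real lamp action is outside the scope of this card. As a purely hypothetical sketch, the keywords above could be mapped to a device call alongside the model query; `set_lamp_state` below is an invented placeholder, not a real API.

```python
# Hypothetical glue code: map a French instruction to a lamp action.
# set_lamp_state() stands in for whatever API your lamp actually exposes.
def apply_instruction(instruction, set_lamp_state):
    text = instruction.lower()
    if "allume" in text:
        set_lamp_state(power=True)
    elif "éteins" in text:
        set_lamp_state(power=False)
    elif "rouge" in text:
        set_lamp_state(color="rouge")
    elif "%" in text:
        # e.g. "Baisse à 50%" -> brightness 50
        level = int("".join(ch for ch in text if ch.isdigit()))
        set_lamp_state(brightness=level)
```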

✅ **Full model** - compatible with the standard `AutoModelForCausalLM` loader
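
Because the card sets `pipeline_tag: text-generation`, the high-level pipeline API should also work. A minimal sketch, assuming the same Gemma chat prompt format as the function above; only the model id is taken from the card.

```python
from transformers import pipeline

# Text-generation pipeline wrapping the same model and tokenizer.
generator = pipeline("text-generation", model="TomSft15/gemma-3-smart-lamp-assistant-fr")

prompt = "<bos><start_of_turn>user\nAllume la lampe<end_of_turn>\n<start_of_turn>model\n"
print(generator(prompt, max_new_tokens=32)[0]["generated_text"])
```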

### Framework versions
- PEFT 0.15.2