---
datasets:
- iimran/Medical-Intelligence-Questions
base_model:
- Qwen/Qwen2.5-3B
---

# Qwen2.5-3B-R1-MedicalReasoner

**Qwen2.5-3B-R1-MedicalReasoner** is a clinical reasoning language model fine-tuned for advanced diagnostic and case-based problem solving. It is aimed at medical education, clinical decision support, and research, and can generate detailed chain-of-thought responses that include both the reasoning process and the final answer.

```python
outputs = model.fast_generate(
    sampling_params=sampling_params,
    lora_request=None  # Use None if the LoRA adapter is already loaded
)
print(outputs[0].outputs[0].text)
```
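Because the model emits both its reasoning and a final answer in a single completion, downstream code usually needs to separate the two. A minimal sketch, assuming R1-style `<think>…</think>` tags around the reasoning (an assumption; check this model's actual output format before relying on it):

```python
import re


def split_reasoning(text: str) -> tuple[str, str]:
    """Split a chain-of-thought completion into (reasoning, answer).

    Assumes the reasoning is wrapped in <think>...</think> tags, a common
    convention for R1-style models; adjust to this model's real format.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match:
        reasoning = match.group(1).strip()
        answer = text[match.end():].strip()
        return reasoning, answer
    return "", text.strip()  # no tags found: treat everything as the answer


reasoning, answer = split_reasoning(
    "<think>Fever plus productive cough suggests pneumonia.</think> Likely pneumonia."
)
```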
### Adapter Integration
The LoRA adapter for this model is available in a separate repository for further fine-tuning and experimentation.
- **LoRA Adapter Repo:** [iimran/Qwen2.5-3B-R1-MedicalReasoner-lora-adapter](https://huggingface.co/iimran/Qwen2.5-3B-R1-MedicalReasoner-lora-adapter)
To download and integrate the LoRA adapter:
```python
from huggingface_hub import snapshot_download

# Download the LoRA adapter repository:
lora_path = snapshot_download("iimran/Qwen2.5-3B-R1-MedicalReasoner-lora-adapter")
print("LoRA adapter downloaded to:", lora_path)

# Load the adapter into the model:
model.load_lora(lora_path)
```
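Instead of pre-loading the adapter, the return value of `model.load_lora(...)` can also be passed as `lora_request` at generation time (this is what the `lora_request=None` comment in the usage example refers to). A minimal sketch, assuming the Unsloth `FastLanguageModel` / vLLM API used elsewhere in this README; `prompt` and `sampling_params` stand in for the objects defined in the earlier example:

```python
def generate_with_adapter(model, prompt, sampling_params, lora_path):
    """Run one generation with the LoRA adapter applied for this call only.

    `model` is assumed to expose the Unsloth load_lora/fast_generate API
    used elsewhere in this README.
    """
    lora_request = model.load_lora(lora_path)  # returns a LoRA request object
    outputs = model.fast_generate(
        prompt,
        sampling_params=sampling_params,
        lora_request=lora_request,
    )
    return outputs[0].outputs[0].text
```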
## Installation
To use this model, install the required packages:
```bash
pip install unsloth vllm trl datasets huggingface-hub
```
A compatible GPU is recommended for optimal performance.
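As a rough sizing rule of thumb, the memory needed just for the weights is parameter count × bytes per parameter; for a 3B-parameter model that is about 6 GB at 16-bit precision (the KV cache, activations, and CUDA overhead come on top):

```python
def weight_memory_gb(num_params: float, bytes_per_param: float) -> float:
    """Approximate memory for the model weights alone, in gigabytes."""
    return num_params * bytes_per_param / 1e9


print(weight_memory_gb(3e9, 2))  # fp16/bf16 -> 6.0
print(weight_memory_gb(3e9, 1))  # 8-bit    -> 3.0
```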
## Citation
If you use **Qwen2.5-3B-R1-MedicalReasoner** in your research, please cite:
```bibtex
@misc{imran2025model,
  author    = {Imran},
  title     = {Qwen2.5-3B-R1-MedicalReasoner: A Fine-Tuned Clinical Reasoning Model},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/iimran/Qwen2.5-3B-R1-MedicalReasoner}
}
```
## Disclaimer
This model is intended for research and educational purposes only. It should not be used as the sole basis for clinical decision-making. All outputs should be validated by qualified healthcare professionals.