patrickvonplaten committed · Commit aa6cbde · verified · 1 Parent(s): 19a74a7

Update README.md

Files changed (1): README.md (+5 −3)
````diff
@@ -104,8 +104,10 @@ If this is your first time running Vibe, it will:
 
 The model can also be deployed with the following libraries:
 - [`vllm (recommended)`](https://github.com/vllm-project/vllm): See [here](#vllm-recommended)
+- [`sglang`](https://github.com/sgl-project/sglang): See [here](#sglang)
 - [`transformers`](https://github.com/huggingface/transformers): See [here](#transformers)
 
+
 We're thankful to the llama.cpp team and their community as well as the LM Studio and Ollama teams that worked hard to make these models also available for their frameworks.
 
 You can now also run Devstral using these (alphabetical ordered) frameworks:
@@ -226,13 +228,13 @@ print(response.json()["choices"][0]["message"]["content"])
 ```
 </details>
 
-#### SGLang (recommended)
+#### SGLang
 
 <details>
 <summary>Expand</summary>
 
-We recommend using this model with [SGLang](https://github.com/sgl-project/sglang)
-to implement production-ready inference pipelines (OpenAI-compatible API server).
+To use this model with [SGLang](https://github.com/sgl-project/sglang) to implement a production-ready inference pipeline (OpenAI-compatible API server),
+see the following sections.
 
 **_Installation_**
 
````
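The second hunk's header shows how the README reads a reply from the OpenAI-compatible server: `response.json()["choices"][0]["message"]["content"]`. As a minimal sketch of that request/response shape — the URL, port, and model id below are illustrative assumptions, not part of this commit:

```python
import json

# Hypothetical values: port, path, and model id are placeholders,
# not taken from this commit or the SGLang docs.
BASE_URL = "http://localhost:30000/v1/chat/completions"
MODEL_ID = "devstral-small"  # placeholder model name


def build_chat_payload(prompt: str) -> dict:
    """Build an OpenAI-compatible /chat/completions request body."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }


def extract_reply(response_json: dict) -> str:
    """Read the assistant text the same way the README's snippet does:
    response.json()["choices"][0]["message"]["content"]."""
    return response_json["choices"][0]["message"]["content"]


payload = build_chat_payload("Write a one-line hello world in Python.")
print(json.dumps(payload, indent=2))

# A mocked response in the OpenAI response shape, so the parsing path
# can be exercised without a running server:
mock = {"choices": [{"message": {"role": "assistant",
                                 "content": "print('hello')"}}]}
print(extract_reply(mock))
```

In a real deployment you would POST `payload` to the server's chat-completions endpoint (for example with `requests.post(BASE_URL, json=payload)`) and pass `response.json()` to `extract_reply`.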
240