Upload README.md with huggingface_hub
README.md CHANGED
@@ -8,7 +8,7 @@ library_name: diffusers
---

## 🔥🔥🔥 News!!
-Nov 26, 2025: 🎉 We release [Step1X-Edit-v1p2](https://huggingface.co/stepfun-ai/Step1X-Edit-v1p2), a native reasoning edit model with better performance on KRIS-Bench and GEdit-Bench.
+* Nov 26, 2025: 🎉 We release [Step1X-Edit-v1p2](https://huggingface.co/stepfun-ai/Step1X-Edit-v1p2) (referred to as **ReasonEdit-S** in the paper), a native reasoning edit model with better performance on KRIS-Bench and GEdit-Bench. The technical report can be found [here](https://arxiv.org/abs/2511.22625).
<table>
<thead>
<tr>

@@ -21,6 +21,12 @@ Nov 26, 2025: 🎉 We release [Step1X-Edit-v1p2](https://huggingface.co/stepfun-
</tr>
</thead>
<tbody>
+<tr>
+<td>Flux-Kontext-dev </td> <td>7.16</td> <td>7.37</td> <td>6.51</td> <td>53.28</td> <td>50.36</td> <td>42.53</td> <td>49.54</td>
+</tr>
+<tr>
+<td>Qwen-Image-Edit-2509 </td> <td>8.00</td> <td>7.86</td> <td>7.56</td> <td>61.47</td> <td>56.79</td> <td>47.07</td> <td>56.15</td>
+</tr>
<tr>
<td>Step1X-Edit v1.1 </td> <td>7.66</td> <td>7.35</td> <td>6.97</td> <td>53.05</td> <td>54.34</td> <td>44.66</td> <td>51.59</td>
</tr>

@@ -81,7 +87,7 @@ pipe_output.final_images[0].save(f"0001-final.jpg", lossless=True)
```
The results look like:
<div align="center">
-<img width="1080" alt="results" src="assets/v1p2_vis.
+<img width="1080" alt="results" src="assets/v1p2_vis.jpeg">
</div>

@@ -93,6 +99,13 @@ Step1X-Edit-v1p2 represents a step towards reasoning-enhanced image editing mode

## Citation
```
+@article{yin2025reasonedit,
+  title={ReasonEdit: Towards Reasoning-Enhanced Image Editing Models},
+  author={Fukun Yin and Shiyu Liu and Yucheng Han and Zhibo Wang and Peng Xing and Rui Wang and Wei Cheng and Yingming Wang and Aojie Li and Zixin Yin and Pengtao Chen and Xiangyu Zhang and Daxin Jiang and Xianfang Zeng and Gang Yu},
+  journal={arXiv preprint arXiv:2511.22625},
+  year={2025}
+}
+
@article{liu2025step1x-edit,
  title={Step1X-Edit: A Practical Framework for General Image Editing},
  author={Shiyu Liu and Yucheng Han and Peng Xing and Fukun Yin and Rui Wang and Wei Cheng and Jiaqi Liao and Yingming Wang and Honghao Fu and Chunrui Han and Guopeng Li and Yuang Peng and Quan Sun and Jingwei Wu and Yan Cai and Zheng Ge and Ranchen Ming and Lei Xia and Xianfang Zeng and Yibo Zhu and Binxing Jiao and Xiangyu Zhang and Gang Yu and Daxin Jiang},