---
license: apache-2.0
library_name: transformers
base_model: internlm/JanusCoder-14B
---

# JanusCoder-14B AWQ - INT8

## Model Details

### Quantization Details

- **Quantization Method:** AWQ
- **Bits:** 8
- **Group Size:** 32
- **Calibration Dataset:** [nvidia/Llama-Nemotron-Post-Training-Dataset](https://huggingface.co/datasets/nvidia/Llama-Nemotron-Post-Training-Dataset)
- **Quantization Tool:** [llm-compressor](https://github.com/vllm-project/llm-compressor)
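
For reference, the sketch below shows how a comparable AWQ one-shot quantization run can be set up with llm-compressor. It is not the recipe used to produce this checkpoint: the scheme string, calibration dataset alias, sample count, and sequence length are all assumptions for illustration only.

```python
# Hypothetical llm-compressor AWQ sketch -- parameters are assumptions,
# not the actual settings behind this checkpoint.
from llmcompressor import oneshot
from llmcompressor.modifiers.awq import AWQModifier

recipe = AWQModifier(
    targets="Linear",      # quantize Linear layers
    ignore=["lm_head"],    # keep the output head in higher precision
    scheme="W8A16",        # 8-bit weights; group size may differ from this card
)

oneshot(
    model="internlm/JanusCoder-14B",
    dataset="open_platypus",        # stand-in alias; this card used Llama-Nemotron data
    recipe=recipe,
    output_dir="JanusCoder-14B-AWQ-8bit",
    max_seq_length=2048,
    num_calibration_samples=512,
)
```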

## Get Started

### Prerequisite

```bash
pip install -U vllm
```

### Basic Usage

```bash
vllm serve cyankiwi/JanusCoder-14B-AWQ-8bit
```
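
Once the server is up, it exposes an OpenAI-compatible API (on port 8000 by default). A minimal client sketch, assuming the default host/port, the `openai` Python package, and an illustrative prompt:

```python
from openai import OpenAI

# vLLM's OpenAI-compatible server listens on localhost:8000 by default;
# the API key is unused unless the server is started with one.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="cyankiwi/JanusCoder-14B-AWQ-8bit",
    messages=[
        {"role": "user", "content": "Create a line plot that illustrates the function y = x."},
    ],
    max_tokens=1024,
)
print(response.choices[0].message.content)
```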

## Additional Information

### Changelog

- **v1.0.0** - Initial quantized release

### Authors

- **Name:** cyankiwi

- **Contacts:** [email protected]

# JanusCoder-14B

[💻Github Repo](https://github.com/InternLM/JanusCoder) • [🤗Model Collections](https://huggingface.co/collections/internlm/januscoder) • [📜Technical Report](https://www.arxiv.org/abs/2510.23538)

## Introduction

We introduce JanusCoder and JanusCoderV, a suite of open-source foundational models designed to establish a unified visual-programmatic interface for code intelligence.
This model suite is built upon open-source language models (such as Qwen3-8B and Qwen3-14B) and multimodal models (such as Qwen2.5-VL and InternVL3.5-8B). The JanusCoder series is trained on JANUSCODE-800K—the largest multimodal code corpus to date, generated by an innovative synthesis toolkit and covering everything from standard charts to complex interactive Web UIs and code-driven animations.
This enables the models to uniformly handle diverse visual-programmatic tasks, such as generating code from textual instructions, visual inputs, or a combination of both, rather than building specialized models for isolated tasks. JanusCoder excels at flexible content generation (like data visualizations and interactive front-ends) as well as precise, program-driven editing of visual effects and complex animation construction.

## Model Downloads

| Model Name | Description | Download |
| --- | --- | --- |
| JanusCoder-8B | 8B text model based on Qwen3-8B. | 🤗 [Model](https://huggingface.co/internlm/JanusCoder-8B) |
| 👉 **JanusCoder-14B** | 14B text model based on Qwen3-14B. | 🤗 [Model](https://huggingface.co/internlm/JanusCoder-14B) |
| JanusCoderV-7B | 7B multimodal model based on Qwen2.5-VL-7B. | 🤗 [Model](https://huggingface.co/internlm/JanusCoderV-7B) |
| JanusCoderV-8B | 8B multimodal model based on InternVL3.5-8B. | 🤗 [Model](https://huggingface.co/internlm/JanusCoderV-8B) |

## Performance

We evaluate JanusCoder-14B on benchmarks that span code intelligence tasks across multiple programming languages:

| Model | JanusCoder-14B | Qwen3-14B | Qwen2.5-Coder-32B-Instruct | LLaMA3-8B-Instruct | GPT-4o |
| --- | --- | --- | --- | --- | --- |
| PandasPlotBench (Task) | 86 | 78 | 82 | 69 | 85 |
| ArtifactsBench | 41.1 | 36.5 | 35.5 | 36.5 | 37.9 |
| DTVBench (Manim) | 8.41 | 6.63 | 9.61 | 4.92 | 10.60 |
| DTVBench (Wolfram) | 5.97 | 5.08 | 4.98 | 3.15 | 5.97 |

## Quick Start

**Transformers**

The following demo code illustrates how to generate text with JanusCoder-14B.

> Please use transformers >= 4.55.0 to ensure the model works correctly.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "internlm/JanusCoder-14B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype="auto")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Create a line plot that illustrates function y=x."},
        ],
    }
]

inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt").to(model.device)

generate_ids = model.generate(**inputs, max_new_tokens=32768)
# Decode only the newly generated tokens, skipping the prompt.
decoded_output = tokenizer.decode(generate_ids[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(decoded_output)
```
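
For interactive use, generated tokens can be streamed to stdout as they are produced using transformers' `TextStreamer`. A minimal sketch building on the snippet above (the `max_new_tokens` value is an arbitrary choice):

```python
from transformers import TextStreamer

# Print decoded tokens as they are generated, skipping the prompt tokens.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(**inputs, max_new_tokens=4096, streamer=streamer)
```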

## Citation
🫶  If you are interested in our work or find the repository / checkpoints / benchmark / data helpful, please consider using the following citation format when referencing our papers:

```bibtex
@article{sun2025januscoder,
  title={JanusCoder: Towards a Foundational Visual-Programmatic Interface for Code Intelligence},
  author={Sun, Qiushi and Gong, Jingyang and Liu, Yang and Chen, Qiaosheng and Li, Lei and Chen, Kai and Guo, Qipeng and Kao, Ben and Yuan, Fei},
  journal={arXiv preprint arXiv:2510.23538},
  year={2025}
}

@article{sun2024survey,
  title={A survey of neural code intelligence: Paradigms, advances and beyond},
  author={Sun, Qiushi and Chen, Zhirui and Xu, Fangzhi and Cheng, Kanzhi and Ma, Chang and Yin, Zhangyue and Wang, Jianing and Han, Chengcheng and Zhu, Renyu and Yuan, Shuai and others},
  journal={arXiv preprint arXiv:2403.14734},
  year={2024}
}

@article{chen2025interactscience,
  title={InteractScience: Programmatic and Visually-Grounded Evaluation of Interactive Scientific Demonstration Code Generation},
  author={Chen, Qiaosheng and Liu, Yang and Li, Lei and Chen, Kai and Guo, Qipeng and Cheng, Gong and Yuan, Fei},
  journal={arXiv preprint arXiv:2510.09724},
  year={2025}
}

@article{sun2025codeevo,
  title={CodeEvo: Interaction-Driven Synthesis of Code-centric Data through Hybrid and Iterative Feedback},
  author={Sun, Qiushi and Gong, Jinyang and Li, Lei and Guo, Qipeng and Yuan, Fei},
  journal={arXiv preprint arXiv:2507.22080},
  year={2025}
}
```