---
pipeline_tag: text-generation
tags:
- Qwen3 Instruct
- Vision 8B
- Hugston
---
# Trilogix1/Hugston_code-rl-Qwen3-4B-Instruct-2507-SFT-30b
Original weights at: https://huggingface.co/Lamapi/next-ocr
This model is a converted and quantized version produced by the Hugston Team with Quanta (see GitHub to get it for free). It is a working proof of concept showing how to convert and quantize a .safetensors LLM model to GGUF.
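The two-step pipeline that such a conversion follows can be sketched with the standard llama.cpp tools (`convert_hf_to_gguf.py` and `llama-quantize`). This is a minimal illustration, not necessarily what Quanta runs internally, and all paths are placeholders:

```python
# Illustrative paths; adjust to your checkout and checkpoint location.
model_dir = "./next-ocr"              # directory holding the .safetensors checkpoint
out_f16 = "next-ocr-f16.gguf"         # intermediate full-precision GGUF
out_q4 = "next-ocr-Q4_K_M.gguf"       # final 4-bit quantized GGUF

# Step 1: convert safetensors -> GGUF at f16 (llama.cpp converter script).
convert_cmd = ["python", "convert_hf_to_gguf.py", model_dir,
               "--outfile", out_f16, "--outtype", "f16"]

# Step 2: quantize the f16 GGUF down to 4-bit (Q4_K_M scheme).
quantize_cmd = ["llama-quantize", out_f16, out_q4, "Q4_K_M"]

# Print the commands rather than executing them, so the sketch is safe to run;
# subprocess.run(convert_cmd, check=True) would execute step 1 for real.
print(" ".join(convert_cmd))
print(" ".join(quantize_cmd))
```

The same quantize step with a different type name (Q5_K_M, Q6_K, Q8_0, and so on) yields the other bit widths listed below.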
Quantization was performed with an automated method, which makes the process considerably faster.
This model was made possible by: https://Hugston.com
You can use the model with HugstonOne Enterprise Edition.
Tested on vision and coding tasks: loaded with images and asked to extract text and describe each image.
Also tested on recreating images and websites: given an image, it returns a near-identical webpage.
It is very accurate, but it tends to loop on long coding tasks.
Still, it is very impressive for its size, especially considering it is an instruct model.
Watch HugstonOne coding and preview in action:
https://vimeo.com/1121493834?share=copy&fl=sv&fe=ci
- Download the HugstonOne app at Hugston.com or at https://github.com/Mainframework
- Download the model from https://hugston.com/explore?folder=llm_models or Huggingface
- If you have already downloaded the LLM model, choose it by clicking Pick Model in HugstonOne, then click Load Model in CLI or Server mode.
- For multimodal use you need a VL/multimodal LLM model with its mmproj file in the same folder: select the model, then select the mmproj.
- Note: if the mmproj file sits in a folder that also contains non-multimodal models, those models will not load unless the mmproj is moved out of the folder.
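The mmproj pairing rule described above can be sketched as a small helper. This is hypothetical logic mirroring the documented behavior, not HugstonOne's actual code:

```python
from pathlib import Path

def pair_model_with_mmproj(folder: Path, model_name: str):
    """Hypothetical loader rule: a model sitting in a folder that
    contains an mmproj-*.gguf is treated as multimodal and paired
    with that projector; otherwise it is loaded as text-only."""
    model = folder / model_name
    mmprojs = sorted(folder.glob("mmproj*.gguf"))
    if mmprojs:
        return model, mmprojs[0]  # multimodal: load model + projector
    return model, None            # text-only: load the model alone
```

Keeping exactly one multimodal model per folder, next to its mmproj file, avoids the loading conflict the note above warns about.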
Available quantizations:
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- 16-bit
- 32-bit
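As a rough guide, a quantized GGUF's size scales with bits per weight. A back-of-envelope sketch follows; the 8B parameter count is illustrative, and the formula ignores metadata and k-quant scale overhead, so treat the numbers as approximations:

```python
def approx_gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough GGUF file-size estimate: parameters * bits / 8 bytes,
    ignoring metadata and the small overhead of quantization scales."""
    return n_params * bits_per_weight / 8 / 1e9

# Hypothetical 8B-parameter model at the listed bit widths.
for bits in (4, 5, 6, 8, 16, 32):
    print(f"{bits:2d}-bit ~= {approx_gguf_size_gb(8e9, bits):.1f} GB")
```

This is why the 4-bit file is roughly an eighth the size of the 32-bit one, at some cost in accuracy.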

