Uzumaki
Narutoouz
AI & ML interests
None yet
Recent Activity
new activity about 15 hours ago on OrionLLM/GRM2-3b: Ideal Sampling parameters to reproduce benchmarks
liked a model about 15 hours ago: rednote-hilab/dots.mocr-svg
liked a model about 15 hours ago: OrionLLM/GRM2-3b
Organizations
Ideal Sampling parameters to reproduce benchmarks
1
#3 opened about 15 hours ago by Narutoouz
can we get minimax-m2.7
🤗 12
3
#49 opened 8 days ago by CHNtentes
Feature Request: TFLite Q4/Q6/Q8 Quantizations for Nanbeige4.1-3B
1
#42 opened 10 days ago by Narutoouz
Need support for mlx inference
1
#1 opened 12 days ago by Narutoouz
please upload benchmarks
1
#2 opened 14 days ago by Narutoouz
mlx lm support
👍 1
#7 opened 17 days ago by Narutoouz
Any Plans for an Instruct Model?
🤗🔥 6
6
#15 opened about 1 month ago by Ashacorporation
Model "thinks" for too long
👍 3
11
#12 opened about 1 month ago by Moisha1985
mlx version please
#1 opened 21 days ago by Narutoouz
Insufficient context length
4
#2 opened 29 days ago by X-SZM
please make mlx lm and gguf version
🚀 1
1
#1 opened 29 days ago by Narutoouz
Can you make dwq 3bit and 4bit quant
#2 opened about 1 month ago by Narutoouz
can you make nvfp4 quant
#1 opened about 1 month ago by Narutoouz
please make 3 bit & 4 bit dwq quant of cerebras/MiniMax-M2.5-REAP-172B-A10B
#5 opened about 1 month ago by Narutoouz
can anybody make nvfp4 mlx quant ?
#2 opened about 1 month ago by Narutoouz
mlx lm and llama.cpp support
#9 opened about 1 month ago by Narutoouz
Support for mlx lm and llama.cpp
👍 1
#8 opened about 1 month ago by Narutoouz
can u make 3bit and 4bit dwq quants of cerebras/MiniMax-M2.5-REAP-172B-A10B ?
#3 opened about 1 month ago by Narutoouz
support for mlx lm and llama.cpp
🚀 1
1
#3 opened about 1 month ago by Narutoouz
make 30b 3a model variant or using gpt 20b model
1
#1 opened about 1 month ago by Narutoouz