Active filters: llama-2
| Model | Task | Size | Downloads | Likes |
|---|---|---|---|---|
| (name not captured) | Text Generation | 7B | 926k | 2.28k |
| meta-llama/Llama-2-7b-chat-hf | Text Generation | — | 392k | 4.72k |
| (name not captured) | Text Generation | — | 252 | 4.46k |
| meta-llama/Llama-2-13b-chat | Text Generation | — | 17 | 296 |
| TheBloke/Llama-2-13B-chat-GPTQ | Text Generation | 13B | 13.1k | 363 |
| TheBloke/Llama-2-70B-Chat-GPTQ | Text Generation | 69B | 3.04k | 259 |
| TheBloke/Llama-2-7B-Chat-GGUF | Text Generation | 7B | 164k | 513 |
| TheBloke/Llama-2-13B-GGUF | Text Generation | 13B | 1.59k | 69 |
| TheBloke/Llama-2-70B-Chat-GGUF | Text Generation | 69B | 2.07k | 122 |
| TheBloke/Llama-2-70B-Chat-AWQ | Text Generation | 69B | 2.2k | 24 |
| AstroMLab/astrollama-2-70b-base_aic | Text Generation | — | 4 | 1 |
| DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters | — | — | 171 | — |
| meta-llama/Llama-2-7b-chat | Text Generation | — | 55 | 616 |
| (name not captured) | Text Generation | — | 34 | 352 |
| (name not captured) | Text Generation | — | 21 | 538 |
| meta-llama/Llama-2-70b-hf | Text Generation | — | 12.3k | 854 |
| meta-llama/Llama-2-13b-chat-hf | Text Generation | — | 146k | 1.11k |
| meta-llama/Llama-2-13b-hf | Text Generation | — | 32.4k | 621 |
| meta-llama/Llama-2-70b-chat | Text Generation | — | 4 | 398 |
| meta-llama/Llama-2-70b-chat-hf | Text Generation | — | 21.8k | 2.2k |
| (name not captured) | Text Generation | — | 100 | 219 |
| (name not captured) | Text Generation | 7B | 30.5k | 81 |
| TheBloke/Llama-2-13B-GPTQ | Text Generation | 13B | 205 | 120 |
| TheBloke/Llama-2-13B-GGML | Text Generation | — | 74 | 174 |
| TheBloke/Llama-2-7B-Chat-GGML | Text Generation | — | 1.34k | 872 |
| TheBloke/Llama-2-7B-Chat-GPTQ | Text Generation | 7B | 23.4k | 267 |
| TheBloke/Llama-2-13B-chat-GGML | Text Generation | — | 137 | 695 |
| anonymous4chan/llama-2-7b | Text Generation | 7B | 94 | — |
| NousResearch/Llama-2-7b-hf | Text Generation | 7B | 155k | 172 |
| NousResearch/Llama-2-13b-hf | Text Generation | — | 3.08k | 74 |
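Many of the entries above are `-chat` variants (e.g. `meta-llama/Llama-2-7b-chat-hf`), which were fine-tuned on Llama-2's `[INST]` instruction format and give noticeably worse output when prompted as plain text. A minimal sketch of building a single-turn prompt in that format (the function name is ours, not from any library):

```python
def build_llama2_prompt(system: str, user: str) -> str:
    """Wrap a system and a user message in Llama-2's single-turn
    chat template: [INST] <<SYS>> system <</SYS>> user [/INST]."""
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"


# Example: a prompt suitable for any of the *-chat models listed above.
prompt = build_llama2_prompt(
    "You are a helpful assistant.",
    "What is the difference between GGUF and GPTQ?",
)
```

The tokenizer shipped with the `meta-llama` chat repos also exposes this template via `tokenizer.apply_chat_template`, which is the safer choice for multi-turn conversations.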
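The download and like counts above use the Hub's abbreviated notation (`926k`, `2.28k`). To sort or compare entries numerically, a small helper can normalize them; this sketch assumes only the plain-number and `k`/`M` forms that appear in this listing:

```python
def parse_count(text: str) -> int:
    """Convert an abbreviated hub count such as '926k' or '2.28k'
    into an integer (e.g. 926000, 2280)."""
    suffixes = {"k": 1_000, "M": 1_000_000}
    if text and text[-1] in suffixes:
        # round() guards against float artifacts like 2.28 * 1000 = 2279.999...
        return int(round(float(text[:-1]) * suffixes[text[-1]]))
    return int(text)


# Example: rank two of the listed models by downloads.
downloads = {"TheBloke/Llama-2-7B-Chat-GGUF": parse_count("164k"),
             "NousResearch/Llama-2-7b-hf": parse_count("155k")}
```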