DarkIdol Llama 3.1 8B Instruct 1.3 Uncensored
This repository contains the DarkIdol-Llama-3.1-8B-Instruct-1.3-Uncensored model, an instruction-oriented 8B-parameter variant designed for users who want a highly responsive, lightly restricted assistant for offline or local workflows.
Model Overview
- Model Name: DarkIdol-Llama-3.1-8B-Instruct-1.3-Uncensored
- Base Architecture: Meta Llama-3.1 (8B parameters)
- Developer / Maintainer: aifeifei798
- License: Follows the licensing terms of the original Llama-3.1 model
- Intended Use: Local deployments where users want tight control over alignment, filter behavior, and conversational style
What is DarkIdol?
DarkIdol is part of a series of locally focused, instruction-tuned language models that emphasize:
- User-directed alignment
- Minimal artificial guardrails
- High responsiveness in conversational and reasoning tasks
- Strong support for multi-step thinking and long interactions
This version (1.3) aims to provide an assistant that feels adaptable and direct, suited for power users, tinkerers, developers, and those running LLMs privately.
Chat Template & Conversation Format
The model uses a ChatML-like interaction structure:
```
<|im_start|>system
{system message}
<|im_end|>
<|im_start|>user
{your prompt here}
<|im_end|>
<|im_start|>assistant
```
This template provides clean separation between roles and improves controllability during conversation.
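For concreteness, here is a minimal Python sketch of assembling a prompt in this format. The build_prompt helper is purely illustrative and is not part of the repository; runtimes that already apply a chat template perform this step for you.

```python
# Minimal sketch: build a prompt string in the ChatML-like format shown above.
# build_prompt is an illustrative helper, not something shipped with the model.
def build_prompt(system_message: str, user_message: str) -> str:
    return (
        "<|im_start|>system\n"
        f"{system_message}\n"
        "<|im_end|>\n"
        "<|im_start|>user\n"
        f"{user_message}\n"
        "<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_prompt(
    "You are a concise, direct assistant.",
    "Summarize the trade-offs of running LLMs locally.",
)
print(prompt)
```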
Key Features & Capabilities
- Instruction-tuned for concise, user-aligned responses
- Uncensored behavioural tuning for flexible, research-friendly use
- Effective at conversational, creative, and reasoning tasks
- Supports locally controlled alignment and personality shaping
- Suitable for a wide range of runtimes, including CPU and GPU inference tools (see the loading sketch after this list)
- Stable output patterns for long and complex dialogues
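As a rough illustration of the CPU/GPU runtime support mentioned above, the sketch below loads the model with Hugging Face transformers. The repo id is assumed from the model name, and the call to apply_chat_template assumes the tokenizer ships a chat template matching the format above; if it does not, build the prompt string manually as in the earlier sketch.

```python
# Minimal sketch of local inference with Hugging Face transformers.
# The repo id is assumed from the model name; adjust it to the actual path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.3-Uncensored"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # half-precision weights for an 8B model
    device_map="auto",           # uses the GPU if available, otherwise CPU (needs accelerate)
)

messages = [
    {"role": "system", "content": "You are a concise, direct assistant."},
    {"role": "user", "content": "Walk me through planning a small weekend coding project."},
]
# Assumes the tokenizer config includes a chat template; otherwise format the prompt by hand.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```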
Intended Use Cases
- Local assistant frameworks: general chat, role-play systems, personal productivity
- Coding helper: explanations, snippets, small-scale debugging
- Reasoning tasks: step-by-step answers, structured problem solving
- Experimentation: alignment research, prompt engineering, uncensored model behavior studies
- Offline / private deployments: scenarios where user control and data locality matter
Acknowledgements
Special thanks to:
- The creators and maintainers of the base Llama-3.1 architecture
- The open-source ecosystems supporting training, quantization, and deployment
- Community contributors who provide feedback and testing across diverse hardware setups
Available Quantizations
Quantized variants are listed at 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit precision, so the model can be sized to the memory budget of the target machine.
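For the quantized builds, one common route is llama-cpp-python with a GGUF file. The sketch below assumes the quantized weights are distributed as GGUF and uses a hypothetical 4-bit file name; substitute the actual file you download.

```python
# Minimal sketch of running a quantized build with llama-cpp-python.
# The GGUF file name below is hypothetical; point model_path at the file you actually have.
from llama_cpp import Llama

llm = Llama(
    model_path="./DarkIdol-Llama-3.1-8B-Instruct-1.3-Uncensored-Q4_K_M.gguf",  # hypothetical name
    n_ctx=8192,        # context window; lower it to reduce memory use
    n_gpu_layers=-1,   # offload all layers to GPU; set to 0 for CPU-only inference
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise, direct assistant."},
        {"role": "user", "content": "List three trade-offs of 4-bit quantization."},
    ],
    max_tokens=200,
)
print(response["choices"][0]["message"]["content"])
```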