
MN-VelvetCafe-RP-12B-V2

Thanks for the feedback: it helped fix two main issues in the previous release
  • Bad preset ("Iggy's-RP-Preset")

    • Apologies if you used it
    • DRY sampler settings were wrong (different from what I actually tested)
    • Likely caused by duplicating/renaming in SillyTavern; my mistake
  • Wrong tokenizer for quants

    • Accidentally used Mistral Nemo tokenizer instead of the base (Neona) one
    • Caused formatting issues, especially under strong Dan's PE influence
    • Fixed by: re-merging with identical SLERP config + using correct tokenizer from the start
    • Then quantized (extensively tested on Q4_K_M)
  • Old preset behavior (dry_multiplier=1, rep_pen=1.12, freq_pen=0.1):

    • Early chat (first 10–30 messages): crisp, varied, nice emphasis formatting
    • Later chat: strong degradation
      • excessive bold + italic quoted speech
      • repetitive dramatic patterns
      • forced/unnatural prose
      • eventual chaos + noticeable quality drop
  • This version should feel much more consistent

    • Better format stability
    • Significantly less degradation in long roleplays (when using proper sampler settings)
  • Hope you enjoy the update; feedback always welcome!

  • Next goals

    • Experiment with other merge methods
    • Try adding a 3rd model to increase response variety and quality
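For intuition about why a setting like rep_pen=1.12 bites harder as the chat grows (more and more tokens in context get damped), here is a minimal standalone sketch of the classic CTRL-style repetition penalty. This is illustrative Python, not SillyTavern's or llama.cpp's actual sampler code:

```python
def apply_repetition_penalty(logits, prev_tokens, penalty=1.12):
    """Damp logits of tokens already present in the context:
    positive logits are divided by `penalty`, negative ones multiplied,
    so previously seen tokens become less likely either way."""
    out = list(logits)
    for t in set(prev_tokens):
        if out[t] > 0:
            out[t] /= penalty
        else:
            out[t] *= penalty
    return out

# token 0 already appeared in the chat, so its logit gets damped
logits = [2.0, 1.0, -1.0]
print(apply_repetition_penalty(logits, prev_tokens=[0], penalty=1.12))
```

The longer the roleplay, the larger `prev_tokens` becomes, which is one reason long chats drift stylistically under aggressive penalty settings.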

Static Quants:

https://huggingface.co/IggyLux/MN-VelvetCafe-RP-12B-V2-Q4_K_M
https://huggingface.co/IggyLux/MN-VelvetCafe-RP-12B-Q8_0-GGUF


About MN-VelvetCafe-RP-12B-V2

This is my 5th merge attempt. I'm personally limited to 12B models due to 8GB VRAM. My preferred RP focuses on multi-character group chats (2+ characters).

What makes this merge stand out:

  • Excellent scene/position/clothing tracking → immersive long-term RP
  • Balanced, narrative-appropriate emotions (no random aggression/refusals)
  • Reliable handling of author's notes & system prompts

Goal: Combine Dan's PE (strong character/clothes/personality consistency) with Neona (great style adaptation & instruction following) → visually detailed, consistent RP without losing emotional stability

Big thanks to the creators; highly recommend trying both bases:

  • kyx0r/Neona-12B
  • PocketDoc/Dans-PersonalityEngine-V1.3.0-12b

Preferred SillyTavern Templates:

  • ChatML
  • Mistral V3-Tekken
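For reference, ChatML wraps each turn in `<|im_start|>`/`<|im_end|>` markers, with the final assistant turn left open so the model continues from there. A minimal sketch of assembling such a prompt (generic ChatML; your frontend's exact template may add or trim whitespace):

```python
def chatml_prompt(messages):
    """Render a list of {role, content} dicts as a ChatML prompt string."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    # leave the assistant turn open so the model writes the next reply
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

print(chatml_prompt([
    {"role": "system", "content": "You are the narrator."},
    {"role": "user", "content": "Describe the cafe."},
]))
```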


Feel free to use the preset as a base, but tweak it to match your taste; everyone's RP style is different!


Character Cards & Roleplay Usage/Examples

My approach (I call this the "Iggy format" since I rarely see others do it this way):

  • I avoid putting opening messages on character cards
  • I avoid example dialogue entirely, especially any that speaks for the user or introduces third parties

Why skip these:

  • Heavy example dialogue (especially interviewer-style) can make the model:
    • Introduce unwanted extra characters
    • Start speaking as the interviewer
    • Or even speak for your character
    • Example of problematic example-dialogue style (image)

Outdated bloated formats I skip:

  • 1500–2000 token cards full of P-lists, rigid bullets, personality tables, etc.
    • Example of dense P-list style (image)
  • Modern models read standard plain-text formatting just fine; no need for over-engineered lists anymore
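For contrast, a plain-prose card in the style described above might look like this (Mira is a made-up character, purely for illustration):

```
Mira is the cafe's night-shift barista: late twenties, dry humor,
fiercely protective of the regulars. She wears a wine-red apron over
a black turtleneck and keeps a notebook of overheard conversations.
She speaks in short, warm sentences and deflects personal questions.
```

A few hundred tokens of natural prose like this gives modern 12B models everything a rigid P-list would, without the formatting bleed-through.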

How I actually start RPs in SillyTavern:

  1. Use /sys to send a narrative summary first (sets scenario before any messages)
  2. Let the character send the real first message
  3. If context limit pushes the /sys prompt out:
    • Repurpose it into the Group Chat Scenario field
    • Or condense into Author's Notes
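The /sys-first flow maps cleanly onto the standard chat-message structure: scenario as a system turn, then the character's real opening. A minimal sketch with hypothetical scenario text (OpenAI-style message dicts, which is also how most frontends represent the history internally):

```python
def build_opening(scenario, first_message):
    """Mirror the /sys-first pattern: a narrative system turn to set
    the scene, followed by the character's actual first message."""
    return [
        {"role": "system", "content": scenario},
        {"role": "assistant", "content": first_message},
    ]

msgs = build_opening(
    "Rain hammers the windows of the Velvet Cafe; two regulars share a booth.",
    '*Mira slides a fresh cup across the counter.* "Rough night out there."',
)
```

Because the scenario lives in its own system turn rather than the card, it is easy to move into the Group Chat Scenario field or Author's Notes once context pressure pushes it out.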


This keeps immersion high and context efficient, especially for group/multi-character RP.

Years of low-VRAM local RP taught me these habits. If you hit any snags or want help troubleshooting, just ask!


This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the SLERP merge method.

Models Merged

The following models were included in the merge:

  • kyx0r/Neona-12B
  • PocketDoc/Dans-PersonalityEngine-V1.3.0-12b

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: kyx0r/Neona-12B
  - model: PocketDoc/Dans-PersonalityEngine-V1.3.0-12b
merge_method: slerp
base_model: kyx0r/Neona-12B
parameters:
  t:
    - value: 0.2
    - filter: self_attn
      value: [0, 0.2, 0.4, 0.6, 0.8, 1]
    - filter: mlp
      value: [1, 0.8, 0.6, 0.4, 0.2, 0]
dtype: bfloat16
chat_template: "chatml"
tokenizer:
  source: "base"
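For intuition about the method: SLERP interpolates along the arc between two weight vectors rather than the straight line, so intermediate points roughly preserve the original norm. A toy sketch with plain Python lists (mergekit applies this per tensor, using the `t` schedule above; this is not mergekit's implementation):

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two vectors."""
    dot = sum(a * b for a, b in zip(v0, v1))
    n0 = math.sqrt(sum(a * a for a in v0))
    n1 = math.sqrt(sum(b * b for b in v1))
    cos_omega = max(-1.0, min(1.0, dot / (n0 * n1 + eps)))
    omega = math.acos(cos_omega)
    if omega < eps:  # nearly parallel vectors: fall back to plain lerp
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * omega) / math.sin(omega)
    s1 = math.sin(t * omega) / math.sin(omega)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

# t=0 returns the base model's weights, t=1 the other model's;
# the per-layer t lists above blend self_attn and mlp in opposite directions
print(slerp(0.5, [1.0, 0.0], [0.0, 1.0]))
```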