Over the last year, Mag Mell has enjoyed a consistent presence in the roleplay scene, with 100,000 total downloads on the full-precision weights, and likely more on the quantized versions (though I don't control those repos, so I can't see their numbers). (Not bad for a craft-produced merge spun up by someone who barely understood what a tensor was at the time...)
Since then, I've become the de-facto community manager for the allura-org organization, a group I'm immensely proud of for what we've achieved in the last year: a space built by and for queer, trans, and gender-nonconforming members of the AI space, across all kinds of use cases. (This has taken up basically all my mental bandwidth, which is the primary reason I haven't done more LLMs! That and Factorio. IYKYK.)
When I picked my name, I expected to burn it within a couple of months. Instead, it's become something of a rent-lowering bulwark, attracting the type of people we want and deflecting the ones we don't, and I've felt more comfortable being my whole weirdo self in the space as a result. While this hasn't resulted in much that I can share on Hugging Face, it's been a very fun and fulfilling time regardless.
If anybody reading this is a fan of MM or anything else I do, or just has something funny to ask, feel free to leave questions in the comments. I may reply directly, or I may collect them into a blog post on the 16th. We'll see how the next week goes for me.
Pressure from far-right activist groups is pushing payment processors to crack down on adult content on mainstream platforms (where previously such crackdowns were restricted in scope to adult platforms), starting with Steam, and now itch.io. We saw something similar with Civit recently, except they were unlucky enough to have partnered with a payment processor that didn't care that they complied, and pulled out anyway. A friend of mine recently lost all her income because of this kind of thing (and the Itch move badly hurt other people in my circles).
This is pretty far removed from a lot of the people reading this, and I don't want to tell you how to feel about it happening, *but all the same, don't assume that Hugging Face is going to be safe because it's an "academic-coded" platform.* If you make things for mature audiences here, please make sure you have local backups of your data and models.
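If you're not sure how to do that, here's a minimal sketch using `snapshot_download` from `huggingface_hub`, which mirrors a repo to a plain local folder (the repo names below are placeholders; swap in your own):

```python
# Minimal backup sketch: mirror a Hugging Face repo to local disk.
# Assumes `pip install huggingface_hub`; repo names below are placeholders.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="your-username/your-model",    # the repo you want to back up
    repo_type="model",                     # use "dataset" for datasets
    local_dir="./backups/your-model",      # real files, not just cache blobs
)
```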
Alfitaria/Q25-1.5B-VeoLu Q2.5-1.5-VeoLu is a 1.5-billion-parameter general-purpose creative model trained on Qwen2.5-1.5B-Instruct. Intended mostly as an educational exercise for myself, Veo Lu nevertheless manages to be usable most of the time, while also being light enough to potentially run on a smartphone.
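If you want to give it a spin, here's a minimal sketch using the transformers pipeline (the prompt and sampling settings are placeholders, not tuned recommendations):

```python
# Quick-and-dirty sketch for trying Veo Lu locally with transformers.
# The prompt and sampling settings are illustrative, not recommendations.
from transformers import pipeline

pipe = pipeline("text-generation", model="Alfitaria/Q25-1.5B-VeoLu")
out = pipe(
    "The lighthouse keeper had one rule:",  # placeholder prompt
    max_new_tokens=64,
    do_sample=True,
)
print(out[0]["generated_text"])
```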
THANK YOU for bringing Mag Mell to 10,000 downloads across its quantizations!! I'm over the moon with how well it's done, and with everyone's kind feedback.
I'm in a team now! Allura are a group of alumni from various reaches of the LLM roleplay scene. allura-org
!!SEE UPDATE BELOW!! I don't know who still needs to hear this, but if you're using Mistral Nemo-based models, you might have been using the wrong completions format. This is a signal boost from MarinaraSpaghetti's model card for NemoMix-Unleashed: MarinaraSpaghetti/NemoMix-Unleashed-12B. A lot of people have been working with a version of Nemo that's been reconfigured for ChatML, and while that works great, simply using the right format might be just as effective at correcting the weirdness that people in the AIRP scene sometimes run into with Nemo.
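If you're scripting against the model directly, one way to sidestep template guesswork is to let the tokenizer build the prompt for you. A minimal sketch, assuming the official repo's bundled chat template is the format you want:

```python
# Let the tokenizer's bundled chat template produce the correct Nemo format,
# rather than hand-rolling [INST] tags (or worse, guessing at ChatML).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-Nemo-Instruct-2407")

messages = [
    {"role": "user", "content": "Write the opening line of a sea shanty."},
]

# apply_chat_template reads the template shipped in the tokenizer config,
# so the output matches the format the model was actually trained on.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```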
Huge ups to Marinara for pointing this out, and to the MistralAI team member who let her know.
PRs for KoboldCPP's chat adapters and KoboldAI Lite *have been merged* and are coming in their respective releases (probably the next time KoboldCPP updates -- it didn't make it for 1.75.1, but you could just grab 'em from the repo!)
inflatebot/MN-12B-Mag-Mell-R1 MN-12B-Mag-Mell is a multi-stage merge, inspired by hypermerges like Tiefighter and Umbral Mind, intended for use as a general-purpose "Best of Nemo" model for co-writing, roleplay, and text adventures.
Mag Mell consistently produced prose that shocked testers, with a minimum of "slop". It also exhibited a unique sense of humor, and a propensity for inserting bespoke details into adventuring scenarios.
Anybody ever play Final Fantasy: Crystal Chronicles? Like, *really* play it?
Mag Mell has been in my head recently. What a place that was.
Those cocoons looked like I could lay down inside of one, and it would be the most powerful sleep of a lifetime, with dreams that would last one thousand years, and I'd wake up with the wisdom of generations.
I wanted to share an experiment I did recently with upcycling Phi-3 Mini into an MoE. While the benchmark differences are definitely within a margin of error and the two models performed similarly, I think it's an interesting base to try and see if you can improve Phi's performance! (Maybe looking into HuggingFaceFW/fineweb-edu could be interesting; I also left some other notes if anyone with more compute access wants to try it themselves.)
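For anyone unfamiliar with what "upcycling" means here, a toy sketch of the general idea in plain PyTorch (not Phi-3-specific; all names and sizes are illustrative): every expert in the new MoE layer starts as a copy of the dense model's FFN, and only the router is trained from scratch.

```python
# Toy upcycling sketch: dense FFN -> MoE where each expert begins as a
# copy of the pretrained FFN weights. Plain PyTorch, purely illustrative.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseFFN(nn.Module):
    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.up = nn.Linear(d_model, d_ff)
        self.down = nn.Linear(d_ff, d_model)

    def forward(self, x):
        return self.down(F.gelu(self.up(x)))


class UpcycledMoE(nn.Module):
    def __init__(self, dense_ffn: DenseFFN, num_experts: int, top_k: int = 2):
        super().__init__()
        # Each expert is a deep copy of the pretrained dense FFN.
        self.experts = nn.ModuleList(
            copy.deepcopy(dense_ffn) for _ in range(num_experts)
        )
        # The router is the only freshly-initialized component.
        self.router = nn.Linear(dense_ffn.up.in_features, num_experts)
        self.top_k = top_k

    def forward(self, x):  # x: (tokens, d_model)
        weights = torch.softmax(self.router(x), dim=-1)
        topw, topi = weights.topk(self.top_k, dim=-1)
        topw = topw / topw.sum(dim=-1, keepdim=True)  # renormalize top-k
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = topi[:, k] == e
                if mask.any():
                    out[mask] += topw[mask, k, None] * expert(x[mask])
        return out


dense = DenseFFN(d_model=64, d_ff=256)
moe = UpcycledMoE(dense, num_experts=4)
print(moe(torch.randn(8, 64)).shape)  # torch.Size([8, 64])
```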
Is anyone looking into some sort of decentralized/federated dataset generation or classification, done by humans instead of synthetically?
From my experience trying models, a *lot* of modern finetunes are trained on what amounts to, in essence, GPT-4-generated slop that makes everything sound like a rip-off GPT-4 (see e.g. the Dolphin finetunes). I have a feeling that this is a lot of the reason community finetunes haven't been quite as successful as Meta's own instruct tunes of Llama 3.