⭐ Text Classification Model – DistilBERT

A lightweight, efficient DistilBERT-based model for binary text classification tasks such as sentiment analysis and opinion detection.

This repository contains:

  • ✔ A clean inference script (model.py)
  • ✔ A training script for fine-tuning (train.py)
  • ✔ A configuration file (config.json)
  • ✔ A model card (model_card.md)
  • ✔ Example input samples (example_inputs.txt)
  • ✔ requirements.txt for dependencies

🚀 Features

  • Built on DistilBERT, which is faster and has a smaller memory footprint than full BERT.
  • Easy to fine-tune on any binary text dataset.
  • Works on CPU, GPU, and cloud platforms.
  • Minimal and beginner-friendly structure.

📂 Repository Structure

```text
.
├── README.md
├── requirements.txt
├── model.py
├── train.py
├── config.json
├── model_card.md
└── example_inputs.txt
```

🛠 Installation

Create a virtual environment (recommended) and install dependencies:

```shell
python -m venv venv
source venv/bin/activate   # Unix/macOS
venv\Scripts\activate      # Windows

pip install -r requirements.txt
```
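The pinned dependencies live in requirements.txt, which ships with the repository. For a DistilBERT fine-tuning project like this one, the set typically looks roughly as follows (this list is an assumption for illustration, not a copy of the actual file):

```text
transformers
torch
```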

🔍 Inference Example

```python
from model import load_model_and_predict, model_predict

model, tokenizer = load_model_and_predict(load_only=True)
output = model_predict("This movie was fantastic!", model, tokenizer)
print(output)
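Whatever exact shape `output` takes, binary classifiers of this kind ultimately map two raw logits to a label via softmax and argmax. A minimal, pure-Python sketch of that last step (the logit values and label names below are illustrative assumptions, not outputs of this model):

```python
import math

def logits_to_label(logits, labels=("NEGATIVE", "POSITIVE")):
    """Convert a pair of raw logits into (label, confidence)."""
    # Softmax: exponentiate, then normalize to probabilities.
    exps = [math.exp(x) for x in logits]
    probs = [e / sum(exps) for e in exps]
    # Argmax: pick the index of the most probable class.
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]

label, confidence = logits_to_label([-1.2, 3.4])
print(label, round(confidence, 3))  # here the positive class wins
```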

🧠 Model Details

  • Model name: text-classification-distilbert
  • Base Model: distilbert-base-uncased
  • Task: Binary Text Classification
  • Language: English
  • License: Apache-2.0
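The repository's config.json is not reproduced here, but a binary DistilBERT classifier's configuration typically records the base checkpoint and a label mapping along these lines (every value below is an illustrative assumption, not copied from the actual file):

```json
{
  "base_model": "distilbert-base-uncased",
  "num_labels": 2,
  "id2label": {"0": "NEGATIVE", "1": "POSITIVE"},
  "label2id": {"NEGATIVE": 0, "POSITIVE": 1},
  "max_length": 128
}
```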

🧪 Training

```shell
python train.py
```

📌 Example Inputs

```text
I really like this!
This is absolutely terrible.
```
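To run the model over every line of example_inputs.txt in one pass, read the file, skip blank lines, and apply the prediction function to each entry. A sketch (`classify_file` is a hypothetical helper; the lambda below is a trivial keyword stand-in for a real call like `model_predict(text, model, tokenizer)`):

```python
from pathlib import Path

def classify_file(path, predict):
    """Apply `predict` to each non-empty line of a text file."""
    lines = Path(path).read_text(encoding="utf-8").splitlines()
    return [(text, predict(text)) for text in (line.strip() for line in lines) if text]

# Demo with a trivial keyword-based stand-in for the real classifier.
sample = Path("example_inputs_demo.txt")
sample.write_text("I really like this!\n\nThis is absolutely terrible.\n", encoding="utf-8")
results = classify_file(sample, lambda t: "POSITIVE" if "like" in t else "NEGATIVE")
for text, label in results:
    print(f"{label}: {text}")
```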

⚠️ Limitations

  • Binary labels only
  • Performance depends on training data
  • Possible dataset bias