Text Classification Model (DistilBERT)
A lightweight, efficient DistilBERT-based text classification model for binary classification tasks such as sentiment analysis, opinion detection, or other simple natural-language classification projects.
This repository contains:
- A clean inference script (model.py)
- A training script for fine-tuning (train.py)
- Configuration files (config.json)
- A model card (model_card.md)
- Example input samples (example_inputs.txt)
- requirements.txt for dependencies
Features
- Built on DistilBERT, optimized for speed and a smaller memory footprint compared to full BERT.
- Easy to fine-tune on any binary text dataset.
- Works on CPU, GPU, and cloud platforms.
- Minimal and beginner-friendly structure.
Repository Structure
.
├── README.md
├── requirements.txt
├── model.py
├── train.py
├── config.json
├── model_card.md
└── example_inputs.txt
Installation
Create a virtual environment (recommended) and install dependencies:
python -m venv venv
source venv/bin/activate   # Unix/macOS
venv\Scripts\activate      # Windows
pip install -r requirements.txt
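The contents of requirements.txt are not reproduced here; for a DistilBERT-based project it would typically pin at least the Transformers library and a backend such as PyTorch (the version bounds below are illustrative, not taken from the repository):

```
transformers>=4.30
torch>=2.0
```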
Inference Example
from model import load_model_and_predict, model_predict

# Load the fine-tuned model and tokenizer without running a prediction
model, tokenizer = load_model_and_predict(load_only=True)

# Classify a single input string
output = model_predict("This movie was fantastic!", model, tokenizer)
print(output)
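Under the hood, a binary classification head produces two raw logits that must be converted to a label and a confidence score. A minimal sketch of that post-processing step, assuming index 0 is the negative class and index 1 the positive class (the actual label mapping lives in config.json):

```python
import math

def logits_to_label(logits, labels=("NEGATIVE", "POSITIVE")):
    """Convert raw logits to (label, confidence) via a numerically stable softmax."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # subtract max to avoid overflow
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]

label, confidence = logits_to_label([-1.2, 3.4])
print(label, round(confidence, 3))  # POSITIVE 0.99
```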
Model Details
- Model name: text-classification-distilbert
- Base Model: distilbert-base-uncased
- Task: Binary Text Classification
- Language: English
- License: Apache-2.0
Training
python train.py
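train.py fine-tunes the model on a binary-labeled text dataset, which means the data must first be shuffled and split into training and validation sets. A minimal sketch of that preparation step, using illustrative (text, label) pairs rather than the repository's actual data loading code:

```python
import random

def train_val_split(examples, val_fraction=0.2, seed=42):
    """Shuffle (text, label) pairs and split off a validation set."""
    rng = random.Random(seed)
    shuffled = examples[:]
    rng.shuffle(shuffled)
    n_val = max(1, int(len(shuffled) * val_fraction))
    return shuffled[n_val:], shuffled[:n_val]

data = [
    ("I really like this!", 1),
    ("This is absolutely terrible.", 0),
    ("Great product.", 1),
    ("Would not recommend.", 0),
    ("Loved it.", 1),
]
train, val = train_val_split(data)
print(len(train), len(val))  # 4 1
```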
Example Inputs
I really like this!
This is absolutely terrible.
Limitations
- Supports binary labels only.
- Performance depends heavily on the quality of the training data.
- May inherit biases present in the training dataset.