---
language:
- vi
library_name: transformers
license: mit
pipeline_tag: text-classification
tags:
- SemViQA
- three-class-classification
- fact-checking
---

# SemViQA-TC: Vietnamese Three-class Classification for Claim Verification

## Model Description

The rise of misinformation, exacerbated by Large Language Models (LLMs) such as GPT and Gemini, demands robust fact-checking solutions, especially for low-resource languages like Vietnamese. Existing methods struggle with semantic ambiguity, homonyms, and complex linguistic structures, often trading accuracy for efficiency. We introduce SemViQA, a novel Vietnamese fact-checking framework integrating Semantic-based Evidence Retrieval (SER) and Two-step Verdict Classification (TVC). Our approach balances precision and speed, achieving state-of-the-art results with 78.97% strict accuracy on ISE-DSC01 and 80.82% on ViWikiFC, securing 1st place in the UIT Data Science Challenge. Additionally, SemViQA Faster improves inference speed 7x while maintaining competitive accuracy. SemViQA sets a new benchmark for Vietnamese fact verification, advancing the fight against misinformation.

**SemViQA-TC** is one of the key components of the **SemViQA** system, designed for **three-class classification** in Vietnamese fact-checking. Given retrieved evidence, the model classifies a claim into one of three categories: **SUPPORTED**, **REFUTED**, or **NOT ENOUGH INFORMATION (NEI)**.

To address these challenges, SemViQA integrates:

- **Semantic-based Evidence Retrieval (SER)**: combines **TF-IDF** with a **Question Answering Token Classifier (QATC)** to enhance retrieval precision while reducing inference time.
- **Two-step Verdict Classification (TVC)**: uses hierarchical classification optimized with **Cross-Entropy and Focal Loss**, improving claim verification across three categories:
  - **Supported** ✅
  - **Refuted** ❌
  - **Not Enough Information (NEI)** 🤷‍♂️

### **Model Information**

- **Developed by:** [SemViQA Research Team](https://huggingface.co/SemViQA)
- **Fine-tuned model:** [InfoXLM](https://huggingface.co/microsoft/infoxlm-large)
- **Supported Language:** Vietnamese
- **Task:** Three-Class Classification (Fact Verification)
- **Dataset:** [ViWikiFC](https://arxiv.org/abs/2405.07615)

SemViQA-TC serves as the **first step in the two-step classification process** of the SemViQA system. It initially categorizes claims into three classes: **SUPPORTED**, **REFUTED**, or **NEI**. For claims classified as **SUPPORTED** or **REFUTED**, a secondary **binary classification model (SemViQA-BC)** further refines the prediction. This hierarchical classification strategy enhances the accuracy of fact verification.

### **🏆 Achievements**

- **1st place** in the **UIT Data Science Challenge** 🏅
- **State-of-the-art** performance on:
  - **ISE-DSC01** → **78.97% strict accuracy**
  - **ViWikiFC** → **80.82% strict accuracy**
- **SemViQA Faster**: **7x speed improvement** over the standard model 🚀

These results establish **SemViQA** as a **benchmark for Vietnamese fact verification**, advancing efforts to combat misinformation and ensure **information integrity**.

## Usage Example

### Direct Model Usage

```python
# Install semviqa first:
#   pip install semviqa

# Initialize the tokenizer and model
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer
from semviqa.tvc.model import ClaimModelForClassification

tokenizer = AutoTokenizer.from_pretrained("SemViQA/tc-infoxlm-viwikifc")
model = ClaimModelForClassification.from_pretrained("SemViQA/tc-infoxlm-viwikifc")

claim = "Chiến tranh với Campuchia đã kết thúc trước khi Việt Nam thống nhất."
evidence = "Sau khi thống nhất, Việt Nam tiếp tục gặp khó khăn do sự sụp đổ và tan rã của đồng minh Liên Xô cùng Khối phía Đông, các lệnh cấm vận của Hoa Kỳ, chiến tranh với Campuchia, biên giới giáp Trung Quốc và hậu quả của chính sách bao cấp sau nhiều năm áp dụng."

inputs = tokenizer(
    claim,
    evidence,
    truncation="only_second",
    add_special_tokens=True,
    max_length=256,
    padding='max_length',
    return_attention_mask=True,
    return_token_type_ids=False,
    return_tensors='pt',
)

labels = ["NEI", "SUPPORTED", "REFUTED"]

with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs["logits"]
    probabilities = F.softmax(logits, dim=1).squeeze()

for i, (label, prob) in enumerate(zip(labels, probabilities.tolist()), start=1):
    print(f"{i}) {label} {prob:.4f}")

# 1) NEI 0.0001
# 2) SUPPORTED 0.0001
# 3) REFUTED 0.9998
```

## **Evaluation Results**

SemViQA-TC is a key component of the Two-step Verdict Classification (TVC) stage of the SemViQA system. It achieves strong results on the test set, demonstrating accurate and efficient classification. The detailed evaluation is presented in the table below.
Results on **ViWikiFC**. ER = evidence retrieval method, VC = verdict classification model; bold section rows group the methods being compared.

| ER | VC | Strict Acc | VC Acc | ER Acc | Time (s) |
|---|---|---|---|---|---|
| TF-IDF | InfoXLM<sub>large</sub> | 75.56 | 82.21 | 90.15 | 131 |
| TF-IDF | XLM-R<sub>large</sub> | 76.47 | 82.78 | 90.15 | 134 |
| TF-IDF | Ernie-M<sub>large</sub> | 75.56 | 81.83 | 90.15 | 144 |
| BM25 | InfoXLM<sub>large</sub> | 70.44 | 79.01 | 83.50 | 130 |
| BM25 | XLM-R<sub>large</sub> | 70.97 | 78.91 | 83.50 | 132 |
| BM25 | Ernie-M<sub>large</sub> | 70.21 | 78.29 | 83.50 | 141 |
| SBert | InfoXLM<sub>large</sub> | 74.99 | 81.59 | 89.72 | 195 |
| SBert | XLM-R<sub>large</sub> | 75.80 | 82.35 | 89.72 | 194 |
| SBert | Ernie-M<sub>large</sub> | 75.13 | 81.44 | 89.72 | 203 |
| **QA-based approaches** | **VC** | | | | |
| ViMRC<sub>large</sub> | InfoXLM<sub>large</sub> | 77.28 | 81.97 | 92.49 | 3778 |
| ViMRC<sub>large</sub> | XLM-R<sub>large</sub> | 78.29 | 82.83 | 92.49 | 3824 |
| ViMRC<sub>large</sub> | Ernie-M<sub>large</sub> | 77.38 | 81.92 | 92.49 | 3785 |
| InfoXLM<sub>large</sub> | InfoXLM<sub>large</sub> | 78.14 | 82.07 | 93.45 | 4092 |
| InfoXLM<sub>large</sub> | XLM-R<sub>large</sub> | 79.20 | 83.07 | 93.45 | 4096 |
| InfoXLM<sub>large</sub> | Ernie-M<sub>large</sub> | 78.24 | 82.21 | 93.45 | 4102 |
| **LLM** | | | | | |
| Qwen2.5-1.5B-Instruct | | 51.03 | 65.18 | 78.96 | 7665 |
| Qwen2.5-3B-Instruct | | 44.38 | 62.31 | 71.35 | 12123 |
| **LLM** | **VC** | | | | |
| Qwen2.5-1.5B-Instruct | InfoXLM<sub>large</sub> | 66.14 | 76.47 | 78.96 | 7788 |
| Qwen2.5-1.5B-Instruct | XLM-R<sub>large</sub> | 67.67 | 78.10 | 78.96 | 7789 |
| Qwen2.5-1.5B-Instruct | Ernie-M<sub>large</sub> | 66.52 | 76.52 | 78.96 | 7794 |
| Qwen2.5-3B-Instruct | InfoXLM<sub>large</sub> | 59.88 | 72.50 | 71.35 | 12246 |
| Qwen2.5-3B-Instruct | XLM-R<sub>large</sub> | 60.74 | 73.08 | 71.35 | 12246 |
| Qwen2.5-3B-Instruct | Ernie-M<sub>large</sub> | 60.02 | 72.21 | 71.35 | 12251 |
| **SER Faster (ours)** | **TVC (ours)** | | | | |
| TF-IDF + ViMRC<sub>large</sub> | Ernie-M<sub>large</sub> | 79.44 | 82.93 | 94.60 | 410 |
| TF-IDF + InfoXLM<sub>large</sub> | Ernie-M<sub>large</sub> | 79.77 | 83.07 | 95.03 | 487 |
| **SER (ours)** | **TVC (ours)** | | | | |
| TF-IDF + ViMRC<sub>large</sub> | InfoXLM<sub>large</sub> | 80.25 | 83.84 | 94.69 | 2731 |
| TF-IDF + ViMRC<sub>large</sub> | XLM-R<sub>large</sub> | 80.34 | 83.64 | 94.69 | 2733 |
| TF-IDF + ViMRC<sub>large</sub> | Ernie-M<sub>large</sub> | 79.53 | 82.97 | 94.69 | 2733 |
| TF-IDF + InfoXLM<sub>large</sub> | InfoXLM<sub>large</sub> | 80.68 | 83.98 | 95.31 | 3860 |
| TF-IDF + InfoXLM<sub>large</sub> | **XLM-R<sub>large</sub>** | **80.82** | 83.88 | 95.31 | 3843 |
| TF-IDF + InfoXLM<sub>large</sub> | Ernie-M<sub>large</sub> | 80.06 | 83.17 | 95.31 | 3891 |
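The SER stage described above pairs coarse TF-IDF retrieval with QATC re-ranking. A minimal sketch of the TF-IDF stage only, using scikit-learn; the function name `rank_evidence` and the `top_k` default are illustrative, not part of the `semviqa` package, and the QATC re-ranking step is omitted:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_evidence(claim, sentences, top_k=2):
    """Rank candidate evidence sentences for a claim by TF-IDF
    cosine similarity (the coarse first stage of SER)."""
    vectorizer = TfidfVectorizer()
    # Row 0 is the claim; the remaining rows are the candidates
    matrix = vectorizer.fit_transform([claim] + sentences)
    scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
    ranked = sorted(zip(sentences, scores), key=lambda x: -x[1])
    return ranked[:top_k]

claim = "the cat sat on the mat"
candidates = [
    "a dog ran in the park",
    "the cat sat on a mat yesterday",
]
for sentence, score in rank_evidence(claim, candidates, top_k=2):
    print(f"{score:.3f}  {sentence}")
```

In the full SER pipeline, only the sentences surviving this cheap lexical filter are passed to the QATC model, which is where the reported speedup over purely QA-based retrieval comes from.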
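The TVC stage is optimized with a combination of Cross-Entropy and Focal Loss. As a minimal PyTorch sketch of a multi-class focal loss: the `gamma=2.0` default and the optional `alpha` class weights are illustrative choices, not SemViQA's published training configuration:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=None):
    """Multi-class focal loss.

    Down-weights well-classified examples by (1 - p_t)**gamma so
    training focuses on hard, ambiguous claims. With gamma=0 this
    reduces to standard cross-entropy.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    # Pick out the log-prob / prob of the target class per example
    target_log_probs = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    target_probs = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    loss = -((1.0 - target_probs) ** gamma) * target_log_probs
    if alpha is not None:
        loss = loss * alpha[targets]  # optional per-class weighting
    return loss.mean()

# Toy batch: 3 claims, 3 classes (NEI, SUPPORTED, REFUTED)
logits = torch.tensor([[2.0, 0.5, 0.1],
                       [0.2, 3.0, 0.1],
                       [0.1, 0.2, 2.5]])
targets = torch.tensor([0, 1, 2])
print(focal_loss(logits, targets, gamma=2.0))
```

Because the modulating factor `(1 - p_t)**gamma` is at most 1, the focal loss never exceeds the cross-entropy on the same batch; confident correct predictions contribute almost nothing, which helps on the imbalanced NEI class.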