---
license: cc-by-4.0
tags:
  - malware
  - cybersecurity
  - ATT&CK
  - MBC
  - pe-files
  - elf-files
  - binary-classification
  - tabular-data
  - threat-intelligence
  - digital-forensics
  - reverse-engineering
  - incident-response
  - security-telemetry
  - ai-security
  - security-ml
  - mitre-attack
  - mitre-mbc
  - windows
  - linux
  - executable-files
  - static-analysis
  - behavioral-analysis
  - classification
  - anomaly-detection
  - intrusion-detection
  - explainable-ai
  - model-evaluation
  - benchmarking
  - training
  - evaluation
  - research
  - education
  - teaching
pretty_name: traceix-ai-telemetry
---

# Traceix AI Security Telemetry

Each dataset is a JSONL file where each line describes a single file analyzed by Traceix. For every file you get the following fields (a truncated example record is sketched after the list):

- `file_capabilities` – high-level behaviors and capabilities (CAPA-style, mapped to ATT&CK and MBC tags such as Execution/T1129, Discovery/T1083, etc.).
- `file_exif_data` – parsed EXIF metadata (file size, type, timestamps, company/product info, subsystem, linker/OS versions, etc.).
- `model_classification_info` – Traceix model verdict (safe / malicious), classification timestamp, and inference latency in seconds.
- `decrypted_training_data` – numeric feature vector actually used for training/inference (PE header fields, section statistics, import/resource counts, entropy stats, etc.).
- `metadata` – model version and accuracy, upload metadata (timestamp, SHA-256, license), and payment information (THRT amount, Solana transaction hash + explorer URL, price at time of payment).

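To make the schema concrete, here is a truncated record written as a Python dict. Only the top-level keys and the few sub-fields used in the training example below come from this card; every value, and any sub-field not mentioned elsewhere here, is invented purely for illustration.

```python
# Hypothetical, truncated record illustrating the top-level JSONL schema.
# Top-level keys follow the field list above; all values are made up, and
# sub-fields not mentioned elsewhere in this card are assumptions.
example_record = {
    "file_capabilities": ["Execution/T1129", "Discovery/T1083"],
    "file_exif_data": {"FileSize": "212 kB", "FileType": "Win32 EXE"},
    "model_classification_info": {
        "identified_class": "malicious",  # "safe" or "malicious"
        "inference_seconds": 0.042,
    },
    "decrypted_training_data": {
        "SizeOfCode": 131072,
        "SectionsMeanEntropy": 6.1,
        "ImportsNb": 87,
    },
    "metadata": {"model_version": "…", "sha256": "…"},
}
```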
All records are focused on malware analysis. Datasets are exported automatically by Traceix on a monthly schedule and published as-is under the CC BY 4.0 license.
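Since each month's export is a separate JSONL file, you can list what is currently available in the repo before picking a `data_files` value. A minimal sketch using `huggingface_hub` (the repo id is taken from the loading example below; the monthly file-naming pattern is assumed from that example):

```python
from huggingface_hub import list_repo_files

# List all files in the dataset repo and keep the JSONL exports.
# The monthly "traceix-telemetry-corpus-YYYY-MM" naming is assumed
# from the loading example below.
files = list_repo_files(
    "PerkinsFund/traceix-ai-security-telemetry",
    repo_type="dataset",
)
monthly = sorted(f for f in files if f.endswith(".jsonl"))
print(monthly)
```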

You can quickly load and sanity-check any monthly corpus using:

```python
from datasets import load_dataset
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


# Load one monthly Traceix telemetry corpus
ds = load_dataset(
    "PerkinsFund/traceix-ai-security-telemetry",
    data_files="traceix-telemetry-corpus-2025-12.jsonl",  # or whichever month you want
    split="train",
)

# Flatten the nested JSON records into dotted column names
df = ds.to_pandas()
df_flat = pd.json_normalize(df.to_dict(orient="records"))

# Pick a few numeric features and the label, following the schema above
feature_cols = [
    "decrypted_training_data.SizeOfCode",
    "decrypted_training_data.SectionsMeanEntropy",
    "decrypted_training_data.ImportsNb",
]

label_col = "model_classification_info.identified_class"

# Drop rows with missing features or labels (optional, but keeps the example simple)
df_flat = df_flat.dropna(subset=feature_cols + [label_col])

X = df_flat[feature_cols].values
y = (df_flat[label_col] == "malicious").astype(int)

# Hold out 20% of the records for testing
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Train a baseline file classifier and report held-out accuracy
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

print("Test accuracy:", clf.score(X_test, y_test))
```
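Accuracy alone can be misleading if safe and malicious records are imbalanced in a given month, so you may want per-class metrics as well. A small optional extension of the example above, using scikit-learn's `classification_report` (it assumes `clf`, `X_test`, and `y_test` from the previous snippet):

```python
from sklearn.metrics import classification_report

# Per-class precision/recall/F1 on the held-out split;
# label 0 corresponds to "safe" and label 1 to "malicious".
y_pred = clf.predict(X_test)
print(classification_report(y_test, y_pred, target_names=["safe", "malicious"]))
```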