---
license: cc-by-4.0
tags:
- malware
- cybersecurity
- ATT&CK
- MBC
- pe-files
- elf-files
- binary-classification
- tabular-data
- threat-intelligence
- digital-forensics
- reverse-engineering
- incident-response
- security-telemetry
- ai-security
- security-ml
- mitre-attack
- mitre-mbc
- windows
- linux
- executable-files
- static-analysis
- behavioral-analysis
- classification
- anomaly-detection
- intrusion-detection
- explainable-ai
- model-evaluation
- benchmarking
- training
- evaluation
- research
- education
- teaching
pretty_name: traceix-ai-telemetry
---

# Traceix AI Security Telemetry

Each dataset is a JSONL file where **each line describes a single file analyzed by [Traceix](https://traceix.com)**. For every file you get the following fields (a quick schema check follows the list):

* **`file_capabilities`** – high-level behaviors and capabilities (CAPA-style, mapped to ATT&CK and MBC tags such as `Execution/T1129` and `Discovery/T1083`).
* **`file_exif_data`** – parsed EXIF metadata (file size, type, timestamps, company/product info, subsystem, linker/OS versions, etc.).
* **`model_classification_info`** – [Traceix](https://traceix.com) model verdict (`safe` / `malicious`), classification timestamp, and inference latency in seconds.
* **`decrypted_training_data`** – numeric feature vector actually used for training/inference (PE header fields, section statistics, imports/resources counts, entropy stats, etc.).
* **`metadata`** – model version and accuracy, upload metadata (timestamp, SHA-256, license), and payment information (THRT amount, Solana transaction hash + explorer URL, price at time of payment).
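
A minimal sketch for sanity-checking that schema on a downloaded corpus file (the file name is one example month; the nested keys shown are the ones used by the training example further down):

```python
import json

# Peek at the first record of a downloaded monthly corpus file.
with open("traceix-telemetry-corpus-2025-12.jsonl") as f:
    record = json.loads(f.readline())

# The five top-level objects described above.
print(sorted(record))
# ['decrypted_training_data', 'file_capabilities', 'file_exif_data',
#  'metadata', 'model_classification_info']

# Nested fields referenced by the training example below:
print(record["model_classification_info"]["identified_class"])  # "safe" / "malicious"
print(record["decrypted_training_data"].get("SizeOfCode"))
```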

All records are focused on **malware analysis** and are stored in **JSONL format**.
Datasets are **automatically exported by [Traceix](https://traceix.com) on a monthly schedule** and published **as-is** under the **CC BY 4.0** license.

You can quickly load and sanity-check any monthly corpus using:

```python
from datasets import load_dataset
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


# Load the Traceix telemetry dataset
ds = load_dataset(
    "PerkinsFund/traceix-ai-security-telemetry",
    data_files="traceix-telemetry-corpus-2025-12.jsonl",  # Or whatever month you want
    split="train",
)

# Flatten the nested JSON objects into dotted column names
df = ds.to_pandas()
df_flat = pd.json_normalize(df.to_dict(orient="records"))

# Feature and label columns, per the schema described above
feature_cols = [
    "decrypted_training_data.SizeOfCode",
    "decrypted_training_data.SectionsMeanEntropy",
    "decrypted_training_data.ImportsNb",
]

label_col = "model_classification_info.identified_class"

# Drop rows with missing feature or label values
df_flat = df_flat.dropna(subset=feature_cols + [label_col])

X = df_flat[feature_cols].values
y = (df_flat[label_col] == "malicious").astype(int)

# Hold out 20% of the records for evaluation
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Train a baseline classifier and report held-out accuracy
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

print("Test accuracy:", clf.score(X_test, y_test))
```
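
Malware corpora are often imbalanced, so held-out accuracy alone can be misleading. Continuing from the snippet above, a quick sketch that checks the label balance and reports per-class precision and recall:

```python
from sklearn.metrics import classification_report

# Fraction of malicious records: with a heavy skew, even a trivial
# "always safe" classifier can score high on plain accuracy.
print("Malicious fraction:", y.mean())

# Per-class precision / recall / F1 on the held-out split.
y_pred = clf.predict(X_test)
print(classification_report(y_test, y_pred, target_names=["safe", "malicious"]))
```

If the classes turn out heavily skewed, passing `stratify=y` to `train_test_split` keeps the test split representative of the corpus.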