# UAV Drone Detection and Tracking

This project detects UAV drones in video using a fine-tuned YOLO detector and tracks their trajectories with a Kalman filter.
## Dataset

**Training dataset:** a labeled drone-detection dataset (referenced as `drone-detection-3` in the training commands below).

**Evaluation inputs:** two YouTube videos, used as unlabeled out-of-distribution (OOD) inputs.
Because the evaluation videos are unlabeled, formal metrics such as mAP, precision, or recall cannot be computed. Detection counts and confidence values are therefore reported as observational statistics rather than benchmark metrics.
## Detection Statistics on OOD Videos
| Model | Inference size (px) | Avg detections per video | Avg confidence |
|---|---|---|---|
| best.pt (baseline) | 640 | 345 | 0.5235 |
| finetuned.pt | 640 | 338 | 0.5610 |
| finetuned.pt | 1920 | 863.5 | 0.5518 |
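These observational statistics can be recomputed from the per-detection parquet rows described later in this document. A minimal sketch with hypothetical rows (the column names follow this project's schema; the confidence values are made up for illustration):

```python
import pandas as pd

# Hypothetical per-detection rows in this project's schema;
# the confidence values are illustrative, not real results.
df = pd.DataFrame({
    "video": ["drone_video_1.mp4"] * 3 + ["drone_video_2.mp4"] * 2,
    "confidence": [0.61, 0.55, 0.48, 0.52, 0.60],
})

# one row per video: detection count and mean confidence
per_video = df.groupby("video").agg(
    detections=("confidence", "size"),
    avg_conf=("confidence", "mean"),
)

avg_detections_per_video = per_video["detections"].mean()
avg_confidence = df["confidence"].mean()
print(avg_detections_per_video, round(avg_confidence, 4))
```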
The larger inference resolution significantly improved detection of very small drones in Video 2.
## Training Configuration

### Pretraining
```python
from ultralytics import YOLO

# Pretrain for 100 epochs (batch size 32) and keep the best checkpoint.
model = YOLO("yolo26n.yaml").load("yolo26n.pt")
results = model.train(
    data="drone-detection-3/data.yaml",
    batch=32,
    epochs=100,
    imgsz=640,
    device=0
)
```
### Fine-tuning
Fine-tuning was applied to improve detection of very small drones, which were often missed in the OOD evaluation videos.
Augmentations used:

- `hsv_h`, `hsv_s`, `hsv_v` – color distortions to encourage color-independent drone features
- `bgr` – randomly swaps the image color channels (RGB ↔ BGR)
- `scale` – improves robustness to variation in apparent drone size
```python
model = YOLO("yolo26n.yaml").load("best.pt")
results = model.train(
    data="drone-detection-3/data.yaml",
    hsv_h=0.015,
    hsv_s=0.7,
    hsv_v=0.4,
    scale=0.5,
    bgr=0.125,
    batch=32,
    epochs=50,
    imgsz=640,
    device=0
)
```
## Inference Configuration
The fine-tuned model produced 1727 total detections across both evaluation videos using the following inference configuration.
```python
results = model.predict(
    source=in_path,
    imgsz=1920,
    conf=0.2,
    stream=True,
    device=0
)
```
The inference resolution was increased from 640 → 1920 because the drones in Video 2 occupy only a few pixels in many frames.
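A rough back-of-envelope illustrates why. Assuming 1920×1080 source frames (an assumption, not stated above), Ultralytics scales the longest image side down to `imgsz` before inference, so a hypothetical 10 px drone shrinks to about 3 px at `imgsz=640` but keeps its full ~10 px at `imgsz=1920`:

```python
# Apparent target size after inference resizing.
# Assumptions: 1920x1080 source frames; resize factor is imgsz / max(w, h).
src_w, src_h = 1920, 1080
drone_px = 10  # hypothetical apparent drone width in the source frame

apparent = {imgsz: drone_px * imgsz / max(src_w, src_h) for imgsz in (640, 1920)}
print(apparent)  # at 640 the drone is ~3 px; at 1920 it keeps ~10 px
```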
## Detection Pipeline
The detection pipeline processes all .mp4 files in a directory, not just the two test videos.
```python
# The fluent ffmpeg.input(...).filter(...).output(...) API below comes from
# the ffmpeg-python package (not python-ffmpeg, which has a different API):
!pip install ffmpeg-python

import ffmpeg
from pathlib import Path

vids_path = Path("./videos")
vids = list(vids_path.glob("*.mp4"))
out_frames = Path("./data/frames/")
out_frames.mkdir(parents=True, exist_ok=True)

for vid in vids:
    # one JPEG every 0.2 s (5 fps), numbered per source video
    out_pattern = f"{out_frames}/{vid.stem}_frame_%04d.jpg"
    (
        ffmpeg
        .input(str(vid))
        .filter('fps', fps=5)
        .output(out_pattern)
        .run(overwrite_output=True, quiet=True)
    )
    print(f"{vid} processed into frames")
```
This assumes the videos were downloaded using the assignment-recommended method:
```bash
yt-dlp -f "bestvideo[ext=mp4]+bestaudio[ext=m4a]/best[ext=mp4]" \
  -o "drone_video_1.mp4" \
  "https://www.youtube.com/watch?v=DhmZ6W1UAv4"
```
## Parquet Schema
Detections are stored in a parquet dataset.
```python
data = {
    'video': pd.Series(dtype="string"),
    'timestamp': pd.Series(dtype="int64"),
    'confidence': pd.Series(dtype="float32"),
    'Bpred xywh': pd.Series(dtype="object")
}
```
Each row represents a detection with:

- source video (`video`)
- frame index (`timestamp`)
- model confidence (`confidence`)
- bounding box as `[center_x, center_y, width, height]` (`Bpred xywh`)
## Parquet Creation Pipeline
```python
import pandas as pd
from ultralytics import YOLO
from pathlib import Path

data = {
    'video': pd.Series(dtype="string"),
    'timestamp': pd.Series(dtype="int64"),
    'confidence': pd.Series(dtype="float32"),
    'Bpred xywh': pd.Series(dtype="object")
}
df = pd.DataFrame(data)

model = YOLO("./models/finetuned.pt")
out_dir = Path("./data/detections/")
in_path = "./data/frames/*.jpg"

results = model.predict(
    source=in_path,
    imgsz=1920,
    conf=0.2,
    stream=True,
    device=0
)

cnt = 0
for i, r in enumerate(results):
    frame_path = Path(r.path)
    # frames are named <video stem>_frame_NNNN.jpg by the extraction step
    video_name = frame_path.name.split('_frame_')[0] + ".mp4"
    # one row per detected box (handles zero, one, or many detections per frame)
    for c in range(len(r.boxes)):
        df.loc[cnt] = [
            video_name,
            i,
            r.boxes.conf[c].item(),
            r.boxes.xywh[c].tolist()
        ]
        cnt += 1
    # save the annotated frame, named after its source frame
    r.save(filename=f"{out_dir}/{frame_path.stem}_detections.jpg")

df.to_parquet(
    'DroneDetectionsAttallah.parquet',
    engine='pyarrow',
    compression='snappy'
)
print(f"Parquet file with {cnt} rows written to 'DroneDetectionsAttallah.parquet'.")
```
## Model Limitations and Observations
The model performs significantly better on Video 1, where the drone is closer to the camera and clearly visible.
Video 2 is much more difficult because the drone often occupies only a few pixels and frequently leaves the frame.
Increasing the inference resolution from 640 to 1920 significantly improved detection performance for these small targets.
Despite these improvements, the detector still occasionally:
- misses the drone when it becomes extremely small
- misclassifies small gaps between clouds as drones
Because the Kalman filter treats detector outputs as measurements, these false positives can cause the predicted trajectory to drift toward background features until a correct detection appears again.
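As a sketch of that measurement-update loop, here is a minimal constant-velocity Kalman filter in NumPy. The state is `[cx, cy, vx, vy]` over box centers, frames with no detection run the predict step only, and all noise settings are illustrative assumptions rather than this project's actual tuning:

```python
import numpy as np

dt = 0.2  # seconds between frames at the 5 fps extraction rate

F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # constant-velocity transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # we only measure position
Q = np.eye(4) * 1e-2                        # process noise (assumed)
R = np.eye(2) * 5.0                         # measurement noise in px^2 (assumed)

x = np.zeros(4)        # state estimate [cx, cy, vx, vy]
P = np.eye(4) * 100.0  # state covariance (large initial uncertainty)

def kf_step(x, P, z=None):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    if z is not None:  # update only when the detector fired this frame
        y = z - H @ x                      # innovation
        S = H @ P @ H.T + R                # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P
    return x, P

# detection, missed frame, detection
for z in [np.array([100.0, 200.0]), None, np.array([104.0, 198.0])]:
    x, P = kf_step(x, P, z)
print(x[:2])  # filtered box center after three frames
```

A false positive fed in as `z` pulls the state the same way a true detection would, which is exactly the drift behavior described above.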
The model performed well on the labeled training/validation/test splits of the dataset, but because the evaluation videos are unlabeled OOD inputs, improvements are reported qualitatively rather than through formal metrics.
In particular, the fine-tuned model with larger inference resolution produced visibly more consistent detections on Video 2, where the baseline model frequently failed to detect the drone at all.