# BrainSegFounder: UK Biobank Brain MRI Foundation Models

## Overview
This repository contains pretrained model weights derived from UK Biobank MRI data and downstream medical imaging datasets. These weights correspond to the self-supervised pretraining and supervised finetuning stages described in the BrainSegFounder framework.
**Paper:** BrainSegFounder: Self-supervised Learning for Brain MRI Segmentation

Please cite this paper if you use these model weights.
## Available Model Weights
This repository includes:
| Model File | Description | Training Data | Subjects |
|---|---|---|---|
| `model_weights_UKB-pretrain.pt` | SSL pretraining weights | UK Biobank MRI (fields 20252-20253) | ~41,000 |
| `model_weights_BRATS-pretrain.pt` | SSL pretraining weights | UKB + BraTS (multimodal MRI) | 42,470 |
| `model_weights_BRATS-finetune.pt` | SwinUNETR finetuning weights | BraTS tumor segmentation | 1,470 |
| `model_weights_ATLAS-pretrain.pt` | SSL pretraining weights | UKB + ATLAS v2.0 (multimodal MRI) | 42,271 |
| `model_weights_ATLAS-finetune.pt` | SwinUNETR finetuning weights | ATLAS stroke lesion segmentation | 1,271 |
| `SSL_Head.py` | SSL head implementation for pretraining (source code) | - | - |
This README provides loading instructions, architecture descriptions, dataset details, and the required UK Biobank privacy and policy statements.
## Quick Start

### Installation
All models use MONAI components and require:
```bash
pip install git+https://github.com/Project-MONAI/MONAI.git@a23c7f54
pip install torch
```
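A quick import check confirms the installation (a source build of the pinned commit may report a development version string rather than a release tag):

```python
# Verify that the pinned MONAI commit and PyTorch import correctly
import monai
import torch

print("MONAI:", monai.__version__)
print("PyTorch:", torch.__version__)
```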
### Usage Examples

#### Loading UKB-only Pretraining Weights
```python
import torch
from argparse import Namespace
from huggingface_hub import hf_hub_download

from SSL_Head import SSLHead  # download SSL_Head.py from this repo

# Download the model weights
model_path = hf_hub_download(
    repo_id="smilelab/BrainSegFounder",
    filename="model_weights_UKB-pretrain.pt",
)

# Configure the model
args = Namespace(
    in_channels=2,  # T1 + T2
    spatial_dims=3,
    bottleneck_depth=768,
    feature_size=48,
    num_swin_blocks_per_stage=[2, 2, 2, 2],
    num_heads_per_stage=[3, 6, 12, 24],
    dropout_path_rate=0.0,
    use_checkpoint=False,
)

# Load the weights
model = SSLHead(args)
state_dict = torch.load(model_path, map_location="cpu")
model.load_state_dict(state_dict)
model.eval()
```
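After loading, a quick smoke test confirms that the checkpoint and configuration agree. The 96³ input size here is an assumption chosen to match the finetuning crop size used below; the number and order of outputs depend on the `SSL_Head.py` implementation:

```python
# Smoke test: push a random two-channel volume through the loaded model.
# The 96x96x96 crop size is an assumption; match it to your preprocessing.
with torch.no_grad():
    dummy = torch.randn(1, 2, 96, 96, 96)  # (batch, T1+T2 channels, D, H, W)
    outputs = model(dummy)  # see SSL_Head.py for the exact head outputs
```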
#### Loading Multimodal Pretraining Weights (UKB + BraTS/ATLAS)
```python
import torch
from argparse import Namespace
from huggingface_hub import hf_hub_download

from SSL_Head import SSLHead

# Download the BRATS or ATLAS pretraining weights
model_path = hf_hub_download(
    repo_id="smilelab/BrainSegFounder",
    filename="model_weights_BRATS-pretrain.pt",  # or model_weights_ATLAS-pretrain.pt
)

args = Namespace(
    in_channels=2,
    spatial_dims=3,
    bottleneck_depth=768,
    feature_size=48,
    num_swin_blocks_per_stage=[2, 2, 2, 2],
    num_heads_per_stage=[3, 6, 12, 24],
    dropout_path_rate=0.0,
    use_checkpoint=False,
)

model = SSLHead(args)
state_dict = torch.load(model_path, map_location="cpu")
model.load_state_dict(state_dict)
model.eval()
```
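To finetune on a new task, the pretrained encoder can be transferred into a SwinUNETR. A minimal sketch, assuming the checkpoint stores the backbone under a `swinViT` key prefix (as in MONAI's Swin UNETR self-supervised pretraining tutorial; inspect `state_dict.keys()` to confirm) and an illustrative 3-class output head:

```python
from monai.networks.nets import SwinUNETR

# Target segmentation model; out_channels=3 is an illustrative choice.
seg_model = SwinUNETR(
    img_size=(96, 96, 96),
    in_channels=2,
    out_channels=3,
    feature_size=48,
)

# Keep only the encoder tensors from the SSL checkpoint (assumed key prefix).
encoder_weights = {k: v for k, v in state_dict.items() if k.startswith("swinViT")}
missing, unexpected = seg_model.load_state_dict(encoder_weights, strict=False)
print(f"Transferred {len(encoder_weights)} tensors; {len(missing)} params left at init")
```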
#### Loading Finetuned Segmentation Weights (SwinUNETR)
```python
import torch
from huggingface_hub import hf_hub_download
from monai.networks.nets import SwinUNETR

# Download the BRATS or ATLAS finetuning weights
model_path = hf_hub_download(
    repo_id="smilelab/BrainSegFounder",
    filename="model_weights_BRATS-finetune.pt",  # or model_weights_ATLAS-finetune.pt
)

# Configure SwinUNETR
model = SwinUNETR(
    img_size=(96, 96, 96),
    in_channels=4,
    out_channels=3,
    feature_size=48,
    use_checkpoint=False,
    depths=[2, 2, 2, 2],
    num_heads=[3, 6, 12, 24],
)

state_dict = torch.load(model_path, map_location="cpu")
model.load_state_dict(state_dict)
model.eval()
```
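For whole-volume inference, the usual MONAI pattern is a sliding-window pass over 96³ patches. A minimal sketch, assuming a preprocessed 4-channel BraTS-style input and sigmoid thresholding as post-processing (the paper's exact post-processing may differ):

```python
from monai.inferers import sliding_window_inference

# `volume` stands in for a preprocessed (1, 4, D, H, W) BraTS tensor.
volume = torch.randn(1, 4, 160, 192, 160)  # placeholder input for illustration

with torch.no_grad():
    logits = sliding_window_inference(
        inputs=volume,
        roi_size=(96, 96, 96),
        sw_batch_size=2,
        predictor=model,
        overlap=0.5,
    )
    # Assumed post-processing: independent sigmoid per output channel, as in
    # common MONAI BraTS pipelines; verify against the paper's setup.
    mask = (torch.sigmoid(logits) > 0.5).float()
```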
## Architecture Description
This project uses two architectures:
- `SSLHead`: self-supervised pretraining model built on a 3D Swin Transformer encoder
- `SwinUNETR`: supervised segmentation model for downstream tasks
### SSLHead (Pretraining Model)
The SSL pretraining model is a 3D SwinViT encoder equipped with self-supervised learning heads for rotation prediction and contrastive representation learning, as well as a VAE-style trilinear decoder for volume reconstruction.
Key components:
- Backbone: 3D Swin Transformer (`SwinViT` from MONAI)
  - Patch size: `[2, 2, 2]`
  - Window size: `[7, 7, 7]`
  - Embedding dimension: 48
  - Depths: `[2, 2, 2, 2]`
  - Number of heads: `[3, 6, 12, 24]`
- Bottleneck dimension: 768
- Self-supervised heads:
  - Rotation prediction: `nn.Linear(768, 4)`
  - Contrastive learning: `nn.Linear(768, 512)`
- Reconstruction decoder: 5-stage upsampling with 3D convolutions + InstanceNorm + LeakyReLU (trilinear interpolation)
The SSLHead combines three self-supervised learning objectives in equal proportion:
- Reconstruction loss: VAE-style image reconstruction
- Rotation prediction: 4-way rotation classification
- Contrastive loss: Feature representation learning
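Schematically, the combined objective is an equally weighted sum of the three terms. The specific loss functions below (L1 reconstruction, cross-entropy rotation, a precomputed contrastive term) are illustrative assumptions, not the exact training code:

```python
import torch.nn.functional as F

# Illustrative equal-weight combination of the three SSL objectives.
# The concrete loss functions here are assumptions, not the paper's exact code.
def ssl_loss(recon, target, rot_logits, rot_labels, contrast_loss):
    recon_loss = F.l1_loss(recon, target)               # reconstruction term
    rot_loss = F.cross_entropy(rot_logits, rot_labels)  # 4-way rotation term
    return recon_loss + rot_loss + contrast_loss        # equal weighting
```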
See `SSL_Head.py` for the complete implementation.
### SwinUNETR (Finetuning Model)
The finetuning model uses MONAI's 3D SwinUNETR, which integrates a Swin Transformer encoder with a U-Net-style hierarchical decoder for dense segmentation tasks.
Key components:
- Patch-based 3D Swin Transformer encoder
- U-Net-style symmetric decoder with skip connections
- Depths: `[2, 2, 2, 2]`
- Number of heads: `[3, 6, 12, 24]`
- Input image size: `(96, 96, 96)`
- Output channels: 3 (task-dependent segmentation classes)
Both architectures are implemented using MONAI (commit `a23c7f54`).
## Training Data

### UK Biobank (Pretraining)
- Subjects: ~41,000
- Data fields: 20252 (T1-weighted MRI), 20253 (T2-weighted MRI)
- Preprocessing: Standard MONAI transforms (resizing and normalization)
- Used in: `model_weights_UKB-pretrain.pt`
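The exact transform stack is not documented here; a plausible minimal pipeline for paired T1/T2 volumes might look like the following sketch (the dictionary keys, target size, and normalization choice are all assumptions):

```python
from monai import transforms

# Hypothetical preprocessing sketch; the actual UKB pipeline may differ.
preprocess = transforms.Compose([
    transforms.LoadImaged(keys=["t1", "t2"]),
    transforms.EnsureChannelFirstd(keys=["t1", "t2"]),
    transforms.ConcatItemsd(keys=["t1", "t2"], name="image"),  # stack modalities
    transforms.Resized(keys=["image"], spatial_size=(96, 96, 96)),
    transforms.NormalizeIntensityd(keys=["image"], channel_wise=True),
])
```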
### BraTS (Pretraining and Finetuning)
- Subjects: 1,470
- Task: Brain tumor segmentation
- Modalities: Multi-modal 3D MRI
- Preprocessing: Standard normalization and cropping
- Used in: `model_weights_BRATS-pretrain.pt`, `model_weights_BRATS-finetune.pt`
### ATLAS v2.0 (Pretraining and Finetuning)
- Subjects: 1,271
- Task: Stroke lesion segmentation
- Preprocessing: Standard MONAI transforms
- Used in: `model_weights_ATLAS-pretrain.pt`, `model_weights_ATLAS-finetune.pt`
## Citation
If you use these model weights, please cite:
```bibtex
@article{brainsegfounder2024,
  title={BrainSegFounder: Self-supervised Learning for Brain MRI Segmentation},
  journal={Medical Image Analysis},
  year={2024},
  doi={10.1016/j.media.2024.103301},
  url={https://doi.org/10.1016/j.media.2024.103301}
}
```
## Privacy and Data Protection Statement
The returned model parameters do not contain any UK Biobank participant-level data, do not embed identifiable features, and cannot be used to reconstruct or infer individual-level MRI volumes.
The models store only aggregated statistical representations learned across tens of thousands of participants. They do not contain per-participant embeddings, IDs, or reconstructed images.
This submission complies with the UK Biobank requirement that any parameters derived from UKB data be returned as derived variables, while ensuring that no participant-level data or re-identifiable elements are included.
## License and Allowed Use
These derived variables (model weights) may be accessed by future approved UK Biobank researchers under UK Biobank's standard access procedures.
The model architectures and research software used to train these models remain the intellectual property of the submitting researcher. The learned parameters, however, are derived from UK Biobank data and are therefore returned in accordance with:
- The UK Biobank Material Transfer Agreement (MTA)
- The UK Biobank "Use of Artificial Intelligence Applications and Models" policy
License: The model weights are subject to the UK Biobank Material Transfer Agreement.
## Contact
For questions about these models or collaborations, please open an issue in this repository or contact the authors through the paper.