Open Source

Software & Datasets

Open-source tools, libraries, and datasets from InfoLab research — freely available for the research community.

21 Repositories
2 pip Libraries
5 Research Areas
AdvEdge / AdvEdge+
InfoLab-SKKU/AdvEdge-Attack

Introduces two white-box adversarial attacks that simultaneously fool a DNN classifier and its coupled interpretation model (Grad-CAM, LIME, SHAP). Demonstrates that interpretable deep learning systems are vulnerable to adversarial inputs designed to produce misleading yet visually plausible explanations.

Adversarial Attack XAI ImageNet PyTorch
SingleADV
InfoLab-SKKU/SingleClassADV

SingleADV generates a universal perturbation targeting an entire category of objects, fooling both the DNN prediction model and its interpretation model simultaneously in white-box and black-box settings. Limits unintended cross-class fooling for targeted stealth.

Universal Perturbation Black-box ImageNet CAM
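SingleADV's actual optimization is category-wise and interpretation-aware; as a hedged illustration of the core mechanic only, here is a minimal NumPy sketch of crafting one universal perturbation under an L-infinity budget against a toy linear classifier (the model, names, and constants are illustrative, not taken from the repository):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "model": scores = x @ W, 8 input features, 3 classes.
W = rng.normal(size=(8, 3))
def predict(x):
    return int(np.argmax(x @ W))

def universal_perturbation(X, target, eps=0.5, steps=200, lr=0.05):
    """One shared perturbation v for the whole set X, pushed toward
    `target` and projected back into the L-infinity ball of radius eps."""
    v = np.zeros(X.shape[1])
    for _ in range(steps):
        # For this linear toy, the gradient of the mean target-class
        # score w.r.t. the input is simply W[:, target].
        v += lr * W[:, target]
        v = np.clip(v, -eps, eps)            # projection step
    return v

X = rng.normal(size=(20, 8))
v = universal_perturbation(X, target=2)
fooled = float(np.mean([predict(x + v) == 2 for x in X]))
```

The key property, shared with the real method, is that a single v is optimized over many samples of the category rather than per input.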
AdViT
InfoLab-SKKU/AdViT

Adversarial attack framework targeting interpretable Vision Transformers. Attacks the Transformer Interpreter explanation model, generating adversarial samples whose attribution maps closely resemble those of benign examples. Supports DeiT-B as source model and ViT-B as target.

Vision Transformer ViT Attack DeiT XAI
QuScore
InfoLab-SKKU/QuScore

A stealthy, query-efficient score-based black-box attack against interpretable deep learning systems using a microbial genetic algorithm. Achieves 95–100% attack success rate on Inception, ResNet, VGG, and DenseNet across ImageNet and CIFAR with minimal queries. Resilient against JPEG, bit-depth reduction, and median smoothing defenses.

Black-box Attack Query-Efficient Genetic Algorithm ImageNet
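QuScore's full attack adds interpretation-aware terms and query budgeting; purely to illustrate the microbial genetic algorithm it builds on, here is a NumPy sketch using a stand-in score-only objective (everything here is a toy assumption, not the repository's code):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in black-box objective for "drop in true-class confidence":
# higher is better; the attacker sees only scores, never gradients.
secret = rng.normal(size=16)
def fitness(g):
    return -np.linalg.norm(g - secret)

def microbial_ga(pop, gens=500, p_cross=0.5, p_mut=0.1, sigma=0.3):
    """Microbial GA: in each tournament the fitter genome stays intact;
    the loser is partly overwritten by the winner, then mutated."""
    pop = pop.copy()
    n, dim = pop.shape
    for _ in range(gens):
        i, j = rng.choice(n, size=2, replace=False)
        w, l = (i, j) if fitness(pop[i]) >= fitness(pop[j]) else (j, i)
        cross = rng.random(dim) < p_cross        # gene transfer winner -> loser
        pop[l, cross] = pop[w, cross]
        mut = rng.random(dim) < p_mut            # mutation on the loser only
        pop[l, mut] += rng.normal(scale=sigma, size=mut.sum())
    return pop

init = rng.normal(size=(20, 16))
final = microbial_ga(init)
best_before = max(fitness(g) for g in init)
best_after = max(fitness(g) for g in final)
```

Because only the tournament loser is ever overwritten, the best genome found so far is never lost, which suits a strict query budget.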
Black-Box Attacks Analysis
InfoLab-SKKU/black-box-attacks

Empirical study of three black-box attacks — SimBA, HopSkipJump, BoundaryAttack — across CNN architectures (ResNet, VGG, DenseNet) on ImageNet and CIFAR-100. Investigates model complexity vs. robustness, model diversity, cross-dataset transferability, and preprocessing-based defenses (JPEG, median smoothing, bit squeezing).

SimBA HopSkipJump BoundaryAttack Defense
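Of the three attacks studied, SimBA is simple enough to sketch in a few lines: probe random orthonormal directions and keep a step only if the reported probability of the true class drops. The NumPy toy below uses the pixel basis and a made-up softmax "model" in place of a real network:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy score-based "model": softmax probability of class y under a linear net.
W = rng.normal(size=(12, 4))
def prob(x, y):
    s = x @ W
    z = np.exp(s - s.max())
    return (z / z.sum())[y]

def simba(x, y, eps=0.2):
    """SimBA-style descent: try each pixel-basis direction once; keep a
    +/- eps step only if it lowers the reported probability of class y."""
    x = x.copy()
    p = prob(x, y)
    for d in rng.permutation(x.size):        # random direction order
        for sign in (+1.0, -1.0):
            cand = x.copy()
            cand[d] += sign * eps
            p_new = prob(cand, y)
            if p_new < p:                    # accept only score-lowering steps
                x, p = cand, p_new
                break
    return x, p

x0 = rng.normal(size=12)
adv, p_adv = simba(x0, y=0)
```

Each direction costs at most two queries, which is why SimBA is a natural baseline for query-efficiency comparisons.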
Adversarial Attacks Analysis
InfoLab-SKKU/Adversarial-Attacks-Analysis

Multi-dimensional study of DNN robustness examining model complexity vs. adversarial robustness, diversity effects of heterogeneous model ensembles, cross-dataset attack transferability (ImageNet, CIFAR-100), and effectiveness of preprocessing-based defenses across diverse architectures.

Robustness Model Diversity Transferability Defense
HARFed
InfoLab-SKKU/harfed

A Streamlit-based simulator for federated learning experiments focused on heterogeneity, attacks, and robustness. Supports Dirichlet and IID partitioning, FedAvg/FedMedian/FedProx strategies, configurable adversarial clients, local differential privacy, and real-time GPU monitoring with accuracy and attack success rate plots exportable as PNG/PDF.

Federated Learning Streamlit FedAvg Differential Privacy
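Among the supported strategies, FedAvg is the simplest to make concrete: the server averages each parameter tensor across clients, weighted by local dataset size. A minimal NumPy sketch (toy shapes and client counts, not HARFed's implementation):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg: average each parameter tensor across clients, weighted by
    the number of local training samples each client holds."""
    total = sum(client_sizes)
    return [
        sum((n / total) * w[k] for w, n in zip(client_weights, client_sizes))
        for k in range(len(client_weights[0]))
    ]

# Three clients, each holding two parameter tensors of identical shapes.
clients = [[np.full((2, 2), c), np.full(3, c)] for c in (1.0, 2.0, 3.0)]
sizes = [10, 30, 60]
global_w = fedavg(clients, sizes)
```

The sample-count weighting is exactly what Dirichlet (non-IID) partitioning stresses: a few large clients can dominate the average, which is one reason robust aggregators like FedMedian are also included.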
SecurityAnalysisFL
InfoLab-SKKU/SecurityAnalysisFL

Investigates data-poisoning attacks against federated learning under heterogeneous client data distributions. Uses PyTorch and the Flower framework to simulate distributed MobileNetV2 training. Evaluates attack robustness with local differential privacy and studies the interplay between data heterogeneity and model vulnerability.

Poisoning Attacks Flower MobileNetV2 LDP
4DfCF — 4D fMRI CrossFormer
InfoLab-SKKU/4DfCF

Novel vision transformer with cross-scale embeddings and hierarchical 4D short/long-distance attention for spatiotemporal brain disorder classification from 4D fMRI. Evaluated on ADHD-200, ADNI (Alzheimer's), and ABIDE (Autism), consistently outperforming 3D-CNN and SwiFT baselines. Integrated Gradients XAI maps highlight disorder-relevant brain regions.

4D fMRI Vision Transformer ADHD-200 • ADNI • ABIDE XAI
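The Integrated Gradients maps mentioned above follow a standard recipe: average the gradient along a straight path from a baseline to the input, then scale by the input-baseline difference. A self-contained sketch on a toy differentiable function with an analytic gradient (a stand-in for autograd on the real model):

```python
import numpy as np

w = np.array([0.5, -1.0, 2.0])
def f(x):                  # toy scalar model output
    return float((w @ x) ** 2)
def grad_f(x):             # analytic gradient, standing in for autograd
    return 2.0 * (w @ x) * w

def integrated_gradients(x, baseline, m=200):
    """Average the gradient at m midpoints on the straight path from
    baseline to x, then scale by (x - baseline)."""
    alphas = (np.arange(m) + 0.5) / m
    g = np.mean([grad_f(baseline + a * (x - baseline)) for a in alphas], axis=0)
    return (x - baseline) * g

x = np.array([1.0, 2.0, 3.0])
b = np.zeros(3)
attr = integrated_gradients(x, b)
# Completeness: the attributions sum to f(x) - f(baseline).
```

The completeness property is what makes the resulting voxel maps quantitatively interpretable rather than merely suggestive.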
5DfCF — 5D fMRI CrossFormer
InfoLab-SKKU/5DfCF

Extends spatiotemporal fMRI modeling to longitudinal (multi-session) data with a Period CrossFormer Block that fuses intra-session 4D attention with inter-session dynamics using period-aware positional embeddings. Achieves 94.3% accuracy and 94.1% AUC on ADNI MCI-to-dementia conversion, outperforming all baselines by 3–10 points.

Longitudinal fMRI Alzheimer's Progression MCI Conversion ADNI
MML-3DCrossFormer
InfoLab-SKKU/MML-3DCrossFormer

3D MRI CrossFormer with multimodal intermediate fusion of volumetric MRI embeddings (via 3D-LDA/SDA dual-range attention) and structured clinical data (MMSE, CDR-SB, ADAS13). Achieves 99.3% accuracy and 99.7% AUC on ADNI Alzheimer's diagnosis. Guided Grad-CAM highlights the hippocampus, entorhinal cortex, and medial temporal lobe.

3D MRI Multimodal Fusion Alzheimer's Grad-CAM
4DViTADHD
InfoLab-SKKU/4DViTADHD

Multimodal framework combining a 4D Vision Transformer for high-dimensional fMRI with an MLP for clinical and demographic tabular data (age, gender, IQ, behavioral scores) for ADHD diagnosis on ADHD-200. Compares intermediate and decision fusion strategies. SHAP and Integrated Gradients provide interpretability across both modalities.

ADHD fMRI + Tabular Multimodal Fusion SHAP
MPMS MRI Progression
InfoLab-SKKU/mpms-mri-progression

Multi-plane, multi-slice longitudinal MRI deep ensemble for Alzheimer's progression detection. Keras-based with pluggable CNN backbones (EfficientNet, ResNet, ConvNext, DenseNet, XceptionNet), optional CBAM attention, and Bayesian-optimized classification heads (MLP, LSTM, multi-head self-attention). Run via command-line with flexible configuration flags.

Longitudinal MRI Keras CBAM Bayesian Optimization
AD Progression Detection (MRI)
InfoLab-SKKU/AD-progression-detection-MRI

3D-CNN-BRNN framework for Alzheimer's progression detection from multi-timestep longitudinal MRI. A 3D-CNN extracts deep volumetric features; a Bidirectional RNN models temporal dynamics across visits. Visual XAI highlights the spatiotemporal brain regions most predictive of progression. Tested at baseline, 6-month, and 12-month ADNI timepoints.

3D-CNN Bidirectional RNN Longitudinal MRI XAI
DES4Depression
InfoLab-SKKU/DES4Depression

Two-stage dynamic ensemble framework for depression detection and severity prediction using NSHAP data. Stage 1 detects depression (FIRE-KNOP DES: 88.33% accuracy); Stage 2 predicts severity among depressed patients (83.68%). SHAP and feature network diagrams provide clinical explainability for older-adult populations.

Depression Dynamic Ensemble NSHAP SHAP
Explainable Ensemble Hypoglycemia
InfoLab-SKKU/Explainable-Ensemble-Hypoglycemia

Predicts severe hypoglycemic episodes in Type-1 Diabetes using multimodal data (clinical, psychological, cognitive features) with early and late fusion strategies. Benchmarks classical ML, static ensembles, and Dynamic Ensemble Selection. Best results: AUC-ROC 0.877 (late fusion) and accuracy 0.798 (early fusion). Dataset from Jaeb Center for Health Research.

Type-1 Diabetes Hypoglycemia Multimodal Fusion DES
Infodeslib
InfoLab-SKKU/infodeslib
pip install infodeslib

Open-source Python library for Dynamic Ensemble Selection with late fusion of multimodal data and integrated SHAP-based explainability. Implements four dynamic classifier selection (DCS) and seven dynamic ensemble selection (DES) techniques. Each model in the pool can train on a different feature set (modality), and the predict() call handles competence estimation, selection, and explanation in one step.

Python Library Dynamic Ensemble Late Fusion SHAP
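The core idea behind dynamic selection can be shown without the library's API (the sketch below is a generic NumPy illustration, not Infodeslib's interface): estimate each pool member's accuracy on the validation samples nearest the query, then let only the most competent models vote.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy validation set: the label depends only on feature 0.
X_val = rng.normal(size=(100, 2))
y_val = (X_val[:, 0] > 0).astype(int)

# Classifier pool; in a multimodal setting each member would be trained
# on a different feature subset (modality).
pool = [
    lambda X: (X[:, 0] > 0).astype(int),     # sees the informative feature
    lambda X: np.zeros(len(X), dtype=int),   # a weak constant baseline
]

def des_predict(x, k=7):
    """Dynamic selection: score each pool member on the k validation
    samples nearest to x (its region of competence), keep the best, vote."""
    nn = np.argsort(np.linalg.norm(X_val - x, axis=1))[:k]
    comp = [float(np.mean(clf(X_val[nn]) == y_val[nn])) for clf in pool]
    chosen = [clf for clf, c in zip(pool, comp) if c == max(comp)]
    votes = [int(clf(x[None, :])[0]) for clf in chosen]
    return max(set(votes), key=votes.count), comp

pred, competences = des_predict(np.array([1.5, -0.2]))
```

Because competence is recomputed per query, a model that is weak globally can still be selected in the regions where its modality is informative.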
XAI-DESReg
InfoLab-SKKU/xaidesreg
pip install xaidesreg

Dynamic Ensemble Selection for regression with built-in explainability. Selects the most competent regressors per query using k-NN region-of-competence modeling. The predict_xai() method returns per-model predictions, competence scores, and the neighbor samples in the region of competence — making ensemble decisions fully transparent. Compatible with any scikit-learn regressor.

Python Library Regression Dynamic Ensemble XAI
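The transparency described above (per-model predictions, competence scores, and the neighbors used) can be sketched for regression as follows; note this is a generic NumPy illustration of the idea, not xaidesreg's predict_xai() interface:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy 1-D data: y = x^2 plus small noise.
X_tr = rng.uniform(-2, 2, size=200)
y_tr = X_tr ** 2 + rng.normal(scale=0.05, size=200)

# Regressor pool: a linear and a quadratic polynomial fit.
models = [np.poly1d(np.polyfit(X_tr, y_tr, deg=d)) for d in (1, 2)]

def predict_transparent(x, k=15):
    """Pick the regressor with the lowest error on the k training points
    nearest to x, and expose everything the decision was based on."""
    nn = np.argsort(np.abs(X_tr - x))[:k]
    errors = [float(np.mean((m(X_tr[nn]) - y_tr[nn]) ** 2)) for m in models]
    best = int(np.argmin(errors))
    return {
        "prediction": float(models[best](x)),
        "per_model": [float(m(x)) for m in models],
        "competence": [-e for e in errors],   # higher = more competent
        "neighbors": nn,                      # the region of competence
    }

out = predict_transparent(1.5)
```

Returning the region of competence alongside the prediction is what makes the ensemble's choice auditable: a user can check exactly which local samples favored the selected regressor.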
MM-DES
InfoLab-SKKU/mm-des

Multimodal clinical prediction framework combining joint contrastive embeddings across image, clinical text, and tabular modalities with Dynamic Ensemble Selection. Region-of-Competence modeling adapts ensemble composition per query. Built-in XAI explains which modality and which classifiers contributed to each decision. Robust to noisy and heterogeneous clinical datasets.

Multimodal Contrastive Learning Dynamic Ensemble Clinical AI
Low-Cost Human Activity Detection
InfoLab-SKKU/Low-Cost-Human-Activity-Detection

Real-time human detection and activity recognition using the MLX90640 low-resolution infrared sensor (32×24 pixels). Demonstrates that reliable activity detection is achievable with accessible, low-cost thermal hardware without high-resolution cameras. Includes the thermal image dataset repository for replication and benchmarking.

Thermal Imaging MLX90640 Real-time Activity Recognition
Thermal Human Detection Dataset
InfoLab-SKKU/Thermal-Human-Detection
Dataset

Low-resolution infrared thermal image dataset collected with the MLX90640 sensor (32×24 IR resolution). Captures human presence and activity patterns across standardized scenarios. Accompanies the low-cost human activity detection paper and provides a benchmark for embedded thermal imaging and edge-deployed detection systems.

Dataset Thermal IR 32×24 px Human Detection

Using Our Work

If you use any InfoLab code or datasets in your research, please cite the corresponding paper. BibTeX entries are available on the Publications page via the Cite button on each paper card.

For questions, collaborations, or data access requests, contact us.