Software & Datasets
Open-source tools, libraries, and datasets from InfoLab research — freely available for the research community.
Introduces two white-box adversarial attacks that simultaneously fool a DNN classifier and its coupled interpretation model (GradCAM, LIME, SHAP). Demonstrates that interpretable deep learning systems are vulnerable to adversarial inputs designed to produce misleading yet visually plausible explanations.
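A minimal sketch of the kind of joint objective such attacks optimize: a PGD-style loop that drives the classifier toward an attacker-chosen label while keeping a simple input-gradient saliency map (a stand-in for Grad-CAM/LIME/SHAP) close to the benign explanation. The names and hyperparameters below are illustrative, not the published attack.

```python
import torch
import torch.nn.functional as F

def saliency(model, x, label):
    # differentiable input-gradient saliency for one class (stand-in interpreter)
    x = x.requires_grad_(True)
    score = model(x)[0, label]
    grad, = torch.autograd.grad(score, x, create_graph=True)
    return grad.abs().amax(dim=1)                    # (1, H, W) relevance map

def joint_attack(model, x, true_label, target_label,
                 eps=8 / 255, alpha=1 / 255, steps=40, lam=1.0):
    # x: single image of shape (1, C, H, W); labels are plain ints
    benign_map = saliency(model, x.clone(), true_label).detach()
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        # 1) push the classifier toward the attacker's target label
        cls_loss = F.cross_entropy(logits, torch.tensor([target_label], device=x.device))
        # 2) keep the explanation visually close to the benign one
        interp_loss = F.mse_loss(saliency(model, x_adv, true_label), benign_map)
        grad, = torch.autograd.grad(cls_loss + lam * interp_loss, x_adv)
        x_adv = (x_adv - alpha * grad.sign()).detach()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)   # project into eps-ball
    return x_adv
```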
SingleADV generates a universal perturbation targeting an entire category of objects, fooling both the DNN prediction model and its interpretation model simultaneously in white-box and black-box settings, while limiting unintended fooling of non-target classes to keep the attack stealthy and category-specific.
Adversarial attack framework targeting interpretable Vision Transformers. Attacks the Transformer Interpreter explanation model, generating adversarial samples whose attribution maps closely resemble those of benign examples. Supports DeiT-B as source model and ViT-B as target.
A stealthy, query-efficient score-based black-box attack against interpretable deep learning systems using a microbial genetic algorithm. Achieves 95–100% attack success rate on Inception, ResNet, VGG, and DenseNet across ImageNet and CIFAR with minimal queries. Resilient against JPEG, bit-depth reduction, and median smoothing defenses.
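For intuition, a minimal sketch of a score-based microbial genetic algorithm search for a bounded perturbation; `query_prob` stands in for the only access a black-box attacker has (the target-class score), and all hyperparameters are illustrative rather than the paper's settings.

```python
import numpy as np

def microbial_ga_attack(x, query_prob, eps=0.05, pop_size=10,
                        generations=500, mut_rate=0.1, rng=None):
    rng = rng or np.random.default_rng(0)
    # population of bounded perturbations with the same shape as the image
    pop = rng.uniform(-eps, eps, size=(pop_size,) + x.shape)

    def fitness(delta):
        return query_prob(np.clip(x + delta, 0.0, 1.0))   # one model query

    for _ in range(generations):
        i, j = rng.choice(pop_size, size=2, replace=False)
        # tournament: the fitter individual is the winner
        win, lose = (i, j) if fitness(pop[i]) >= fitness(pop[j]) else (j, i)
        # the loser is overwritten by uniform crossover with the winner ...
        mask = rng.random(x.shape) < 0.5
        pop[lose][mask] = pop[win][mask]
        # ... plus sparse mutation, kept inside the eps-ball
        mut = rng.random(x.shape) < mut_rate
        pop[lose][mut] += rng.normal(0, eps / 2, size=x.shape)[mut]
        np.clip(pop[lose], -eps, eps, out=pop[lose])

    best = max(pop, key=fitness)
    return np.clip(x + best, 0.0, 1.0)
```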
Empirical study of three black-box attacks — SimBA, HopSkipJump, BoundaryAttack — across CNN architectures (ResNet, VGG, DenseNet) on ImageNet and CIFAR-100. Investigates model complexity vs. robustness, model diversity, cross-dataset transferability, and preprocessing-based defenses (JPEG, median smoothing, bit-depth reduction).
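As a reference point for the simplest of the three attacks, a sketch of the SimBA pixel-basis step: try ±ε along one random coordinate and keep the step only if the true-class probability drops. `query_prob` is a placeholder for black-box model access; values are illustrative.

```python
import numpy as np

def simba(x, query_prob, eps=0.2, max_iters=10_000, rng=None):
    rng = rng or np.random.default_rng(0)
    x_adv = x.copy()
    p = query_prob(x_adv)                       # probability of the true class
    coords = rng.permutation(x.size)            # random order over the pixel basis
    for idx in coords[:max_iters]:
        basis = np.zeros(x.size)
        basis[idx] = eps
        basis = basis.reshape(x.shape)
        for direction in (+1, -1):              # try +eps first, then -eps
            cand = np.clip(x_adv + direction * basis, 0.0, 1.0)
            p_cand = query_prob(cand)
            if p_cand < p:                      # keep only probability-reducing steps
                x_adv, p = cand, p_cand
                break
    return x_adv
```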
Multi-dimensional study of DNN robustness examining model complexity vs. adversarial robustness, diversity effects of heterogeneous model ensembles, cross-dataset attack transferability (ImageNet, CIFAR-100), and effectiveness of preprocessing-based defenses across diverse architectures.
A Streamlit-based simulator for federated learning experiments focused on heterogeneity, attacks, and robustness. Supports Dirichlet and IID partitioning, FedAvg/FedMedian/FedProx strategies, configurable adversarial clients, local differential privacy, and real-time GPU monitoring with accuracy and attack success rate plots exportable as PNG/PDF.
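A minimal sketch of the Dirichlet label partitioning the simulator exposes: smaller `alpha` gives more skewed, non-IID client label distributions. The function name and defaults are illustrative.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha=0.5, seed=0):
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        cls_idx = np.where(labels == cls)[0]
        rng.shuffle(cls_idx)
        # split this class's samples across clients with Dirichlet proportions
        props = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(props)[:-1] * len(cls_idx)).astype(int)
        for client, part in enumerate(np.split(cls_idx, cuts)):
            client_indices[client].extend(part.tolist())
    return [np.array(idx) for idx in client_indices]

# e.g. parts = dirichlet_partition(y_train, num_clients=10, alpha=0.1)
```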
Investigates data-poisoning attacks against federated learning under heterogeneous client data distributions. Uses PyTorch and the Flower framework to simulate distributed MobileNetV2 training. Evaluates attack robustness with local differential privacy and studies the interplay between data heterogeneity and model vulnerability.
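A minimal sketch of a label-flipping poisoning client in the Flower (`flwr`) NumPyClient style used for such simulations; the flip mapping and the `train_step` helper are placeholder assumptions, not the project's actual code.

```python
import torch
import flwr as fl

FLIP = {0: 9, 9: 0}  # hypothetical source -> target label mapping for the poison

def get_weights(model):
    return [p.detach().cpu().numpy() for p in model.state_dict().values()]

def set_weights(model, weights):
    keys = model.state_dict().keys()
    model.load_state_dict({k: torch.tensor(w) for k, w in zip(keys, weights)})

class PoisonedClient(fl.client.NumPyClient):
    """Flower client that flips selected labels before its local update."""
    def __init__(self, model, train_loader, malicious=False):
        self.model, self.train_loader, self.malicious = model, train_loader, malicious

    def get_parameters(self, config):
        return get_weights(self.model)

    def fit(self, parameters, config):
        set_weights(self.model, parameters)
        for images, labels in self.train_loader:     # CPU tensors assumed
            if self.malicious:
                labels = labels.clone().apply_(lambda y: FLIP.get(y, y))
            train_step(self.model, images, labels)   # placeholder local SGD step
        return get_weights(self.model), len(self.train_loader.dataset), {}
```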
Novel vision transformer with cross-scale embeddings and hierarchical 4D short/long-distance attention for spatiotemporal brain disorder classification from 4D fMRI. Evaluated on ADHD-200, ADNI (Alzheimer's), and ABIDE (Autism), consistently outperforming 3D-CNN and SwiFT baselines. Integrated Gradients XAI maps highlight disorder-relevant brain regions.
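A minimal sketch of the Integrated Gradients attribution step using Captum; the stand-in linear classifier and the (batch, time, depth, height, width) input layout are illustrative assumptions, not the model's actual interface.

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# stand-in classifier; the real model is the 4D transformer described above
model = nn.Sequential(nn.Flatten(), nn.Linear(20 * 32 * 32 * 32, 2)).eval()

fmri = torch.randn(1, 20, 32, 32, 32)        # dummy scan: 20 timepoints of a 32^3 volume
ig = IntegratedGradients(model)
attr = ig.attribute(fmri, baselines=torch.zeros_like(fmri), target=1, n_steps=32)
region_map = attr.abs().sum(dim=1)           # aggregate over time -> voxel-level relevance
```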
Extends spatiotemporal fMRI modeling to longitudinal (multi-session) data with a Period CrossFormer Block that fuses intra-session 4D attention with inter-session dynamics using period-aware positional embeddings. Achieves 94.3% accuracy and 94.1% AUC on ADNI MCI-to-dementia conversion, outperforming all baselines by 3–10 points.
3D MRI CrossFormer with multimodal intermediate fusion of volumetric MRI embeddings (via 3D short- and long-distance attention, SDA/LDA) and structured clinical data (MMSE, CDR-SB, ADAS13). Achieves 99.3% accuracy and 99.7% AUC on ADNI Alzheimer's diagnosis. Guided Grad-CAM highlights the hippocampus, entorhinal cortex, and medial temporal lobe.
Multimodal framework combining a 4D Vision Transformer for high-dimensional fMRI with an MLP for clinical and demographic tabular data (age, gender, IQ, behavioral scores) for ADHD diagnosis on ADHD-200. Compares intermediate and decision fusion strategies. SHAP and Integrated Gradients provide interpretability across both modalities.
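A minimal sketch contrasting the two fusion strategies compared in this work, with stand-in encoders; dimensions and weighting are illustrative assumptions.

```python
import torch
import torch.nn as nn

class IntermediateFusion(nn.Module):
    """Concatenate imaging and tabular embeddings before a shared classifier head."""
    def __init__(self, img_encoder, tab_encoder, img_dim, tab_dim, n_classes=2):
        super().__init__()
        self.img_encoder, self.tab_encoder = img_encoder, tab_encoder
        self.head = nn.Linear(img_dim + tab_dim, n_classes)

    def forward(self, fmri, tabular):
        z = torch.cat([self.img_encoder(fmri), self.tab_encoder(tabular)], dim=-1)
        return self.head(z)

def decision_fusion(img_logits, tab_logits, w=0.5):
    """Average the two models' class probabilities (decision-level / late fusion)."""
    probs = w * img_logits.softmax(-1) + (1 - w) * tab_logits.softmax(-1)
    return probs.argmax(-1)
```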
Multi-plane, multi-slice longitudinal MRI deep ensemble for Alzheimer's progression detection. Keras-based with pluggable CNN backbones (EfficientNet, ResNet, ConvNext, DenseNet, XceptionNet), optional CBAM attention, and Bayesian-optimized classification heads (MLP, LSTM, multi-head self-attention). Run via command-line with flexible configuration flags.
3D-CNN-BRNN framework for Alzheimer's progression from multi-timestep longitudinal MRI. A 3D-CNN extracts deep volumetric features; a Bidirectional RNN models temporal dynamics across visits. Visual XAI highlights spatiotemporal brain regions most predictive of progression. Tested at baseline, 6-month, and 12-month ADNI timepoints.
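A minimal sketch of the 3D-CNN + bidirectional RNN pattern described above; layer sizes and the GRU choice are illustrative, not the published architecture.

```python
import torch
import torch.nn as nn

class CNN3DBRNN(nn.Module):
    def __init__(self, n_classes=2, feat_dim=128, hidden=64):
        super().__init__()
        # per-visit volumetric feature extractor
        self.cnn = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
            nn.Flatten(), nn.Linear(32, feat_dim),
        )
        # bidirectional RNN over the sequence of visits (baseline, 6 m, 12 m)
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                       # x: (batch, visits, 1, D, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)   # per-visit features
        out, _ = self.rnn(feats)
        return self.head(out[:, -1])            # classify from the last timestep
```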
Two-stage dynamic ensemble framework for depression detection and severity prediction using NSHAP data. Stage 1 detects depression (FIRE-KNOP DES: 88.33% accuracy); Stage 2 predicts severity among depressed patients (83.68%). SHAP and feature network diagrams provide clinical explainability for older-adult populations.
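A minimal sketch of the two-stage idea using DESlib's KNOP with dynamic frienemy pruning (`DFP=True`, the FIRE variant); the pool, split, and feature handling are placeholders rather than the study's exact pipeline.

```python
from deslib.des import KNOP
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split

def fit_fire_knop(X, y):
    # half the data trains the pool, the other half (DSEL) estimates local competence
    X_tr, X_dsel, y_tr, y_dsel = train_test_split(X, y, test_size=0.5, stratify=y)
    pool = BaggingClassifier(n_estimators=50).fit(X_tr, y_tr)
    # DFP=True enables FIRE-style pruning of classifiers that cross the local
    # decision boundary inside the query's region of competence
    return KNOP(pool_classifiers=pool, k=7, DFP=True).fit(X_dsel, y_dsel)

# Stage 1: depression detection on all subjects; Stage 2: severity prediction
# trained only on the subjects flagged as depressed, e.g.:
# stage1 = fit_fire_knop(X_all, y_depressed)
# stage2 = fit_fire_knop(X_all[y_depressed == 1], y_severity[y_depressed == 1])
```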
Predicts severe hypoglycemic episodes in Type-1 Diabetes using multimodal data (clinical, psychological, cognitive features) with early and late fusion strategies. Benchmarks classical ML, static ensembles, and Dynamic Ensemble Selection. Best results: AUC-ROC 0.877 (late fusion) and accuracy 0.798 (early fusion). Dataset from Jaeb Center for Health Research.
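A minimal sketch of the early- and late-fusion strategies benchmarked; the modality matrices and the gradient-boosting base learner are illustrative choices, not the study's configuration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def early_fusion_fit(X_clinical, X_psych, X_cog, y):
    # concatenate all modality features before training a single model
    X = np.hstack([X_clinical, X_psych, X_cog])
    return GradientBoostingClassifier().fit(X, y)

def late_fusion_predict_proba(models, X_by_modality):
    # one model per modality; average their predicted positive-class probabilities
    probs = [m.predict_proba(X)[:, 1] for m, X in zip(models, X_by_modality)]
    return np.mean(probs, axis=0)
```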
Open-source Python library for Dynamic Ensemble Selection with late fusion of multimodal data and integrated SHAP-based explainability. Implements 4 dynamic classifier selection (DCS) and 7 dynamic ensemble selection (DES) techniques. Each model in the pool can train on a different feature set (modality), and the predict() call handles competence estimation, selection, and explanation in one step.
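A concept sketch of what such a multimodal DES pool does internally: each classifier trained on its own modality, a k-NN region of competence estimated on a held-out DSEL set, and late-fusion voting over the locally competent models. This is not the library's actual API; class and method names are illustrative.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

class ModalityDES:
    def __init__(self, models, k=7):
        self.models, self.k = models, k              # models[i] is trained on modality i only

    def fit(self, X_dsel_by_modality, y_dsel):
        # DSEL: held-out set used to estimate each model's local competence
        self.X_dsel, self.y_dsel = X_dsel_by_modality, y_dsel
        self.nn = NearestNeighbors(n_neighbors=self.k).fit(np.hstack(X_dsel_by_modality))
        return self

    def predict(self, X_by_modality):
        _, idx = self.nn.kneighbors(np.hstack(X_by_modality))
        preds = []
        for q, neighbors in enumerate(idx):
            votes = []
            for m, model in enumerate(self.models):
                local_acc = np.mean(
                    model.predict(self.X_dsel[m][neighbors]) == self.y_dsel[neighbors])
                if local_acc >= 0.5:                 # keep only locally competent models
                    votes.append(model.predict(X_by_modality[m][q:q + 1])[0])
            # majority vote over selected models; labels assumed non-negative ints
            preds.append(np.bincount(votes).argmax() if votes else 0)
        return np.array(preds)
```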
Dynamic Ensemble Selection for regression with built-in explainability. Selects the most competent regressors per query using k-NN region-of-competence modeling. The predict_xai() method returns per-model predictions, competence scores, and the neighbor samples in the region of competence — making ensemble decisions fully transparent. Compatible with any scikit-learn regressor.
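A concept sketch of region-of-competence selection for regression with an explainable predict call, assuming pre-fitted scikit-learn regressors; the class and method names mirror the description but are not guaranteed to match the library's API.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

class DESRegressor:
    def __init__(self, regressors, k=7, n_select=3):
        self.regressors, self.k, self.n_select = regressors, k, n_select

    def fit(self, X_dsel, y_dsel):
        self.X_dsel, self.y_dsel = X_dsel, y_dsel
        self.nn = NearestNeighbors(n_neighbors=self.k).fit(X_dsel)
        return self

    def predict_xai(self, x):
        """Return the prediction plus everything needed to explain it."""
        _, idx = self.nn.kneighbors(x.reshape(1, -1))
        roc_X, roc_y = self.X_dsel[idx[0]], self.y_dsel[idx[0]]
        # competence = error of each regressor inside the region of competence
        errors = [np.mean(np.abs(r.predict(roc_X) - roc_y)) for r in self.regressors]
        chosen = np.argsort(errors)[: self.n_select]
        per_model = {i: self.regressors[i].predict(x.reshape(1, -1))[0] for i in chosen}
        return {
            "prediction": float(np.mean(list(per_model.values()))),
            "per_model_predictions": per_model,
            "competence_errors": {i: errors[i] for i in chosen},
            "region_of_competence": idx[0],          # indices of the DSEL neighbors
        }
```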
Multimodal clinical prediction framework combining joint contrastive embeddings across image, clinical text, and tabular modalities with Dynamic Ensemble Selection. Region-of-Competence modeling adapts ensemble composition per query. Built-in XAI explains which modality and which classifiers contributed to each decision. Robust to noisy and heterogeneous clinical datasets.
Real-time human detection and activity recognition using the MLX90640 low-resolution infrared sensor (32×24 pixels). Demonstrates that reliable activity detection is achievable with accessible, low-cost thermal hardware without high-resolution cameras. Includes the thermal image dataset repository for replication and benchmarking.
Low-resolution infrared thermal image dataset collected with the MLX90640 sensor (32×24 IR resolution). Captures human presence and activity patterns across standardized scenarios. Accompanies the low-cost human activity detection paper and provides a benchmark for embedded thermal imaging and edge-deployed detection systems.
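A minimal sketch of handling one MLX90640 frame (768 temperature readings, i.e. the sensor's 32×24 grid) as an image, plus a crude presence heuristic; how frames are actually stored in the released dataset is not assumed here.

```python
import numpy as np

def frame_to_image(frame_768):
    # one MLX90640 readout: 768 temperatures reshaped to 24 rows x 32 columns
    return np.asarray(frame_768, dtype=np.float32).reshape(24, 32)

def human_present(thermal_img, ambient_offset=3.0):
    # crude heuristic: a warm blob noticeably above the median scene temperature
    return bool((thermal_img > np.median(thermal_img) + ambient_offset).sum() > 10)
```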
Using Our Work
If you use any InfoLab code or datasets in your research, please cite the corresponding paper. BibTeX entries are available on the Publications page via the Cite button on each paper card.
For questions, collaborations, or data access requests, contact us.