New Publication Alert: GenBlosum Introduces Codon-Aware Neutral Modeling to Interpret Cancer Mutations

We’re excited to share a new paper in Genes introducing GenBlosum, a codon-aware neutral modeling
framework to help determine whether observed cancer mutations are more consistent with oncogenic selection or stochastic
mutational processes. By integrating BLOSUM62 substitution severity with base-pair substitution likelihoods
and Monte Carlo–generated neutral expectations, GenBlosum offers a statistical way to contextualize missense mutations,
supporting interpretation of variants that are difficult to classify in clinical and cohort settings.

Paper: GenBlosum: On Determining Whether Cancer Mutations Are Functional or Random (Genes, 2026).
Authors: Alejandro Leyva; Muhammad Khalid Khan Niazi.
DOI: https://doi.org/10.3390/genes17010055
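
For readers who want the intuition behind the method, here is a minimal sketch (ours, not the paper's code) of a codon-aware Monte Carlo null model in this spirit: simulate random single-base substitutions in a codon, score the resulting amino-acid changes with BLOSUM62, and ask where an observed mutation falls in that neutral distribution. It assumes Biopython is installed, and it simplifies the paper's base-pair substitution likelihoods to uniform base sampling.

```python
# Minimal sketch of a codon-aware Monte Carlo null model (illustrative only;
# not the GenBlosum implementation). Assumes Biopython; per-base substitution
# likelihoods are simplified to uniform sampling here.
import random
from Bio.Seq import Seq
from Bio.Align import substitution_matrices

BLOSUM62 = substitution_matrices.load("BLOSUM62")
BASES = "ACGT"

def random_missense_score(codon):
    """Apply one random single-base substitution; return the BLOSUM62 score
    of the amino-acid change, or None if it is synonymous or nonsense."""
    pos = random.randrange(3)
    new_base = random.choice([b for b in BASES if b != codon[pos]])
    mutant = codon[:pos] + new_base + codon[pos + 1:]
    ref_aa = str(Seq(codon).translate())
    mut_aa = str(Seq(mutant).translate())
    if mut_aa in (ref_aa, "*"):
        return None
    return BLOSUM62[ref_aa, mut_aa]

def neutral_scores(codon, n_trials=10_000):
    """Monte Carlo neutral expectation: severities of random missense changes."""
    draws = (random_missense_score(codon) for _ in range(n_trials))
    return [s for s in draws if s is not None]

# Example: how severe is an observed Arg->Trp change at a CGG codon relative
# to the neutral expectation for that codon?
null = neutral_scores("CGG")
observed = BLOSUM62["R", "W"]
p = sum(s <= observed for s in null) / len(null)  # empirical tail probability
print(f"BLOSUM62(R,W) = {observed}; empirical p = {p:.3f}")
```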

Highlights from AI4Path Lab at AACR 2026

AACR 2026 Accepted Abstracts

Three AI4Path Lab studies highlight multimodal metastatic-site prediction, rapid HER2 inference from H&E, and real-world clinical deployment in pancreatic cancer.

1. Integrating Image and Text-Based AI Improves Identification of Metastatic Sites from Whole-Slide Pathology Images (James)

A multimodal WSI + textual-prototype model uses visual–text similarity to focus on metastatic cues and predict the dissemination site (AUC 88%, accuracy 74%, macro-F1 60%).

2. Low-Magnification Deep Learning Model for Rapid HER2 Status Prediction from H&E Whole-Slide Images (Dr. Su)

A low-magnification deep learning pipeline rapidly predicts HER2 status directly from routine H&E whole-slide images to enable scalable screening and triage.

3. Deploying Artificial Intelligence–Driven Digital Pathology for Real World Clinical Decision-Making in Pancreatic Cancer (Abdul & Alex)

A deployment-focused study integrates AI-driven digital pathology into pancreatic cancer workflows to support practical, real-world clinical decision-making.

Celebrating Major Milestones — Abdul Rehman Akbar and Usama Sajjad Advance to PhD Candidacy

The AI4Path Lab is proud to celebrate an important academic milestone achieved by two of our members.

Abdul Rehman Akbar and Usama Sajjad, Graduate Research Associates in the AI4Path Lab and PhD students in Biomedical Engineering, have successfully passed their PhD candidacy examinations and have officially advanced to PhD Candidates.

This achievement marks a significant step in their doctoral journeys and reflects their rapid growth as independent researchers at the intersection of artificial intelligence, computational pathology, and translational biomedical research.

Since joining the AI4Path Lab in Fall 2024, Abdul and Usama have demonstrated exceptional dedication to research with real-world clinical impact. Over the past year, their work has resulted in multiple first-author manuscripts prepared for submission, additional manuscripts in progress, and several abstracts accepted at national conferences, including platform presentations.

Beyond research productivity, they have shown strong commitment to interdisciplinary collaboration, working closely with clinicians and pathologists to ensure that AI-driven methods remain clinically meaningful and patient-centered.

We extend our sincere congratulations to Abdul and Usama and express our gratitude to their advisor, Dr. Khalid Niazi, as well as their mentors, collaborators, and committee members for their continued guidance and support.

The AI4Path Lab is excited to see Abdul and Usama continue to grow as researchers and leaders as they move forward into the next phase of their doctoral training.

Congratulations, Abdul and Usama! 🎓🚀



View Abdul Rehman Akbar’s announcement on LinkedIn →

Highlights from AI4Path Lab at USCAP 2026

USCAP 2026 Accepted Abstracts

Five studies from the AI4Path Lab highlight advances in computational pathology — from molecular subtyping and genomic prediction to clinical AI deployment and hematopathology automation.

1. AI-Driven Subtyping in Pancreatic Ductal Adenocarcinoma Using H&E Whole Slide Images

PanSubNet predicts PDAC molecular subtypes (basal-like vs classical) directly from H&E slides, reaching an AUC of 78%. It offers a scalable, cost-effective alternative to transcriptomic profiling.

2. An AI Pathology Assistant for Clinical Deployment

An anatomy-agnostic AI assistant integrates pathology reports and WSIs for evidence-based case retrieval and automated reporting, achieving 50.6% context-based retrieval accuracy and improving workflow efficiency.

3. Bridging Histology and Genomics in Colorectal Cancer

AI models predict key mutations (TP53, KRAS, PIK3CA) from H&E slides with up to 17% higher AUC than baselines, enabling rapid, non-invasive genomic stratification for precision oncology.

4. An AI Virtual Agent for Interpretable Prognosis

An AI virtual agent translates black-box survival predictions into morphology-based reasoning, linking spatial cell patterns to outcomes and enhancing clinical interpretability.

5. AI-Based Detection of Megakaryocyte Dysplasia in Myelodysplastic Neoplasms

An automated classifier detects dysplastic megakaryocytes in bone marrow smears with 98% accuracy and AUC 0.99, providing a reliable and objective diagnostic tool for hematopathology.

Bridging H&E and IHC: A Multi-Stage Framework for Computational Stain Translation

We’re pleased to announce a new contribution from the AI4Path Lab: Progressive Translation of H&E to IHC with Enhanced Structural Fidelity, led by Yuhang Kang, Ziyu Su, Tianyang Wang, Zaibo Li, Wei Chen, and Muhammad Khalid Khan Niazi.


🔍 What’s the Study About?

Computational stain translation remains a critical challenge in digital pathology. Most existing H&E-to-IHC translation models combine multiple loss terms through simple weighted summation, often producing images that lack optimal structural fidelity, color accuracy, or cellular detail. To address this, the team developed ProgASP, a progressive generative framework that decouples image synthesis into three sequential stages — structure generation, DAB-guided color enhancement, and gradient-guided cell boundary refinement. Evaluated on the MIST dataset for HER2 and ER biomarkers, ProgASP demonstrates superior performance in color consistency, cellular boundary clarity, and overall structural realism.


Figure 1. Overview of ProgASP: A Progressive Generative Framework for Virtual IHC. The model consists of three sequential stages: (1) Structure Generation using ASP loss to ensure robust tissue morphology alignment, (2) DAB-Guided Color Fidelity that leverages 3,3′-DAB channel intensity to enforce biochemical accuracy and protein expression visualization, and (3) Gradient-Guided Cell Boundary Refinement that combines image gradients with DAB-weighted supervision to sharpen cellular boundaries. Each stage receives the output of the preceding module as input, with parameters frozen post-training to preserve feature stability. This hierarchical approach decouples color and structural optimization, enabling targeted refinement of each visual component for clinically interpretable virtual IHC generation.
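
To make the staged design concrete, here is a heavily simplified sketch of a three-stage progressive generator with earlier stages frozen during later training. This is our own illustration under assumed module sizes and loss choices, not the authors' code.

```python
# Illustrative three-stage progressive generator in the spirit of ProgASP
# (module names, sizes, and losses are our assumptions, not the paper's code).
import torch
import torch.nn as nn

class Stage(nn.Module):
    """One refinement stage: maps an RGB image to a refined RGB image."""
    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, 3, 3, padding=1),
        )
    def forward(self, x):
        return torch.tanh(x + self.net(x))  # residual refinement

def gradient_mse(pred, target):
    """Image-gradient agreement, in the spirit of the boundary-refinement loss."""
    dx = lambda t: t[..., :, 1:] - t[..., :, :-1]
    dy = lambda t: t[..., 1:, :] - t[..., :-1, :]
    return ((dx(pred) - dx(target)) ** 2).mean() + ((dy(pred) - dy(target)) ** 2).mean()

def freeze(module):
    for p in module.parameters():
        p.requires_grad_(False)

structure = Stage()   # stage 1: tissue structure (ASP-style structure loss)
color = Stage()       # stage 2: DAB-guided color fidelity (stage 1 frozen)
boundary = Stage()    # stage 3: gradient-guided boundary refinement (1 and 2 frozen)

# After training stage 1, freeze it before training stage 2, and so on:
freeze(structure); freeze(color)
he = torch.rand(1, 3, 256, 256)            # a dummy H&E tile
ihc_pred = boundary(color(structure(he)))  # each stage consumes its predecessor's output
```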

📊 Key Findings

  • Superior structural fidelity: SSIM scores of 0.2138 (HER2) and 0.2034 (ER), outperforming ASP and Stable Diffusion.
  • Enhanced color accuracy: DAB-guided loss ensures pixel-level agreement in protein expression intensity and localization.
  • Sharper cellular boundaries: Gradient MSE of 0.002234 (HER2) with precise membrane delineation for diagnostic relevance.
  • Improved distributional consistency: FID reduced to 49.6 (HER2) and 40.1 (ER), reflecting superior feature preservation.
  • Robust progressive training: Sequential stage-wise optimization prevents interference between objectives while maintaining stability.

🔬 These advances highlight that decoupled optimization of structural, chromatic, and morphological features is essential for generating diagnostically reliable virtual IHC images. The progressive framework opens new possibilities for cost-effective, tissue-efficient digital pathology workflows without compromising diagnostic quality.

👥 Meet the Authors

Yuhang Kang, Ziyu Su, Tianyang Wang, Zaibo Li, Wei Chen, and Muhammad Khalid Khan Niazi (PI, AI4Path Lab)

Stay tuned for more studies from AI4Path at the intersection of computational pathology, stain translation, and diagnostic AI.

🧪 New Study: Enhancing Reproducibility in Deep Learning Model Training for Computational Pathology

We’re pleased to announce a new contribution from the AI4Path Lab: Hyperparameter Optimization and Reproducibility in Deep Learning Model Training, led by Usman Afzaal, Ziyu Su, Usama Sajjad, Hao Lu, Mostafa Rezapour, Metin Nafi Gurcan, and Muhammad Khalid Khan Niazi.


🔍 What’s the Study About?

Reproducibility remains a major challenge in foundation model training for histopathology.
Software randomness, hardware non-determinism, and incomplete hyperparameter reporting
often lead to inconsistent results across research groups.

To address this, the team systematically evaluated reproducibility by training a CLIP model on the
QUILT-1M dataset, exploring how different hyperparameter settings and augmentation strategies
influence downstream performance on three key datasets — PatchCamelyon, LC25000-Lung, and LC25000-Colon.



Figure 1. Overview of our joint image–text representation learning framework.
The model jointly trains an image encoder and a text encoder to learn a shared multimodal embedding space by
maximizing the cosine similarity of matched image–text pairs within a batch.
Image patches are processed by the image encoder to obtain latent visual representations
(u1, u2, …, un), while corresponding textual descriptions are embedded through the text encoder
into feature vectors (v1, v2, …, vn).
Pairwise similarities (ui·vj) form a contrastive learning objective that aligns semantically
related histopathology images and diagnostic texts in a unified latent space, enabling the model to capture morphological–linguistic correlations crucial for computational pathology.
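
The contrastive objective in Figure 1 is the standard CLIP formulation; a minimal PyTorch sketch of it (ours, not the study's exact training code) looks like this:

```python
# Minimal sketch of the CLIP-style contrastive objective described above
# (the standard formulation, not the study's exact training code).
import torch
import torch.nn.functional as F

def clip_loss(u, v, temperature=0.07):
    """u: (n, d) image embeddings; v: (n, d) text embeddings, matched by row."""
    u = F.normalize(u, dim=-1)
    v = F.normalize(v, dim=-1)
    logits = u @ v.t() / temperature                     # pairwise similarities u_i . v_j
    targets = torch.arange(u.size(0), device=u.device)   # diagonal pairs are the positives
    # symmetric cross-entropy over image->text and text->image directions
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```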

📊 Key Findings

  • Optimal augmentation: RandomResizedCrop values of 0.7–0.8 outperformed more extreme settings (see the sketch after this list).
  • Training stability: Distributed training without local loss produced the most consistent convergence.
  • Learning rate sensitivity: Rates below 5.0e−5 consistently degraded model performance.
  • Benchmark robustness: The LC25000 (Colon) dataset showed the highest reproducibility across runs.
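
As a concrete illustration of the augmentation finding, and assuming the reported 0.7–0.8 values refer to the lower bound of RandomResizedCrop's scale range, the corresponding torchvision setting would look like this:

```python
# Hedged example of the augmentation finding above. We assume the reported
# 0.7-0.8 values are the lower bound of RandomResizedCrop's scale range.
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomResizedCrop(size=224, scale=(0.7, 1.0)),  # milder cropping than the default (0.08, 1.0)
    transforms.ToTensor(),
])
```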

⚙️ These experiments highlight that achieving reproducible AI in digital pathology depends not only on open reporting
but also on careful experimental design and hyperparameter tuning.
The authors provide practical recommendations for building reliable, reproducible foundation models in the field.

👥 Meet the Authors

Usman Afzaal, Ziyu Su, Usama Sajjad, Hao Lu, Mostafa Rezapour, Metin Nafi Gurcan, and
Muhammad Khalid Khan Niazi (PI, AI4Path Lab)

Stay tuned for more studies from AI4Path at the intersection of foundation models,
computational pathology, and AI reproducibility.

New Publication Alert: AI4Path Lab Introduces Morphology-Aware Prognostic Modeling for Colorectal Cancer

We’re proud to announce a new study from the AI4Path Lab — the release of Morphology-Aware Prognostic Model for Five-Year Survival Prediction in Colorectal Cancer from H&E Whole Slide Images (PRISM), led by Usama Sajjad, Abdul Rehman Akbar, Ziyu Su, Deborah Knight, Wendy L. Frankel, Metin N. Gurcan, Wei Chen, and Muhammad Khalid Khan Niazi.


🔍 What’s the Study About?

Colorectal cancer (CRC) remains the world’s third most common malignancy, with over 150,000 new U.S. cases expected in 2025.
While foundation models have advanced computational pathology, their task-agnostic nature often overlooks
organ-specific morphological cues that are vital to understanding tumor biology and predicting patient outcomes.

To bridge this gap, the AI4Path team developed PRISM — an interpretable, morphology-aware prognostic model that
captures the continuous spectrum of phenotypic variability within tumor architecture, reflecting how
malignant evolution unfolds gradually rather than abruptly.



Figure 1. An overview of our PRISM framework.
(a) We first tessellate whole slide images (WSIs) into n non-overlapping patches, each patch undergoing dual feature extraction.
(b) We perform cross-feature interaction between universal pathology features from UNI and morphology-aware features that encode tissue architecture and histopathological patterns.
We then fuse these complementary feature representations fi,j at the patch level to create comprehensive morphological embeddings.
An attention mechanism computes importance scores for each patch feature based on its prognostic relevance, enabling the model to focus on histologically relevant regions.
We aggregate attention-weighted patch embeddings into a slide-level representation that captures the overall morphological landscape for five-year survival prediction.
(c) During patch feature aggregation, we project features using two neural networks (Wg, Wm) and aggregate the results through Wfusion to obtain morphology-aware patch features.
(d) Based on the predicted probability, we train a time-to-event Cox hazards model or perform risk stratification using the concordance index.
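
For intuition, here is a compact sketch of the fusion-and-attention aggregation in panels (b) and (c). It is our own simplification with illustrative class and variable names, not the PRISM code.

```python
# Illustrative attention-based aggregation of fused patch features into a
# slide-level embedding, following Fig. 1 (b)-(c); names are our assumptions.
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Per-patch importance scores followed by weighted pooling."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                   nn.Linear(hidden, 1))
    def forward(self, f):                        # f: (n_patches, dim)
        a = torch.softmax(self.score(f), dim=0)  # attention over patches
        return (a * f).sum(dim=0)                # slide-level representation F

class MorphologyFusion(nn.Module):
    """Fuse universal (UNI) and morphology-aware features, pool, and classify."""
    def __init__(self, dim):
        super().__init__()
        self.Wg = nn.Linear(dim, dim)      # projects universal features
        self.Wm = nn.Linear(dim, dim)      # projects morphology-aware features
        self.Wfusion = nn.Linear(dim, dim)
        self.pool = AttentionPool(dim)
        self.head = nn.Linear(dim, 1)      # five-year survival logit
    def forward(self, g, m):               # g, m: (n_patches, dim)
        f = self.Wfusion(torch.relu(self.Wg(g) + self.Wm(m)))  # fused f_{i,j}
        return self.head(self.pool(f))

# Dummy usage for one slide with 1,000 patches and 512-dim features:
model = MorphologyFusion(dim=512)
logit = model(torch.rand(1000, 512), torch.rand(1000, 512))
```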

📊 Key Results

  • Trained on 8.74 million H&E image patches from 424 stage III CRC patients.
  • Achieved superior five-year overall survival prediction (AUC = 0.70 ± 0.04, accuracy = 68.4% ± 4.8%).
  • Hazard Ratio = 3.34 (95% CI = 2.28–4.90, p < 0.0001).
  • Outperformed existing CRC-specific models by 15% and AI foundation models by ~23%.
  • Demonstrated sex-agnostic and treatment-consistent performance across clinical subgroups.

🧠 PRISM highlights the importance of morphology-aware AI — integrating
histological diversity and spatial context to enable more personalized risk assessment
and treatment planning for colorectal cancer patients.

👥 Meet the Authors

Usama Sajjad, Abdul Rehman Akbar, Ziyu Su, Deborah Knight, Wendy L. Frankel, Metin N. Gurcan, Wei Chen, and Muhammad Khalid Khan Niazi (PI, AI4Path Lab)

Stay tuned for more pioneering research from AI4Path, where we continue to push the boundaries
of computational pathology and AI-driven oncology.

New Publication Alert: AI4Path Lab Streamlines Foundation Models with Cross-Magnification Distillation

We’re excited to share another milestone from the AI4Path Lab — the release of Streamline Pathology Foundation Model by Cross-Magnification Distillation (XMAG), led by Ziyu Su, Abdul Rehman Akbar, Usama Sajjad, Anil V Parwani, and Muhammad Khalid Khan Niazi.


🔍 What’s the Study About?

Foundation models (FMs) have revolutionized computational pathology, but their enormous size and high-magnification image requirements
make them challenging to deploy in real-world clinical workflows.

To overcome these barriers, our team developed XMAG — a lightweight, efficient foundation model built through
cross-magnification distillation. This innovative framework transfers knowledge from a
state-of-the-art 20× magnification teacher to a compact 5× magnification student network,
preserving diagnostic power while dramatically reducing computational cost.



Figure 1. Overview of the XMAG. XMAG transfers knowledge from high-magnification foundation
models to compact low-magnification architectures, achieving comparable diagnostic performance with dramatically improved
processing efficiency across multiple clinical tasks.
(a) Traditional 20× approaches process ~6,000 patches per WSI through large foundation models,
while XMAG operates at 5× magnification with ~500 patches, maintaining diagnostic accuracy with improved efficiency.
(b) Cross-magnification distillation architecture with global/local alignment between teacher (UNI2) and student (DINOv2-ViT-B).
(c) Pretraining dataset: 6,703 WSIs across 15 cancer types (3.49M patches).
(d) XMAG achieves optimal balance between diagnostic performance (AUC) and processing speed (8.8 WSIs/min).
Bubble size indicates parameter count.

💡 Key Innovations

  • Cross-magnification knowledge transfer — distills both global and local representations from 20× to 5× magnification.
  • Compact architecture — operates entirely at 5×, requiring 11.3× fewer patches per whole slide image (WSI).
  • Dual-level distillation — aligns global image features and local spatial token mappings for robust multi-scale learning (sketched below).
  • End-to-end optimization — further boosts the student model to closely match large-scale FM performance.
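
A minimal sketch of what a dual-level (global plus local) distillation objective can look like, under our own simplifying assumptions about tensor shapes and with the cross-magnification token mapping handled upstream:

```python
# Hedged sketch of a dual-level distillation loss in the spirit of XMAG.
# Shapes, names, and the choice of cosine distance are our assumptions;
# mapping student 5x tokens onto the teacher's 20x grid happens upstream.
import torch
import torch.nn.functional as F

def distill_loss(t_cls, s_cls, t_tokens, s_tokens):
    """
    t_cls:    (b, d)    teacher global feature from a 20x view
    s_cls:    (b, d)    student global feature from the matching 5x view
    t_tokens: (b, n, d) teacher patch tokens
    s_tokens: (b, n, d) student tokens spatially mapped to the teacher grid
    """
    # global alignment: pull the student's image-level embedding toward the teacher's
    global_loss = 1 - F.cosine_similarity(s_cls, t_cls, dim=-1).mean()
    # local alignment: match spatially corresponding token representations
    local_loss = 1 - F.cosine_similarity(s_tokens, t_tokens, dim=-1).mean()
    return global_loss + local_loss

# Dummy usage with batch 2, 196 aligned tokens, 768-dim features:
loss = distill_loss(torch.rand(2, 768), torch.rand(2, 768),
                    torch.rand(2, 196, 768), torch.rand(2, 196, 768))
```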

📊 Results at a Glance

  • Trained on 3.49 million histopathology images from public datasets.
  • Validated across six clinically relevant tasks spanning multiple cancer types.
  • Achieved diagnostic accuracy within 1% of large foundation models.
  • Delivered 30× faster processing speed — reaching 8.8 WSIs per minute.
  • Confirmed cross-institutional robustness and generalization.

🚀 These results demonstrate that XMAG offers near-FM-level accuracy with real-time performance —
enabling the practical integration of AI into pathology diagnostics even in resource-constrained clinical settings.

🌍 Why It Matters

XMAG redefines what’s possible for foundation models in pathology.
By distilling knowledge across magnifications, it bridges the gap between
research-scale AI and scalable clinical deployment — paving the way for
real-time, cost-efficient pathology AI systems.

👥 Meet the Authors

Ziyu Su, Abdul Rehman Akbar, Usama Sajjad, Anil V Parwani, and
Muhammad Khalid Khan Niazi (PI, AI4Path Lab)

Stay tuned for more pioneering research from AI4Path, where we continue to advance
the boundaries of computational pathology and AI-driven precision medicine.

🧠 New Publication Alert: AI4Path Lab Decodes the Cellular “Language” of Pathology with CellEcoNet

We’re excited to announce that a groundbreaking new study from the AI4Path Lab, led by
Abdul Rehman Akbar under the supervision of Dr. Muhammad Khalid Khan Niazi,
has been released on arXiv:

Read: CellEcoNet: Decoding the Cellular Language of Pathology with Deep Learning for Invasive Lung Adenocarcinoma Recurrence Prediction


🔍 What’s the Study About?

Despite surgical resection, nearly 70% of invasive lung adenocarcinoma (ILA) patients experience recurrence
within five years. Current clinical grading and staging systems often fail to identify high-risk patients who might
benefit from adjuvant therapy.

To tackle this challenge, our team developed CellEcoNet, a
spatially aware deep learning framework that treats histopathology as a form of language —
where cells are words, cellular neighborhoods form phrases,
and tissue architecture creates sentences.

By modeling this “language of pathology,” CellEcoNet automatically learns context-dependent cellular interactions that
reveal the underlying biology of cancer recurrence.



Figure 1. Schematic overview of the CellEcoNet architecture for invasive lung adenocarcinoma (ILA) recurrence prediction.
The framework integrates tissue-level embeddings (E) and cell-level embeddings (C) through a Cell Patch Fusion module,
producing unified feature representations (f₁ … fₖ). These fused features are aggregated via an attention-based mechanism to generate a global
slide-level embedding (F), which is then used for recurrence classification.
The lower panel illustrates the fusion mechanism, where query, key, and value projections are computed from both patch and cell tokens,
followed by spatially biased attention and a fusion MLP to yield the final fused representation (fₚ).
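
As a rough illustration of the fusion mechanism in the lower panel, here is a simplified single-head version in which a patch-to-cell distance matrix biases the attention weights. This is our own sketch; CellEcoNet's actual implementation may differ.

```python
# Simplified single-head sketch of spatially biased cell-patch fusion
# (our illustration; names and the bias form are assumptions, not the paper's code).
import torch
import torch.nn as nn

class SpatialBiasFusion(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)        # queries from patch tokens
        self.kv = nn.Linear(dim, 2 * dim)   # keys and values from cell tokens
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.GELU(),
                                 nn.Linear(dim, dim))
    def forward(self, patch_tok, cell_tok, dist):
        """
        patch_tok: (p, d) patch embeddings; cell_tok: (c, d) cell embeddings;
        dist: (p, c) patch-to-cell distances used as an attention bias.
        """
        q = self.q(patch_tok)                       # (p, d)
        k, v = self.kv(cell_tok).chunk(2, dim=-1)   # each (c, d)
        attn = q @ k.t() / q.size(-1) ** 0.5        # (p, c) similarity logits
        w = torch.softmax(attn - dist, dim=-1)      # spatial bias: nearer cells weigh more
        return self.mlp(w @ v + patch_tok)          # fused patch representations f_p

# Dummy usage: 64 patch tokens, 500 cell tokens, 256-dim features.
fusion = SpatialBiasFusion(dim=256)
f_p = fusion(torch.rand(64, 256), torch.rand(500, 256), torch.rand(64, 500))
```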

💡 Key Innovations

  • Language-inspired representation of tissue — translating microscopic structure into a contextual model of cellular communication.
  • Spatial awareness — integrating both cell-level and microenvironment-level information for precise risk prediction.
  • Fair and robust performance — maintaining accuracy across diverse demographic and clinical subgroups.

📊 Results at a Glance

  • CellEcoNet: AUC 77.8%, Hazard Ratio 9.54
  • IASLC Grading: AUC 71.4%, Hazard Ratio 2.36
  • AJCC Stage: AUC 64.0%, Hazard Ratio 1.17
  • Other Computational Models: AUC 62.2–67.4%

📈 CellEcoNet achieved superior predictive power, significantly outperforming both established
pathological grading systems and state-of-the-art AI methods.

🌍 Why It Matters

CellEcoNet goes beyond prediction — it decodes how the tumor microenvironment “speaks.”
By understanding how subtle variations in cellular arrangements and spatial context encode recurrence risk,
the model opens new avenues for precision oncology, enabling more informed decisions for post-surgical treatment.

👥 Meet the Authors

Abdul Rehman Akbar, Usama Sajjad, Ziyu Su, Wencheng Li, Fei Xing, Jimmy Ruiz, Wei Chen, and
Muhammad Khalid Khan Niazi (PI, AI4Path Lab).

Stay tuned for more pioneering research from AI4Path, where we continue to push the boundaries of computational pathology and AI-driven precision medicine.

AI4Path’s Abdul Rehman Akbar Receives Travel Award for PathVisions 2025!

We are proud to share that Abdul Rehman Akbar, a Graduate Research Associate in the AI4Path Lab, has been awarded a prestigious Travel Award by the Digital Pathology Association (DPA) to attend PathVisions 2025, taking place October 5–7 in San Diego, California.

This highly competitive award recognizes outstanding early-career researchers and supports their participation in one of the world’s leading conferences in digital pathology and AI.

✈️ What the Award Covers

As a Travel Award recipient, Abdul will receive:

✅ Full conference registration (complimentary)
🏨 Hotel accommodation at the Manchester Grand Hyatt
🛫 Round-trip airfare, with travel expenses fully covered
💼 The opportunity to present his accepted abstract and engage with leaders in computational pathology

This award enables promising trainees like Abdul to share their work, build meaningful collaborations, and gain exposure to the latest innovations in the field.

👏 Congratulations!

Please join us in congratulating Abdul on this exciting achievement! His dedication and innovation continue to embody AI4Path’s mission to advance patient-centered, AI-powered digital pathology.

We’re thrilled to see Abdul represent the lab on the national stage and look forward to his continued contributions to computational pathology and precision oncology.

📍 Stay tuned for more updates from AI4Path at PathVisions 2025!