Publications & Preprints


Building Foundation Models to Characterize Cellular Interactions via Geometric Self-Supervised Learning on Spatial Genomics
Y. You, Z. Wang, K. Fleisher, Rex Liu, and M. Thomson. ICLR MLGenX Workshop, 2025 | Preprint

CI-FM integrates spatial genomics with geometric self-supervised learning to model cellular interactions. Leveraging graph neural networks, it processes multi-million-cell datasets to embed microenvironments and predict context-dependent gene expression. The model demonstrates efficacy in tumor microenvironment analysis and T-cell response simulation, establishing a framework for computational tissue modeling.
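To make the idea of embedding a microenvironment concrete, here is a minimal sketch of one GNN-style message pass over a spatial k-nearest-neighbor graph. The function name, featurization, and mean-pooling aggregation are illustrative assumptions, not CI-FM's actual architecture.

```python
import numpy as np

def microenvironment_embedding(coords, expression, cell_idx, k=5):
    """Embed one cell's microenvironment by mean-pooling the gene-expression
    vectors of its k nearest spatial neighbors (a single message-passing step).

    coords:     (n_cells, 2) spatial positions
    expression: (n_cells, n_genes) expression matrix
    """
    # Euclidean distance from the target cell to every cell
    d = np.linalg.norm(coords - coords[cell_idx], axis=1)
    neighbors = np.argsort(d)[1:k + 1]  # skip index 0: the cell itself
    # Aggregate neighbor features into one microenvironment vector
    return expression[neighbors].mean(axis=0)
```

A real model would stack several such aggregation layers with learned weights; this only shows how spatial proximity turns per-cell features into a neighborhood embedding.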

Cost-Saving LLM Cascades with Early Abstention
M. Zellinger, Rex Liu, and M. Thomson. In Review, 2025 | Preprint

This work introduces an LLM cascade with an early abstention mechanism that jointly optimizes cost-efficiency and reliability. Small models answer easy queries, defer challenging inputs to larger models, and abstain outright when cross-model error correlations suggest that even the larger models would fail. Evaluations on GSM8K, MedMCQA, and MMLU demonstrate a 2.2% reduction in test loss, a 13% decrease in inference costs, and 5% lower error rates.
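The routing logic can be sketched as a two-stage cascade with two confidence thresholds. The model interfaces and threshold values below are invented for illustration; the paper's actual abstention policy is learned, not hand-set.

```python
def cascade(query, small_model, large_model,
            defer_threshold=0.7, abstain_threshold=0.2):
    """Two-stage cascade with early abstention (toy sketch).

    small_model / large_model: callables returning (answer, confidence).
    Returns an answer, or None when the cascade abstains.
    """
    answer, confidence = small_model(query)
    if confidence >= defer_threshold:
        return answer        # cheap path: small model is confident
    if confidence < abstain_threshold:
        # Early abstention: correlated errors imply the large model
        # would likely fail too, so skip its inference cost entirely.
        return None
    answer, _ = large_model(query)  # defer only mid-confidence queries
    return answer
```

The key cost saving is the second branch: without early abstention, every low-confidence query would still pay for a large-model call.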

Engineering Flexible Machine Learning Systems by Traversing Functionally-Invariant Paths
G. Raghavan, B. Tharwatt, S. Hari, D. Satani, Rex Liu, and M. Thomson. Nature Machine Intelligence, 2024 | Paper

FIP introduces a differential geometric framework that models neural network adaptation as geodesic traversal in a Riemannian weight space. By defining functionally invariant paths that preserve network outputs under weight perturbations, the method enables adaptation to secondary objectives (continual learning, sparsification, adversarial robustness) without catastrophic forgetting. Leveraging low-rank subspaces from the metric tensor's spectrum, FIP achieves state-of-the-art performance with minimal computational overhead.
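A first-order caricature of a functionally invariant step: project the gradient of a secondary objective onto the null space of the output Jacobian, so the update changes the secondary loss while (to first order) leaving network outputs fixed. This sketch is an assumption-laden simplification, not the paper's geodesic algorithm.

```python
import numpy as np

def invariant_step(J, g, lr=0.1):
    """Step along a functionally invariant direction (toy sketch).

    J: (n_outputs, n_weights) Jacobian of network outputs w.r.t. weights.
    g: (n_weights,) gradient of a secondary objective (e.g. sparsity).
    Returns a weight update that is orthogonal to all output-changing
    directions, so J @ step ~ 0.
    """
    # SVD exposes the row space of J: directions that change outputs.
    U, S, Vt = np.linalg.svd(J, full_matrices=True)
    rank = int(np.sum(S > 1e-10))
    V_row = Vt[:rank]
    # Remove output-changing components, keeping only invariant ones.
    g_proj = g - V_row.T @ (V_row @ g)
    return -lr * g_proj
```

In practice the Jacobian is far too large to form explicitly; the paper's use of low-rank subspaces of the metric tensor addresses exactly that scaling problem.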

Herd: Using Multiple, Smaller LLMs to Match the Performances of Proprietary, Large LLMs via an Intelligent Composer
S. Hari, Rex Liu, and M. Thomson. NeurIPS ENLSP Workshop, 2023 | Paper

Herd is an intelligent routing framework that combines diverse, open-source LLMs to match the performance of proprietary models such as ChatGPT. Using Q-learning, it dynamically routes queries to the most suitable model in the ensemble. Despite using models 2.5× smaller than large proprietary systems, Herd achieves competitive accuracy and successfully handles 40% of queries where single large models fail.
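A tabular Q-learning router in the spirit of Herd can be sketched as follows. The state featurization, reward signal, and one-step update rule here are invented for illustration; the actual composer is more elaborate.

```python
import random
from collections import defaultdict

class Router:
    """Toy Q-learning query router: learns, per query state, which
    model in the ensemble tends to answer correctly."""

    def __init__(self, models, epsilon=0.1, lr=0.5):
        self.models = list(models)
        self.q = defaultdict(float)   # Q[(state, model_name)] -> value
        self.epsilon = epsilon        # exploration rate
        self.lr = lr                  # learning rate

    def select(self, state):
        if random.random() < self.epsilon:
            return random.choice(self.models)       # explore
        # Exploit: model with highest learned value for this state.
        return max(self.models, key=lambda m: self.q[(state, m)])

    def update(self, state, model, reward):
        # Bandit-style one-step update toward the observed reward
        # (e.g. 1.0 if the chosen model answered correctly, else 0.0).
        key = (state, model)
        self.q[key] += self.lr * (reward - self.q[key])
```

After enough feedback, `select` routes each kind of query to whichever small model has historically succeeded on it, which is how an ensemble of small models can cover queries that any single model misses.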