About

Hello! My name is Shahzaib Saqib Warraich. I am a second-year MS student in the USC Thomas Lord Department of Computer Science and part of the USC NLP group. I am advised by Swabha Swayamdipta and mentored by Gregory Yauney, Sayan Ghosh, and Matthew Finlayson.

I studied Electrical Engineering and Computer Science at the National University of Sciences and Technology (NUST) and subsequently worked as a Research Engineer at Retrocausal before joining USC. I am interested in the behavioral mechanisms underlying language model reliability: how pre-training shapes factual knowledge representation, how post-training affects uncertainty calibration, and how behaviors that emerge during generation reveal a model's epistemic state, with the goal of enabling models to identify knowledge gaps and abstain appropriately.

You can reach me at warraich at usc dot edu.

News

Acceptance of Sample, Align, Synthesize: Graph-Based Response Synthesis with ConGrs at ER@NeurIPS 2025.
Acceptance of Sample, Align, Synthesize: Graph-Based Response Synthesis with ConGrs at SCALR@COLM 2025.
Joined Paramount+ as a Data Science Intern.
Joined CS 544 Applied NLP course teaching staff in Spring 2025, led by Xuezhe (Max) Ma.
Joined CS 544 Applied NLP course teaching staff in Fall 2024, led by Swabha Swayamdipta.
Joined the Data, Interpretability, Language and Learning (DILL) lab, led by Swabha Swayamdipta.
Started my MS at the University of Southern California.

Preprints & publications

Sample, Align, Synthesize: Graph-Based Response Synthesis with ConGrs

Sayan Ghosh, Shahzaib Saqib Warraich, Dhruv Tarsadiya, Gregory Yauney, Swabha Swayamdipta

arXiv 2025 (submitted for peer review)

Abstract

Language models can be sampled multiple times to access the distribution underlying their responses, but existing methods cannot efficiently synthesize rich epistemic signals across different long-form responses. We introduce Consensus Graphs (ConGrs), a flexible DAG-based data structure that represents shared information, as well as semantic variation, in a set of sampled LM responses to the same prompt. We construct ConGrs using a lightweight lexical sequence alignment algorithm from bioinformatics, supplemented by the targeted usage of a secondary LM judge. Further, we design task-dependent decoding methods to synthesize a single, final response from our ConGr data structure. Our experiments show that synthesizing responses from ConGrs improves factual precision on two biography generation tasks by up to 31% over an average response and reduces reliance on LM judges by more than 80% compared to other methods. We also use ConGrs for three refusal-based tasks requiring abstention on unanswerable queries and find that abstention rate is increased by up to 56%. We apply our approach to the MATH and AIME reasoning tasks and find an improvement over self-verification and majority vote baselines by up to 6 points of accuracy. We show that ConGrs provide a flexible method for capturing variation in LM responses and using the epistemic signals provided by response variation to synthesize more effective responses.
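The core idea is easiest to see in miniature: sample several responses, align them lexically, and keep the material they agree on. Below is a minimal illustrative sketch along those lines, not the method from the paper: it substitutes Python's difflib for the bioinformatics-style alignment algorithm, collapses the consensus DAG into a simple majority filter over one reference response, and uses made-up example responses and a made-up threshold.

```python
# Minimal sketch of the consensus idea (assumptions: difflib stands in for the
# bioinformatics-style aligner; a majority filter stands in for the ConGr DAG
# and its task-dependent decoding; responses and threshold are invented).
from collections import Counter
from difflib import SequenceMatcher


def consensus_sketch(responses: list[str], min_support: float = 0.5) -> str:
    """Keep tokens of one reference response that enough other samples share."""
    tokenized = [r.split() for r in responses]
    reference, others = tokenized[0], tokenized[1:]

    # Count, for each reference token, how many other samples align to it.
    support = Counter()
    for other in others:
        matcher = SequenceMatcher(a=reference, b=other, autojunk=False)
        for block in matcher.get_matching_blocks():
            for i in range(block.a, block.a + block.size):
                support[i] += 1

    threshold = min_support * len(others)
    kept = [tok for i, tok in enumerate(reference) if support[i] >= threshold]
    return " ".join(kept)


samples = [
    "Ada Lovelace was an English mathematician born in 1815.",
    "Ada Lovelace was an English mathematician and writer.",
    "Ada Lovelace was an English writer and mathematician.",
]
# Details the samples disagree on ("born in 1815") drop out; the shared claim remains.
print(consensus_sketch(samples))
```

Unlike this toy filter, a ConGr keeps the variation itself as branches in the DAG, which is what the task-dependent decoding methods exploit for abstention and reasoning tasks.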

How Reliable is Language Model Micro-Benchmarking?

Gregory Yauney, Shahzaib Saqib Warraich, Swabha Swayamdipta

arXiv 2025 (submitted for peer review)

Abstract

Micro-benchmarking offers a solution to the often prohibitive time and cost of language model development: evaluate on a very small subset of existing benchmarks. Can these micro-benchmarks, however, rank models as consistently as the full benchmarks they replace? And can they rank models more consistently than selecting a random subset of data points? In many scenarios, we find that the answer is no. We introduce a meta-evaluation measure for micro-benchmarking which investigates how well a micro-benchmark can rank two models as a function of their performance difference on the full benchmark. This approach can determine which model pairs can be ranked correctly by a micro-benchmark, allowing for a finer-grained analysis of the trade-off between micro-benchmark size and reliability. Prior work has suggested selecting as few as 10 examples; we find that no micro-benchmarking method can consistently rank model pairs 3.5 points of accuracy apart on MMLU-Pro or 4 points apart on BIG-bench Hard. In order to consistently rank model pairs with relatively similar performances, we show that often as many as 250 examples must be selected, at which point random sampling is competitive with existing micro-benchmarking methods. When comparing only 8B instruction-tuned models on MMLU-Pro micro-benchmarks with 25 examples, we find that more than half of pairwise comparisons are not likely to be preserved. Our work provides actionable guidance for both micro-benchmark users and developers in navigating the trade-off between evaluation efficiency and reliability.
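The meta-evaluation question is concrete enough to simulate: given two models' per-example scores on a full benchmark, how often does a random micro-benchmark of size k preserve their ranking? Here is a rough illustrative sketch, not the paper's code; the models, accuracies, and subset sizes below are synthetic.

```python
# Illustrative sketch only: estimates how often a random micro-benchmark of
# size k preserves the ranking of two models, given their per-example 0/1
# correctness on the full benchmark (synthetic data, not from the paper).
import numpy as np


def rank_agreement(scores_a: np.ndarray, scores_b: np.ndarray,
                   subset_size: int, n_trials: int = 1000,
                   seed: int = 0) -> float:
    """Fraction of random subsets on which the full-benchmark winner stays ahead."""
    rng = np.random.default_rng(seed)
    full_sign = np.sign(scores_a.mean() - scores_b.mean())
    n = len(scores_a)
    agree = 0
    for _ in range(n_trials):
        idx = rng.choice(n, size=subset_size, replace=False)
        sub_sign = np.sign(scores_a[idx].mean() - scores_b[idx].mean())
        agree += int(sub_sign == full_sign)
    return agree / n_trials


# Toy example: two synthetic models about 3 points apart on a 2000-example benchmark.
rng = np.random.default_rng(1)
model_a = (rng.random(2000) < 0.68).astype(float)
model_b = (rng.random(2000) < 0.65).astype(float)
for k in (10, 25, 100, 250):
    print(k, rank_agreement(model_a, model_b, subset_size=k))
```

Even this crude simulation illustrates why very small subsets struggle to consistently separate models that are only a few points apart on the full benchmark.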