I am a Computer Science senior at the Indian Institute of Technology, Jodhpur. I love AI research and Math. My main areas of interest are Trustworthy ML and Mechanistic Interpretability in multimodal systems. When it comes to research, my primary agenda is to achieve a trifecta of privacy, safety, and trustworthiness. I am extremely grateful to be jointly mentored by Dr. Mayank Vatsa, Dr. Richa Singh, and Dr. Shivang Agarwal at the Image Analytics and Biometrics (IAB) Lab.
Throughout my undergraduate studies, I've actively worked on side-quests where AI research can make a significant contribution. This has given me the privilege of exploring projects at some really cool places:
Thoughtworks AI Labs (TAILS), under the mentorship of Shayan Mohanty and team, on Fine-grained Incompleteness Evaluation of Summaries from Language Models.
Dr. Tiago Bresolin, at the Center for Digital Agriculture (UIUC), on Foundation Models for Livestock Images.
Dr. Deepak Mishra at IIT Jodhpur, on Deep Learning-based particle physics simulations (HEP) with CERN.
I'm a strong believer in Malcolm Gladwell's 10,000-hour rule of mastery, and I am forever grateful to all my mentors who have shaped my research.
"Whereof one cannot
speak, thereof one must be silent." ~ Ludwig Wittgenstein
News
September 2025: Paper accepted to PRICAI-PKAW 2025! 🎉
August 2025: Paper accepted to the U&ME Workshop at ICCV! 🎉
August 2025: TAing CSL2010 - Introduction to Machine Learning, F25. Instructor - Dr. Rajendra Nagar
Summer 2025: Research Intern @ Thoughtworks AI Labs.
January 2025: TAing CSL1020 - Introduction to Computer Science, S25. Instructor - Dr. Mayank Vatsa
November 2024: Joining the Image Analytics and Biometrics Lab as an undergraduate RA, working on "Machine Unlearning for Multimodal Systems". PI - Dr. Mayank Vatsa, Co-PI - Dr. Shivang Agarwal
Fine-grained Incompleteness Evaluation of Summaries from Language Models
Evaluating the incompleteness of machine-generated summaries
remains a critical challenge in natural language generation. In
domains such as healthcare, education, and policy, summaries
that omit or distort key content can appear fluent yet undermine
reliability and trust. We introduce a Lie Algebra-based Semantic
Flow framework that treats summarization as a geometric
transformation in embedding space, where incompleteness
manifests as segments with low semantic flow magnitudes. Unlike
prior metrics that collapse evaluation into a single similarity
score, our approach provides interpretable, segment-level
signals through a dynamic mean-scaled thresholding mechanism,
requiring no supervision. We evaluate our method on the UniSumEval and SIGHT benchmarks, showing an average improvement of +10.6 macro-F1 over existing methods. Theoretical analysis establishes
rotation invariance, bounded flow magnitudes, and a
severity-coverage Pareto frontier that explains observed
trade-offs. Qualitative case studies further demonstrate that our framework identifies varieties of incompleteness that are often missed by existing methods.
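To make the segment-level idea concrete, here is a minimal Python sketch of the thresholding step, under loose assumptions: I treat the "semantic flow magnitude" of a source segment as the strength of its projection onto the summary embedding, and flag segments that fall below a dynamic mean-scaled threshold. The function name, the projection-based flow, and the scale parameter are all illustrative stand-ins, not the paper's actual formulation.

import numpy as np

def flag_incomplete_segments(source_embs, summary_emb, scale=1.0):
    """Flag source segments whose semantic flow into the summary is weak.

    source_embs: (n_segments, d) array of source-segment embeddings.
    summary_emb: (d,) embedding of the generated summary.
    A segment is a candidate omission when its flow magnitude falls
    below a dynamic, mean-scaled threshold (no supervision needed).
    """
    # Illustrative "flow magnitude": how strongly each segment projects
    # onto the summary direction (higher = better covered).
    unit = summary_emb / np.linalg.norm(summary_emb)
    flow = np.abs(source_embs @ unit)

    # Dynamic mean-scaled threshold: segments well below the average
    # flow are treated as low-coverage (potentially omitted content).
    threshold = scale * flow.mean()
    return flow < threshold

# Toy example: 4 source segments, 8-dim embeddings; the summary
# "covers" only the first three segments.
rng = np.random.default_rng(0)
src = rng.normal(size=(4, 8))
summ = src[:3].mean(axis=0)
print(flag_incomplete_segments(src, summ, scale=0.8))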
Bias-Aware Machine Unlearning: Towards Fairer Vision Models via Controllable Forgetting
Deep neural networks often rely on spurious correlations in
training data, resulting in biased or unfair predictions,
particularly in safety-critical applications. While conventional
bias mitigation methods typically require retraining from
scratch or redesigning the data pipeline, recent advances in
machine unlearning offer a promising alternative for post-hoc
model correction. In this work, we explore the trifecta of efficiency, fairness, and model utility post-unlearning via
Bias-Aware Machine Unlearning, a paradigm that
selectively forgets biased samples or feature representations to
address various forms of bias in vision models.
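As a rough illustration of the selective-forgetting idea, the following PyTorch sketch ascends the loss on a "forget" batch of biased samples while descending on a "retain" batch to preserve utility. This is a common recipe in the machine unlearning literature, not necessarily the exact method of this work; every name, weight, and shape here is hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

def bias_aware_unlearning_step(model, optimizer, forget_batch, retain_batch,
                               forget_weight=1.0, retain_weight=1.0):
    """One illustrative unlearning step: push the model away from biased
    (forget) samples via gradient ascent while preserving accuracy on a
    retain set."""
    xf, yf = forget_batch
    xr, yr = retain_batch
    optimizer.zero_grad()
    # Ascend on the forget set (negated loss) to erase the spurious signal.
    loss_forget = -F.cross_entropy(model(xf), yf)
    # Descend on the retain set to keep overall model utility.
    loss_retain = F.cross_entropy(model(xr), yr)
    loss = forget_weight * loss_forget + retain_weight * loss_retain
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage: a linear classifier over 16-dim features, 3 classes.
model = nn.Linear(16, 3)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
forget = (torch.randn(8, 16), torch.randint(0, 3, (8,)))
retain = (torch.randn(8, 16), torch.randint(0, 3, (8,)))
print(bias_aware_unlearning_step(model, opt, forget, retain))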