AI Researcher · Mechanistic Interpretability · Geometric Deep Learning · AI Safety & Alignment
I build differential-geometric and information-geometric frameworks to understand how LLMs encode, propagate, and transform beliefs across layers, and how alignment, fine-tuning, and cultural model merging alter that internal geometry. Published at NeurIPS 2025 and Springer Nature ICOMP '24.
github.com/pps121 · torsional-belief-vector-field · pps121.github.io
I am an AI researcher and lecturer with over 12 years of combined experience in industry (Infosys, BirlaSoft/J&J, Wipro, IIT Kanpur) and academia (BITS Pilani M.Tech — 9.08 GPA, top 5%), now focused full-time on foundational research in mechanistic interpretability, AI alignment, and geometric deep learning.
My research programme develops information-geometric frameworks for tracing how LLMs encode, propagate, and transform beliefs across layers, and how alignment, fine-tuning, cultural model merging, and distillation alter that internal geometry. I pursue three active research threads: (1) AI Safety & Mechanistic Interpretability via Torsional Belief Vector Fields; (2) Cross-cultural & Multilingual AI via Semantic Helix / nDNA; and (3) Privacy-Preserving Federated Learning for healthcare (published at Springer Nature ICOMP '24).
My long-term vision: AI systems aligned not only behaviourally but geometrically — with transparent, auditable internal representations that can be verified, not just tested. I am particularly motivated by applications where responsible, interpretable AI can have measurable global impact: global health, diagnostic AI, equitable NLP, and human-centred AI systems for underserved communities.
I currently serve as Lecturer in Computer Science at Nalhati Government Polytechnic College (West Bengal, India), where I teach ML, Deep Learning, IoT, and Python, and supervise 50+ students on AI/NLP research projects. I have been selected as a NeurIPS 2025 Reviewer and an AWS AI & ML Scholar, and have received full scholarships to multiple premier summer schools (Duke ML, Cohere, Armenian LLM, University of Chicago DSI).
Unifies fine-tuning, alignment, distillation, and merging as measurable deformations of the same depth-wise semantic flow, via spectral curvature κℓ and thermodynamic length ℒℓ (epistemic effort across layers). Investigates epistemic inheritance in merged LLMs ("neural offspring") using Fisher-Rao geometry; emergent cultural nDNA is measured via spectral-curvature deviation Δκℓ and thermodynamic-length divergence Δℒℓ. Cultures studied: African, Latin American, South Asian, East Asian, Arabic, Indigenous, European, Pacific Islander.
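To make the depth-wise picture concrete, here is a minimal sketch of how such layer-wise diagnostics might be computed from a model's hidden states. The discrete curvature and cumulative path length below are illustrative geometric proxies only, not the actual definitions of κℓ and ℒℓ (which involve spectral and Fisher-Rao quantities); the function name and shapes are assumptions for this example.

```python
import numpy as np

def layerwise_curvature_and_length(hidden_states):
    """Discrete curvature and cumulative path length of the depth-wise
    trajectory of layer representations.

    hidden_states: array-like of shape (L, d), one representation per layer.
    Returns (kappa, length): kappa has L-2 entries, length has L-1 entries.
    These are toy proxies, not the paper's spectral / Fisher-Rao measures.
    """
    H = np.asarray(hidden_states, dtype=float)   # (L, d)
    deltas = np.diff(H, axis=0)                  # first differences, (L-1, d)
    step = np.linalg.norm(deltas, axis=1)        # per-layer step size
    length = np.cumsum(step)                     # path length up to layer l
    second = np.diff(deltas, axis=0)             # second differences, (L-2, d)
    # discrete curvature: ||second difference|| / ||first difference||^2
    kappa = np.linalg.norm(second, axis=1) / np.maximum(step[:-1] ** 2, 1e-12)
    return kappa, length

# toy example: a straight-line trajectory through depth has zero curvature
line = np.outer(np.arange(5), np.ones(3))
kappa, length = layerwise_curvature_and_length(line)
```

Under this sketch, a fine-tuned or merged model would be compared to its parent by the deviation of these per-layer profiles, in the spirit of Δκℓ and Δℒℓ.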
A cloud-based Collaborative Federated Learning (CFL) system that separates datasets into public and private sets based on the removal of PHI/PII, enabling personalised GPT-like systems without centralising sensitive data. It enables AI for healthcare at scale while preserving patient privacy, a critical requirement for responsible, equitable AI deployment globally.
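The public/private split can be sketched as a simple partition step, assuming some PHI/PII detector is available. The regex patterns and function names below are hypothetical stand-ins; a real deployment would use a vetted de-identification pipeline (e.g. the HIPAA Safe Harbor identifier list), not ad-hoc regexes.

```python
import re

# Hypothetical PHI/PII patterns for illustration only.
PHI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # SSN-like identifier
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
    re.compile(r"\b\d{10}\b"),                   # phone-number-like digits
]

def contains_phi(text):
    """Return True if any (illustrative) PHI/PII pattern matches."""
    return any(p.search(text) for p in PHI_PATTERNS)

def partition_records(records):
    """Split records into a shareable public set and a private set that
    never leaves the local site, as in the CFL design."""
    public, private = [], []
    for rec in records:
        (private if contains_phi(rec) else public).append(rec)
    return public, private

public, private = partition_records([
    "patient reports mild fever",
    "contact: jane.doe@example.com",
])
```

Only the public partition would be shared for federated pretraining; the private partition stays local and contributes via on-site model updates.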
Investigates whether formal robustness certificates for LLMs can be derived under data contamination, label noise, and adversarial attacks, rather than from input-perturbation bounds alone. DPO-aligned models show compressed Fisher-norm spectra versus their SFT counterparts, suggesting alignment induces a geometric contraction that doubles as a robustness mechanism. Framework: Lipschitz robustness bounds under which models retain near-natural robustness with up to 40% data contamination. Models: ViT, MedSigLIP, CLIP.
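The contamination certificates themselves are not reproduced here, but the Lipschitz ingredient can be illustrated: for a feed-forward network with 1-Lipschitz activations (e.g. ReLU), the global ℓ2 Lipschitz constant is upper-bounded by the product of the layers' spectral norms. This is a standard (loose) bound, shown as a sketch rather than the project's actual certificate.

```python
import numpy as np

def lipschitz_upper_bound(weight_matrices):
    """Product of layer spectral norms: a loose upper bound on the l2
    Lipschitz constant of an MLP with 1-Lipschitz activations."""
    bound = 1.0
    for W in weight_matrices:
        bound *= np.linalg.norm(W, ord=2)  # largest singular value of W
    return bound

# toy two-layer network with random weights
rng = np.random.default_rng(0)
layers = [rng.standard_normal((8, 16)), rng.standard_normal((4, 8))]
L = lipschitz_upper_bound(layers)
```

A smaller bound certifies that output logits move less under bounded representation shifts, which is one way geometric contraction (e.g. the compressed Fisher-norm spectra above) could translate into robustness.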
I am actively seeking fully-funded PhD positions starting in 2026 at research universities with strong programmes in AI safety, mechanistic interpretability, geometric/theoretical ML, or responsible AI for global impact.
I am also open to research fellowships, collaborations, and discussions with faculty and research scientists working on interpretable, responsible AI — particularly in applications relevant to global health, education, or equitable systems.
If you are a professor, research scientist, or programme director interested in my work, please reach out directly. I respond within 24 hours.