Publications

Here are my research publications and contributions to the academic community. My work spans AI Security, Machine Learning Systems, LLM Security, and Applied Deep Learning.


Published & Accepted

SLMs as Compiler Experts: Auto-Parallelization for Heterogeneous Systems

NeurIPS 2025 Workshop on Machine Learning for Systems

Authors: Prathamesh Devadiga, et al.

Abstract: We demonstrate that Small Language Models (SLMs) can serve as effective compiler experts for auto-parallelization in heterogeneous systems. Our approach achieves a 6.81x average speedup (43.25x peak) over LLVM Polly, TVM, and Triton baselines, showcasing the potential of SLMs for compiler optimization tasks.

Keywords: Compiler Optimization, Small Language Models, Auto-Parallelization, Heterogeneous Systems


PyraFuseNet: Dual-Path Network for Resource-Constrained Vision

ICIAI, NTU Singapore

Authors: Prathamesh Devadiga, et al.

Abstract: We present PyraFuseNet, a dual-path network architecture designed for resource-constrained vision tasks. The framework achieves 55% computational reduction compared to ResNet-18 while maintaining competitive performance.

Keywords: Computer Vision, Neural Architecture Search, Resource-Constrained ML, Efficient Networks


PDF Malware Detection with Adversarial Robustness

Applied Soft Computing (Q1 Journal)

Authors: Prathamesh Devadiga, et al.

Abstract: We present the KASPER framework for PDF malware detection. The system achieves 99.5% detection accuracy and maintains it under FGSM and PGD adversarial attacks, demonstrating production-ready resilience through custom malware injection pipelines and comprehensive evaluation.

Keywords: Malware Detection, Deep Learning, Adversarial Robustness, PDF Security

Institution: Indian Institute of Technology, Indore
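
The FGSM and PGD attacks mentioned in the abstract perturb inputs along the sign of the loss gradient. A minimal NumPy sketch of the single-step FGSM variant, using a toy logistic detector (the weights, features, and gradient formula here are purely illustrative, not part of the KASPER pipeline):

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """Fast Gradient Sign Method: step each feature by epsilon
    in the direction that increases the loss."""
    return x + epsilon * np.sign(grad)

def input_gradient(x, w, y):
    """Gradient of binary cross-entropy w.r.t. the input of a
    logistic detector: (sigmoid(w.x) - y) * w."""
    p = 1.0 / (1.0 + np.exp(-np.dot(w, x)))
    return (p - y) * w

w = np.array([1.0, -2.0, 0.5])      # toy detector weights
x = np.array([0.2, 0.1, -0.3])      # toy feature vector
grad = input_gradient(x, w, y=1.0)  # gradient for true label "malicious"
x_adv = fgsm_perturb(x, grad, epsilon=0.1)
```

PGD is the iterated version of the same step, with the perturbation projected back into an epsilon-ball after each iteration; robustness is then measured as accuracy on `x_adv` rather than `x`.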


Under Review

Can Jailbreaks Force Regurgitation? Connecting Safety Bypasses to Privacy Leakage in Large Language Models

Under review at ICLR 2026 Main Conference

Authors: Prathamesh Devadiga, Shawn Shan, et al.

Abstract: We investigate jailbreak-to-data-extraction vulnerabilities in large language models, demonstrating 100% verbatim regurgitation of memorized training data across 9 model architectures. Our work reveals an “architecture over size” paradox where smaller models leak more sensitive data than larger ones, challenging conventional assumptions about model scale and security. We also identify critical privacy-alignment tradeoffs where safety training inadvertently creates data extraction vulnerabilities.

Keywords: LLM Security, Privacy, Data Extraction, Jailbreaking, Alignment Safety

Institution: Dartmouth College


Making Large Language Models Speak Tulu: Structured In-Context Learning for an Extremely Low-Resource Language

Under review at ACL 2025

Authors: Prathamesh Devadiga, P. Chopra

Abstract: We address the challenge of adapting LLMs to extremely low-resource languages, focusing on Tulu, for which available text amounts to roughly 0.001% of a typical training corpus. Our framework uses structured in-context learning to enable effective language modeling in resource-constrained settings.

Keywords: Low-Resource NLP, Multilingual Models, In-Context Learning, Indic Languages


Can LLMs Learn Tulu? Teaching Low-Resource Languages Through Hard Constraint Prompting

Under review at EACL 2025 LoResLM Workshop

Authors: Prathamesh Devadiga, et al.

Abstract: We develop a novel framework using hard negative constraints to reduce catastrophic language leakage from 80% to 5% in low-resource language modeling. Our approach demonstrates that explicit prohibitions outperform positive instructions for maintaining language integrity in multilingual models.

Keywords: Low-Resource NLP, Constraint-Based Learning, Multilingual Models, Language Preservation

Institution: Lossfunk
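
The contrast between positive instructions and hard negative constraints can be illustrated with a toy prompt builder. The function name and wording below are hypothetical stand-ins, not the paper's actual templates:

```python
def build_prompt(source_text, constraint_style="hard_negative"):
    """Build a generation prompt either with a plain positive
    instruction or with explicit hard negative constraints."""
    positive = "Respond in Tulu."
    hard_negative = (
        "Respond ONLY in Tulu. Do NOT use English, Kannada, or Hindi "
        "words anywhere in your answer. If you cannot express an idea "
        "in Tulu, omit it rather than switching languages."
    )
    instruction = hard_negative if constraint_style == "hard_negative" else positive
    return f"{instruction}\n\nText: {source_text}\nAnswer:"
```

The paper's finding is that prompts of the second kind, which explicitly prohibit fallback languages, keep the model in-language far more reliably than the positive instruction alone.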


A Low-Latency Framework for Real-Time Jailbreak Prevention in LLMs

Under review at NDSS 2026 Workshop on LLM Assisted Security and Trust Exploration (LAST-X)

Authors: Prathamesh Devadiga, et al.

Abstract: We present a low-latency framework for real-time jailbreak prevention in large language models, achieving sub-5ms detection latency. The system demonstrates robust performance against GCG, AutoDAN, and PAIR attacks, enabling practical deployment in production environments.

Keywords: LLM Security, Jailbreak Prevention, Real-Time Defense, Low-Latency Systems
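
Sub-5ms claims of this kind are typically backed by wall-clock measurements over many runs. A hedged sketch of such a harness, with a toy keyword matcher standing in for the real detector (names and thresholds are illustrative, not the paper's):

```python
import time

def measure_latency_ms(detector, prompt, runs=100):
    """Median wall-clock latency of a detector call, in milliseconds."""
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        detector(prompt)
        samples.append((time.perf_counter() - t0) * 1000.0)
    samples.sort()
    return samples[len(samples) // 2]

def toy_detector(prompt):
    """Trivial keyword-based stand-in for a jailbreak detector."""
    return any(k in prompt.lower() for k in ("ignore previous", "jailbreak"))
```

`time.perf_counter` is used rather than `time.time` because it is monotonic and high-resolution; reporting the median (or a high percentile) guards against one-off scheduling noise.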


GUARDIAN: Multi-Agent Defense System for Large Language Model Security

Under review at ICIAI, Waseda University

Authors: Prathamesh Devadiga, et al.

Abstract: We introduce GUARDIAN, a multi-agent defense system for large language model security. The architecture achieves 99.5% prompt injection and jailbreak detection accuracy through coordinated multi-agent analysis and response mechanisms.

Keywords: LLM Security, Multi-Agent Systems, Prompt Injection, Jailbreak Detection


Preprints

SAMVAD: Multi-Agent Framework for Modeling Judicial Reasoning in Indian Jurisprudence

arXiv:2509.03793

Authors: Prathamesh Devadiga, et al.

Abstract: We present SAMVAD, a multi-agent framework designed to model judicial reasoning in Indian jurisprudence. The system leverages multi-agent coordination to capture the nuanced decision-making processes in legal contexts.

Keywords: Legal AI, Multi-Agent Systems, Judicial Reasoning, Indian Law


RegimeNAS: Regime-Aware Neural Architecture Search for Financial Trading

arXiv:2508.11338

Authors: Prathamesh Devadiga, et al.

Abstract: We introduce RegimeNAS, a regime-aware neural architecture search framework for financial trading. The system adapts to different market regimes, optimizing architecture selection based on market conditions.

Keywords: Neural Architecture Search, Financial ML, Regime Detection, Trading Systems


MorphNAS: Differentiable Neural Architecture Search for Multilingual NER

arXiv:2508.15836

Authors: Prathamesh Devadiga, et al.

Abstract: We present MorphNAS, a differentiable neural architecture search framework for multilingual named entity recognition. The system optimizes architecture selection across multiple languages, improving performance and efficiency.

Keywords: Neural Architecture Search, Multilingual NLP, Named Entity Recognition, Differentiable Search


Research Areas

My research interests include:

  • Machine Learning Security: Adversarial robustness, LLM jailbreaking and data extraction, constraint-based defenses
  • Production AI Systems: Low-latency inference, distributed training, efficient serving infrastructure
  • Alignment Safety: Privacy-alignment tradeoffs, safety training vulnerabilities, defense mechanisms
  • Low-Resource Language Modeling: Extremely low-resource languages, structured learning, language preservation
  • Compiler Optimization: Auto-parallelization, heterogeneous systems, SLM-based optimization
  • Applied Deep Learning: Real-world applications in security, legal tech, and financial systems

Research Collaborations

I actively collaborate with researchers and institutions including:

  • Dartmouth College - LLM Security, Privacy, and Alignment Safety (Advisor: Prof. Shawn Shan)
  • Lossfunk - Low-Resource Language Modeling and Indic LLMs
  • Indian Institute of Technology, Indore - PDF Malware Detection and Adversarial ML (Advisor: Prof. Nagendra Kumar)
  • University of California, Santa Cruz - Vector Embeddings and ANN Algorithms (Google Summer of Code 2025)
  • Ādhāra AI Labs - Independent research laboratory focused on Generative AI, LLMs, and RAG systems

For inquiries about my research, collaboration opportunities, or access to preprints, please reach out via contact or email at devadigapratham8@gmail.com.