I am a Ph.D. student at the University of Wisconsin-Madison in the Department of Computer Sciences. My research interests lie at the intersection of Machine Learning and Security. I work with Prof. Somesh Jha and Prof. Kassem Fawaz at MADS&P, and with Prof. Earlence Fernandes. I completed my undergraduate degree at the Indian Institute of Technology Delhi, majoring in Electrical Engineering with a minor in Computer Science.
Drop me an email if you want to chat!
|
Internship with the Android Security and Learning for Code teams. Worked on evaluating the program-semantics understanding of Large Language Models for Code.
|
|
Internship with the AWS Security Analytics and AI Research team. Worked on efficient training of Graph Neural Networks for intrusion detection on billion-node-scale graphs.
|
|
Functional Homotopy: Smoothing Discrete Optimization via Continuous Parameters for LLM Jailbreak Attacks
PAPER
ICLR 2025 (International Conference on Learning Representations)
Zi Wang*, Divyam Anshumaan*, Ashish Hooda, Yudong Chen, Somesh Jha
|
|
Fun-tuning: Characterizing the Vulnerability of Proprietary LLMs to Optimization-based Prompt Injection Attacks via the Fine-Tuning Interface
PAPER
IEEE S&P (IEEE Symposium on Security and Privacy)
Andrey Labunets, Nishit Pandya, Ashish Hooda, Xiaohan Fu, Earlence Fernandes
|
|
|
PolicyLR: An LLM Compiler for Logic-based Representation of Privacy Policies
PAPER
NeurIPS Workshop 2024 (Safe & Trustworthy Agents)
Ashish Hooda, Rishabh Khandelwal, Prasad Chalasani, Kassem Fawaz, Somesh Jha
|
|
Synthetic Counterfactual Faces
Preprint
Guruprasad V Ramesh, Harrison Rosenberg, Ashish Hooda, Kassem Fawaz
|
|
PRP: Propagating Universal Perturbations to Attack Large Language Model Guard-Rails
ACL 2024 (Association for Computational Linguistics)
Neal Mangaokar*, Ashish Hooda*, Jihye Choi, Shreyas Chandrashekaran, Kassem Fawaz, Somesh Jha, Atul Prakash
|
|
Do Large Code Models Understand Programming Concepts? Counterfactual Analysis for Code Predicates
ICML 2024 (International Conference on Machine Learning)
Ashish Hooda, Mihai Christodorescu, Miltos Allamanis, Aaron Wilson, Kassem Fawaz, Somesh Jha
|
|
Experimental Analyses of the Physical Surveillance Risks in Client-Side Content Scanning
NDSS 2024 (Network and Distributed System Security Symposium)
Ashish Hooda, Andrey Labunets, Tadayoshi Kohno, Earlence Fernandes
|
|
D4: Detection of Adversarial Diffusion Deepfakes Using Disjoint Ensembles
WACV 2024 (IEEE/CVF Winter Conference on Applications of Computer Vision)
|
|
Stateful Defenses for Machine Learning Models Are Not Yet Secure Against Black-box Attacks
CCS 2023 (ACM Conference on Computer and Communications Security)
|
|
Theoretically Principled Trade-off for Stateful Defenses against Query-Based Black-Box Attacks
ICML Workshop 2023 (AdvML-Frontiers'23)
|
|
SkillFence: A Systems Approach to Mitigating Voice-Based Confusion Attacks
IMWUT / UBICOMP 2022 (ACM Interactive, Mobile, Wearable and Ubiquitous Technologies)
Ashish Hooda, Matthew Wallace, Kushal Jhunjhunwalla, Earlence Fernandes, Kassem Fawaz
|
|
Invisible Perturbations: Physical Adversarial Examples Exploiting the Rolling Shutter Effect
CVPR 2021 (Conference on Computer Vision and Pattern Recognition)
Athena Sayles*, Ashish Hooda*, Mohit Gupta, Rahul Chatterjee, Earlence Fernandes
|
|
Do Large Code Models Understand Programming Concepts? Counterfactual Analysis for Code Predicates
JetBrains Research, Oct 2024
|
|
Is Attack Detection A Viable Defense For Adversarial Machine Learning?
Visa Research, Jun 2024
|
|
Do Code LLMs Understand Program Semantics?
Google Learning for Code Team, Nov 2023
|
|
Do Stateful Defenses Work Against Black-Box Attacks?
Google AI Red Team, Oct 2023
|
|
Deepfake Detection Against Adaptive Attackers
Google AI Red Team, Aug 2023
|