Adversarial Machine Learning, Security, and Trustworthy AI

In the old days, hackers modified code. Now, hackers can modify DATA. Such an adversary can force machine learning systems to make mistakes. We study why this happens and how to defend against it. Our research spans test-time attacks, training-data poisoning, and other subtle forms of adversarial manipulation. This page collects our research on the theory, algorithms, and applications of adversarial machine learning, security, and trustworthy AI.
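As a concrete illustration of a test-time attack, the sketch below perturbs an input in the direction of the loss gradient, in the spirit of the fast gradient sign method. It assumes a generic differentiable PyTorch classifier named model; the names and the epsilon value are illustrative, and this is not the specific method of any paper listed on this page.

    # Minimal sketch of a test-time (evasion) attack in the style of FGSM.
    # Assumes `model` is a differentiable PyTorch classifier; names are illustrative.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, eps=0.03):
        """Perturb input x by eps in the direction that increases the loss on label y."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        # One signed-gradient step, then clamp back to the valid pixel range [0, 1].
        x_adv = x_adv + eps * x_adv.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

Even a small eps, a few percent of the pixel range, is often enough to change the prediction of an undefended image classifier while leaving the image visually unchanged.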

Talks

Publications

In the media

Back to Professor Zhu's home page