Biography
Welcome to Zhen Xiang’s homepage!
I am an assistant professor in the School of Computing at the University of Georgia. Before that, I was a postdoc affiliated with the Secure Learning Lab (SLL) led by Prof. Bo Li in the Department of Computer Science at the University of Illinois Urbana-Champaign. I received my B.E. in Electronics and Computer Engineering from the Hong Kong University of Science and Technology in 2014, my M.S. in Electrical Engineering from the University of Pennsylvania, and my Ph.D. in Electrical Engineering from Pennsylvania State University in 2022, supervised by Prof. David J. Miller and Prof. George Kesidis.
I work on trustworthy machine learning, large foundation models, and AI agents. My recent research primarily focuses on AI agents powered by large foundation models, encompassing:
- The deployment of AI agents in healthcare, autonomy, education, and scientific tasks.
- The safety and security of AI agents in high-stakes applications.
- The creation of guardrail agents tackling safety, privacy, and fairness issues within AI applications.
I am looking for self-motivated Ph.D. students for Fall 2025. If you are interested in working with me, please feel free to email me.
News
- 10/2024: One paper accepted by Neurocomputing.
- 9/2024: Two papers accepted by NeurIPS 2024.
- 8/2024: I am starting a new journey as an assistant professor at the University of Georgia!
- 6/2024: One paper accepted by IROS 2024 (oral).
- 5/2024: Our proposal for “The LLM and Agent Safety Competition 2024” has been accepted to the NeurIPS 2024 competition track! The website will be ready soon.
- 5/2024: Our paper ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs is accepted by ACL 2024! Congrats to Fengqing!
- 1/2024: Our paper BIC-based Mixture Model Defense against Data Poisoning Attacks on Classifiers: A Comprehensive Study is accepted by TKDE!
- 1/2024: Our paper BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models is accepted by ICLR 2024!
- 11/2023: I will be serving as an Associate Editor for IEEE TCSVT from 1/2024 to 12/2025.
- 9/2023: Our paper CBD: A Certified Backdoor Detector Based on Local Dominant Probability is accepted by NeurIPS 2023!
- 7/2023: Our paper MMBD: Post-Training Detection of Backdoor Attacks with Arbitrary Backdoor Pattern Types Using a Maximum Margin Statistic is accepted by IEEE S&P 2024!
- 7/2023: We are organizing The Trojan Detection Challenge 2023 (LLM Edition).
- 5/2023: Our paper UMD: Unsupervised Model Detection for X2X Backdoor Attacks is accepted by ICML 2023!
- 4/2023: Our book Adversarial Learning and Secure AI has been accepted by Cambridge University Press and will be released in December 2023.
- 12/2022: We are organizing the first IEEE Trojan Removal Competition.