Princeton AI Alignment and Safety Seminar
Safety and alignment are important to ensure AI systems operate as intended without causing harm, to prevent their misuse for malicious purposes, to align AI actions with ethical standards, and to build public trust in AI technology. Topics of this seminar include, but are not limited to:
- Safety: Security, Privacy, Copyright, Misinformation, Legal Compliance …
- Alignment: Fine-tuning, Instruction-tuning, Reinforcement learning (with human feedback), Prompt tuning, Human oversight …
The Princeton AI Alignment and Safety Seminar (PA2SS) serves as a collaborative platform for researchers to present their findings, participate in insightful conversations, and identify collaboration opportunities for novel solutions to these emerging alignment and safety challenges.
Join our mailing list to get notified of speakers and livestream links!
Schedule
Please find previous events here.
| 02/02/2024 [Title] [Speaker] | 
|---|
| Abstract: | 
| 01/19/2024 [Title] [Speaker] | 
|---|
| Abstract: | 
Organizers
Faculty organizers
(alphabetical order)
| Danqi Chen  (Princeton)  |  Elad Hazan  (Princeton)  |  Peter Henderson  (Princeton)  |  Kai Li  (Princeton)  |  Prateek Mittal  (Princeton)  |  Dawn Song  (UC Berkeley)  |  
Student organizers
- Guest host and lead organizer: Yangsibo Huang
- Team members (alphabetical order): Lucy He, Kaixuan Huang, Xiangyu Qi, Vikash Sehwag, Mengzhou Xia, Tinghao Xie, Yi Zeng