
Angelica is an independent AI safety researcher with a particular interest in sociotechnical AI safety. She holds a Master's degree in Data Science from the University of Sydney.
Over the past few months, she has participated in various AI safety programmes and fellowships, including the AI Safety, Ethics and Society (AISES) course by the Center for AI Safety, EleutherAI Summer of AI Research (SOAR), the Supervised Program for Alignment Research (SPAR), and the Human-Aligned AI Summer School.
She is currently a research fellow in the Mentorship for Alignment Research Students (MARS) programme, where she is working on the "AI Governance Mapping Project" by MIT FutureTech. She is also a research fellow at the Future Impact Group (FIG), where she works as an affiliate of the MINT Lab with Seth Lazar on evaluating the normative competence of LLM agents, and with Richard Mallah on researching adversarially credible AI evaluation standards.
Professionally, she has been a data scientist for over five years, working across consulting, insurance, banking and fintech. She is also involved with Women in AI, recently served as an industry partner for the University of Technology Sydney's (UTS) Transdisciplinary School on its Master of Data Science capstone programme, and has worked as a Review Editor at the AI Ethics Journal.