Responsible AI

Outside of my day job, I am passionate about responsible AI. In this area, I've conducted research and contributed to projects.

Projects

I've contributed to several projects promoting responsible AI. One was the IndieLabel Space on Hugging Face, "a collaboration between researchers at the Stanford HCI Group and ARVA (The AI Risk and Vulnerability Alliance)." Another was recognized as a finalist in the Stanford Institute for Human-centered Artificial Intelligence (HAI) AI Audit Challenge.

Research

In the area of responsible AI, I’ve published two pieces of research.

Comparing the Perceived Legitimacy of Content Moderation Processes: Contractors, Algorithms, Expert Panels, and Digital Juries investigates how the public perceives the legitimacy of algorithmic moderation, including AI, compared with other common forms of content moderation. It was published in CSCW 2022, and my co-authors and I were invited to write a policy brief for HAI.

In addition, I wrote an honors thesis at Stanford investigating how society can compel developers to incorporate transparency into ML systems during the development process. It was nominated for the Firestone Medal, which recognizes the top 10 percent of Stanford honors theses completed in a given year.