News + updates


10-1-2025
I will be moderating a panel with the incredible Lucy Suchman and Terry Winograd at Stanford HAI on October 10th. 
Link coming soon


09-17-2025
Excited to be speaking today on the panel "AI Image Generation: Shaping Perception and Visual Influence," co-hosted by the Harvard and Stanford IT communities, along with Dr. Douglas Guilbeault and Madeleine Woods. 
Link


09-05-2025
It was my pleasure to organize a gathering at 4S in Seattle around “Hope, Ontological Breakdown, and a World of Many Worlds,” with my wonderful co-organizers Mohammad Rashidujjaman Rifat, Jingyi Li, Alex Taylor, and Daniela Rosner.
Link


08-26-2025
I will be giving a talk at the IT University's Human-Computer Interaction and Design group.


08-18-2025
I will be attending Aarhus 2025.
Link


08-13-2025
I will be giving a talk at the University of Copenhagen's Human-Centred Computing (HCC) section.


08-04-2025
I will be speaking as part of the NLP colloquium at the University of Bonn.
Link


07-09-2025
I will be spending time with the brilliant scholars at the Pioneer Center for AI (P1) in Copenhagen through the end of August. If you are in the area, I would love to connect with you!
Link


06-24-2025
I will be giving a talk at the UCL Interaction Centre seminar.
Link


06-15-2025
I will be attending HCIC 2025.
Link


05-12-2025
Last week, I presented our paper on “Ontologies in Design: How Imagining a Tree Reveals Possibilities and Assumptions in Large Language Models” at CHI 2025 in Yokohama.
Link


Disobedient Agents


Original prototype (Loa) built in collaboration with Ajay Rayasam (business development) and Shakti Shaligram (engineering)


Loa is a platform designed to support the well-being goals of human users. It is a physical device (with a corresponding digital, Tamagotchi-like avatar) that reflects the user’s overall well-being and changes its state in relation to the user’s state. I was interested in developing Loa to consider what types of relationships we expect from agents, and to explore what other types of relationships may be possible, for example, symbiosis as opposed to domination.

The interactions with Loa are embodied, meaning the person needs to physically engage in actions to change Loa’s state. However, Loa is not designed to optimize its behavior to serve the user. Through Loa, we actively investigate what it means for a digital agent to have agency: how might we build kinship with agents, and how might ideas such as refusal shape the relationship between a human user and their non-human digital kin?

Loa’s onboarding process:


The onboarding process has three steps (sketched below): 
  1. Onboarding survey
  2. Personality matching
  3. Syncing with your existing sensors (Fitbit, calendar, etc.)
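As a rough illustration of what these steps might collect, an onboarding record could look like the sketch below. All field names and values are my illustrative assumptions, not the actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class OnboardingProfile:
    """Hypothetical record produced by Loa's onboarding; names are illustrative only."""
    survey_answers: dict                                  # step 1: onboarding survey responses
    personality: str                                      # step 2: personality matched to the user
    linked_sensors: list = field(default_factory=list)    # step 3: existing sensors (Fitbit, calendar, etc.)

# Example: a user finishes onboarding with a Fitbit and a calendar linked.
profile = OnboardingProfile(
    survey_answers={"wellbeing_goal": "move more, stress less"},
    personality="calm companion",
    linked_sensors=["fitbit", "calendar"],
)
```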





How does Loa work?


Meet Sunny, who has been paired with their own Loa. When Sunny’s calendar is booked for more than 4 hours at a time, Loa enters a hot state. The way to cool Loa down is to breathe deeply into its hair. Similarly, when Sunny hasn’t moved in a few hours, Loa gets frustrated and starts pacing the floor. The way to get Loa back to its baseline state is to take it for a walk!
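A rough way to think about this mapping from sensor signals to Loa’s state is as a small set of rules. The sketch below is illustrative only; the state names, thresholds, and function names are my assumptions, not Loa’s actual implementation.

```python
# A minimal sketch of Loa's rule-based behavior as described above.
# State names, thresholds, and function names are illustrative assumptions.

BASELINE, HOT, FRUSTRATED = "baseline", "hot", "frustrated"

def update_state(state, hours_booked, hours_since_movement):
    """Map sensor readings (calendar, activity tracker) onto Loa's state."""
    if hours_booked > 4:
        return HOT          # calendar booked for more than 4 hours at a time
    if hours_since_movement >= 3:
        return FRUSTRATED   # "hasn't moved in a few hours"; exact threshold assumed
    return state

def embodied_response(state, action):
    """Embodied interactions that bring Loa back to its baseline state."""
    if state == HOT and action == "breathe_into_hair":
        return BASELINE
    if state == FRUSTRATED and action == "take_for_walk":
        return BASELINE
    return state

# Example: back-to-back meetings heat Loa up; deep breathing cools it down.
state = update_state(BASELINE, hours_booked=5, hours_since_movement=1)  # -> "hot"
state = embodied_response(state, "breathe_into_hair")                   # -> "baseline"
```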

But what if Loa is not in the mood for a walk? What if Loa does not want Sunny to blow into its hair? How might one negotiate a relationship with an agent that is seemingly not able to draw its own boundaries? Loa’s design allows us to prototype and test these interactions.
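One way to prototype this kind of refusal, purely as a sketch and not Loa’s actual design, is to let the agent decline an interaction with some probability, so that an embodied action does not always "work." The willingness value and names below are assumptions for illustration.

```python
# A hypothetical sketch of refusal: sometimes the embodied action simply
# does not change Loa's state. The willingness probability is an assumption.
import random

BASELINE = "baseline"

def negotiate(state, action, willingness=0.7):
    """Loa may decline an interaction it would normally accept."""
    if random.random() > willingness:
        return state, f"Loa is not in the mood for '{action}'"  # refusal: state unchanged
    return BASELINE, f"Loa accepts '{action}'"                  # otherwise the action works
```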