They call it HCI for a reason: finding the balance between humans and AI in UX

The next BostonCHI meeting is "They call it HCI for a reason: finding the balance between humans and AI in UX" on Thu, Jan 29 at 5:30 PM.

Register here

BostonCHI in partnership with NU Center for Design at CAMD presents a hybrid talk by Clare Cady

They call it HCI for a reason: finding the balance between humans and AI in UX
As AI tools flood the market, a disturbing trend is taking hold. Lack of accessibility, convoluted workflows, and poor onboarding are becoming the norm. In the race to launch first, companies are cutting corners and racking up massive UX debt, which must be repaid if they want to survive. This talk explores the impact of the AI boom on UX Research, Design, and Writing, and what happens when these disciplines are left out. With humorous (and sometimes horrifying) real product examples, we’ll examine the growing UX deficit in AI and why strong UX will be the key differentiator in the future.

About our speaker
Clare Cady is a research and product strategist who specializes in the creation and use of ethical AI tools that solve real-world problems. Her work spans a range of disciplines including UX, academia, nonprofits, and tech startups, and has been showcased by NPR, MSNBC, Johns Hopkins University Press, and UXPA. In 2025 she founded CLC Strategies, a consulting firm that turns gut feelings into smart decisions with data, strategy, and ethical AI. When she isn’t helping founders start up right, you can find her in her kitchen or garden in Worcester, MA, where she lives with her partner, dog, and sixty houseplants.

Navigation: Enter the building through this gate and take a left.

LLMs as UXR Participants?: A How-to Guide and Comparative Analysis

The next BostonCHI meeting is "LLMs as UXR Participants?: A How-to Guide and Comparative Analysis" on Thu, Dec 11 at 6:00 PM.

Register here

BostonCHI in partnership with NU Center for Design at CAMD presents a hybrid talk by Aaron Gardony

LLMs as UXR Participants?: A How-to Guide and Comparative Analysis
This talk explores the potential and limitations of using Large Language Models (LLMs) as surrogate research participants through a series of simulated choice-based survey experiments. The first half details an open-source Python program I built that runs Maximum Difference Scaling (MaxDiff) experiments with LLM respondents, including customizable personas and comprehensive analytics reporting; MaxDiff is a survey method in which participants choose the most and least important items from sets of options. The talk will walk through the AI-assisted development process, laying out best practices and covering key considerations like building in stages, implementing unit tests, enforcing structured LLM outputs, and managing API costs effectively.
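
To make the setup concrete, here is a minimal sketch of what a single LLM-as-participant MaxDiff trial could look like in Python. It is an illustration only, not the speaker's program: the openai client, the model name, the persona, and the item list are all assumptions chosen for the example.

# Minimal sketch (not the speaker's actual program): present one MaxDiff trial
# to an LLM "respondent" with a persona and ask for a structured JSON choice.
# Assumes the openai Python package and an OPENAI_API_KEY in the environment;
# the model name, persona, and item list are illustrative placeholders.
import json
import random
from openai import OpenAI

client = OpenAI()

ITEMS = ["battery life", "price", "camera quality", "screen size", "durability"]

def run_maxdiff_trial(persona: str, items: list[str], k: int = 4, temperature: float = 1.0) -> dict:
    """Show a random subset of k items and ask for the most/least important one."""
    shown = random.sample(items, k)
    prompt = (
        f"You are answering a survey as this persona: {persona}\n"
        "From the following options, pick the MOST and LEAST important to you:\n"
        f"{', '.join(shown)}\n"
        'Respond with JSON: {"most": "<item>", "least": "<item>"}'
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",                      # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,                  # the variability knob discussed in the talk
        response_format={"type": "json_object"},  # enforce structured output
    )
    choice = json.loads(response.choices[0].message.content)
    return {"shown": shown, **choice}

# Example: one simulated respondent completing a single trial
print(run_maxdiff_trial("a budget-conscious college student shopping for a phone", ITEMS))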

The second half describes the methods and findings of an experiment using this application. By comparing a large sample of LLM-generated personas against real data from humans, I demonstrate that LLMs can achieve moderate alignment with aggregate human preferences but fundamentally fail to capture human variability, even at maximum temperature settings. Most strikingly, removing a single seemingly innocuous sentence from the system prompt completely reshuffled individual model-human alignment while leaving aggregate alignment relatively unchanged. These findings reveal the stark and often unpredictable sensitivity of LLMs to prompt engineering, an effect that may be moderated by model temperature, and they have important implications for responsible AI and user research applications. As we increasingly rely on AI to understand human needs and preferences, it is critical to recognize that subtle prompt variations can alter research outcomes in unpredictable ways, potentially amplifying or obscuring biases baked into LLMs, which underscores the need for rigorous prompt testing and evaluation.

About our speaker
Dr. Aaron Gardony was a Cognitive Scientist at the DEVCOM Soldier Center and a Visiting Scientist at the Center for Applied Brain and Cognitive Sciences (CABCS) at the time of this work. He received his joint doctorate in Psychology and Cognitive Science from Tufts University in 2016, a Master of Science from Tufts University in 2014, and a BA from Tufts University in 2009. His current work focuses on Responsible AI and Safety Evaluation.

Navigation: Enter the building through this gate and take a left.

Expanding the Design Space for Explainable AI in Human-AI Interactions

The next BostonCHI meeting is "Expanding the Design Space for Explainable AI in Human-AI Interactions" on Mon, Nov 3 at 6:00 PM.

Register here

BostonCHI in partnership with NU Center for Design at CAMD presents a hybrid talk by Katelyn Morrison

Expanding the Design Space for Explainable AI in Human-AI Interactions 

Explainable AI (XAI) has largely been designed and evaluated through the lens of four recurring metrics: Trust, Reliance, Acceptance, and Performance (TRAP). While these metrics are essential for developing safe and responsible AI, they can also trap us in a constrained design space for how explanations provide value in human-AI interactions. Furthermore, mixed results on whether XAI actually helps calibrate reliance or foster appropriate trust raise the question of whether we are designing XAI with the right goals in mind. This talk explores how we can expand the design space for XAI by moving beyond the TRAP goals. I will discuss how domain experts appropriate AI explanations for purposes unanticipated by designers, how AI explanations can mediate understanding between physicians and other stakeholders, and how we can repurpose generative AI as an explanation tool to support various goals. By reframing XAI as a practical tool for reasoning and human–human interaction, rather than solely as a transparency mechanism, this talk invites us to consider what’s next for explainable AI.

About our speaker
Katelyn Morrison is a 5th-year Ph.D. candidate in the Human-Computer Interaction Institute at Carnegie Mellon University’s School of Computer Science, advised by Adam Perer. Her research bridges technical machine learning approaches and human-centered methods to design and evaluate human-centered explainable AI (XAI) systems in high-stakes contexts, such as healthcare. In recognition of her work at the intersection of AI and health, she was awarded a Digital Health Innovations Fellowship from the Center for Machine Learning and Health at Carnegie Mellon University. Her research experience spans industry, government, and non-profit organizations, including the Software Engineering Institute, Microsoft Research, and IBM Research. Before joining Carnegie Mellon University, Katelyn earned her bachelor’s degree in Computer Science with a certificate in Sustainability from the University of Pittsburgh. She is currently on the job market for faculty, postdoc, and research scientist positions.

Navigation: Enter the building through this gate and take a left.

The Human Side of Tech