From Human-Human Collaboration to Human-AI Collaboration

The next BostonCHI meeting is From Human-Human Collaboration to Human-AI Collaboration on Tue, Nov 19 at 6:00 PM.

Register here

BostonCHI presents a hybrid talk by Dakuo Wang, jointly with the Northeastern ACM Chapter.

Human-Centered AI (HCAI) refers to the research effort that aims to design and implement AI techniques to support various human tasks, while taking human needs into consideration and preserving human control. Prior work has focused on human-AI interaction interface design and explainable AI (XAI) research. However, despite these fruitful research results, why do many so-called “human-centered” AI systems still fail in the real world? In this talk, I will discuss the human-AI interaction paradigm and show how we can learn from human-human collaboration to design and build AI systems that lead to a successful interaction paradigm, especially in this LLM era. This work serves as a cornerstone towards the ultimate goal of human-AI collaboration, where AI and humans can take complementary and indispensable roles to achieve a better outcome and experience.

About the speaker
Dr. Dakuo Wang is an Associate Professor at Northeastern University. His research lies at the intersection of human-computer interaction (HCI), artificial intelligence (AI), and computer-supported team collaboration (CSCW), with a focus on the exploration, development, and evaluation of human-centered AI (HCAI) systems. His overarching research goal is to democratize AI for every person and every organization, so that they can easily access AI and collaborate with AI to accomplish real-world tasks better — the “human-AI collaboration” paradigm. Before joining Northeastern, Dr. Wang worked as a research lead at IBM Research and a principal investigator at the MIT-IBM Watson AI Lab. He received his Ph.D. from the University of California, Irvine (dissertation: “How people write together now,” co-advised by Judith Olson and Gary Olson). He has worked as a designer, researcher, and engineer in the U.S., China, and France. He has served on various organizing committees, program committees, and editorial boards for conferences and journals, and ACM has recognized him as an ACM Distinguished Speaker.

CSCW Paper Swap Workshop

The next BostonCHI meeting is CSCW Paper Swap Workshop on Fri, Oct 11 at 2:30 PM.

Register here

Come join us at the CSCW Paper Swap Workshop to exchange research papers and network with fellow researchers!

Welcome to the CSCW Paper Swap Workshop!

Are you submitting a paper to CSCW? Would you like feedback on it?

We’re holding this year’s workshop on Friday, October 11th, from 2:30 pm to 4:30 pm. Come for the hybrid meetup and share your papers in whatever state they’re in (including workshopping only individual parts of your paper)! Expect to read someone else’s paper during the swap time. Bring copies of your drafts.

We’ll plan roughly an hour per swap: 30 minutes to read and 30 minutes for discussion and feedback. You can participate in as many or as few swap sessions as you’d like.

If you’re interested in attending, please “register” and let us know you are coming.

Not submitting to CSCW? Come anyway to read other people’s drafts and get a sneak preview of the work coming out this year!

Scaling Experience Sampling with Microinteractions

The next BostonCHI meeting is Scaling Experience Sampling with Microinteractions on Tue, Sep 17 at 7:00 PM.

Register here

BostonCHI presents a hybrid talk by Aditya Ponnada

Abstract

Mobile technologies create new opportunities to develop personalized human-computer interfaces that respond to everyday changes in behavior. Such interventions are fueled by computational models that need information on behaviors in the real world. Ideally, sensors embedded in mobile devices could measure these behaviors, but we currently need self-report to capture subjective experiences that sensors cannot measure directly (e.g., fatigue, pain, and productivity). Ecological momentary assessment (EMA), or the experience sampling method (ESM), is one such approach that enables in-situ self-report data collection using smartphones. In EMA, users are prompted several times a day on their phones to answer sets of multiple-choice questions. The in-situ nature of EMA reduces recall bias compared to traditional long-form questionnaires. Additionally, the repeated nature of EMA helps capture variations in experiences unique to each user. However, this method of data collection poses a heavy burden on end users, impacting both the quality and quantity of collected survey data. Aditya’s work is driven by a fundamental question: given this trade-off between user burden and data quantity, how can we scale experience sampling in real-world settings?

In this talk, Aditya will present a novel (and first-of-its-kind) EMA approach called μEMA that uses microinteractions on a smartwatch to collect survey data in natural settings, both at high frequency and at large scale. μEMA restricts EMA interruptions to single, cognitively simple questions that can be answered on a smartwatch with a single tap: a quick, glanceable microinteraction like checking the time. Because of this microinteraction, μEMA permits substantially more frequent interruptions than EMA without as much user burden. Aditya will present results from several pilot studies and a year-long longitudinal evaluation of μEMA to show that this method: 1) yields higher response rates from end users, 2) poses less burden on users, and 3) produces accurate information about users’ momentary experiences. Finally, Aditya will discuss potential applications of the μEMA method for data collection, user behavior modeling, and personalization.

About Aditya

Aditya is an interdisciplinary researcher interested in human-computer interaction and behavior science, with a focus on measuring and modeling user experiences. He is currently a Senior Researcher at MongoDB, focusing on developer experiences and growth. Previously, he was a Research Scientist at Spotify (as a member of the Human-AI Interaction Lab), where he developed and studied interactive recommender systems and computational approaches to help smaller podcasters and newer music content grow their audiences on the platform. He completed his PhD in Computer and Health Sciences at Northeastern University, Boston, MA, where his research led to the development of novel experience sampling tools, crowdsourcing platforms, and open-source tools for multi-level modeling and data annotation of high-frequency longitudinal mobile data. His research has been published in competitive peer-reviewed venues including ACM IMWUT (UbiComp), ACM CSCW, ACM IUI, ACM WebSci, ACM CHI PLAY, IEEE PerCom, NeurIPS workshops, Behavior Research Methods, the Journal of Medical Internet Research, and Translational Behavioral Medicine, and has won two best paper awards.

This event will be held in person at 11 Leon Street, Boston, MA (Ryder Hall, Room 180, first floor), and remotely.
