Robust Intelligence for Assistive Robots

Elaine Short

The next BostonCHI meeting is Robust Intelligence for Assistive Robots, with Elaine Short.

Tue, Dec 14 at 6:45 PM

Register here

 

Abstract:

We would like robots to be able to adaptively help people in their day-to-day lives, but the state of the art in robot learning is typically either under-informed about the needs and abilities of actual users or designed and tested in highly controlled environments and interactions that fail to reflect real-world noise and complexity. In our work, we focus on identifying the real-world situations where current human-robot interaction (HRI) and robot learning algorithms fail, and on developing new methods that enable robots to robustly learn to assist non-expert teachers under real-world noise and complexity. This includes using human-centered design to develop more realistic simulated teachers for early algorithm development, incorporating both teacher and environmental reward into state-of-the-art deep reinforcement learning algorithms, finding new ways to model and take advantage of rich-but-noisy human feedback, and designing novel models that enable robot-robot collaboration to improve detection of human attention. Finally, throughout all of this work, we seek to break down the artificial disciplinary divide between service robotics for non-disabled users and assistive robotics for users with disabilities, and to ensure that our robots treat all users as valued partners who are integrated into the social and physical environments in which they live their lives.
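To give a flavor of what "incorporating both teacher and environmental reward" can mean in practice, here is a minimal, hypothetical sketch of blending a teacher's feedback signal with the environment's reward before a standard reinforcement learning update. The names (combine_rewards, alpha) and the simple weighted blend are illustrative assumptions for this post, not the speaker's published method.

```python
from typing import Optional

def combine_rewards(env_reward: float,
                    teacher_feedback: Optional[float],
                    alpha: float = 0.3) -> float:
    """Blend the environment's reward with optional, possibly noisy teacher feedback.

    If the teacher gave no feedback on this step, fall back to the
    environment reward alone; otherwise mix the two signals, with alpha
    controlling how much weight the teacher gets.
    """
    if teacher_feedback is None:
        return env_reward
    return (1.0 - alpha) * env_reward + alpha * teacher_feedback

# Example: a sparse environment reward of 0.0 plus a positive teacher signal.
print(combine_rewards(env_reward=0.0, teacher_feedback=1.0))  # 0.3
```

The blended value would then feed into whatever learner is in use in place of the raw environment reward; handling noisy or missing feedback robustly is exactly the kind of real-world complexity the talk addresses.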

Bio:

Elaine Schaertl Short is the Clare Boothe Luce Assistant Professor of Computer Science at Tufts University. She completed her PhD under the supervision of Prof. Maja Matarić in the Department of Computer Science at the University of Southern California (USC). She received her MS in Computer Science from USC in 2012 and her BS in Computer Science from Yale University in 2010. From 2017 to 2019, she worked as a postdoctoral researcher in the Socially Intelligent Machines Lab at the University of Texas at Austin. At USC, she received numerous awards for her contributions to research, teaching, and service, including being one of very few PhD students to have received all three of the CS department’s Best TA, Best RA, and Service awards.

Elaine’s research seeks to improve the computational foundations of human-robot interaction by designing new algorithms that succeed in contexts where other algorithms’ assumptions frequently fail, such as in child-robot interaction, in minimally-supervised public space deployments, and in assistive interactions. As a disabled faculty member, Elaine is particularly passionate about disability rights in her service work. In addition to having recently joined the new AccessComputing Leadership Corps, she is the Communications Chair and Community Liaison of AccessSIGCHI, an advocacy group that works to increase the accessibility of the 24 SIGCHI conferences.

Schedule – EST (UTC-5)

6:45 – 7:00: Networking (in Zoom)

7:00 – 8:00: Presentation

8:00 – 8:30: Q & A

 

Coding On-the-Fly: Needs and Support

The next BostonCHI meeting is Coding On-the-Fly: Needs and Support on Tue, Nov 9 at 6:45 PM.

Register here

BostonCHI November 2021, featuring Michelle Brachman

Abstract:

The need for and use of programming across domains continues to expand. Users ranging from children to experienced programmers learn on-the-fly as they code. While their goals may necessitate coding, their focus is often not on learning to code as a skill in and of itself. For example, many children think of coding as a way to create something, such as an animation or a game. A programmer may need to build a component for their project and, along the way, learn a new API. A marketing professional may want to automate their workflow, but doesn’t have time to learn to program formally. My work has aimed to understand and support programmers who need to learn or expand their skills on-the-fly, without formal learning like tutorials or assignments. By considering the way people process code and other complex information, using theories like Cognitive Load Theory, we can better understand users’ needs and design systems that support users in accomplishing their tasks and learning along the way.

Bio:

Michelle Brachman (formerly Ichinco) is an HCI research scientist in the AI Experience group at IBM Research in Cambridge, MA. She was previously an Assistant Professor of Computer Science at UMass Lowell and received her PhD from Washington University in St. Louis after getting her start in HCI at Tufts. Her research has roots in understanding and designing systems to support novice and end-user programmers, work she now ties to her current explorations of human-AI collaboration. You can find her papers in venues like ACM CHI and IUI, and she has won two best paper awards at the IEEE Conference on Visual Languages and Human-Centric Computing.

Schedule – EST (UTC-5)

6:45 – 7:00: Networking (in Zoom)

7:00 – 8:00: Presentation

8:00 – 8:30: Q & A

Uncomfortable Interactions

The next BostonCHI meeting is Uncomfortable Interactions on Tue, Oct 12 at 6:30 PM.

Register here

BostonCHI October 2021, featuring Steve Benford

Abstract:

UX and HCI have typically been concerned with comfortable interactions that are efficient, ergonomic, satisfying, legible and predictable. However, an increasing focus on cultural experiences, from highbrow arts to mainstream entertainment, changes the game. Our experience of artworks is often far from comfortable. Our engagements with games and sports may push our minds and bodies to the limit. I will argue for deliberately and systematically designing uncomfortable interactions to deliver entertaining, enlightening and socially bonding experiences. I will reflect on interactive artworks that have deliberately employed discomfort to create powerful and provocative interactive experiences. I will explore four strategies for designing with discomfort – visceral, cultural, control and intimacy. I will consider how these need to be carefully embedded into an overall trajectory of experience that offers resolution and reflection. Finally, I will consider the ethical challenges of such an approach, revisiting issues of consent, withdrawal, privacy and risk.

Bio:

Steve Benford is the Dunford Professor of Computer Science at the University of Nottingham, where he founded the Mixed Reality Laboratory in 2000 and has directed the Horizon Centre for Doctoral Training since 2009. His research explores how digital technologies, and the foundational concepts and methods that underpin them, can support cultural and creative experiences. He previously held an EPSRC Dream Fellowship, was a Visiting Professor at the BBC, and was a Visiting Researcher at Microsoft. He was elected to the CHI Academy in 2012. His collaborations with artists have also led to the award of the Prix Ars Electronica Golden Nica for Interactive Art, a Mindtrek Award, and four BAFTA nominations.

Schedule – EST (UTC-5)

6:30 – 6:45: Networking (using Zoom breakout rooms)

6:45 – 7:45: Presentation

7:45 – 8:15: Q & A
