On the occasion of the 100th Note on "Human-Centered Artificial Intelligence"

Contents:

  • Foreword by the General Chair of the HCI International (HCII) Conference Series
  • Human-Centered Artificial Intelligence: Short history of the HCAI Google Group, by Ben Shneiderman

Foreword

The field of Artificial Intelligence (AI), whose name was coined in the 1950s and which commonly refers to the attempt to simulate human intelligence with machines, has undergone continuous development and changes in emphasis, primarily aiming to combine ever-increasing computational power and memory with the progressively larger datasets available, in order to produce computer-based inference and problem-solving. Currently, the digitalization of most aspects of human activity has produced massive amounts of data for training algorithms, and Machine Learning approaches are at the heart of many recent advances in AI, promoting its widespread use across a broad spectrum of application domains. In parallel with this progress, the societal impact and consequences of adopting emerging AI approaches and systems are becoming the subject of intensive debate, among other reasons because of potential adverse effects arising from automated decision-making.

Ben Shneiderman, a foremost pioneer in the HCI field and one of the first to raise these compelling issues, took the initiative to lead a scientific debate addressing the underlying topics and arguing for a future in which advances in AI augment rather than replace humans, respect their values, and improve their lives. He advocates that the ultimate goal of technological development cannot be achieved without considering human needs in the first place, thus requiring a paradigm shift in the way AI technologies are designed and developed, in order to embrace the ‘human in the loop’ approach.

The HCI International (HCII) conference series has been privileged to actively contribute to Ben’s efforts in promoting the goal of ‘Human-Centered Artificial Intelligence’ (HCAI). The HCII2021 and HCII2022 conferences hosted a series of Special Thematic Sessions where distinguished speakers (experts and advocates of HCAI) were invited to deliver compelling presentations of their work in this area, stimulating captivating discussions with conference attendees. Moreover, during HCII2021, a working group comprising these experts met for a one-day closed workshop with the aim of producing a white paper as an HCAI charter for the way forward. This led to the publication of ‘Six Human-Centered Artificial Intelligence Grand Challenges’ as an open-access article in the International Journal of Human–Computer Interaction (IJHCI).

In recognition of his significant scientific contribution and international leadership in the field of HCI, the HCII2022 conference honored Ben Shneiderman as the first recipient of the “HCI Medal for Societal Impact”. This year, the highlight of Ben’s contribution to the HCII2023 conference in Copenhagen is his distinguished Tutorial on Tuesday 25 July 2023, 13:30-17:30 CEST, entitled “Human-Centered AI: A Growing Research Direction” (an in-person Tutorial that will also be offered gratis to remote conference participants).

This is an extraordinary HCII News circular on the occasion of the posting of Ben’s 100th NOTE on the HCAI Google Group, republished here as “Part 1”. With Ben’s permission, his 101st NOTE, due to be posted on the 3rd of May, is pre-published here as “Part 2”. In doing so, we acknowledge not only his leadership in initiating and nurturing a new and important scientific community, but also his longstanding support of and contributions to the HCI International conference since its foundation in 1984 by Gavriel Salvendy, General Chair Emeritus and Scientific Advisor.

Constantine Stephanidis, HCII Conference Series General Chair


Short history of the HCAI Google Group

by Ben Shneiderman (ben@cs.umd.edu)


Part 1: Motivations for starting the Google Group

In March 2021, I began writing notes to a few dozen colleagues who were interested in Human-Centered AI (HCAI). I focused on technology design to amplify, augment, empower, and enhance human performance. Now, more than two years later, I have written my 100th note to the HCAI Google Group, which now has more than 3000 subscribers who receive a weekly note from me.

I hoped that the group would become a platform for sharing ideas, forming collaborations, and building an active community. The key ideas that would bring the group together were: “Human-Centered AI is an important concept in promoting human values, rights, justice, and dignity. We can build tools that support human self-efficacy, creativity, responsibility, and social connections by developing reliable, safe, and trustworthy systems even in the face of threats from malicious actors, biased data, and flawed software. Thoughtful design strategies can deliver high levels of human control and high levels of automation, as they do already in digital cameras, navigation tools, and much more. The future will be shaped by those who support human autonomy, well-being, and control over emerging technologies.”

Over these two years I learned that there are widely differing opinions about how to design future technologies and that established beliefs are hard to change. I realized that it was more realistic to seek safer systems rather than to promise fully safe systems. At the same time, I became more confident that supporting human performance rather than replacing it was the right idea.

The HCAI themes began with three articles, in the International Journal of Human-Computer Interaction, the IEEE Transactions on Technology and Society, and the ACM Transactions on Interactive Intelligent Systems, which I greatly expanded into a book manuscript. Then HCAI group members commented generously, shaping the ideas and writing, steering me to references, and offering examples. Book writing is a lonely process, so their supportive notes, corrections, and suggestions gave me valued emotional boosts during the dark days of COVID. Oxford University Press’s production process and copyediting polished the book, leading to Human-Centered AI’s publication in early 2022. Positive reviews of the book in prominent publications raised its visibility and eventually eased my fears that the next review would be an attack.

Even after the book was published, I kept writing the weekly HCAI Google Group notes because doing so led me to read papers, think of fresh ideas, and figure out how to write for ever-wider audiences. HCAI is still a minority position, but it continues to gain strength with new courses, degree programs, research centers, workshops, corporate commitments, and government actions. Growing acceptance of HCAI is visible in the Google Scholar citations, which totaled only 12 in 2017, then 49 in 2019, 283 in 2020, 755 in 2021, and 1524 in 2022. Many authors used related terms like “human-centric”, “human-centered”, “responsible”, etc.

In parallel with the growing interest, there was stubborn resistance to the idea of Human-Centered AI. In February 2022, I posted an article on HCAI in Wikipedia with detailed information and many links to sources. Then I inserted a paragraph on HCAI in the article on Artificial Intelligence. However, both were removed by a Wikipedia admin with the comment “not really notable yet”. The message was clear: there’s still work to be done.

Recurring topics for the notes included the debates about Human-AI teams vs supertools, social robots vs active appliances, and the tradeoffs between machine autonomy and human autonomy. Larger themes such as human responsibility, preventing bias, and supporting human social connectedness appeared regularly.

As a designer, I value guidelines, so I drafted some for journalists writing about AI, for artists seeking to illustrate HCAI, and for developers seeking to ensure high levels of human control and high levels of automation. While I shared the widely held belief in the importance of explainability, my approaches leaned toward user interfaces with step-by-step guidance through a decision, or interpretable models that made decisions understandable to users.

Conclusion

Each weekly note brings 5-20 responses with comments on the content and information about new papers or events. I relish the comments, especially the thoughtful, respectful differences of opinion, which I summarize for inclusion in the next note. Then I send these summaries to the authors to give them a chance to edit, refine, or withdraw the item.

Sometimes, I engage in a debate on an issue, which usually makes for lively content for a future note. Often, I get requests to place job notices, but I respond politely that this group is focused on ideas.

As the effort to write the weekly notes increased, I was pleased that some readers stepped up to help me: Torrey Mortenson manages a Slack group to enable further discussions, Mengnan Du maintains our list of resources, which includes research centers, courses, events, etc., and Chenhao Tan responds to requests for new members. Thanks to them and others for helping to build our community.

It is a big effort to distill the rapidly emerging topics in HCAI, but I’m drawn to the process, so I plan to continue, and I welcome the help that others provide; I would be glad to share the burdens and pleasures of developing the HCAI community with its important ideas. There is a lot of work to be done to ensure human control over technology and to amplify, augment, empower, and enhance human performance. The goal is to produce supertools that give people superpowers. Of course, this can lead to malicious uses, so designers need to do their best to guide the design and usage toward pro-social goals. The future is uncertain, but I retain a belief that the best is yet to come.


Part 2: Highlights from the two-year history

In Part 1, included in the 100th note, I explain my motivations for starting and continuing to send weekly notes to this HCAI Google Group. This Part 2 highlights issues that emerged. The 100 notes total about 120,000 words, so distilling the highlights is a challenge, but here is my attempt. My apologies to all the people whose work is not mentioned. All the notes are posted on the Google Group website and are searchable.

The weekly notes often led to memorable and lively discussions, further writing on my part, and sometimes a published outcome. For example, I posted a draft essay titled “Are there gendered ways to AI?” (40th note), which drew interesting feedback that I posted in the following note. I revised my essay around a provocative, but mysterious, theme of “blue skies and muddy boots.” It was accepted as an opinion piece for Scientific American, but the editorial process took six months. The opinion piece was published as “Artificial Intelligence Needs Both Pragmatists and Blue-Sky Visionaries”, with this subtitle: “For humanity’s brightest future, the blue-sky, lofty thinkers in AI need the help of the muddy-boots pragmatists” (mentioned in the 73rd note).

Similarly, the discussion of Granularity of Control (26th note) was extensively revised through the thorough editing process of the U.S. National Academy of Engineering into the much-changed version now published as “Ensuring Human Control over AI-Infused Systems” in NAE Perspectives (mentioned in the 54th note). The analysis of the subtasks that are needed clarifies which ones can be done reliably and safely by AI systems and which ones human users seek to control. Digital cameras use AI to set aperture, focus, color balance, etc., but reserve for users the composition and the decisive moment to capture the photo.
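
To make that subtask analysis concrete, here is a minimal sketch in Python of how such an allocation might be written down, using the camera example above; the Controller enum and the subtask names are my illustrative assumptions, not notation from the published paper.

```python
# A minimal sketch of the subtask-allocation analysis described above,
# using the digital-camera example. The Controller enum and the subtask
# names are illustrative assumptions, not taken from the published paper.
from enum import Enum

class Controller(Enum):
    AI = "done reliably and safely by the AI system"
    HUMAN = "reserved for the human user"

# Each subtask of "take a photo" is allocated at a fine granularity,
# giving high automation where AI is dependable and high human control
# where users want creative choice.
CAMERA_SUBTASKS = {
    "set aperture": Controller.AI,
    "focus": Controller.AI,
    "color balance": Controller.AI,
    "compose the shot": Controller.HUMAN,
    "choose the decisive moment": Controller.HUMAN,
}

for subtask, controller in CAMERA_SUBTASKS.items():
    print(f"{subtask}: {controller.value}")
```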

I was particularly pleased that the draft “Guidelines for journalists and editors about reporting on robots, AI, and computers” (60th note) triggered discussions that led to revised versions (61st and 62nd notes), and then publication on Medium. I feel strongly that journalists and editors need to change the ways they report on technology. The guidelines include encouragement to emphasize that computers are different from people and to clarify that people are responsible for the use of technology.

I enjoyed an email exchange with Mica Endsley about AI as teammate vs. supertool, which was summarized as “Teammate or Supertool: Weighing the value of design metaphors” (65th note). I wrote that “Mica Endsley continues to promote the idea of teammates as a metaphor to guide design. However, I believe that a stronger metaphor is the ‘AI-infused supertool.’ People are very different from computers, so the supertool metaphor may be more likely to steer designers to take advantage of the distinctive capabilities of computers… Supertools enable people to carry out tasks that are beyond what they can do by themselves.” Mica wrote: “Ben Shneiderman has made a good case that people and AI should not be considered teammates. And with today’s technology, he is quite right that AI (like many other forms of automation) fails to deliver on any ‘teammate’-like qualities… The question is whether in the future teaming with AI would be either possible or desirable? And if so, what sort of capabilities would be required?”

The taxonomy of HCAI systems by levels of harm and speed of operation (77th note) and the six-part taxonomy of AI applications (79th note) clarify contextual issues that inform decisions about the level of oversight needed. My goal in these taxonomies is to shift thinking from the all-inclusive notion of AI to specific applications, features, and user communities, so that designers can tailor the interfaces to the context.
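
As a rough illustration of how those two dimensions might inform oversight decisions, here is a small Python sketch; the categories, thresholds, and suggested responses are my own illustrative assumptions, not taken from the notes themselves.

```python
# A rough sketch of how the two taxonomy dimensions (level of harm and
# speed of operation) might inform the degree of oversight. The
# categories and suggested responses are illustrative assumptions.

def suggested_oversight(harm: str, speed: str) -> str:
    """harm: 'low' or 'high'; speed: 'slow' or 'fast' (machine speed)."""
    if harm == "high" and speed == "fast":
        # Humans cannot intervene in time, so oversight must be designed
        # in ahead of deployment (prior review, logging, kill switches).
        return "extensive prior review plus continuous auditing"
    if harm == "high":
        return "human review before each consequential decision"
    if speed == "fast":
        return "retrospective auditing of logged decisions"
    return "lightweight oversight, such as user feedback and spot checks"

print(suggested_oversight("high", "fast"))
```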

The discussions of GPT-3 (56th note) and of ChatGPT (83rd note) continue to be active. I found ChatGPT to be impressive as a supertool to amplify human performance, but I agree that stronger protections (guardrails, oversight, regulation, etc.) are needed to prevent hallucinations, fabrications, and abusive language. I sided with those who sought to pause dissemination while continuing development. My belief is that improved user interfaces would enable users to be more successful in getting what they want.

I was pleased to announce (85th note) that a remarkable international team of 26 coauthors produced an exceptionally inspiring 47-page report on “Six Human-Centered Artificial Intelligence Grand Challenges”, which was published in the International Journal of Human–Computer Interaction. The report, which emerged from a 2-year effort with workshops, Zoom discussions, and many drafts (mentioned in several notes), was led by Ozlem Garibay and Brent Winslow. The six grand challenges of HCAI are that it (1) is centered in human wellbeing, (2) is designed responsibly, (3) respects privacy, (4) follows human-centered design principles, (5) is subject to appropriate governance and oversight, and (6) interacts with individuals while respecting humans’ cognitive capacities.

I made a 2023 New Year’s commitment to celebrate human capabilities, which led to discussions of human expertise in the 89th, 90th, and 91st notes. On reflection, I expanded the ideas into an eight-part list of Exceptional Skills of Experts (95th note), triggering interesting feedback and a refined list in the 96th note.

My enthusiasm for the paper “How AI Fails Us” (92nd note) was based on the authors’ contrast between the dreams of Artificial General Intelligence (AGI), which they see as “poorly defined”, and the reality of Actually Existing AI (AEAI), which they see as counterproductive, mind-limiting, and harmful. They particularly fault the emphasis of AEAI on human competition, autonomy, and centralization, preferring collaborative, participatory strategies that they label Actually Existing Digital Pluralism (AEDP), including the Internet, Wikipedia, citizen science, and open-source coding projects. Their paper generated interesting comments from several people (93rd note).

I’ve championed other publications, such as Luke Munn’s paper (Uselessness of AI Ethics) and Simone Natale’s book (Deceitful Media: Artificial Intelligence and Social Life after the Turing Test), which the authors appreciate. Some authors, like Cynthia Rudin, find provocative ways to make their point (Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead) and then back it up with deep thinking and clear writing.

I’m happy with vigorous debates, especially when they are polite, respectful, and constructive. A good example was my email interaction with Michael Muller, “On AI Anthropomorphism”, which appeared in the new HCAI publication on Medium. While the subject matter of our discussions is technology, the dialogs result in meaningful human relationships, sometimes leading to happy face-to-face meetings. It’s the ideas and people that keep me going.