
The Stanford Institute for Human-Centered Artificial Intelligence (HAI) Announces 2020 Seed Grant Recipients

STACY PEÑA December 7, 2020

Today Stanford HAI announced it has awarded more than $2.5 million in seed grants to fund interdisciplinary artificial intelligence (AI) research at Stanford University that aims to improve the human condition. This is the third year HAI has distributed seed grants across the university to fund new, ambitious, and speculative ideas with the objective of generating initial results.

Up to $75,000 was awarded to each of the 32 projects announced today. Project teams represent all seven Stanford schools and include 76 faculty members from fields ranging from computer science and political science to hospital medicine, performance studies, and more.

With HAI’s focus on interdisciplinary collaboration, the institute selected projects reflecting one or more of its key research areas: creating next-generation AI technologies that are inspired by the depth and versatility of human intelligence, developing applications of AI that augment human capabilities, and studying the societal impact of AI technologies.

“The future of AI must be informed by experts from a range of disciplines, working together with a shared purpose to guide the technology towards greater public good,” said Deep Ganguli, Director of Research for Stanford HAI. “The projects that received funding this year represent a range of technical, historical, ethnographic, clinical, experimental, and inventive work. Many of the projects involve collaborations of faculty and students whose work bridges two or more departments or schools, which we believe is critical to fostering intellectually rich, human-centered AI research.”

This year's seed grant projects span diverse research topics. Some grantees will explore sustainability, such as using AI to study changing sea levels or to detect wildfires early. Others will focus on “explainable AI,” a nascent research area that seeks to demystify how complicated AI systems work in order to increase trust and transparency among humans who work with, or may be affected by, such systems.

Additional projects will examine bias and disinformation, as well as pursue advances in health care including an AI-based tool for studying parent-child interactions in early childhood development. Finally, some projects will focus on advancing the next generation of core AI technology — these projects are primarily motivated by understanding and building upon the depth and versatility of human intelligence.

Building Culturally-Resonant AI to Fight Affective Propaganda

According to the U.S. government, the persistent spread of fake news and other types of misinformation is one of the main ongoing threats to societal cohesion and trust. This misinformation is often disseminated by computational “bots” that first learn about specific social media landscapes and then tailor fake news to be “emotionally and culturally resonant” with those landscapes. Thus, while there is a clear need for digital defense tools, their effectiveness hinges on a deep understanding of why certain affective content garners increased engagement or is “culturally resonant.” Unfortunately, we know relatively little about how affective and cultural factors shape the spread of misinformation on social media.

Based on our previous research (Hsu et al., 2020), we propose a values-violation account of virality, in which we predict that the affective content most likely to spread is that which violates dominant cultural values regarding emotion. According to this account, high-arousal negative content spreads the most in the U.S. because it violates the American value placed on maximizing positive feelings and minimizing negative ones. To further test this account, we propose to: (1) build natural language processing tools to examine and compare the spread of affective content on social media, and (2) develop algorithms capable of supporting “affective filters” that can be deployed on social media platforms to flag or modify affective content, providing a culturally adaptable defense against affectively viral misinformation. We will also examine affective virality in specific subgroups within each country. These studies will not only advance AI research by integrating machine learning algorithms with human emotion and culture, but also advance our understanding of the kinds of affective propaganda users are most vulnerable to, so that organizations and individuals can defend against them.
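As a toy illustration of the “affective filter” idea described above, the Python sketch below scores a post’s valence and arousal against a tiny hand-written lexicon and flags high-arousal negative content. The lexicon entries, scales, threshold, and function names are invented for illustration; they are not part of the funded project, which proposes learned natural language processing tools rather than a word list.

```python
# Toy "affective filter" sketch. The lexicon values and the cutoff are
# illustrative assumptions, not the project's actual models or data.

AFFECT_LEXICON = {
    # word: (valence, arousal) on made-up -1..1 and 0..1 scales
    "outrageous": (-0.8, 0.9),
    "terrifying": (-0.9, 0.9),
    "shameful":   (-0.7, 0.6),
    "calm":       ( 0.4, 0.1),
    "wonderful":  ( 0.8, 0.5),
}

def affect_scores(text: str) -> tuple[float, float]:
    """Average valence and arousal of lexicon words found in the text."""
    hits = [AFFECT_LEXICON[w] for w in text.lower().split() if w in AFFECT_LEXICON]
    if not hits:
        return 0.0, 0.0
    valence = sum(v for v, _ in hits) / len(hits)
    arousal = sum(a for _, a in hits) / len(hits)
    return valence, arousal

def flag_post(text: str, arousal_cutoff: float = 0.5) -> bool:
    """Flag high-arousal negative content, the kind the values-violation
    account predicts spreads most readily in a U.S. context."""
    valence, arousal = affect_scores(text)
    return valence < 0 and arousal > arousal_cutoff

print(flag_post("This outrageous and terrifying decision must be shared"))  # True
print(flag_post("What a calm and wonderful afternoon"))                     # False
```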

Name | Role | School | Department
Jeanne Tsai | PI | School of Humanities and Sciences | Psychology
Michael Bernstein | Co-PI | School of Engineering | Computer Science
Johannes Eichstaedt | Co-PI | School of Humanities and Sciences | Psychology
Jeffrey Hancock | Co-PI | School of Humanities and Sciences | Communication
Brian Knutson | Co-PI | School of Humanities and Sciences | Psychology

Human-Like Visual Learning with Developmentally-Appropriate Self-Supervised Deep Neural Networks

Deep neural networks have emerged as leading models for predicting neural data from a variety of brain areas and species. We will explore whether this modeling framework can be used to predict how neural representations of visual stimuli change over development. Like the human visual system, deep network models of the adult visual system are created from a combination of pre-specified “hardware” and an extensive period of exposure to visual images. We will therefore approach the modeling in two ways: first, by building models that represent the initial conditions of the visual cortex prior to the onset of visual experience; and second, by using the training phase of models that differ in their architectures and training rules as models of human brain development. Critically, to test these models, we will acquire rich, high-temporal-resolution data sets from developing human infants using high-density EEG recordings.
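To make the comparison between a model’s “developmental stages” and infant neural data concrete, here is a minimal sketch that asks which training checkpoint of a model best predicts EEG responses via ridge regression. The array sizes, checkpoint names, and the regression choice are assumptions for illustration, and both the EEG recordings and model features are random stand-ins rather than the project’s actual data or pipeline.

```python
# Minimal sketch: which "developmental stage" (training checkpoint) of a model
# best predicts neural responses? All data below are random stand-ins.

import numpy as np

rng = np.random.default_rng(0)
n_stimuli, n_features, n_channels = 200, 16, 64

# Stand-ins for EEG responses to the stimuli and for model features at three
# checkpoints (e.g., before, during, and after simulated visual experience).
eeg = rng.normal(size=(n_stimuli, n_channels))
checkpoints = {f"checkpoint_{k}": rng.normal(size=(n_stimuli, n_features))
               for k in range(3)}

def ridge_r2(features, responses, alpha=1.0):
    """Fit ridge regression from model features to EEG channels; return in-sample R^2."""
    X = features - features.mean(axis=0)
    Y = responses - responses.mean(axis=0)
    W = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)
    residual = Y - X @ W
    return 1.0 - residual.var() / Y.var()

for name, feats in checkpoints.items():
    print(name, round(ridge_r2(feats, eeg), 3))
```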

Name | Role | School | Department
Dan Yamins | PI | School of Humanities and Sciences | Psychology
Anthony Norcia | Co-PI | School of Humanities and Sciences | Psychology
Kalanit Grill-Spector | Co-PI | School of Humanities and Sciences | Psychology

Leveraging AI to Promote Anti-racism and Foster Inclusion in Online Communities

In the wake of George Floyd’s death, calls for companies to fight racism and racial bias in their organizations and products have escalated. In the tech world, machine learning algorithms that reproduce, reinforce, and even amplify racial disparities and biases in society have been identified as key sites for intervention. Our team of social psychologists, computational linguists, and computer scientists will collaborate to develop a series of AI tools that will detect and root out bias rather than magnify it. Working with Nextdoor, a neighborhood-focused social network, we plan to pair our bias detectors with behavioral nudges and interventions that encourage inclusive rather than exclusive behavior among users on the platform. With these detectors, we will be able to experimentally test and iterate multiple interventions across a large number of users, leveraging the strengths of AI and social science to support positive behavioral change and strengthen communities.
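A toy sketch of pairing a detector with a behavioral nudge might look like the following. The keyword “detector,” the threshold, and the nudge wording are invented stand-ins for illustration; the team’s actual detectors would be learned models, not a phrase list, and nothing here reflects Nextdoor’s interface.

```python
# Toy sketch: a stand-in bias detector paired with a pre-posting nudge.
# Phrases, threshold, and nudge text are illustrative assumptions only.

SUSPECT_PHRASES = ["those people", "doesn't belong here", "go back to"]

def bias_score(draft: str) -> float:
    """Fraction of suspect phrases found in the draft (stand-in for a learned detector)."""
    lowered = draft.lower()
    return sum(phrase in lowered for phrase in SUSPECT_PHRASES) / len(SUSPECT_PHRASES)

def maybe_nudge(draft: str, threshold: float = 0.3) -> str | None:
    """Return a nudge message before posting if the detector fires, else None."""
    if bias_score(draft) >= threshold:
        return ("Your post may single out neighbors based on who they are. "
                "Would you like to revise it before posting?")
    return None

print(maybe_nudge("Those people don't belong here."))          # nudge message
print(maybe_nudge("Does anyone have a ladder I could borrow?"))  # None
```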

Name | Role | School | Department
Jennifer Eberhardt | PI | School of Humanities and Sciences | Psychology
Dan Jurafsky | Co-PI | School of Humanities and Sciences | Linguistics
Hazel Markus | Co-PI | School of Humanities and Sciences | Behavioral Sciences

SEE: The Science and Engineering of Explanations

Explanations are critical to how humans understand and learn about the world. As humans, we readily go beyond what happened to reason about why it happened and how things could have played out differently. While today’s AI systems can achieve superhuman performance on many challenging tasks, their lack of generalizability, interpretability, and ability to interact with humans limits their potential, especially in high-stakes settings. One key aspect of human intelligence that is missing is the ability to understand and communicate about causality. Drawing inspiration from how humans think and communicate about causality, the Science and Engineering of Explanations (SEE) project pursues twin goals: developing AI systems that generate and understand causal explanations the way humans do, and helping to improve human explanatory abilities.
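As a minimal illustration of the kind of counterfactual, causal explanation at issue here, the sketch below intervenes on each cause in a tiny structural causal model and reports which single changes would have altered the outcome. The variables, equations, and explanation wording are invented for illustration and are not the project’s methods.

```python
# Minimal counterfactual-explanation sketch over a tiny structural causal model.
# The causal model (rain/sprinkler -> wet grass) is an invented textbook example.

def model(rain: bool, sprinkler: bool) -> bool:
    """Structural equation: the grass is wet if it rained or the sprinkler ran."""
    return rain or sprinkler

def explain(rain: bool, sprinkler: bool) -> str:
    """Explain the outcome by flipping each cause and checking whether the outcome changes."""
    actual = model(rain, sprinkler)
    reasons = []
    for name, value in [("rain", rain), ("sprinkler", sprinkler)]:
        intervened = {"rain": rain, "sprinkler": sprinkler}
        intervened[name] = not value
        if model(**intervened) != actual:
            reasons.append(f"had {name} been {not value}, the outcome would have differed")
    outcome = "wet" if actual else "dry"
    if not reasons:
        return f"The grass is {outcome}; no single change to one cause would alter that."
    return f"The grass is {outcome} because " + " and ".join(reasons) + "."

print(explain(rain=True, sprinkler=False))  # rain is the difference-maker
print(explain(rain=True, sprinkler=True))   # overdetermined: no single cause suffices
```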

Name | Role | School | Department
Tobias Gerstenberg | PI | School of Humanities and Sciences | Psychology
Hyowon Gweon | Co-PI | School of Humanities and Sciences | Psychology
Thomas Icard | Co-PI | School of Humanities and Sciences | Philosophy
Percy Liang | Co-PI | School of Engineering | Computer Science
Jiajun Wu | Co-PI | School of Engineering | Computer Science