Research Fair 2026
Posters and Demonstrations
Projects are arranged by zone in the HUB South Ballroom
Projects are grouped into thematic zones, but research doesn’t stop at those boundaries. Each presentation includes up to three research area tags that reflect its core areas of focus and its connections to work across the fair.
Zone 1: Public Impact, Policy & Responsible Technology
Adversaries Turned Enemies: Partisan Media and Support for Repression on Fox News YouTube
Mert Bayar, Scott Radnitz, Soham De, Alex Efstratiou
Modern democracies rely on peaceful political contestation, yet partisan media ecosystems increasingly frame political opponents as illegitimate enemies. This project examines whether conspiratorial rhetoric in partisan online media correlates with public endorsement of repression against political adversaries, using Fox News’s YouTube channel as a focal case. It analyzes a large-scale dataset of Fox News YouTube videos and associated comments using an LLM-assisted, human-supervised annotation pipeline that measures conspiracism, delegitimization, insults, and repression endorsement. Bio: Mert Can Bayar is a postdoctoral scholar at the University of Washington’s Center for an Informed Public. His research examines how misinformation and conspiratorial narratives, elite communication, and digital platforms interact to shape public opinion and political behavior. He develops and applies mixed-method approaches, combining surveys and experiments with computational analysis to study democratic legitimacy and pathways of democratic backsliding.
- Social Media & Online Platforms
- Data Science & Computational Methods
- Equity, Ethics & Justice
Algorithmic Effects on Account Visibility in Pre-X Twitter
Alexandros Efstratiou, Kayla Duskin, Kate Starbird, Emma S. Spiro
Algorithmic effects on social media platforms have come under recent scrutiny, with several works reporting that right-leaning accounts tend to receive more exposure. In this paper, we expand upon this body of work using data collected from user feeds after Twitter’s change of ownership but before its rebranding to X. We replicate findings from prior work regarding the increased exposure of right-leaning accounts to wider audiences in algorithmically curated compared to reverse-chronological feeds, and, crucially, we further unpack this effect to understand what correlated (and did not correlate) with these differences. Our results reveal that right-leaning accounts benefited not necessarily due to their political affiliation, but possibly because they behaved in ways associated with algorithmic rewards; namely, posting more agitating content and receiving attention from the platform’s owner, Elon Musk, who was the most central network account. We also demonstrate that legacy-verified accounts, like businesses and government officials, received less exposure in the algorithmic feed compared to non-verified or Twitter Blue-verified accounts. We discuss implications of these findings for the intersection between behavioral incentives for algorithmic reach and online trust and safety.
- Social Media & Online Platforms
- Data Science & Computational Methods
- Equity, Ethics & Justice
An Illusory Consensus Effect: The Mere Repetition of Information Increases Estimates that Others Would Believe or Already Know It
Madeline Jalbert, Raunak Pillai
How do people estimate the prevalence of beliefs and knowledge among others? Here, we examine the hypothesis that mere repetition of information increases such perceptions of consensus — an “illusory consensus effect.” Although existing evidence suggests that repeated exposure to information may increase its perceived consensus, the impact of repetition has not been tested in isolation from other source and contextual cues. We conducted two experiments to fill in this gap. Prolific participants located in the U.S. read a series of trivia claims — half true and half false in Experiment 1 and all true in Experiment 2. These claims were not attributed to any source. After a short delay, participants made consensus judgments about previously seen (repeated) and new trivia claims. Repetition significantly increased perceived consensus in both experiments; in Experiment 1, participants judged that more Americans would believe repeated (vs. new) information, and in Experiment 2, participants judged that more Americans knew repeated (vs. new) information. These findings provide strong evidence for an illusory consensus effect, such that mere exposure to information increases perceptions of two different measures of consensus: how many others would believe it as well as estimates of current public knowledge. These findings are relevant to our understanding of how our information environments may contribute to (mis)perceptions of consensus.
- Social Media & Online Platforms
- Data Science & Computational Methods
Biased AI Summaries in Search Reduce Information Seeking & Influence Attitude Polarization
Saloni Dash, Yiwei Xu, Lei Cai, Wang Liao, Emma Spiro
Large language models are increasingly used to synthesize results in popular search engines to facilitate fast, efficient information seeking from diverse sources. However, the impact of these artificial intelligence (AI)-generated summaries on users’ information seeking and attitudes remains critically underexplored. In a preregistered experiment (N = 1200), we found that AI summaries of politically polarizing topics broadly reduced information seeking. AI summaries that were manipulated to be incongruent with participants’ prior attitudes on the topic reduced attitude polarization, relative to attitude-congruent AI summaries. Notably, attitude-incongruent AI summaries were also evaluated less favorably than attitude-congruent AI summaries. These findings underscore the persuasive potential of biased AI summaries and have critical design and societal implications — from influencing users’ attitudes and information-seeking behaviors to broader concerns surrounding polarization and trust in AI-driven information ecosystems.
- AI & Machine Learning
- Equity, Ethics & Justice
- Human-Computer Interaction & UX
Biases Propagate in Encoder-Based Vision-Language Models: A Systematic Analysis from Intrinsic Measures to Zero-Shot Retrieval Outcomes
Kshitish Ghate, Tessa Charlesworth, Mona Diab, Aylin Caliskan
To build fair AI systems we need to understand how social-group biases intrinsic to foundational encoder-based vision-language models (VLMs) manifest in biases in downstream tasks. In this study, we demonstrate that intrinsic biases in VLM representations systematically "carry over" or propagate into zero-shot retrieval tasks, revealing how deeply rooted biases shape a model's outputs. We introduce a controlled framework to measure this propagation by correlating (a) intrinsic measures of bias in the representational space with (b) extrinsic measures of bias in zero-shot text-to-image (TTI) and image-to-text (ITT) retrieval. Results show substantial correlations between intrinsic and extrinsic bias, with an average ρ = 0.83 ± 0.10. This pattern is consistent across 114 analyses, both retrieval directions, six social groups, and three distinct VLMs. Notably, we find that larger/better-performing models exhibit greater bias propagation, a finding that raises concerns given the trend towards increasingly complex AI models. Our framework introduces baseline evaluation tasks to measure the propagation of group and valence signals. Investigations reveal that underrepresented groups experience less robust propagation, further skewing their model-related outcomes.
- AI & Machine Learning
- Equity, Ethics & Justice
- Data Science & Computational Methods
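The correlation step at the heart of this framework can be illustrated with a minimal sketch. All numbers below are hypothetical stand-ins: "intrinsic" mimics association effect sizes measured in a VLM's embedding space, "extrinsic" mimics bias scores measured in zero-shot retrieval, and a rank correlation quantifies how faithfully one carries over into the other.

```python
# Minimal sketch of correlating intrinsic and extrinsic bias scores.
# All values are hypothetical; the study's actual measures differ.

def rankdata(xs):
    # Simple 1-based ranks (assumes no ties).
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0] * len(xs)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def spearman(a, b):
    # Spearman's rank correlation for tie-free data.
    ra, rb = rankdata(a), rankdata(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical per-concept bias scores for one model.
intrinsic = [0.12, 0.45, 0.31, 0.70, 0.55, 0.20, 0.62, 0.38]
extrinsic = [0.10, 0.50, 0.28, 0.65, 0.60, 0.15, 0.58, 0.41]

rho = spearman(intrinsic, extrinsic)
print(f"Spearman rho = {rho:.2f}")  # high rho = strong bias propagation
```

In practice a library routine such as `scipy.stats.spearmanr` would replace the hand-rolled helper; the point is only that a high rank correlation between the two score lists is what "bias propagation" operationalizes here.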
Bridging Norms: What Do We Have in Common? What Does Disrespect Mean to You?
Belén Saldías, Sasha Krigel, Deb Roy
We present a scalable method for analyzing and reconciling speech norms across online communities, advancing understanding of normative conflict in decentralized governance. Using a dataset of 30,000 moderated Reddit comments from 332 subreddits, we define two generative tasks: identifying shared and divergent norms between communities and revealing nuanced contextual interpretations of "respect." Our results show that large language models can surface nuanced, community-specific expectations, enabling interventions that reduce conflict and foster mutual understanding. This work offers implications for moderation policy, newcomer onboarding, and norm-aware infrastructure in decentralized platforms.
- Social Media & Online Platforms
- AI & Machine Learning
- Information Policy, Law & Governance
Center for an Informed Public: Center Overview & Research Spotlight
Center for an Informed Public
The CIP helps individuals, communities and institutions navigate our complex information environments. Co-founded in 2019 through a partnership among the UW Information School, the School of Law, and the Department of Human-Centered Design & Engineering, the CIP has grown from 5 co-founders to 60+ team members representing 16 units across UW’s 3 campuses, comprising faculty, staff, postdoctoral scholars, and PhD students. The research produced by our interdisciplinary community has been prolific: 170+ peer-reviewed publications since our founding, appearing in journals such as Science, Nature, and Proceedings of the National Academy of Sciences.
- Social Media & Online Platforms
- Equity, Ethics & Justice
- Information Policy, Law & Governance
Dialect vs. Demographics: Quantifying LLM Bias from Implicit Linguistic Signals vs. Explicit User Profiles
Irti Haq
As state-of-the-art Large Language Models (LLMs) have become ubiquitous, ensuring equitable performance across diverse demographics is critical. Recent research has suggested that these models exhibit "targeted underperformance," disproportionately refusing requests or degrading response quality when users explicitly identify as belonging to certain marginalized groups. However, it remains unclear whether these disparities arise from the identity itself or from the way identity is signaled. In real-world interactions, users rarely announce "I am a Black male" before asking a question; instead, their identity is often conveyed implicitly through a complex combination of sociolinguistic factors. This study disentangles these signals by employing a factorial design with over 24,000 responses from two open-weight LLMs (Gemma-3-12B and Qwen-3-VL-8B), comparing prompts with explicitly announced user profiles against implicit dialect signals (e.g., AAVE, Singlish) across various sensitive domains. Our results uncover a unique paradox in LLM safety: users achieve "better" performance by sounding like a demographic than by stating they belong to it. Explicit identity prompts activate aggressive safety filters, increasing refusal rates and reducing semantic similarity to our reference text for Black users. In contrast, implicit dialect cues trigger a powerful “dialect jailbreak,” dramatically reducing refusal probability to near zero while simultaneously achieving higher factual consistency and semantic similarity to reference texts than Standard American English prompts. However, this "dialect jailbreak" introduces a critical safety trade-off regarding content sanitization.
These findings suggest that current safety alignment techniques are brittle and over-indexed on explicit keywords, creating a bifurcated user experience in which "standard" users receive cautious, sanitized information while dialect speakers navigate a rawer, less sanitized, and potentially hostile information landscape. They highlight a fundamental tension in alignment—between equitable access, safety, and linguistic diversity—and underscore the need for safety mechanisms that generalize beyond explicit cues.
- AI & Machine Learning
- Equity, Ethics & Justice
- Data Science & Computational Methods
The End of Trust and Safety?: Examining the Future of Content Moderation and Upheavals in Professional Online Safety Efforts
Rachel E. Moran, Joseph S. Schafer, Mert Bayar, Kate Starbird
Trust & Safety (T&S) teams have become vital parts of tech platforms, ensuring safe platform use and combating abuse, harassment, and misinformation. However, between 2021 and 2023, T&S teams faced significant layoffs, impacted by broader downsizing in the tech industry. In addition, the reduction in T&S teams has also been attributed to partisan pressure against content moderation efforts designed to mitigate the spread of election and COVID-19-related misinformation. Accordingly, crucial questions remain over the future of content moderation and T&S in the digital information environment, questions central to the work of CHI researchers interested in intervening in online harm through design, policy and user research. Through in-depth interviews with T&S professionals, this paper explores upheavals within the T&S industry, examining current perspectives of content moderation and broader strategies for maintaining safe digital environments.
- Social Media & Online Platforms
- Equity, Ethics & Justice
- Information Policy, Law & Governance
Epistemic Diversity Across Language Models Mitigates Knowledge Collapse
Damian Hodel, Jevin D. West
The growing use of artificial intelligence (AI) raises concerns of knowledge collapse, i.e., a reduction to the most dominant and central set of ideas. Prior work has demonstrated single model collapse, defined as performance decay in an AI model trained on its own output. Inspired by ecology, we ask whether AI ecosystem diversity, that is, diversity among models, can mitigate such a collapse. We build on the single-model approach but focus on ecosystems of models trained on their collective output. To study the effect of diversity on model performance, we segment the training data across language models and evaluate the resulting ecosystems over ten self-training iterations. We observe a U-shaped relationship between epistemic diversity and collapse. Our results suggest that multiple diverse models can preserve more informative data for one another than fewer, more similar models trained on larger samples. However, distributing data across too many models reduces data informativeness in the short term; it is this trade-off between short- and long-term effects that leads to the observed optimal level of diversity. In the context of AI monoculture, our results suggest the need to monitor diversity across AI systems and to develop policies that incentivize more domain- and community-specific models.
- AI & Machine Learning
- Data Science & Computational Methods
- Information Policy, Law & Governance
Exploring Influencer Creation and Shaping on Modern Social Media Platforms
Joseph S. Schafer
As people increasingly receive information from social media rather than traditional journalism venues, the role played by "influencers," or popular, trusted community members in these platforms who use their position to effectively communicate to and with their audiences, becomes more urgent to study. While top-down potentials of influencers to influence audiences are strong, the forces influencing these influencers — what topics they (don't) post about, the frames they use, their communicative styles, etc. — are underexplored. This project, and proposed dissertation work, presents three lenses to answer the overarching question of "who influences the influencers," aiming to understand the participatory, mutually shaping feedback loops of which influencers are a part. First, I focus on the role sudden attention plays in online ecosystems, and how this attention can kickstart users and creators into becoming influencers. Second, I will explore the mutual influence dynamics between influencers and their audiences, using mixed-methods analyses of social media trace data across three case studies to understand how frames are shared and updated through mutual interactions. Finally, I focus on the interactions that news influencers have with other forms of media, and how these different actors' interactions mutually shape fields of social media and journalism.
- Social Media & Online Platforms
- Data Science & Computational Methods
From Job Titles to Jawlines: Using <context voids> for Red-Teaming Generative AI Systems
Shahan Ali Memon, Soham De, Sungha Kang, Riyan Mujtaba, Bedoor AlShebli, Katie Davis, Jaime Snyder, Jevin D. West
In this paper, we introduce a speculative design methodology for studying the behavior of generative AI systems, framing design as a mode of inquiry. We propose bridging seemingly unrelated domains to generate intentional context voids, using these tasks as probes to elicit AI model behavior. We demonstrate this through a case study: probing the ChatGPT system (GPT-4 and DALL-E) to generate headshots from professional Curricula Vitae (CVs). In contrast to traditional evaluation approaches, our approach assesses system behavior under conditions of radical uncertainty — when forced to invent entire swaths of missing context — revealing subtle stereotypes and value-laden assumptions. We qualitatively analyze how the system interprets identity and competence markers from CVs, translating them into visual portraits despite the missing context (i.e., physical descriptors). We show that within this context void, the AI system generates biased representations, potentially relying on stereotypical associations or blatant hallucinations.
- AI & Machine Learning
- Equity, Ethics & Justice
- Human-Computer Interaction & UX
Governing Knowledge Commons Under Attack
Zarine Kharazian
Public knowledge institutions – resources like online knowledge repositories and public broadcasters – have come under attack across multiple fronts. These attacks have targeted not only the quality of the knowledge these institutions generate, but the governance mechanisms and legitimacy of the underlying resource systems that make this knowledge accessible. Building on approaches to knowledge commons governance, Zarine’s dissertation offers a conceptual framework that reflects how knowledge resource systems are being contested, captured, and destroyed in ways that go far beyond the threats of under-provision and appropriation that have been central to previous research on public goods and commons.
- Information Policy, Law & Governance
- Equity, Ethics & Justice
- Libraries, Archives, Museums & Cultural Heritage
High-Mountain Digital Access: When Elders Hold Smartphones
Kunsang Choden
As digital technologies expand into remote regions, participation is often framed as inevitable, useful, and empowering. Yet, in the Nubri Valley, an off-grid, high-mountain Tibetan-speaking community of Nepal, digital participation is experienced not as seamless connectivity, but as unstable, difficult, and shaped by aging, literacy and language barriers, and device breakdowns. Drawing on interviews, culturally embedded participatory tea conversations, and long-term fieldwork, this place-based research talk will highlight how aging adults in this high-mountain community use, make sense of, and struggle with digital systems that were not designed for their linguistic, cultural, or material realities.
- Human-Computer Interaction & UX
- Equity, Ethics & Justice
Investigating AI Adoption at Scale in Political Campaign Messaging of Local and State Election Candidates
Anna-Maria Gueorguieva, Nicholas Weber
Artificial intelligence (AI) is increasingly used in political campaigns for voter outreach and political participation; however, such usage can lower public trust in political institutions and spread misinformation. Given this, many states have passed legislation prohibiting or requiring the disclosure of AI usage in political campaigns. However, exactly how political candidates use AI in their election campaigns is unknown. This project aims to build a dataset of political candidate messaging from 2018 to 2025 for state and local elections. We then apply AI detection methods to determine the usage of AI across these elections, ultimately aiming to inform the effectiveness of existing legislation that prohibits or requires disclosure of AI usage in such local and state elections.
- AI & Machine Learning
- Equity, Ethics & Justice
- Data Science & Computational Methods
Location, Location, Location: Persona-Assignment in Language Models Reflects U.S. State Political Biases
Abdalla Abdalla, Chaitanya Sekhar, Trien Vuong, Iris Y. Zhong, Anna-Maria Gueorguieva, Aylin Caliskan, Ian Yang, Michael Saxon
Social and political biases occur in the context of location, where one geographic area may hold different beliefs and biases than another. To understand whether large language models (LLMs) can replicate patterns of bias based on geographic location, this study conducts two experiments with four LLMs personalized via persona-assignment to U.S. states and compares results to data from people of the respective U.S. state. First, using Prompt-Based Association Tests (PBATs), adaptations of Implicit Association Tests, we investigate how location-personalization alters LLMs' associations of women, Muslims, and Native Americans with evaluative attributes related to historical biases. U.S. state personas increased gender and racial bias relative to non-personalized prompting in 77% of outputs across all models, though not always in the direction of known human biases. We then apply location-personalization and conduct the Political Compass Test, a widely used political ideology survey, with LLMs to obtain a measure of political bias. Comparing results to voting patterns in a state, a proxy for ideology, we find that 3 out of the 4 location-personalized models present ideologies with high correlation to the voting patterns of their respective state. Results have implications for the usage of location personalization given the identified potential to reflect local bias. We discuss how our different approaches in the two experiments reveal more about the nature of personalization and conclude with considerations for bias mitigation, improving personalization evaluations, and future work extending such issues to global and multilingual contexts.
- AI & Machine Learning
- Equity, Ethics & Justice
- Data Science & Computational Methods
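The second experiment's evaluation logic — comparing persona-conditioned ideology scores against state voting patterns — can be sketched with a toy correlation. Every number below is invented for illustration; the study's actual data, models, and scales differ.

```python
# Toy sketch: correlate a model's Political Compass position under each
# state persona with that state's vote margin. All values are hypothetical.

states  = ["WA", "TX", "CA", "FL", "NY", "OH"]
compass = [-4.2, 1.8, -5.0, 1.2, -3.6, 0.5]      # hypothetical persona scores
margin  = [-19.2, 11.1, -29.2, 3.4, -23.1, 8.1]  # hypothetical vote margins

def pearson(a, b):
    # Pearson product-moment correlation coefficient.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

r = pearson(compass, margin)
print(f"persona-vs-vote correlation r = {r:.2f}")
```

A high positive correlation on real data is what the abstract means by a location-personalized model "presenting ideologies with high correlation to the voting patterns of its state."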
Loki's Loop: Games for Navigating a Complex Information Environment
Chris Coward, Jin Ha Lee, Lindsay Morse, Jason Yip, Nisha Devasia, Michelle Newman, Caroline Pitt, Runhua Zhao
How can games help people learn about, and foster agency within, online environments marked by a sharp rise in misleading content, synthetic media, algorithmic targeting, malicious social media bots, and other forms of deception and polarization? The Loki’s Loop project lies at the intersection of misinformation studies, media and information literacy (MIL), and games. Working with librarians, children, and partners around the world, we co-design escape rooms and other play-based activities tailored to diverse information environments. Our approach to design is guided by a focus on:
• Social: Current information challenges are largely driven by social media, an inherently social activity. Our games foster collective sensemaking, in contrast to the individual orientation of most MIL approaches.
• Context: The information practices of online fandom communities differ from those of breast cancer communities. We design for diverse contexts, in contrast to the dominant universal-skills paradigm.
• Narrative: Stories can create immersive and memorable experiences that mirror real-life situations. The challenges of misinformation are not only due to a lack of better facts or skills to tell truth from fiction; rather, they stem from the influences of worldviews, identities, social pressures, and other socio-affective dynamics.
Loki’s Loop is a collaboration among the Center for an Informed Public, GAMER Group, KidsTeam, and Marmot Solutions.
- Education & Learning Technologies
- Social Media & Online Platforms
- Human-Computer Interaction & UX
Narrative Building Blocks: A Trope-Based Framework for Analyzing Election Rumors
Mert Can Bayar, Stephen Prochaska, Ashlyn B. Aske, Joseph Schafer, Kate Starbird
The proliferation of rumors and conspiracy theories has become a persistent challenge undermining the integrity of U.S. democratic processes, supporting unfounded claims of stolen elections and widespread fraud. Although every rumor revolves around different events or claims, they often share certain features. This research builds on this insight by identifying the key pieces of election rumors to inform predictive and preventative measures aiming to communicate about election processes and procedures. We develop a framework of narrative tropes derived from iterative qualitative coding of nearly 600 unique election rumors tracked between 2020 and 2024. These tropes serve as the essential building blocks of election rumors, providing a lens for analysts to understand how audiences interpret novel, often complex events through familiar characters and storylines. The framework categorizes these tropes into six primary dimensions: actors, actor-actions, objects, object-actions, scene and setting, and suggested remedies or calls to action. By identifying these recurring themes, this work creates an operationalizable framework that demonstrates how individual rumors are not isolated incidents but are instead deeply embedded within larger, self-reinforcing political narratives that shape public perception of election legitimacy.
- Social Media & Online Platforms
- Equity, Ethics & Justice
People Analytics Research: Evidence-Based Insights on Work, Well-Being, and Performance
Heather Whiteman, Souporno Ghosh, Aswathy Kumar
People Analytics sits at the intersection of psychology, data, business, and technology, bringing together researchers, analysts, practitioners, and leaders who share a common goal: using information thoughtfully to improve how work is experienced and designed. At its core, this field focuses on understanding systems of work and creating healthier, more sustainable environments. This research explores how workplace experiences shape employee well-being, performance, and retention over time using real organizational data from an applied industry context. Our work asks how organizational design choices influence patterns of burnout, engagement, and growth across teams and roles. Our current research agenda spans three areas. First, we examine burnout dynamics: how exhaustion, cognitive strain, and disengagement develop and interact over time. Second, we analyze longitudinal performance trajectories to identify patterns of growth, plateau, decline, and recovery. Third, we investigate managerial structures, including span of control, to understand how management design affects employee experience. We translate findings into measurement frameworks and intervention ideas that organizations can evaluate over time. Grounded in rigorous methods and explicit ethical principles of privacy, transparency, and proportional data use, this work reflects the collaborative spirit of the People Analytics community: curious, interdisciplinary, and committed to using data as a force for good.
- Health & Well-being
- Data Science & Computational Methods
- Equity, Ethics & Justice
Political Content Exposure Across Facebook, Reddit, and X during the 2024 U.S. Election
Kayla Duskin
Social media, including discrete platforms with distinct affordances and user-bases, has become a key avenue for people to encounter and engage with political information. This has sparked concern over the quality and diversity of political information that users find there. Challenges of data accessibility have made it difficult to characterize how individual users navigate an online information ecosystem across a constellation of social media platforms and websites. In this study we examine the online experience of 2,729 consenting users across Facebook, X, and Reddit in the months surrounding the 2024 U.S. presidential election. Specifically focusing on the content served to users via algorithmic recommendation, we pair logs of online behavior with surveys of these users and consider socio-demographic characteristics and self-reported attitudes, cognitions, and behaviors alongside on-platform data.
- Social Media & Online Platforms
- Data Science & Computational Methods
Real-Time Narrative Detection in Crisis Events
David Farr, Stephen Prochaska, Jack Moody, Lynnette Ng, Iain Cruickshank, Kate Starbird, Jevin D. West
Understanding the information environment (IE) during crisis events is challenging due to rapid discourse shifts and limited direct visibility into evolving narratives. Classification-based approaches often rely on predefined labels and static taxonomies, while network-based methods provide limited insight into semantic evolution over time. This work presents a systems-oriented framework for modeling emerging narratives as temporally evolving semantic structures without requiring prior label specification. By integrating semantic embeddings, density-based clustering, and rolling temporal linkage, the framework represents narratives as persistent yet adaptive entities within a shared semantic space. We apply the methodology to a real-world crisis event and evaluate system behavior through stratified cluster validation and temporal lifecycle analysis. Results demonstrate high cluster coherence and reveal heterogeneous narrative lifecycles characterized by both transient fragments and stable narrative anchors. Grounded in situational awareness theory, the approach supports perception and comprehension by transforming unstructured social media streams into interpretable, temporally structured representations. The resulting system provides a scalable foundation for monitoring and decision support in dynamic information environments.
- Social Media & Online Platforms
- Data Science & Computational Methods
- AI & Machine Learning
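The three stages named in the abstract (semantic embeddings, density-based clustering, rolling temporal linkage) can be sketched at toy scale. The sketch below substitutes hand-made 2-D vectors for real sentence embeddings and a greedy cosine clusterer for a density-based one such as HDBSCAN; it is an illustration of the pipeline's shape, not the authors' implementation.

```python
# Toy sketch: cluster "post embeddings" per time window, then link
# clusters across windows by centroid similarity. All data is invented.
import math

def cos(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def centroid(c):
    return [sum(xs) / len(c) for xs in zip(*c)]

def cluster(points, thresh=0.95):
    # Greedy cosine clustering: join the first cluster whose centroid is
    # similar enough, else start a new one (a simple stand-in for
    # density-based clustering such as HDBSCAN).
    clusters = []
    for p in points:
        for c in clusters:
            if cos(p, centroid(c)) >= thresh:
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

# Two time windows of toy 2-D embeddings: narrative A drifts slightly
# between windows; narrative B appears only in window 2.
window1 = [[1.0, 0.1], [0.9, 0.2], [0.95, 0.15]]
window2 = [[0.9, 0.3], [0.85, 0.35], [0.1, 1.0], [0.2, 0.9]]

c1 = cluster(window1)
c2 = cluster(window2)

# Rolling temporal linkage: a window-2 cluster continues a window-1
# narrative when their centroids stay similar.
links = [(i, j) for i, a in enumerate(c1) for j, b in enumerate(c2)
         if cos(centroid(a), centroid(b)) >= 0.95]
print(links)  # linked cluster pairs across windows
```

Linked cluster chains across successive windows are what make a narrative a "persistent yet adaptive entity": it can drift semantically while remaining traceable, and unlinked new clusters mark emerging narratives.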
Research at the Technology & Social Change Group (TASCHA)
Jason C. Young
The Technology & Social Change Group (TASCHA) is an interdisciplinary center whose research explores the relationship between digital technologies and society, with an emphasis on applied work grounded in community engagement and international impact. TASCHA has been active in 50+ countries over 20 years, working towards a vision where people use information technologies to build more open, inclusive, and equitable societies. This poster provides an overview of TASCHA research in areas including community-based development, interventions against misinformation, Indigenous and rural technology design, community archiving, and more. It also describes a new TASCHA-sponsored research group, Researchers in Community (RiC), which supports students, faculty, and researchers that engage in community-based research.
- Social Media & Online Platforms
- Equity, Ethics & Justice
- Libraries, Archives, Museums & Cultural Heritage
Schemas of Suspicion: Human Assertions of Suspicious Behavior on Reddit
Emily Porter, Jelani Ince, Emma Spiro
In 2019, the Mueller Report revealed how Russian operatives exploited social media to manipulate public opinion and intensify ideological divisions in the United States through the creation of deceptive online identities. These inauthentic accounts demonstrated how the same platform affordances that enable rich forms of self-expression and community-building online can also be weaponized to spread disinformation and exploit socio-cultural tensions. While research and detection efforts have largely centered on automated bot accounts, human-mediated inauthentic accounts remain difficult to identify at scale due to their adaptability and contextual complexity. Despite growing concern about online deception, little research has examined how human perspectives within digital communities function as a mode of deception detection. Platforms like Reddit, where user engagement and community participation are highly transparent, provide a unique opportunity for users to observe and contextualize the behavior of other users at both the community and platform level. This study explores how digital community members on Reddit use social cues, cultural knowledge, and contextual awareness to interpret authenticity and identify potentially deceptive users.
- Social Media & Online Platforms
- Data Science & Computational Methods
Spider Jesus: Christian Visual AI Slop, Meaning Making, and Scams on Facebook
Nina Lutz, Joseph S. Schafer, Julie Vera, Sourojit Ghosh, Kate Starbird
In 2024, Facebook users noticed an influx of “AI Slop” on the platform. Although journalists and researchers worked to investigate the phenomenon, much remains unknown about users’ experiences and theories about how and why AI Slop pages operate. We present a mixed methods study of interviews with 15 users and 5 religious experts, alongside a computationally assisted analysis of 6000 images and engagement data across 100 AI Slop Facebook pages focused on a particular, popular subgenre of AI Slop: Jesus images. We present findings about AI Jesus Slop and its intersections with user experience, folk theories, and safety. We identify instances of digital volunteerism as well as tensions that are held in "folk stories" by users who seek to spread digital literacy about perceived risks related to AI Slop and visual spam. By doing so, we offer implications for building literacy regarding visual AI Slop and how to foster safety and digital commons when spam and scams intersect with intimate personal identities like religion.
- Social Media & Online Platforms
- Equity, Ethics & Justice
- AI & Machine Learning
Tools for Supporting Humans to Achieve Consensus within Crowd-Sourced Fact-Checking
Soham De, Haiwen Li (MIT), Jay Baxter (X Community Notes), Brad Miller (X Community Notes), Michiel A. Bakker (MIT), Martin Saveski
Community Notes on X is a crowd-sourced fact-checking system that allows users to annotate potentially misleading posts and displays them publicly if rated helpful by a diverse set of users. Although effective when shown, notes appear on relatively few posts: for 91% of posts with at least one proposed note, none achieve sufficient support to be displayed. To address this scarcity, we introduce Supernotes, AI-generated notes that synthesize multiple human-proposed notes and are designed to foster consensus among diverse users. Our framework uses an LLM to generate diverse Supernote candidates, which are then ranked by a novel personalized helpfulness model (PHM) trained on millions of historical Community Notes ratings. In controlled lab experiments, participants rated Supernotes as significantly more helpful than the best existing notes and preferred them 75.2% of the time. In a follow-up study, we extend our approach to support human note writing by integrating the PHM into an AI-powered writing assistant, CrowdPulse, that provides real-time feedback to note writers in the form of predicted reception by raters, rewriting suggestions, and source recommendations. Through controlled lab experiments, we demonstrate how increasing levels of AI support affect the quality of human-written notes and discuss implications for AI-assisted consensus building.
- Social Media & Online Platforms
- Human-Computer Interaction & UX
- AI & Machine Learning
The Use of Science in U.S. AI Policymaking
Sarah Tran, Nicholas Weber
U.S. federal agencies are increasingly focused on Artificial Intelligence (AI) and, more specifically, on developing a series of policy tools to address its rapid adoption. Federal agencies have solicited thousands of public comments to inform AI policymaking. This work aims to examine these public comments to understand the varying ways that political actors and stakeholders frame and leverage evidence to achieve AI policy change. We conduct an in-depth analysis of public comments on major U.S. AI policies to determine the kinds of evidence that are cited and, more broadly, how these actors construct and interpret this evidence. Understanding the application of evidence in rulemaking – across time, agencies, and issue areas – is critical not only for promoting transparency in agency decision-making but also for advancing our understanding of the kinds of knowledge that go on to shape decisions on pressing societal issues.
- Information Policy, Law & Governance
- Equity, Ethics & Justice
- AI & Machine Learning
VEAT Quantifies Implicit Associations in Text-to-Video Generator Sora and Reveals Challenges in Bias Mitigation
Yongxu Sun, Michael Saxon, Ian Yang, Anna-Maria Gueorguieva, Aylin Caliskan
Recent Text-to-Video (T2V) generators such as Sora raise concerns about whether generated content reproduces societal biases. We introduce the Video Embedding Association Test (VEAT) and Single-Category VEAT (SC-VEAT) to quantify demographic associations in generated videos, extending prior embedding-based bias tests to the video domain. We validate VEAT/SC-VEAT by replicating established IAT scenario effects and OASIS image-category associations. Applying these methods to race (African American vs. European American) and gender (women vs. men) across valence, 7 awards, and 17 occupations, we find strong biases: European Americans and women are significantly more associated with pleasantness (d > 0.8). Bias magnitudes correlate with real-world demographic distributions in occupations and awards (up to r = 0.99), suggesting T2V outputs reflect historical disparities. Explicit “debiasing” prompts reduce effect sizes overall but can backfire, strengthening marginalized-group associations in cases already linked to those groups (e.g., janitor, postal service work). These results highlight that prompt-based mitigation is not uniformly safe and that T2V systems require rigorous evaluation before deployment.
- Data Science & Computational Methods
- Equity, Ethics & Justice
- AI & Machine Learning
Who Do We Trust: Evaluating Generative AI as an Agent of Health Information
Annie L. Zhang, Rachel E. Moran, Madeline Jalbert, Jevin D. West
As large language models (LLMs) become increasingly prominent sources for science and health information, understanding how people evaluate these models, particularly in relation to human experts, is critical for promoting a healthy communication ecosystem. This study therefore seeks to examine how source identity can shape public perceptions of trust, credibility, and expertise. In a between-subjects survey experiment, we will ask participants to evaluate one of five sources: an unbranded LLM, a branded LLM (e.g., ChatGPT), a medical doctor, an AI–doctor collaboration, or a health-specific LLM. Furthermore, messages will be presented in two issue contexts (vaccine vs. dietary guidance) to assess how these patterns may hold for contexts that are differentially polarized. Outcomes include attitudes toward the message (perceived accuracy, quality, and acceptability), perceptions of the source (expertise, trustworthiness, credibility, and empathy), and behavioral intentions (information-seeking and willingness to follow recommendations). By isolating source effects, this project aims to provide evidence on how AI is evaluated as an agent of science and health communication and to clarify the role of human–AI collaborations in shaping trust in a rapidly changing information environment.
- Health & Well-being
- Equity, Ethics & Justice
- AI & Machine Learning
“You Gotta Have a Record of Being Homeless”: Legibility in the Data Infrastructure of Homeless Services
Pelle Tracey
Data-driven policymaking requires good data. And good data rests on much careful labor—the work of enumeration, cleaning, curation, and infrastructure maintenance. But what does good data require of the people about whom the data are made? What if someone doesn’t want to “show up in the data”? This poster reports on ongoing ethnographic research into the data practices and politics undergirding U.S. homeless services systems. It details how some people experiencing homelessness respond to being the subjects of data collection, revealing a surprising set of “legibility practices”—strategies for being more or less amenable to being made into data. Based on these findings, I argue that policymakers and researchers should revise assumptions about how data collection works in homeless services and change how automated decision-making in this context is designed.
- Data Science & Computational Methods
- Information Policy, Law & Governance
- Equity, Ethics & Justice
Zone 2: Youth, Learning & Community Knowledge
Caring for the Furry Friends in the Smart Home: An Initial Exploration of Child-Centered Approach to Designing for Pets
Kaiwen Sun, Jade Li, Irene Chung, Jenny Radesky, Jason Yip, Christopher Brooks, Florian Schaub
Smart home technologies are often designed to meet the needs of adults, yet children and pets also live with these systems without much say in their design or function. HCI and CCI researchers have shown the value of studying children’s experiences and ideation of technologies used in the home. In this pictorial, we explore how children design smart home technologies for their pets. We analyze data from an in-home study with 6-to-11-year-olds. Our analysis identifies five kinds of experiences children linked to smart home technologies in pet caregiving: convenience, presence, physical comfort, emotional wellbeing, and responsibility. Grounded in children’s everyday routines of playing with and looking after their pets, this work offers design directions for domestic technologies that account for non-human household members.
- Human-Computer Interaction & UX
- Health & Well-being
- Education & Learning Technologies
Center for Learning, Computing, and Imagination
Amy J. Ko, R. Benjamin Shapiro, Kevin Lin
We are a community of researchers, educators, and students who are passionate about advancing computing education. Our work explores how people learn programming, data science, machine learning, and AI, as well as the broader ways computing shapes and transforms the world around us.
- Human-Computer Interaction & UX
- AI & Machine Learning
- Education & Learning Technologies
Children's Understanding of Free-to-Play Digital Game Monetization Designs (Demo)
Emilia Russo, Alexis Hiniker
Free-to-play video games and platforms common in children's online play such as Roblox and Fortnite demonstrate a variety of manipulative monetization designs. Prior work indicates that children struggle to reason about common monetization designs like multiple tiers of currency, bundling, and variable exchange rates. However, there is no work to date demonstrating a causal link between certain monetization designs and players' understanding of cost. We developed a video game demo with a custom storefront to test the impact of monetization design on players' understanding of prices. In this demo, we will offer research fair attendees an opportunity to participate in this study by playing the demo and answering questions about the value of game items.
- Education & Learning Technologies
- Human-Computer Interaction & UX
Designing Games Coast to Coast: Creating engineering games for peer-to-peer joint media engagement with GBH Kids and KidsTeam UW
Caroline Pitt, Daeun Yoo, Jessica Reuter Andrews, Melissa Carlson, Anessa Roth, Joyce Chou, Katie Duong, Stephanie Lee, Ziwen Meng, Ici Su, Josie Welin, Jason Yip
Since Fall 2023, researchers from UW have teamed up with GBH Kids to explore how participatory design (PD) with children can help create engineering games that promote peer-to-peer Joint Media Engagement (JME). Through extensive KidsTeam co-design, user testing, and technology probes, the team has gained new insights for designers, researchers, and families... and you can play the games on PBS Kids!
- Education & Learning Technologies
- Human-Computer Interaction & UX
The Engagement-Prolonging Designs Teens Encounter on Very Large Online Platforms
Yixin Chen, Yue Fu, Zeya Chen, Jenny Radesky, Alexis Hiniker
In the attention economy, online platforms are incentivized to design products that maximize user engagement, even when such practices conflict with users' best interests. We conducted a structured content analysis of all Very Large Online Platforms (VLOPs) to identify the designs these influential apps and sites use to capture attention and extend engagement. Specifically, we conducted this analysis posing as a teenager to identify the designs that young people are exposed to. We find that VLOPs use four strategies to extend teens' use: pressuring, enticing, trapping, and lulling them into spending more time online. We report on a hierarchical taxonomy organizing the 63 designs that fall under these categories. Applying this taxonomy to all 17 VLOPs, we identify 583 instances of engagement-prolonging designs, with social media platforms using twice as many as other VLOPs. We present three vignettes illustrating how these designs reinforce one another in practice. We further contribute a graphical dataset of videos illustrating these features in the wild.
- Social Media & Online Platforms
- Human-Computer Interaction & UX
- Information Policy, Law & Governance
“Families are messy”: From Parent-Child Tensions to Family-Centered Design of Smart Home Technologies
Jade Li, Jason Yip, Katie Davis, Florian Schaub, Christopher Brooks, Jenny Radesky, Kaiwen Sun
Smart home technologies have become common in family homes, making even young children inevitable users of these technologies. However, these systems are typically designed for individual adults, creating family tensions and conflicts over children's access, safety, and appropriate smart home use. To investigate children's and parents' individual and joint smart home needs and dynamics, we conducted an in-home study with nine families (children aged 6-11). We identify four key parent-child tensions with smart home technologies, including struggles over parental protection versus childhood autonomy, differing views on technology's purpose, disagreements over technology-enforced routines, and children's vulnerability to embedded commercialism. Our work reconceptualizes parental mediation as a process of “tension management” rather than the application of static rules. This research challenges the dominant individual-centric choice architecture in smart home design, calling for a family-centered approach that acknowledges and adapts to the fluid, complex, and negotiated reality of modern family life.
- Human-Computer Interaction & UX
- Equity, Ethics & Justice
From Emotional Mirroring to Emotional Attunement: Do LLMs and Humans Attune to Each Other?
Marx Wang, Robert Wolfe, Songling Ngo, Raghavi Putluri, Alexis Hiniker
Much prior work establishes that LLMs effectively mirror the affective state of a user. However, human social interaction depends not on immediate mirroring, but on emotional attunement, a process of bidirectional affective synchronization between individuals. In this work, we evaluate whether LLMs emotionally attune with users, comparing LLM-user interactions with client-therapist interactions. We find evidence for a “hollow echo” effect: LLMs strongly mirror user affect in immediate responses but fail to attune to user emotional state across multi-turn interactions. This contrasts with client-therapist interactions, where we observe a more durable and moderate form of attunement. Moreover, we find that while clients attune to their therapists, users do not attune to LLMs, such that user-LLM attunement cannot be said to be bidirectional. Our findings indicate that current LLMs are inadequate for relationally complex contexts, which require sustained attunement, rather than immediate mirroring.
- AI & Machine Learning
- Human-Computer Interaction & UX
- Health & Well-being
How Romantic Partners Build "Sound Relationship Houses" Through Shared Gaming
Emilia Russo, Nisha Devasia, Alexis Hiniker
We conducted semi-structured interviews with 13 co-located romantic couples (N = 26) to investigate their experiences jointly playing video games. We found, first, that couples used shared gaming as a rich ground to construct relational well-being. We use the Sound Relationship House Theory to describe how couples built love maps, expressed fondness and admiration, turned towards bids for connection, took the positive perspective, managed conflict, made life dreams happen, and created shared meaning through their experiences gaming together. Next, we found that couples expressed diverse and unique needs for shared gaming that were informed by their individual, relational, and environmental contexts. Lastly, we found that couples adapted game designs to meet their needs through creative play strategies. We present design provocations informed by these findings to enrich opportunities for couples' relational well-being.
- Human-Computer Interaction & UX
- Health & Well-being
Impact of School Phone Restriction Policies on Digital Stress Among Adolescents
Daniela Muñoz Lopez, Carly Gray, Kimberly Molaib, Yonatan Ambrosio Lomeli, Venus Rekow, Lucía Magis-Weinberg
Concerns regarding cell phones in classrooms due to potential impacts on academic achievement, social learning, and social and emotional well-being have largely influenced the implementation of various school phone restriction policies (Jason, 2024). These policies could potentially influence digital stress, which describes how the subjective experience of the qualitative and quantitative aspects of digital media function as stressors despite available coping resources (Steele et al., 2019). Previous work has found that digital stress moderates the impact of digital devices on psychosocial outcomes such as depression, anxiety, and loneliness (Hall et al., 2021). Therefore, it is important to characterize how endorsements of digital stress have been influenced by phone policies. In the current study, we explored the relationship between digital stress and school phone restriction policy type among middle and high school students in the US (N = 1209, Mage = 14.61, SDage = 1.93, 49% female adolescents). Linear models revealed that digital stress was lower for students who had stringent phone restriction policies (b = -0.26, t(1042) = -2.78, p < .001). Results from this study contribute to our understanding of digital stress and can serve future interventions geared towards addressing digital stress to promote well-being among youth.
- Health & Well-being
- Social Media & Online Platforms
- Information Policy, Law & Governance
Misinformation and Teenagers: Exploring the Role of Social and Emotional Learning in Media and Information Literacy and Teenagers’ Perceptions
Johnny Cho
This project explores how Social and Emotional Learning (SEL) can be integrated into misinformation education for teenagers through playful, game-based learning. Instead of focusing only on fact-checking skills, the project examines how emotions, empathy, trust, and social relationships shape how young people engage with misinformation. The work includes a misinformation game and a structured debriefing activity designed to support reflection, empathy, and responsible decision-making. Overall, the project aims to support the development of healthy digital citizenship by helping youth better understand the social and emotional consequences of misinformation.
- Health & Well-being
- Social Media & Online Platforms
- Education & Learning Technologies
Racial and Ethnic Differences in Student Perceptions of School Smartphone Restrictions
Sarrah Khan, Trisha Venkatesan, Kimberly Nielsen Molaib, Carly E. Gray, Yonatan Ambrosio Lomeli, Lucía Magis-Weinberg
Schools across the United States are increasingly adopting phone restriction policies, yet variation in their implementation raises important equity concerns. Using data from a 2025 study on the perceived impacts of phone restrictions, we examined racial and ethnic differences in students’ knowledge of and experiences with phone restriction policies in 13 middle and high schools (n = 4525) across 4 urban and suburban school districts in one U.S. state. Students reported consequences for policy violations (e.g., verbal warnings, phone confiscation, office referral). We then analyzed a subsample (n = 937) from five schools with racially representative response rates, which additionally reported perceptions of policy fairness and strictness. In the full sample, reported consequences differed by race and ethnicity, χ²(48, N = 4,437) = 107.58, p < .001; however, these differences were not observed in the representative subsample, where we found racial and ethnic differences in perceptions of phone policy fairness and strictness. Asian students reported greater policy fairness (β = 0.42, p = 0.00568) and strictness, and American Indian/Alaskan Native students (β = 0.27, p = 0.0209) preferred less strict phone policies. These exploratory findings suggest that while knowledge of phone rules is largely consistent across groups, perceptions of policy fairness vary by race and ethnicity. Future research should examine intersections with socioeconomic status and school context to better understand potential inequities.
- Education & Learning Technologies
- Equity, Ethics & Justice
- Information Policy, Law & Governance
Translating and adapting Social Media Test Drive for Latin American adolescents (Demo)
Anwita Kamath, Yonatan Ambrosio Lomeli, Lucía Magis-Weinberg
Social media is central to adolescents' lives and significantly impacts their mental health. Adolescents in low- and middle-income countries are the fastest-growing demographic for internet and social media use, yet they often lack access to digital literacy education. Early training in digital citizenship can maximize benefits and minimize risks, especially before adolescents open their first accounts. Social Media Test Drive (SMTD) is a web-based social media simulation designed by the Cornell University Social Media Lab in collaboration with Common Sense Education for young adolescents (9-13 years) who are new to social media. The platform provides a controlled environment for adolescents to practice digital citizenship skills. Built as a browser-based platform, it replicates core social media interactions (posts, comments, profiles, and timelines) without real-world risks. Each module includes four sections: a Tutorial introducing concepts, a Guided Activity with step-by-step practice, a Free-play section for exploration, and a Reflection for consolidation. This asynchronous learning simulation environment enables experiential learning without exposure to actual social media risks. Realizing the need for this tool in other communities, our team has been translating materials to Spanish and adapting the vignettes and characters for the Latin American context. In this Demo we will showcase the Spanish version of SMTD.
- Education & Learning Technologies
- Health & Well-being
- Social Media & Online Platforms
Unequal Classrooms, Unequal AI: How Infrastructure and Institutional Power Shape Indian Teachers’ First Encounters with ChatGPT
Hritvik Gaur, Upendra Kumar, Belén C Saldías
Generative AI tools such as ChatGPT are increasingly positioned as classroom supports, yet teachers’ ability to benefit from them is shaped by unequal institutional, linguistic, and infrastructural conditions. We study how Indian primary and high-school teachers experience generative AI when using it for the first time, and how perceptions shift after guided, classroom-aligned exposure. We conducted a mixed-method pilot with 20 teachers across varied demographics and school types, combining baseline interviews and scaled measures with an in-interview demonstration, a week of independent use (often via voice for language comfort), and follow-up interviews and post-interaction measures. We find that hands-on, context-specific interaction increases comfort and perceived usefulness, and reduces initial fear and job-displacement narratives by repositioning AI as an assistive tool for lesson planning, explanation, and content creation. However, gains are unevenly constrained by structural factors: public-school and low-resource contexts face persistent barriers related to infrastructure and device availability, unreliable connectivity, limited digital training and awareness, and lack of institutional and governing-body support, while linguistic accessibility shapes who can participate confidently. Risk awareness improves after exposure but often remains surface-level, highlighting accountability gaps when onboarding is treated as a one-time event rather than sustained support. We argue that “teacher-facing AI” is a fairness issue not only in model behavior but in the conditions of access and practice that determine who can safely experiment, learn, and benefit. We conclude with implications for equitable onboarding, language-inclusive interaction design, and institution-aware deployment strategies that center teachers’ agency.
- Education & Learning Technologies
- AI & Machine Learning
- Equity, Ethics & Justice
UW Center for Digital Youth
Katie Davis, Alexis Hiniker, Jason Yip, Lucía Magis-Weinberg
At the Center for Digital Youth (CDY), we are shaping the future of technology for and with youth. Our mission is to ensure that digital experiences empower young people: supporting their learning, wellbeing, and social development rather than exploiting or harming them. Technology is now a central part of young people’s lives—from early childhood apps, to social media, to AI-powered learning. However, the public conversation about young people and technology is often shaped by industry interests or moral panics rather than research-backed strategies that support youth. The Center for Digital Youth fills this gap by producing rigorous, interdisciplinary research. While many research centers study the effects of technology on youth, the Center for Digital Youth is unique in its integration of multiple disciplines with a strong focus on design-based intervention. Unlike traditional psychology or education research groups, we don’t just study how technology affects young people; we actively build and test solutions in partnership with them.
- Education & Learning Technologies
- Health & Well-being
- Human-Computer Interaction & UX
UW Youth Advisory Board (UW-YAB)
Rotem Landesman, Ally Phan, Lucía Magis-Weinberg, Katie Davis
The UW Youth Advisory Board (UW-YAB) in the iSchool's Center for Digital Youth is a group of high school students (ages 14-17) who work with researchers to explore the opportunities and complexities technologies (like social media, AI tools, etc.) bring into our lives. Teens engage in co-design, analysis, and reflective practices to enrich research for and about youth, ensuring their voices are front and center in researchers' endeavors.
- Education & Learning Technologies
- Social Media & Online Platforms
- Human-Computer Interaction & UX
Zone 3: Human-Centered Design & HCI
The Ability-Based Design Mobile Toolkit (ABD-MT): Developer Support for Runtime Interface Adaptation Based on Users’ Abilities
Junhan Kong, Mingyuan Zhong, James Fogarty, Jacob O. Wobbrock
Despite significant progress in the capabilities of mobile devices and applications, most apps remain oblivious to their users' abilities. To enable apps to respond to users' situated abilities, we created the Ability-Based Design Mobile Toolkit (ABD-MT). ABD-MT integrates with an app's user input and sensors to observe a user's touches, gestures, physical activities, and attention at runtime, to measure and model these abilities, and to adapt interfaces accordingly. Conceptually, ABD-MT enables developers to engage with a user's “ability profile,” which is built up over time and inspectable through our API. As validation, we created example apps to demonstrate ABD-MT, enabling ability-aware functionality in 91.5% fewer lines of code compared to not using our toolkit. Further, in a study with 11 Android developers, we showed that ABD-MT is easy to learn and use, is welcomed for future use, and is applicable to a variety of end-user scenarios.
- Human-Computer Interaction & UX
- AI & Machine Learning
Ability Heuristics for Conducting Accessibility Inspections
Claire L. Mitchell, Junhan Kong, Jesse J. Martinez, Shaun K. Kane, Amy J. Ko, Alexis Hiniker, Jacob O. Wobbrock
The accessibility of interactive technologies is often evaluated using checklists that are low-level, numerous, and platform specific. Such checklists are typically used by accessibility experts, leaving everyday designers and developers with little support for assessing their own interfaces. To make accessibility evaluations easier to conduct, we devised a set of nine “ability heuristics”, akin to usability heuristics, that prompt designers to engage with accessibility. In this work, we describe how we created the heuristics and our motivation behind each one. Further, to understand the efficacy of the heuristics, we empirically evaluated these ability heuristics with master’s students in HCI and Design, comparing them to usability heuristics and WCAG. The students found the heuristics were as easy to use as the alternative methods. With this work, we argue that the heuristics help to move beyond binary notions of accessibility, pushing designers to consider the quality of accessibility features across diverse disabilities and the range of abilities within.
- Human-Computer Interaction & UX
- Equity, Ethics & Justice
Accessible Visual Creativity Through Multimodal, Verifiable, and Customizable AI Interactions
Zhuohao (Jerry) Zhang
Creativity is a fundamental expression of human agency, yet the tools and workflows that support creative production remain overwhelmingly visual. Blind and low-vision (BLV) individuals possess equal creative potential, but today's technologies still restrict their ability to explore, author, and refine visual media (e.g., presentation slides) on their own terms. While modern interfaces increasingly provide accessible labels that let BLV users navigate and operate visual authoring tools, these affordances only support assembling basic, functional artifacts. The ability to independently craft effective visual communications, however, remains largely out of reach. My research explores how we can design multimodal, verifiable, and customizable human-AI interactions to enable BLV users to engage in visual content creation in ways that were not possible before. First, I develop multimodal interaction techniques to support the accessible creation of visual artifacts such as artboards and slide decks within productivity applications. Second, I design human-AI interactions that let BLV users verify AI-generated design outcomes, including design suggestions, intermediate representations, and visual expressions, to confirm alignment with their creative goals. Third, I introduce customizable interactions powered by AI and LLMs to enable BLV users to define how digital content is perceived and manipulated based on their personal preferences and tasks.
- Human-Computer Interaction & UX
- AI & Machine Learning
- Equity, Ethics & Justice
Bespoke Encodings: The Importance of Radically Personal Visualizations
Jaime Snyder
From fitness apps to public health data dashboards, visualizations offer a lens into how wellness is made computationally legible within large, sprawling data systems. However, standardized visualizations of personal data can be detached from intimate, embodied experiences of self and personhood, subtly—and sometimes not so subtly—shaping what we think we know, and even what we consider knowable, about ourselves. As black-boxed data analytic technologies build capacity to cast doubt on the authenticity of embodied personal experience (“I thought I slept well last night, until I looked at my app…”), it is essential to interrogate the experience of datafication, to ask what is gained or lost in the process of being rendered legible through data, and to compare personal lived experiences with computable models. Grounded Visualization Design (GVD) is a research approach that does this by empowering individuals to challenge and subvert conventional analytics-driven methods for visualizing personal data. GVD is a collaborative design methodology that uses visual prompts, probes, and elicitations to enable individuals untrained in quantitative science to create bespoke systems for visually encoding personal data and lived experiences. A selection of GVD projects illustrates how this methodology has been a powerful tool for reflecting on the implications and consequences of rendering people visible through analytic datafication.
- Human-Computer Interaction & UX
- Health & Well-being
Building AI-Ready Health Systems for Equitable Diabetic Retinopathy Screening
Kennedy Orwa, Wanda Pratt, Mike Teodorescu, Yue Wu, Jason C. Young
Artificial intelligence (AI)–enabled tools have demonstrated strong potential for early detection of diabetic retinopathy (DR), with particular promise for expanding screening access in low-resource and underserved settings. However, diagnostic performance often declines when AI systems are deployed outside the populations, devices, and workflows represented in their training data. This study develops an analytical framework to examine how AI technical methods, institutional capacity, and ethical and policy constraints jointly shape the generalizability and equity of DR AI systems. We conducted a structured narrative review with multilevel analytical synthesis of the DR AI literature. Evidence was identified through targeted searches of PubMed, IEEE Xplore, and arXiv, supplemented by citation tracking. Rather than pooling performance estimates, the literature was treated as analytical input to synthesize insights across three interdependent domains: generalizability methods (model-centric, data-centric, and imaging- and workflow-centric), institutional data strategies, and ethics and policy. The synthesis shows that performance disparities in DR AI systems emerge from interactions among data representation, model design, workflow integration, institutional capacity, and regulatory constraints. Mitigation strategies cluster into complementary technical and organizational approaches, none of which is sufficient in isolation. Underrepresentation in training data consistently amplifies diagnostic risk in low-resource and marginalized populations.
- AI & Machine Learning
- Equity, Ethics & Justice
- Health & Well-being
Characterizing Usage and Impacts of Dating Safety Tools
Meira Gilbert, Erica Adams, Yael Eiger, Lindah Kotut
The online dating environment poses serious physical and emotional safety concerns including catfishing, doxing, harassment, and abuse. To counteract these risks, emerging “dating safety tools” (DSTs) position themselves as tools to protect women from the risks and harms of modern dating. One example is the popular regional Facebook group, “Are We Dating the Same Guy” (AWDTSG), which acts as an online space for members to share “red flags” and personal experiences with specific individuals. Other services such as the women-only app “Tea” operate similarly: Users can post personal information about men, leave reviews and comments about them, and purchase additional “safety tools” such as background checks, reverse image search, sex offender search, phone number lookup, and criminal record search. While intended to protect women’s safety, dating safety tools raise significant concerns, ranging from privacy and security issues to interpersonal and societal harms. Furthermore, it is difficult to reason about how much safety these solutions provide, and for whom. Drawing on 16 interviews with people who either use a DST or have been posted about on one, we analyze participant conceptions of safety in online dating, highlight key privacy, security, and social risks of DSTs, and contextualize how the imaginaries of “safety” that DSTs rely on are both reinforced and challenged by participant experiences.
- Social Media & Online Platforms
- Human-Computer Interaction & UX
- Equity, Ethics & Justice
ChatGPT!: Make Me Sound White!
Nassim Parvin
What if we remember that language itself is a technology? This research works from that premise to examine AI in relation to self-expression, narration, and participation in public and scholarly discourses. Drawing on two autobiographical vignettes, I explore my experiences as a nonnative speaker using ChatGPT across public and private contexts. These vignettes reveal a central paradox of language technologies: they can be liberating and simultaneously act as a form of erasure. I build upon this paradox to outline research areas centered on the themes of domination and marginalization, tracing the historical connections of language to both liberation and creativity as well as colonialism and epistemic violence.
- AI & Machine Learning
- Equity, Ethics & Justice
- Human-Computer Interaction & UX
Co-Designing for the Triad: Design Considerations for Collaborative Decision-Making Technologies in Pediatric Chronic Care
Ray-Yuan Chung, Jaime Snyder, Zixuan Xu, Daeun Yoo, Athena Ortega, Wanda Pratt, Aaron Wightman, Ryan Hutson, Cozumel Pruette, Ari Pollack
In pediatric chronic care, the triadic relationship among patients, caregivers, and healthcare providers introduces unique challenges for youth in managing their conditions. Diverging values, roles, and asymmetrical situational awareness across decision-maker groups often hinder collaboration and affect health outcomes, highlighting the need to support collaborative decision-making. We conducted co-design workshops with 6 youth with chronic kidney disease, 6 caregivers, and 7 healthcare providers to explore how digital technologies can be designed to support collaborative decision-making. Findings identify barriers across all levels of situational awareness, ranging from individual cognitive and emotional constraints and misaligned mental models to relational conflicts regarding care goals. We propose design implications that support continuous decision-making practice, align mental models, balance caregiver support with youth autonomy development, and surface potential care challenges. This work advances the design of collaborative decision-making technologies that promote shared understanding and empower families in pediatric chronic care.
- Human-Computer Interaction & UX
- Health & Well-being
Do Attachment Styles Shape ChatGPT Usage?
Marx Wang, Jade Li, Songling Ngo, Katie Davis, Alexis Hiniker
The widespread adoption of generative AI agents has raised questions about the relationships that users may be developing with machines. In this study, we ask whether users' attachment styles predict how they interact with the generative AI agent, ChatGPT, and how they experience these interactions. We conducted a mixed-methods study, triangulating self-reported survey data (N = 168) with transcripts of users' ChatGPT conversational history (N = 19,330). We find that attachment anxiety strongly predicts emotional engagement with ChatGPT, trust in ChatGPT, and likelihood of adopting behavioral suggestions from ChatGPT, while attachment avoidance predicts reduced trust in ChatGPT and reduced self-efficacy. Further, we find that attachment anxiety is directly observable in transcripts of users' ChatGPT conversations, including increases in affect words, self-referential pronouns, and future-focused thinking. These findings identify anxiously attached individuals (approximately 20% of adults) as a vulnerable population whose needs should be considered in the design of generative AI interfaces.
- AI & Machine Learning
- Human-Computer Interaction & UX
- Health & Well-being
Formalizing Interaction-Time Arbitration in Multi-Tool AI Tasks: Quantifying Delegation Instability Under Concurrent AI Availability
Layomi Akinrinade
As AI systems become embedded across everyday platforms, users increasingly work with multiple AI tools concurrently. While prior research has examined trust calibration, automation levels, and output evaluation, less attention has been paid to cross-system delegation within a single task. This project introduces interaction-time arbitration as a measurable construct capturing within-task instability in execution commitment across concurrently available AI systems, holding task goals constant. I formalize arbitration episodes as routing transitions between systems and propose behavioral metrics including episode duration, switch frequency, and reopening rate. The planned within-subjects study compares single-system and multi-system conditions in goal-stable drafting tasks to evaluate whether arbitration cost predicts subjective workload, perceived control, and task-level outcomes. This work aims to isolate cross-system delegation from general task switching and provide an empirical foundation for studying agency fragmentation in multi-tool AI environments.
- AI & Machine Learning
- Human-Computer Interaction & UX
- Data Science & Computational Methods
Grossness is Nonlinear: Datafication of Visceral Observations in a Citizen Science Project
Jaime Snyder; Julia Parrish, School of Aquatic and Fishery Science/COASST; Zac Murphy; Allie Brown, COASST; Florence Sullivan, COASST
Impacts of climate change are becoming more noticeable in our daily lives, from flooding and hurricanes to heatwaves and mass animal die-offs. For many, especially those in vulnerable places like coastal communities, these signs of a changing climate are hard to ignore and can be highly distressing. At the same time, it remains crucial to observe, record, and understand these environmental shifts. Many citizen science projects, such as the University of Washington’s Coastal Observation and Seabird Survey Team (COASST), invite the public to contribute to this work by gathering structured and rigorous data about local environments. Recent mass wildlife die-offs on the Pacific Coast have made it clear that engaging communities in these efforts requires not only careful data collection but also an ability to process the tangle of emotional and intellectual responses that can stem from witnessing unsettling changes. We partnered with COASST to support the design of training and data-collection materials for this work. Through a series of walkabout interviews and visual card-sorting activities with COASST volunteers, we identified (1) how direct observations of change contribute to mental models of local ecosystems, and (2) how reactions to intense sights like decaying carcasses or injured animals can create emotional barriers to ongoing data collection. We introduce two potential tools, inoculation and reframing, for balancing and mitigating emotional dimensions of scientific investigation and discovery in the important work of citizen science-led environmental data collection.
- Climate & Sustainability
- Human-Computer Interaction & UX
- Health & Well-being
"It's Just Not Cool": Challenges of Creating Media Literacy Resources for Teens in Public Libraries
Stacey Wedlake, Chris Jowaisas, Jason C. Young
Public librarians want to offer media and information literacy education for teens in their libraries. However, most media and information literacy resources are designed for classroom settings, and unlike in schools, teens must choose to come and participate in programs and services at public libraries. For our project, we worked with seven public libraries across the United States to co-design media literacy resources for teens. The librarians shared that if content is too overtly “educational,” teens will not engage. At the same time, librarians must navigate complicated local politics that impact the types of programming and services they offer to teens. We focused our design sessions on creating games and other play-based approaches. The libraries then tested one of these resources, an adaptation of a “Telestrations” game, with their teens. Most of the librarians were able to successfully deploy the game, found it fit well into existing teen services, and planned to run it again in the future. In our next phase of work, we will continue creating resources with librarians and, later, adapt the resources with school librarians.
- Education & Learning Technologies
- Libraries, Archives, Museums & Cultural Heritage
- Social Media & Online Platforms
Mapping Information Pathways in Online Narratives of Psychiatric Hospitalization
Anastasia Schaadhardt, Emma McDonnell, Shirin Amouei, Akansha Vaswani-Bye, Justin Karter, and Wanda Pratt
Psychiatric hospitalization is acute psychiatric care in which the patient resides in a psychiatric ward or hospital to undergo treatment. Negative experiences during psychiatric hospitalization can decrease individuals’ trust in healthcare professionals and the healthcare system, which in turn can lead to avoidance or refusal of healthcare services and, consequently, negative clinical outcomes, such as worsening symptoms and increased likelihood of readmission. We explore the problem of negative inpatient psychiatric care experiences through an analysis of online personal narratives of psychiatric hospitalization from the Editorial section of the website Mad in America. We anchor this analysis in "information pathways," which trace the ways that information flows through psychiatric hospitals and shapes the experiences that writers of these narratives described.
- Data Science & Computational Methods
- Health & Well-being
- Social Media & Online Platforms
Radical Imagination as a Tool for Envisioning Climate Technology Futures
Amelia Lee Doğan, Nino Migineishvili, Isabel Carrera Zamanillo (Front and Centered), Lindah Kotut
Climate technologies, broadly a set of technologies that expand a user’s agency to affect climate, from solar panels to irrigation management systems, are often not developed for frontline communities and can exacerbate existing inequities. In this study, we explore how radical imagination can support frontline communities in envisioning their own futures with climate technology. We present a set of three workshops with Seattle-area frontline community members. Drawing on radical imagination and future workshops, we invited participants to identify climate justice issues in their communities and co-speculate climate technology solutions. Through qualitative analysis of workshop recordings and participant-produced artifacts, we found that frontline community members envision climate technology that differs from the current dominant paradigm: 1) embedded and interconnected technological systems, 2) transparent and community-controlled data infrastructure, 3) technologies designed with the local ecosystem, and 4) technologies that respect sovereign Indigenous governance. Our work suggests that radical imagination, utilized in community-led workshops, can shift how participants understand current climate technologies and offer pathways for rethinking how climate adaptation tools are designed and deployed. This work illustrates how participatory futuring practices rooted in radical imagination can contribute to the development of climate adaptation that addresses climate and intersecting injustices.
- Climate & Sustainability
- Equity, Ethics & Justice
SleepStreak: Redesigning the Smartphone to Support Healthy Sleep Habits
Longjie Guo, Antares Yuan, Zachary Liu, JaeWon Kim, Kara Duraccio, Jenny Radesky, Alexis Hiniker
Smartphone use is widely linked to poor sleep, yet much prior work treats phones primarily as tools for monitoring or improving sleep rather than as sources of disruption themselves. This project examines how everyday smartphone use affects sleep by foregrounding users’ experiences with phone interfaces during the hours leading up to bedtime. We report findings from a mixed-methods study with adolescents that combines surveys of sleep habits and nighttime phone use with participatory co-design activities.
- Human-Computer Interaction & UX
- Health & Well-being
SusBench: An Online Benchmark for Evaluating Dark Pattern Susceptibility of Computer-Use Agents
Longjie Guo, Chenjie Yuan, Mingyuan Zhong, Robert Wolfe, Ruican Zhong, Yue Xu, Bingbing Wen, Hua Shen, Lucy Lu Wang, Alexis Hiniker
As LLM-based computer-use agents (CUAs) begin to autonomously interact with real-world interfaces, understanding their vulnerability to manipulative interface designs becomes increasingly critical. We introduce SusBench, an online benchmark for evaluating the susceptibility of CUAs to UI dark patterns, designs that aim to manipulate or deceive users into taking unintentional actions. Drawing nine common dark pattern types from existing taxonomies, we developed a method for constructing believable dark patterns on real-world consumer websites through code injections and designed 313 evaluation tasks across 55 websites. Our study with 29 participants showed that humans perceived our dark pattern injections to be highly realistic, with the vast majority of participants not noticing that these had been injected by the research team. We evaluated five state-of-the-art CUAs on the benchmark. We found that both human participants and agents are particularly susceptible to the dark patterns of Preselection, Trick Wording, and Hidden Information, while being resilient to other overt dark patterns. Our findings inform the development of more trustworthy CUAs, their use as potential human proxies in evaluating deceptive designs, and the regulation of an online environment increasingly navigated by autonomous agents.
- AI & Machine Learning
- Human-Computer Interaction & UX
- Information Policy, Law & Governance
Value Sensitive Design: Shaping Technology with Moral Imagination (2nd edition)
Batya Friedman, David Hendry
In the midst of technological, environmental, and social turmoil, engineers, policymakers, and designers of all kinds seek approaches to responsible innovation, approaches that foreground human values. How do we make good on responsible AI and social media? On sustainable agriculture, energy, healthcare, and transportation systems? How do we develop inclusive and constructive tech policy? Value sensitive design offers a comprehensive approach for making progress on society’s toughest engineering and technical design problems, including practical methods for catalyzing and strengthening designers’ moral and technical imaginations. The second edition of Value Sensitive Design: Shaping Technology with Moral Imagination by Batya Friedman and David Hendry (MIT Press, available March 2026) expands upon the first and includes 40 percent new material, including:
* 8 new hands-on instructional studios: for professional development and classroom use, providing practical experience with value sensitive design methods and skills;
* 16 Envisioning Cards from the full toolkit: used throughout the instructional studios;
* 5 new methods: data statements, diverse voices, values hierarchy, Metaphor Cards, and Security Cards (22 total methods);
* 3 new application domains: bias in computing and information systems, materials and imagination, and tech policy (13 total application domains);
* new theory about value sensitive design as a formative theory and a new explication of the tripartite methodology in terms of robots in healthcare.
We’ll be on hand to talk about our new book and answer your questions.
- Equity, Ethics & Justice
- Human-Computer Interaction & UX
- Information Policy, Law & Governance
Zone 4: Data, Computation & Information Science
Abstain-Switch: A Modular Abstention Stack to Improve LLM Reliability
Bingbing Wen, Faeze Brahman, Zhan Su, Shangbin Feng, Yulia Tsvetkov, Lucy Lu Wang, Bill Howe
A reliable large language model (LLM) should be able to abstain from answering when appropriate, such as refusing unsafe requests or abstaining when its answer is uncertain or likely to be wrong. In this work, we investigate whether abstention can be learned as a modular capability that can be used to extend model reliability across a range of tasks. We propose Abstain-Switch, a modular abstention framework that composes a library of Abstention Modules (e.g., safety, incompleteness, unsupported, indeterminate) with task-specific Answering Modules via a lightweight token-level router. This separation of task and abstention specialization enables more accurate abstention while limiting over-refusal, and adds calibrated abstention to existing task-adapted models without the need to retrain. Across QA tasks spanning knowledge, medicine, and science, Abstain-Switch improves average Effective Reliability by +8.1 points on LLaMA-3-8B-instruct. On unanswerable query benchmarks, our framework achieves gains of at least +16.8 points in in-domain and +5.4 points in out-of-domain settings over base LLMs while maintaining low over-refusal rates. Abstain-Switch consistently outperforms or matches strong adapter-merging baselines, providing a parameter- and data-efficient, extensible approach to reliable abstention.
- AI & Machine Learning
- Data Science & Computational Methods
Artificial Intelligence (AI) Readiness to Support Evidence Synthesis by Workflow: Findings from a Review of Reviews
Zijing Wei (Presenter), Luyanda Ngongoma, Jose Cols, Arina L. Bogdan, Ariel Lin, Claire Zhang, Yue Su, Nuno de Jesus Ximenes, Chloe Zhu, Yoav Ackerman (Presenter), Heather L Bullock, Juhua Hu, and Yanfang Su
Background: Evidence synthesis is crucial for informing evidence-based practice across various fields, yet the traditional methodology is resource-intensive and often produces findings that are outdated before publication. There is a growing trend towards integrating innovative solutions such as artificial intelligence (AI) into evidence synthesis to enhance efficiency, but standardized adoption is still pending. Objective: The goal of this study is to assess the readiness of AI for evidence synthesis. We aim to identify available AI-powered features of evidence synthesis tools and assess their performance. Methods: We searched MEDLINE, Embase, and Global Index Medicus in May 2025 to identify review articles that evaluated evidence synthesis tools. Relevant study reviews and tool reviews published in English between January 2020 and May 2025 were included in our review of reviews. Tool features and performance metrics were extracted according to the stages of the evidence synthesis workflow: search, screening, appraisal, extraction, and synthesis. Results: We included 21 studies in our review of reviews and identified 46 evidence synthesis tools. Nine tools supported all five stages of the evidence synthesis workflow, among which DistillerSR covered the most workflow-supporting features (19 out of 21). Ten of the identified tools reported recall rates for AI-powered title/abstract screening, all of which achieved ≥ 95% recall at least once. Reported recall rates of EPPI-Reviewer, Research Screener, and SWIFT-Active Screener consistently reached the 95% threshold with varying degrees of automation. Conclusion: This review provides a structured assessment of AI readiness in evidence synthesis. DistillerSR and EPPI-Reviewer demonstrated the broadest feature support and strong evidence for title/abstract screening automation. The evidence base for AI-powered title/abstract screening is well established, whereas other AI-powered features lack comparable evaluation. Overall, our findings highlight the potential of AI to improve efficiency across evidence synthesis workflows.
- AI & Machine Learning
- Data Science & Computational Methods
- Health & Well-being
Beyond Readability Metrics: Plain Language Priorities in Disability Advocacy Organizations
Anukriti Kumar, Kate Glazko, Yueran Sun, Mark Harniss, Lucy Lu Wang, Jennifer Mankoff
Plain language materials enable people with intellectual and developmental disabilities (IDD) to access critical information about policy, healthcare, and civic participation. Disability advocacy organizations routinely produce these materials, yet we know little about how practitioners approach this work, what standards guide their judgments, or whether current evaluation metrics align with their priorities. Through focus groups and interviews with 11 practitioners across three U.S. disability advocacy organizations, individual walkthroughs where practitioners evaluated AI-simplified documents, and systematic analysis of 33 pairs of original and simplified documents from four organizations using 28 readability metrics, we document plain language production as specialized expertise requiring policy knowledge, community accountability, and multi-stage validation processes. Practitioners who use AI tools report treating outputs as provisional starting points requiring complete human verification rather than autonomous producers of publication-ready content. Organization-produced documents averaged a Flesch-Kincaid Grade Level of 10.2, exceeding all published guideline targets ranging from 3rd to 8th grade, yet practitioners described these materials as successfully meeting community needs. This suggests that published text simplification guidelines may not capture dimensions practitioners and communities consider essential for high-stakes accessibility work. Based on our findings, we propose design principles for text simplification tools that center verification and transparency rather than automation and call for evaluation frameworks that complement automated metrics with practitioner expertise and community accountability mechanisms.
- AI & Machine Learning
- Equity, Ethics & Justice
- Human-Computer Interaction & UX
Asking the Missing Piece: Context-Driven Clarification for Ambiguous VQA
Zongwan Cao, Bingbing Wen, Lucy Lu Wang
Visual Question Answering (VQA) can suffer from under-specification, where the same image-question pair may have multiple plausible answers depending on missing external context. Existing research highlights this limitation but does not provide methods for teaching models to proactively seek context. In this work, we study the task of open-ended clarification question generation for underspecified VQA. We curate a dataset of ambiguous VQA pairs annotated with human-verified clarification questions that capture cultural, temporal, spatial, or attribute-based uncertainty. To address this task, we develop a reinforcement learning framework, Grounded Reasoning Preference Optimization–Clarification Reasoning (GRPO-CR), which integrates tailored reward functions to ensure generated clarifications are effective at resolving ambiguity. Experimental results show that GRPO-CR enables VLMs to ask clarification questions that more reliably reduce uncertainty. Our work establishes open-ended, context-seeking clarification as a principled pathway toward interactive, trustworthy multimodal systems that know when and what to ask before answering.
- AI & Machine Learning
- Data Science & Computational Methods
- Human-Computer Interaction & UX
Code Contribution and Credit in Science
Eva Maxfield Brown, Isaac Slaughter, Nicholas Weber
Software development and scientific collaboration are fundamental aspects of contemporary research, yet quantitative science studies typically investigate these concepts separately. We develop a dataset of approximately 140,000 paired research articles and code repositories and a predictive model that matches research article authors with software repository developer accounts. With these resources, we bridge the two literatures, investigating how software development activities influence credit allocation in collaborative scientific settings. Our findings reveal significant patterns distinguishing software contributions from traditional authorship credit. Nearly 30% of articles include non-author code contributors: individuals who participated in software development but received no formal authorship recognition. While code-contributing authors show a modest ~5.1% increase in article citations, this effect becomes non-significant when controlling for domain, article type, and open access status. First authors are significantly more likely to be code contributors than authors in other positions. Notably, we identify a negative relationship between coding frequency and scholarly impact metrics: authors who contribute code more frequently exhibit progressively lower h-indices than non-coding colleagues, even when controlling for publication count, author position, domain, and article type. These results suggest a disconnect between software contributions and credit, highlighting important implications for institutional reward structures and science policy.
- Data Science & Computational Methods
- Information Policy, Law & Governance
Community Notes Reduce Engagement with and Diffusion of False Information Online
Isaac Slaughter, Axel Peytavin, Johan Ugander, Martin Saveski
Much attention has been given to the spread of true vs. false content on online social platforms. However, much less is known about how platform interventions on false content alter the engagement it receives. In this work, we estimate the causal effects of Community Notes, a novel fact-checking system in place at X (formerly Twitter) to solicit and ratify crowdsourced context for misleading posts. We gather detailed time series data for 40,078 posts for which notes have been proposed and use synthetic control methods to estimate a range of counterfactual outcomes for these posts. We estimate that after being attached, on average, the notes resulted in reductions of 46.1% in reposts, 44.1% in likes, 21.9% in replies, and 13.5% in views received by a misinformational post. Over the posts’ entire lifespans, these reductions amount to 11.6% fewer reposts, 13.3% fewer likes, 6.9% fewer replies, and 5.5% fewer views on average. In reducing reposts, we observe that diffusion cascades for fact-checked content are less deep and less “viral,” but not less broad, than synthetic control estimates for non-fact-checked content with similar reach.
- Data Science & Computational Methods
- Social Media & Online Platforms
From Traces to Trees: Structured On-Policy Pruning of Long-Form Reasoning in Reasoning Language Models
Chenjun Xu, Zhennan Zhou, Zhan Su, Lucy Lu Wang, Bingbing Wen
Long chain-of-thought (Long CoT) reasoning substantially improves performance on multi-step problems, but it also induces overthinking: models generate lengthy traces dominated by redundant verification, backtracking, and low-yield exploration, increasing inference cost and latency. We propose the On-Policy Trace2Tree-prune (OP-T2T-prune) framework, a structured framework for analyzing and pruning long-form reasoning traces. OP-T2T-prune first generates on-policy reasoning traces and converts them into trees via heuristic segmentation and taxonomy annotation, in which each step is classified as Clarification, Exploration, Verification, Backtracking, or Conclusion. We then propose a pruning strategy, First Correct Conclusion Answer (FCCA), which retains only the minimal prefix up to the first Conclusion node with the correct answer. Experiments on DeepSeek-R1-Distill-Qwen-7B and DeepSeek-R1-Distill-LLaMA-3-8B across GSM8K, Math 500, and AIME 2024 show that on-policy FCCA reduces generated tokens by 19.5-42.4% while largely preserving accuracy. Further analysis reveals that on-policy FCCA does not merely truncate reasoning but reallocates reasoning effort from redundant verification and backtracking toward more productive exploration.
- Data Science & Computational Methods
- AI & Machine Learning
Illusions of the Gold Standard: A Large-Scale Analysis of Human Evaluation Protocols for Long-Form Text Generation
Katelyn Xiaoying Mei, Yili Hsu, Minjoon Choi, Zongwan Cao, Chenjun Xu, Bingbing Wen, Su Lin Blodgett, Lucy Lu Wang
Human evaluation plays a critical role in assessing the quality of generated text. However, the reliability and reproducibility of these evaluations depend on transparent and well-documented protocols—details that are frequently missing in current practice. In this work, we conduct a large-scale analysis of human evaluation protocols for evaluating long-form generation tasks in *CL conference publications from 2023–2025, including a full manual review of 356 papers and LLM-assisted analysis for another 1.8k+ papers. We define a set of 20 reportable criteria related to reproducibility of human evaluation studies and apply these criteria to systematically examine reporting norms and practices within the community. We find widespread under-reporting of important aspects of human evaluation study design, leading to ambiguity about what was measured and how, who contributed judgments, and how judgments should be interpreted. Based on these findings, we outline actionable recommendations to support more transparent and reproducible reporting in future research.
- Data Science & Computational Methods
- AI & Machine Learning
Improving Firearm Violence Data Collection from Court Records Using Easy-to-access Large Language Models
Ott Toomet
Firearm violence is a leading cause of injury and death in the U.S., yet the related data collection faces multiple challenges. This study complements the literature by analyzing court records, in particular affidavits of probable cause, which contain rich narratives about the respective criminal incidents. Previously, Kafka et al. (2024) achieved reasonably good results on the same dataset using classical natural language processing methods. We extend that study by using small-scale LLMs instead of decision trees. Small-scale LLMs can be run locally on widely available hardware using relatively simple programming tools. This avoids the privacy issues, extra costs, and shortage of programming skills that often limit small public health research groups. We analyze 1,469 records using the Llama 3.2 3-billion-parameter model through the Ollama framework. Using implicit RAG-style prompting, we achieve results (F-score 0.809) that exceed the decision-tree-based results of Kafka et al. (2024). However, LLMs require substantially more computing resources and are very sensitive to prompting strategies. We believe that as models and the related computing hardware continue to develop rapidly, such methods will become an integral part of public health research.
- Data Science & Computational Methods
- AI & Machine Learning
- Health & Well-being
Interpretable by Design: Human-Centered Analytics with Knowledge-Guided Models
Steven Gustafson
This research demonstrates how analytic systems can be designed to be interpretable and usable by diverse audiences. A human-centered approach combines interpretable models with knowledge-based, visual, and narrative representations. Symbolic regression is used as a case study to illustrate how models learned from data can be connected to different semantic representations, enabling analytic and decision systems to be more accessible, explainable, and aligned with human cognitive needs.
- Data Science & Computational Methods
- AI & Machine Learning
iStartup: The New Entrepreneurship Lab at the University of Washington Information School
Mike Teodorescu, Jeremy Zaretzky
Founded in 2023 and housed in the Information School at the University of Washington, the iStartup Lab helps iSchool students develop entrepreneurial skills and launch data-driven social impact startups, as well as other innovation-driven social impact organizations. In support of the iSchool’s Grand Challenges, we teach student founders how to analyze startup companies and other innovation-driven organizations through a social impact lens and integrate social impact into the fabric of newly created organizations, using frameworks including the UN Sustainable Development Goals and Value Sensitive Design.
- Education & Learning Technologies
MixAtlas: Uncertainty-Aware Data Mixture Optimization for Large Multimodal Model Midtraining
Bingbing Wen, Sirajul Salekin, Feiyang Kang, Lucy Lu Wang, Bill Howe, Javier Movellan, Manjot Bilkhu
Principled domain reweighting can substantially improve sample efficiency and downstream generalization; however, data-mixture optimization for multimodal pretraining remains underexplored. Current multimodal training recipes tune mixtures from only a single perspective, such as data format or task type. We introduce MixAtlas, a principled framework for compute-efficient multimodal mixture optimization via systematic domain decomposition and proxy-based search. MixAtlas factorizes the training data along two interpretable axes—image concepts and task supervision—enabling interpretable mixture control and fine-grained attribution of downstream performance to specific domains within each axis. Using small proxy models and a Gaussian-process surrogate, we explore the mixture space at ~1/100th the cost of full-scale training. The resulting mixtures yield substantial improvements: up to 3× faster convergence to a target loss and consistent gains of 2–5% across diverse benchmarks over existing approaches, with especially strong boosts on text-rich benchmarks (ChartQA +10%, TextVQA +13%). Importantly, we show that mixtures obtained via proxy models transfer to larger-scale model training, preserving both efficiency and accuracy gains. Overall, MixAtlas makes multimodal mixture optimization practical and interpretable, providing concrete, compute-efficient recipes for training next-generation MLLMs.
- AI & Machine Learning
- Data Science & Computational Methods
Neutrality Bites: Gender Representation in LLM-Generated Animal Stories
Imani Finkley, Yuanxi Li, Melanie Walsh
Gender bias in AI-generated stories is a well-documented problem. While much attention has been paid to reducing or mitigating this bias, it is not always clear whether interventions produce genuinely fairer results. To investigate this issue, we examine how large language models (LLMs) handle gender assignment in a narrative context that is popular, highly ambiguous, and also known to closely reproduce human stereotypes: stories about talking animals. We prompt six leading LLMs to complete an English-language story about seven different anthropomorphic animal characters whose gender is unstated. We additionally iterate with four different narrative settings and a range of model temperatures. Across the 23.8K stories, we find that models frequently avoid gendering the animal character in the story (34% on average) or use gender-neutral language like “it” or “its” (33% on average). However, when gender is assigned, there is a significant masculine bias. Female animal characters are virtually absent, present in just 1.9% of stories vs. 30.7% that feature male characters. Our findings point to an overarching claim: neutrality bites. Models that prioritize neutrality in order to address social bias may actually contribute to the erasure of marginalized perspectives and identities. We suggest that alternative strategies beyond neutrality need to be pursued, such as ones that more equally distribute gender possibilities across imagined subjects.
- AI & Machine Learning
- Equity, Ethics & Justice
Re-grounding Generative Proactivity with Epistemic and Behavioral Insight
Kirandeep Kaur, Xingda Lyu, Chirag Shah
Generative AI agents equate understanding with resolving explicit queries, an assumption that confines interaction to what users can articulate. This assumption breaks down when users themselves lack awareness of what is missing, risky, or worth considering. In such conditions, proactivity is not merely an efficiency enhancement, but an epistemic necessity. We refer to this condition as epistemic incompleteness: where progress depends on engaging with unknown unknowns for effective partnership. Existing approaches to proactivity remain narrowly anticipatory, extrapolating from past behavior and presuming that goals are already well defined, thereby failing to support users meaningfully. However, surfacing possibilities beyond a user’s current awareness is not inherently beneficial. Unconstrained proactive interventions can misdirect attention, overwhelm users, or introduce harm. Proactive agents, therefore, require behavioral grounding: principled constraints on when, how, and to what extent an agent should intervene. Thus, we argue that generative proactivity must be grounded both epistemically and behaviorally. Drawing on the philosophy of ignorance and research on proactive behavior, we argue that these theories offer critical guidance for designing agents that can engage responsibly and foster meaningful partnerships.
- AI & Machine Learning
- Equity, Ethics & Justice
- Human-Computer Interaction & UX
Use of Tidal Volume Targets: “Paradoxical” Associations Between Gender, Height, and Clinical Decision Making?
Izzy Chaiken
When patients receive invasive mechanical ventilation in intensive care settings, clinicians must estimate a proper tidal volume (VT): the amount of air a patient takes in each breath. An appropriate VT is high enough to provide sufficient oxygenation and low enough to avoid lung injury. In practice, volumes are approximated using a formula including a patient’s height and binarized gender, with taller patients and men receiving higher approximated VT. Care settings vary in the extent to which clinicians utilize this formula to choose VT. Prior research demonstrates that patients who are women, shorter, or have higher BMI are more likely to receive VT above guidelines, risking lung injury. Using electronic health record datasets, we investigate tidal volume selection trends across several care settings. In this work, we describe how systematic differences in VT provision are associated with height and gender, and across care settings with varying levels of formula usage. We demonstrate that among patients of each height, clinicians may be more likely to assign elevated VT to men, while women receive higher VT settings overall. We plan to continue collecting data to examine how systematic gender-related differences in treatment emerge when clinicians utilize algorithms including gender as an input.
- Health & Well-being
- Data Science & Computational Methods
- Equity, Ethics & Justice
Zone 5: Libraries, Archives & Knowledge Systems
Accountability, Integrity: AI Policy in Public Libraries
Kathryn FitzGerald, Benjamin Charles Germain Lee
As organizations that are explicitly values-driven, public libraries play a critical role in building and maintaining a democratic, equitable, and sustainable information environment. With the growing potential of artificial intelligence (AI) to reshape library collections, services, and workflows, public libraries must determine how to engage with these technologies while maintaining longstanding library values. Despite widespread discussion of AI’s impact on public libraries, to our knowledge there exists no published analysis of American and Canadian public library AI policies to date. In this paper, we address this gap first through an environmental scan of public library websites to identify publicly available AI policy statements. We then analyze these policy statements according to how they include library values. In our scan of over 200 library websites, we identified just 16 publicly available AI policies. These policies all govern internal or staff usage rather than patron usage. All policies reference at least two library values, with privacy and security the most frequently cited.
- Libraries, Archives, Museums & Cultural Heritage
- Information Policy, Law & Governance
- AI & Machine Learning
Advancing Library Visibility in Africa Action Funds (ALVA AF)
Renee Lynch, B. Biira
Advancing Library Visibility in Africa Action Funds (ALVA AF) is a participatory grantmaking program for African libraries. Prior research from the ALVA project indicated that international development funding is often inequitable and inaccessible for “non-traditional” partners such as African libraries. As a result, we co-designed a new small grants program with the needs of African libraries in mind, with an Advisory Board of library professionals from 9 African countries. Findings from the co-design process suggested that flexibility in grant conditions combined with capacity-building support ultimately builds trust between grantor and grantee, leading to more effective partnerships. Findings also reveal challenges such as differing views on the extent to which grantees want to participate in grant making processes as well as opportunities of participatory processes including building community and knowledge among grantees that they can use for self-advocacy. ALVA AF launched in January 2026 with a pilot in Uganda and Malawi, and this presentation will share updates and insights from putting these ideas into practice thus far.
- Libraries, Archives, Museums & Cultural Heritage
- Equity, Ethics & Justice
Center for Advances in Libraries, Museums, and Archives
Sharon Streams, Brandon Locke
This poster introduces the work of the Information School’s Center for Advances in Libraries, Museums, and Archives, or CALMA for short. The center launched in October 2024 with a mission to build and strengthen connections among researchers, educators, and professionals across the librarianship, archival studies, and museology disciplines. CALMA showcases the broad array of research that is contributing to advances in theory and practice for libraries, archives, and museums. CALMA cultivates research that crosses disciplinary boundaries and brings scholarly work into conversation with teaching and practice. Through round table discussions, public symposia, and project-based initiatives, the center creates opportunities for ideas to cross-pollinate and new paths of inquiry to develop. CALMA has an Affiliate program for UW’s faculty, doctoral students, and librarians who are interested in working together to grow the center’s community, programming, and research activity. The center also provides competitive grants that support early‑stage and collaborative projects led by its Affiliates. Together, these activities demonstrate CALMA's role as a hub for connection, experimentation, and shared learning across the cultural heritage fields.
- Libraries, Archives, Museums & Cultural Heritage
- Education & Learning Technologies
- Equity, Ethics & Justice
Considering the Classification of Symmetries
Joseph T. Tennis
In the very human process of classification, one often seeks out some form of symmetry. For example, we want our surrogates, our representations, to be faithful to our shared understanding of the lived-in-world. But are there multiple types of this or other symmetries in classification, and can we evaluate them? The New Oxford American Dictionary defines symmetry as “the quality of being made up of exactly similar parts facing each other or around an axis”; “correct or pleasing proportion of the parts of a thing”; “similarity or exact correspondence between different things”; and, in physics and mathematics, “a law or operation in which a physical property or process has an equivalence in two or more directions.” Its etymology (mid 16th century, denoting proportion) traces through French symétrie and Latin symmetria to Greek sun- ‘with’ + metron ‘measure’. What are all the aspects of classification “with measure” that we might consider in designing and evaluating classification schemes? This poster begins that inquiry.
- Data Science & Computational Methods
- Libraries, Archives, Museums & Cultural Heritage
Countering Domicide: Preserving Indigenous Knowledge using Large-Scale Ethnography and Participatory Design—The Zaatari Camp Syrian Refugee Cookbook
Karen E. Fisher
Can you imagine fleeing the brutality of the Syrian War to live in a refugee camp surrounded by desert? Living without modern conveniences, not knowing your future, drawing on centuries-old knowledge to survive? On the Jordanian–Syrian border lies Za’atari Camp, little Syria. Refuge for people primarily from Dara’a (the Cradle of the Revolution), Za’atari is a closed, high-security, high-constraint, low-resource camp. In this talk I share the making of “Zaatari: Culinary Traditions of the World’s Largest Syrian Refugee Camp” (Goose Lane Editions, Canada, 2024). Created over six years using field ethnography, the book was co-designed with over 2,000 Syrians to counter the effects of domicide—the deliberate, systematic displacement of populations and the erasure of people’s sense of belonging and community through destroying homes, living spaces, and culture. Initially envisioned as a cookbook, the Zaatari book became a living testament where refugees tell their own story of surviving domicide, of preserving indigenous knowledge and a way of life, of rebuilding community while promoting human dignity and livelihoods. In a twisted way, leaving Syria and surviving the war meant returning to Syria, preserving Syria’s ancient cultural knowledge and food practices for the world. Travel with me to Zaatari Camp, to the world’s most exclusive restaurant, where we learn why Syrians are renowned as the best cooks in the Arab world, how the Syrian War all but destroyed the country and its famous food culture with over 13 million people displaced, and how the people prevail against domicide. In learning about the making of the Zaatari book and the rebuilding of community, we will cook alongside the women preparing foods for Ftoor, Ghada’, weddings, Aqeeqah, and Ramadan; dine Arabi-style; take a bicycle jawleh around the camp; shop the souks on the Shams Elysees; and explore Bedouin life and the secrets of Arab Medicine.
- Equity, Ethics & Justice
- Libraries, Archives, Museums & Cultural Heritage
- Human-Computer Interaction & UX
Data Services for Indigenous Scholarship and Sovereignty: Stewarding Indigenous research data with CARE
Carole Palmer, Sandra Littletree, Joshua Brown, David Strand
The Data Services for Indigenous Scholarship and Sovereignty (DSISS) collaboration is supporting responsible stewardship of Indigenous research data in libraries and repositories. The DSISS team of information science researchers, Indigenous scholars, and data repository professionals is advancing practical and technical solutions for integrating the CARE Principles for Indigenous Data Governance into the work of research data services and digital archives. DSISS is guided by the priorities of Indigenous scholars and communities, with a focus on translational applications to protect and represent Indigenous knowledge. The team is developing a repository testbed, based on Collaborative Curation Case Studies grounded in Indigenous research methods and data sovereignty. Another track of work is exploring how CARE can be applied to enhance stewardship of existing Indigenous digital collections in libraries and archives. DSISS aims to enhance contextual and relational integrity of research data collections and address the imperative of building trust between collecting institutions and Indigenous communities.
- Libraries, Archives, Museums & Cultural Heritage
- Equity, Ethics & Justice
- Information Policy, Law & Governance
“It’s like a War Zone”: Trauma Experiences of Public Library Staff as Secondary Responders on the Front Line in America’s Communities
Karen E. Fisher
As one of the last free public spaces, libraries are seeing greater numbers of patrons seeking help and safety. Austerity in social spending and the decline of welfare services have transformed libraries into de facto sites for social service delivery. Patrons’ acute psychosocial problems are leading to increased disruptive incidents, abuse, violence, and other types of trauma, such that institutions and staff are challenged in serving as an antidote to social and cultural fragmentation and polarization, intolerance, and rising anti-intellectualism. Across the country, unprecedented levels of patron trauma, low pay, low staffing, lawsuits, inadequate institutional support, aging infrastructure, hostility from local, state, and/or federal government, and inadequate academic preparation have created crises for staff, leading to chronic anxiety and PTSD, health problems, injury, burnout, and death. Staff who are from marginalized groups or have histories of trauma are especially affected by these factors. Seeking relief, staff are changing libraries, leaving the profession, or retiring early. Staff, as essential frontline workers, are in crisis, vulnerable and themselves under attack. In this talk, Dr. Fisher shares findings from her IMLS-funded national study of how public library staff are experiencing trauma in the workplace, their vulnerabilities, and the best ways to support them. Covered by the New York Times, Chicago Tribune, and NBC News, the research is based on a survey and interviews with staff in all library sectors across the country.
- Libraries, Archives, Museums & Cultural Heritage
- Equity, Ethics & Justice
- Health & Well-being
Neighboring Washington Tribal Libraries
Sandra Littletree, Cindy Aden, Ash King, Ian Diedrich, David Strand
The Neighboring Washington Tribal Libraries (NWATL) project is exploring the relationships between non-tribal public libraries and tribal communities in Washington state. Funded by CALMA (Center for Advances in Libraries, Museums, and Archives), the project focuses on how Washington public libraries engage with their tribal neighbors, including how they address barriers and foster collaboration. A key area of investigation is the history and awareness of RCW 27.12.285, Library Services for Indian Tribes—a 1975 state law that encourages Washington libraries to build relationships with nearby tribal communities, regardless of their taxing district. The history of the RCW and its current application in Washington provides a window into a variety of models of library services for Indigenous communities in the state. The team, led by iSchool faculty Sandy Littletree and Cindy Aden and accompanied by MLIS students Ian Diedrich and David Strand in Spring 2025, conducted 4 site visits and 12 Zoom interviews with Washington public library leaders. The goal of NWATL is to describe the landscape of tribal and non-tribal library engagement activities in Washington and help foster similar research in other parts of the country.
- Libraries, Archives, Museums & Cultural Heritage
- Information Policy, Law & Governance
- Equity, Ethics & Justice
Relationality and Indigenous Librarianship
Sandra Littletree
The Centering Relationality model of Indigenous systems of knowledge, developed by Littletree et al. (2020), was designed as a pedagogical framework to support boundary-spanning and code-switching between Euro-American and Indigenous knowledge organization (KO) practices. Since its publication, librarians and information workers have increasingly sought to incorporate Indigenous perspectives not only into KO systems, but also into institutional policies, collection development, teaching, and research. This project revisits the model to deepen our understanding of Indigenous librarianship, including the challenges and opportunities it presents.
- Libraries, Archives, Museums & Cultural Heritage
- Equity, Ethics & Justice
Temporal Indexicality and Origination as Defining Factors in Data about Data
Joseph T. Tennis
With the increase of conversations about paradata in the information sciences, it is important that definitional work be crisp and clear. Many conceptions of paradata have been offered. In an effort to create order among this buffet of definitions, I propose that by attending to both (1) origination and (2) the time at which data about data are applied, i.e., temporal indexicality, we can clearly position paradata, metadata, and data.
- Data Science & Computational Methods
A U.S. Tribal Data Governance Framework for Indigenous Data Sovereignty Across Emerging Data Landscapes
Clarita Lefthand-Begay, Nicole S. Kuhn, Turam Purty, Tessa R. Campbell, Jesse Brisbois
American Indian and Alaska Native (AIAN) communities are navigating rapidly changing data ecosystems shaped by digitization, artificial intelligence, and intergovernmental data sharing. Many Tribal Nations are envisioning and/or advancing Tribal data governance frameworks, yet actionable, sovereignty-centered mechanisms and implementation resources remain uneven across many emerging data contexts. The central problem addressed in this study is the translation gap between Indigenous Data Sovereignty (IDS) as a normative framework and Tribal Data Governance (TDG) as an institutional practice. Through five interrelated strands of empirical and conceptual work, we developed a layered TDG Framework. First, we analyze how Tribal Research Review Boards (TRRBs) in the U.S. operationalize data governance through protocols across research lifecycles. Second, we conduct content analysis of TRRB documentation to develop ethical standards for AIAN social media research. Third, we evaluate alignment between IDS, GenAI systems and CARE (Collective Benefit, Authority to Control, Responsibility, and Ethics) principles. Fourth, we examine co-management agreements to inform data sharing protocols between Tribal Nations and land management agencies. Fifth, we analyze Tribal governance practices within Libraries, Archives, and Museums (LAMs) to understand sovereignty over Indigenous Knowledges and cultural heritage materials. Using a situated-knowledge approach, grounded in sustained engagement with Tribal Nations, we draw on observations of local data practices, informal conversations with Tribal practitioners, and the authors’ lived and professional experience. These inputs are complemented with a targeted review of peer-reviewed literature on IDS. Altogether, this work provides a governance framework adaptable to emerging technologies.
- Information Policy, Law & Governance
Lightning Talks
List of Presentations
1. Computing Cultural Heritage: Reimagining Search and Discovery
Benjamin Lee
In my presentation, I will introduce my lab, the Lab for Computing Cultural Heritage, as well as the research we are conducting to reimagine search and discovery for digital collections. I will highlight a couple of projects, including GovScape, a collaboration that I am leading to develop multimodal search over 10+ million government PDFs.
- Libraries, Archives, Museums & Cultural Heritage
- Data Science & Computational Methods
- AI & Machine Learning
2. Reranking partisan animosity in algorithmic social media feeds alters affective polarization
Martin Saveski
Today, social media platforms hold the sole power to study the effects of feed-ranking algorithms. We developed a platform-independent method that reranks participants’ feeds in real time and used this method to conduct a preregistered 10-day field experiment with 1256 participants on X during the 2024 U.S. presidential campaign. Our experiment used a large language model to rerank posts that expressed antidemocratic attitudes and partisan animosity (AAPA). Decreasing or increasing AAPA exposure shifted out-party partisan animosity by more than 2 points on a 100-point feeling thermometer, with no detectable differences across party lines, providing causal evidence that exposure to AAPA content alters affective polarization. This work establishes a method to study feed algorithms without requiring platform cooperation, enabling independent evaluation of ranking interventions in naturalistic settings.
- Social Media & Online Platforms
- Data Science & Computational Methods
- AI & Machine Learning
3. The University of Washington Institute for Neurodiversity and Employment
Hala Annabi
The newly established UW Institute for Neurodiversity and Employment serves as a collaborative hub for advancing rigorous, community-engaged research that improves employment pathways for neurodivergent people. This lightning talk introduces the Institute’s mission and its role in uniting scholars, practitioners, and community partners to address long‑standing gaps in neurodiversity and employment research. Built on interdisciplinary foundations, the Institute accelerates the translation of evidence into practice through five key pillars: translatable research, applied professional education, community empowerment, ecosystem advocacy, and leading neuroinclusive practices across the University of Washington. Attendees will learn how these pillars guide projects that span assistive technologies, career pathway programming, instructional innovation, and organizational inclusion strategies. We will also highlight collaborative opportunities across UW schools and research centers, showcasing how the Institute fosters a generative ecosystem for discovery, partnership, and meaningful impact on employment outcomes for neurodivergent individuals.
- Equity, Ethics & Justice
- Education & Learning Technologies
- Human-Computer Interaction & UX
4. Why Images Still Matter: Learning to look in the time of AI
Temi Odumosu
Images shape how we understand the world, but what happens when they can be endlessly generated, altered, and detached from the complexities of lived experience? This lightning talk explores my interdisciplinary research on visual culture in the age of artificial intelligence and digital archiving. I highlight how our image saturated social media feeds and Google search results pose important ethical questions about imbalances of power, unspoken harm, and visual sovereignty: who gets to look? who is misrepresented or erased? and what responsibilities do we hold as researchers, activists, creators, and consumers? Through ongoing research experiments that slow down our encounters with images, I argue for more ethical and attentive ways of seeing today.
- AI & Machine Learning
- Equity, Ethics & Justice
- Social Media & Online Platforms
5. Law & Technology: A Methodical Approach
Ryan Calo
Technology exerts a profound influence on contemporary society, shaping not just the tools we use but the environments in which we live. Law, uniquely among social forces, is positioned to guide and constrain the social fact of technology in the service of human flourishing. Yet, technology has proven disorienting to law: it presents itself as inevitable, makes a shell game of human responsibility, and daunts regulation. Drawing lessons from communities that critically assess emerging technologies, this book challenges the reflexive acceptance of innovation and critiques the widespread belief that technology is inevitable or ungovernable. It calls for a methodical, coherent approach to the legal analysis of technology—one capable of resisting technology’s disorienting qualities—thus equipping law to meet the demands of an increasingly technology-mediated world while helping to unify the field of law and technology itself.
- Information Policy, Law & Governance
- AI & Machine Learning
6. Technocreep and the Politics of Things Not Seen
Nassim Parvin
New and emerging technologies, especially ones that infiltrate intimate spaces, relations, homes, and bodies, are often referred to as creepy in media and political discourses. In this talk, I will briefly introduce my latest book, Technocreep and the Politics of Things Not Seen, co-authored with Neda Atanasoski, in which we introduce a feminist theory of creep that we substantiate through critical engagement with smart homes, smart dust, smart desires, and smart forests. Oriented toward dreams of feminist futures, we ask what gets obscured, assumed, or dismissed in characterizations of technology as creepy or creeping.
- Human-Computer Interaction & UX
- Equity, Ethics & Justice
7. AI, Robots, and Religion
Wes King
In this lightning talk, I present a research agenda exploring the intersection of artificial intelligence and religion, focusing on its social and cultural dimensions. I briefly introduce the religious and philosophical foundations underlying technological progress, including AI and transhumanism; the ways religions conceptualize and incorporate AI, from robotic clergy to religious chatbots; and how ethical frameworks rooted in major world religions inform debates on AI and related technologies. I include a brief preliminary analysis of the Rome Call for AI Ethics, a global initiative seeking to guide the development of AI. This research engages with questions about what it means to be human and what it means to have human security in an increasingly complex world of human-computer interactions. Central to this inquiry is how AI is transforming human relationships with each other, with technology, and with faith. I show how these questions animate my teaching, inspiring students to grapple with AI's moral implications through religious and philosophical frameworks and preparing them to interrogate technology's embedded values and power structures.
- AI & Machine Learning
- Equity, Ethics & Justice
8. Touchscreens in Motion: Quantifying the Impact of Cognitive Load on Distracted Drivers
Seokhyun Hwang
This study investigates the interplay between a driver's cognitive load, touchscreen interactions, and driving performance. Using an N-back task to induce four levels of cognitive load, we measured physiological responses (pupil diameter, electrodermal activity), subjective workload (NASA-TLX), touchscreen performance (Fitts's law), and driving metrics (lateral deviation, throttle control). Our results reveal significant mutual performance degradation: touchscreen pointing throughput decreased by over 58.1% during driving, and lateral driving deviation increased by 41.9% when touchscreen interactions were introduced. Under high cognitive load, participants showed a 20.2% increase in pointing movement time, a 16.6% decrease in pointing throughput, and a 26.3% reduction in off-road glance durations. We identified a prevalent "hand-before-eye" phenomenon, in which ballistic hand movements frequently preceded visual attention shifts. These findings quantify the impact of cognitive load on multitasking performance and demonstrate how drivers adapt their visual attention and motor-visual coordination when cognitive resources are constrained.
- Human-Computer Interaction & UX
- Health & Well-being
- Data Science & Computational Methods
9. Rethinking Misinformation: A Holistic Community Model for Youth Resilience Through Socioemotional Learning and Sociocultural Design
Michele Newman
With the growing prevalence of online mis/disinformation encountered by children, digital media literacy has become an urgent concern. Much existing research emphasizes cognitive models, focusing on individual reasoning and specific quantitative criteria to classify people's level of information literacy. However, critics argue that focusing solely on the cognitive approach neglects the social, emotional, and cultural contexts that shape how mis/disinformation is created and spread. In this study, we expand beyond the cognitive model by examining socio-emotional learning (SEL) and sociocultural (SC) perspectives. To explore how children conceptualize mis/disinformation through these lenses, we conducted co-design workshops (n = 25) with children ages 6–11 over a 2.5-year period. Empirically, our findings highlight children's awareness of emotional responses, peer pressure, financial incentives, and the importance of community support. Conceptually, we advocate for a community-based model of design that ties the cognitive, SEL, and SC perspectives together to help children develop epistemic resilience to mis/disinformation.
- Education & Learning Technologies
- Social Media & Online Platforms
- Equity, Ethics & Justice
10. Hiring in the Age of AI: Systematic Differences between Human and AI Evaluations
Mike Teodorescu
As firms increasingly use AI tools to assess prospective employees, managers need to understand how such AI-based evaluations may systematically differ from traditional assessments done by humans. To shed light on this question, we ran two studies with human evaluators from two different countries, examining how hiring decisions may systematically differ between human and AI evaluations and how psychological and cultural factors shape these differences. We collaborated with an international skill-testing firm that runs video-based interview assessments for Fortune 500 companies using machine learning tools. Our study has two objectives: to compare the automatic assessments of the job interviews to assessments of the same interviews by human evaluators, and to examine which characteristics of human evaluators may lead to systematic differences between their assessments and AI's. We uncover systematic differences across human evaluators and show how some of these differences can be mitigated by using AI-based evaluation. We conclude with implications for training models used in hiring processes.
- AI & Machine Learning
- Data Science & Computational Methods
- Equity, Ethics & Justice