Co-Intelligence Lab

The Co-Intelligence Lab explores how intelligence emerges when people and AI make sense together.

Rather than treating AI as a standalone system or humans as passive users, our research centers on co-intelligence: intelligence that is co-produced through interaction, collaboration, and alignment between humans, AI systems, and the communities in which they are embedded. We study how human–AI collaboration unfolds over time: how people learn through participation, how responsibility and expertise shift, and how systems can adapt to human intent, context, and values.

The lab designs and studies human–AI collaborative systems that support learning, sensemaking, and decision-making in complex and real-world settings. Our application domains include education, energy, transportation, health, security, and online safety. Across these contexts, we focus on questions such as: How can humans and AI collaborate effectively on uncertain or open-ended tasks? How can contributing to AI systems also become a learning experience? How do we design systems that respect community knowledge, constraints, and priorities? And how does human–AI collaboration evolve as people gain expertise and delegate responsibility?

Our research emphasizes bidirectional alignment. We study not only how AI systems can better align with human goals, values, and situational context, but also how humans adapt their strategies, mental models, and workflows when working with AI. We are explicitly not interested in replacing human judgment. Instead, we investigate when AI should lead, when humans should lead, and how that balance should change over time.

The Co-Intelligence Lab's work spans several interconnected research thrusts. We study AI-augmented learnersourcing and co-learning, examining how students, crowd workers, and domain experts can learn through contribution while collaborating with AI systems. We design community-serving human–AI systems in close partnership with communities, emphasizing participatory, equity-oriented, and context-aware design. We explore human–AI teaming and sensemaking, developing interfaces and workflows that help people and AI jointly interpret complex or large-scale information. Finally, we build crowd–AI infrastructure and data ecosystems that transform distributed human input into reliable, actionable intelligence at scale.

Methodologically, the lab combines human-centered AI and HCI approaches with participatory design, crowdsourcing and learnersourcing, empirical studies, and system building. We value research that is theoretically grounded, methodologically rigorous, and socially impactful. Many of our projects involve long-term engagement with real stakeholders, including students and educators, residents of affordable housing, caregivers and older adults, and crowd workers, because we believe co-intelligence only becomes visible when systems are embedded in real contexts with real constraints and consequences.

Our mentoring philosophy mirrors our research values. We emphasize learning through contribution, depth over breadth, and the development of strong research judgment. Students in the lab are encouraged to take ownership of ideas and artifacts, work collaboratively with AI tools while maintaining rigor and accountability, and engage in ethically grounded, community-centered research. We aim to train researchers who can design, study, and critique human–AI systems that expand human agency rather than diminish it.

At its core, the Co-Intelligence Lab is motivated by a simple belief: the future of AI is not artificial intelligence alone, but intelligence co-created by humans and machines.

Lab Information

Building Location
KNOY B017