Purdue authors team up with high school students on deepfake detection paper accepted to ACM Multimedia 2025

Li Lin and Shu Hu. (Photos provided)

A team of authors led by a Purdue professor in Indianapolis recently had a paper accepted for the 33rd ACM International Conference on Multimedia, the premier international conference dedicated to advancing multimedia research and applications. Led by Shu Hu, an assistant professor in the Department of Computer and Information Technology (CIT) in Indianapolis, the team also includes Li Lin, Aryana Hou and Justin Li.

The conference brings together academia and industry to highlight state-of-the-art developments in artificial intelligence, human-computer interaction, multimedia signal processing and more. Attendees and presenters exchange ideas, address industrial challenges and explore transformative technologies with real-world impact.

The accepted paper titled “Rethinking Individual Fairness in Deepfake Detection” tackles an urgent and complex issue at the intersection of artificial intelligence, media ethics, and social responsibility. Synthetic images and videos have become increasingly realistic with the rise of generative AI and can be nearly indistinguishable from real media. These capabilities unlock new creative possibilities, but they also contribute to the problem of “deepfakes”—manipulated media that can convincingly impersonate individuals, posing threats to privacy and public trust.

Addressing individual fairness in deepfake detection

Although recent research has focused heavily on detecting deepfakes, Hu and the team believe that fairness in detection has received insufficient attention. Their study focuses on individual fairness, the principle that similar individuals should be treated similarly by a system. The team discovered that this principle breaks down in the deepfake detection setting because of the nature of fake media.

“As deepfakes become more widespread, ensuring fairness in detection is critical,” Lin said. “Many models perform worse on certain demographic groups, for example, Black females may have higher classification error rates than white males. This disparity leaves some communities more exposed to harm, so improving fairness helps ensure everyone is equally protected.”

Under the traditional individual fairness principle, a deepfake detector would be pushed to assign a fake image the same label as a visually similar real image, which contradicts the goal of deepfake detection. The team therefore proposed a new fairness-enhancing method in their paper.

“Most detectors perform inconsistently across different populations, which means some groups may be more vulnerable to deepfakes than others,” Hu said. “Prior studies try to fix this using demographic information like race or gender to assist the detector's fair training, but such labels are often missing, incomplete, or unreliable, especially in real-world settings.”

After rethinking the conventional individual fairness loss formulation, the team developed a novel learning objective by leveraging anchor-based learning and introducing a new procedure aimed at exposing forgery-related feature representations for similarity calculation. Hu says this method can be integrated into deepfake detectors to enhance fairness, ensuring more equitable performance across diverse media. It also improves generalization, enabling better detection of new types of deepfakes.

The research has positive implications for the future of trustworthy AI, especially in combating misinformation and digital impersonation. Their work offers both a technical advance and a moral imperative: to build AI systems that are not only powerful but also responsible and fair.

Involving high school students in Purdue research

Two of the four co-authors on the paper are high school students, a rarity in academia. Aryana Hou is an 11th-grade student at Clarkstown High School South in West Nyack, New York. Justin Li is a ninth-grade student at Carmel High School in Carmel, Indiana. Li Lin, a Ph.D. student at Purdue, also co-authored the paper and provided support to the teen contributors.

Both Hou and Li contacted Hu independently to ask about conducting research under his guidance.

“I initially reached out to Dr. Hu because I was really interested in signal processing and AI. After coming across one of his publications on machine learning, I felt deeply inspired,” Hou said. “I feel extremely lucky not only to take part in groundbreaking research but also to have found such an amazing mentor and team who guided me, challenged me, and helped me grow as a researcher.”

Hu assigned Hou a project on fair deepfake detection, and the two began meeting weekly to discuss her progress. When progress proved challenging, Hu encouraged her to focus on building foundational skills, like reviewing related literature and learning to use graphics processing units (GPUs) for training deep learning models.

Hou’s work paid off when her results seemed to validate one of Hu’s early hypotheses, so Lin and Hou started working on the paper together. Around the same time, Justin Li also expressed interest in research. With the paper deadline approaching quickly, Hu put him to work conducting critical ablation studies, which determine how different parts of a model contribute to its overall performance.
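An ablation study of the kind described above can be illustrated with a short sketch. The component names and the stand-in scoring function below are purely hypothetical, for illustration only; in a real ablation, each configuration would involve training and evaluating the detector with the listed components disabled.

```python
import itertools

# Hypothetical component names for illustration; the paper's actual
# modules may differ.
COMPONENTS = ["anchor_loss", "forgery_features"]

def evaluate(enabled):
    """Stand-in scoring function with made-up numbers. A real ablation
    would train and test the model with only the enabled components."""
    base = 0.80
    gains = {"anchor_loss": 0.05, "forgery_features": 0.07}
    return base + sum(gains[c] for c in enabled)

def ablation_table():
    """Evaluate every subset of components, from none to all."""
    rows = []
    for r in range(len(COMPONENTS) + 1):
        for subset in itertools.combinations(COMPONENTS, r):
            rows.append((subset, evaluate(subset)))
    return rows
```

Comparing rows of such a table shows how much each component contributes on its own and in combination, which is the empirical evidence ablations provide.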

“He quickly produced strong experimental results, which we incorporated into the main paper, strengthening the empirical evidence for our proposed method,” Hu said. Li also devoted a full week to proofreading and polishing the manuscript, despite a heavy workload at school.

The team submitted the completed paper in mid-April, and it was accepted by the conference in early July.

“It was inspiring to work alongside Aryana and Justin,” Lin said. “They brought fresh perspectives, asked insightful questions, and contributed meaningfully to the project. I believe this experience gave them early exposure to real-world research and the importance of ethical AI, which will serve them well in any future academic or professional pursuits. Their involvement also reminded me of the value of curiosity and collaboration at every stage of learning.”

Hou and Li will present the paper at the ACM conference in October, which has a “rich tradition of leveraging AI and systems research to handle big data and enhance user experiences.” Hu and Lin are unable to join them in Ireland, but Lin says it’s a “well-deserved opportunity for them to showcase their contributions to the broader research community.”

“I was honestly shocked at first—in the best way—when I heard our paper got accepted,” Hou said. “I knew that we had all put in an immense amount of work, but the moment felt surreal. This was the first real research project I took part in, and seeing it recognized made all the hard work worth it. I can't wait to see where the next step takes us in Ireland!”

“I'm very grateful to Shu Hu and the team for giving me this opportunity, and I'm very inspired by their work,” Li said. “I'd like to take these skills—from training models to collaborating on complex projects—and apply them to future academics, contribute to school clubs, and continue research to push the bounds of AI.”

Registration for the 2025 ACM International Conference on Multimedia opened on July 11 and has both in-person and virtual options. The 2024 conference saw more than 4,300 main conference submissions and accepted more than 1,100 papers.
