Research compares human text classification performance, reasoning with traditional machine learning, large language models

Gaurav Nanda. (Purdue University photo/John O'Malley)

In a paper recently published in Nature’s Scientific Reports journal, Gaurav Nanda, assistant professor of engineering technology, and co-authors Jeevithashree Divya Venkatesh and Aparajita Jaiswal compared the text classification performance and explainability of non-expert humans with those of a pre-trained traditional machine learning (ML) model and a zero-shot large language model (LLM).

The task was to classify a domain-specific, noisy textual dataset of 204 injury narratives into six cause-of-injury codes. The narratives varied in complexity and ease of categorization, depending on how distinctive each cause-of-injury code was. The user study involved 51 participants whose eye-tracking data were recorded while they performed the text classification task.
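To make the setup concrete, below is a minimal sketch of the traditional ML side of the comparison. The model family (a TF-IDF plus logistic-regression pipeline) and the toy narratives and codes are illustrative assumptions; the paper's actual model, data, and codes are not specified in this article.

```python
# Minimal sketch of the traditional ML classifier in the comparison.
# The model family (TF-IDF + logistic regression) and the toy narratives
# and cause-of-injury codes below are illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for the 204 noisy injury narratives and 6 codes.
narratives = [
    "worker slipped on wet floor and fell",
    "hand caught in conveyor belt during cleaning",
    "fell from ladder while painting ceiling",
    "struck by falling box in warehouse",
]
codes = [
    "fall_same_level",
    "caught_in_machinery",
    "fall_from_height",
    "struck_by_object",
]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(narratives, codes)

# Classify an unseen narrative into a cause-of-injury code.
print(model.predict(["employee fell off scaffolding while working"]))
```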

The explainability of the different approaches was compared based on the top words each used to make its classification decisions. These words were identified using eye-tracking for the humans, the explainable AI technique LIME for the ML model, and prompts for the LLM.
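As a hedged illustration of how those top words can be surfaced, the sketch below continues the toy pipeline above: LIME perturbs an input narrative and weights the words that most influenced the model's prediction, while the zero-shot LLM is asked for its influential words directly in the prompt. The prompt wording here is an assumption, not the study's actual prompt.

```python
# Continues the toy pipeline sketched above. LIME perturbs the input text
# and weights each word by its influence on the predicted class.
from lime.lime_text import LimeTextExplainer

explainer = LimeTextExplainer(class_names=list(model.classes_))
text = "employee fell off scaffolding while working"

exp = explainer.explain_instance(
    text, model.predict_proba, num_features=3, top_labels=1
)
top_label = exp.available_labels()[0]
print(exp.as_list(label=top_label))  # e.g. [('fell', 0.21), ('scaffolding', 0.08), ...]

# For the zero-shot LLM, the top words would instead come from the prompt
# itself (the wording below is an illustrative assumption):
llm_prompt = (
    "Classify this injury narrative into one of these cause-of-injury "
    f"codes: {', '.join(model.classes_)}. Narrative: '{text}'. "
    "List the three words that most influenced your decision."
)
```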

Overall, the ML model's classification performance was better than that of the zero-shot LLM and the non-expert humans, particularly for narratives with high complexity and difficult categorization. The top-3 predictive words used by the ML model and the LLM agreed with the humans' to a greater extent than lower-ranked predictive words did.
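One simple way to quantify that kind of agreement is the overlap between each approach's top-k words and the words humans fixated on. The sketch below uses invented word lists and an assumed overlap measure, not the study's data or metric.

```python
# Sketch of a top-k agreement measure between ranked word lists.
# The word lists below are invented placeholders, not the study's data.
def top_k_agreement(words_a, words_b, k):
    """Fraction of the top-k words shared between two ranked lists."""
    return len(set(words_a[:k]) & set(words_b[:k])) / k

human_words = ["fell", "scaffolding", "ladder", "height", "working"]
ml_words = ["fell", "scaffolding", "height", "floor", "slipped"]

print(top_k_agreement(human_words, ml_words, 3))  # ~0.67 for the top-3 words
print(top_k_agreement(human_words, ml_words, 5))  # 0.60 once later words are included
```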
