Exploring the impact of automated indexing on completeness of MeSH terms
Presented by Alexandre Amar-Zifkin (coauthored with Virginie Paquet, Taline Ekmekjian and Tara Landry).
About the event
The SAHLA Executive would like to invite you to our free summer online guest speaker event, taking place on August 8 from 12-1pm (MST). We will be joined by Alexandre Amar-Zifkin and Tara Landry, who will share their recent CHLA conference presentation, coauthored with Virginie Paquet and Taline Ekmekjian. This presentation examines the impact of automated indexing on the completeness of MeSH terms.
Their presentation won two CHLA People’s Choice Awards, and we are delighted to have them share it and their research findings with us.
Please see below for the CHLA session abstract, shared from the CHLA 2023 Conference program.
- When: August 8, 12-1pm (MST)
- Where: via Google Meet; the invite link will be shared with attendees before the event.
- Please RSVP to sahla[at]chla-absca.ca before August 4.
Taken from the CHLA 2023 Conference Program
Abstract:

Introduction: The use of controlled vocabulary to identify relevant articles is a central element of bibliographic database instruction in the health sciences. Students learning to search MEDLINE are taught that MeSH yields precise results, and that MeSH indexing increases an article’s findability, reliably describing an article’s contents. Indexing for MEDLINE was done entirely by human indexers until 2011. Since April 2022, all articles are assigned MeSH via automated indexing (AI). Per the NLM, MeSH assigned by AI are determined based on terms in the title and abstract, and on the terms and indexing of ‘related records’, with human review and curation of results “as appropriate”. As MEDLINE instruction typically starts with teaching learners to identify the key elements or concepts in their research question and find appropriate MeSH for them, we sought to explore the following: How well does AI identify the key concepts of an article? Are concepts missed more or less often when compared to human indexers? Drawing on the PICO framework, are missing concepts more often a particular PICO element?

Methods: We reviewed samples of automated and human-indexed records from shortly before April 2022, as well as some entirely automated records from later, to determine whether their main concepts were adequately represented with MeSH. Working in pairs, our team used a web form to assign to each record the key concepts (based on the PICO framework) that, in our experience, would be used to find it and similar articles, based on the title and abstract. The assigned MeSH were then displayed and analyzed to determine whether they adequately represented the key concepts of each record.

Results & Conclusion: As the study is ongoing, results are forthcoming. Potential impacts of automated indexing on library instruction and basic searching will be discussed.