LEAP: LLM instruction-example adaptive prompting framework for biomedical relation extraction

Huixue Zhou, Mingchen Li, Yongkang Xiao, Han Yang, Rui Zhang

Research output: Contribution to journal › Article › peer-review

8 Scopus citations

Abstract

Objective: To investigate demonstration strategies in large language models (LLMs) for biomedical relation extraction. This study introduces a framework comprising three types of adaptive tuning methods to assess their impacts and effectiveness.

Materials and Methods: Our study was conducted in two phases. Initially, we analyzed a range of demonstration components vital for LLMs’ biomedical data capabilities, including task descriptions and examples, experimenting with various combinations. Subsequently, we introduced the LLM instruction-example adaptive prompting (LEAP) framework, including instruction adaptive tuning, example adaptive tuning, and instruction-example adaptive tuning methods. This framework aims to systematically investigate both adaptive task descriptions and adaptive examples within the demonstration. We assessed the performance of the LEAP framework on the DDI, ChemProt, and BioRED datasets, employing LLMs such as Llama2-7b, Llama2-13b, and MedLLaMA_13B.

Results: Our findings indicated that the Instruction + Options + Example mode and its expanded form substantially improved F1 scores over the standard Instruction + Options mode for zero-shot LLMs. The LEAP framework, particularly through its example adaptive prompting, demonstrated superior performance over conventional instruction tuning across all models. Notably, the MedLLaMA_13B model achieved an exceptional F1 score of 95.13 on the ChemProt dataset using this method. Significant improvements were also observed on the DDI 2013 and BioRED datasets, confirming the method’s robustness in sophisticated data extraction scenarios.

Conclusion: The LEAP framework offers a compelling strategy for enhancing LLM training, steering away from extensive fine-tuning toward more dynamic and contextually enriched prompting methodologies, as showcased in biomedical relation extraction.
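The demonstration modes compared in the abstract (Instruction + Options versus Instruction + Options + Example) can be illustrated with a minimal prompt-assembly sketch. This is not the paper's actual prompt template: the instruction wording, option labels, and the drug-pair example below are hypothetical placeholders chosen to resemble a DDI-style relation-extraction task.

```python
def build_prompt(instruction, options, examples=(), query=""):
    """Assemble a relation-extraction prompt in the
    Instruction + Options (+ Example) pattern."""
    parts = [instruction, "Options: " + ", ".join(options)]
    # Zero examples reproduces the plain Instruction + Options mode;
    # one or more examples gives Instruction + Options + Example.
    for ex_input, ex_label in examples:
        parts.append(f"Input: {ex_input}\nRelation: {ex_label}")
    parts.append(f"Input: {query}\nRelation:")
    return "\n\n".join(parts)

# Instruction + Options + Example mode with one hypothetical demonstration.
prompt = build_prompt(
    instruction="Classify the relation between the two drugs in the sentence.",
    options=["mechanism", "effect", "advise", "int", "none"],
    examples=[("Aspirin may increase the effect of warfarin.", "effect")],
    query="Ibuprofen can reduce the antihypertensive effect of lisinopril.",
)
```

The adaptive-tuning methods in LEAP would then vary which instruction text and which examples are supplied per input, rather than keeping a single fixed demonstration.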

Original language: English (US)
Pages (from-to): 2010-2018
Number of pages: 9
Journal: Journal of the American Medical Informatics Association
Volume: 31
Issue number: 9
DOIs
Status: Published - Sep 1 2024

Bibliographical note

Publisher Copyright:
© The Author(s) 2024. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved.

Keywords

  • biomedical relation extraction
  • instruction tuning
  • instruction-example adaptive prompting
  • large language model
  • natural language processing

PubMed: MeSH publication types

  • Journal Article
