Context, Language Modeling, and Multimodal Data in Finance

Sanjiv Das, Connor Goggins, John He, George Karypis, Sandeep Krishnamurthy, Mitali Mahajan, Nagpurnanand Prabhala, Dylan Slack, Rob van Dusen, Shenghua Yue, Sheng Zha, Shuai Zheng

Research output: Contribution to journal › Article › peer-review


Abstract

The authors enhance pretrained language models with Securities and Exchange Commission (SEC) filings data to create better language representations for features used in a predictive model. Specifically, they train RoBERTa-class models on additional financial regulatory text, which they denote as a class of RoBERTa-Fin models. Using different datasets, the authors assess whether there is material improvement over models that use only text-based numerical features (e.g., sentiment, readability, polarity), the traditional approach adopted in academia and practice. The RoBERTa-Fin models also outperform generic Bidirectional Encoder Representations from Transformers (BERT) class models that are not trained on financial text. The improvement in classification accuracy is material, suggesting that full text and context are important in classifying financial documents, and that the use of mixed data (i.e., enhancing numerical tabular data with text) is feasible and fruitful in machine learning models in finance.
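The domain-adaptation step the abstract describes (continuing a pretrained RoBERTa model's masked language modeling objective on financial regulatory text) can be sketched as follows. This is a minimal illustration using the Hugging Face transformers and datasets libraries; the corpus file name, hyperparameters, and output directory are assumptions, not the authors' actual configuration.

    from datasets import load_dataset
    from transformers import (
        DataCollatorForLanguageModeling,
        RobertaForMaskedLM,
        RobertaTokenizerFast,
        Trainer,
        TrainingArguments,
    )

    # Start from the generic pretrained checkpoint.
    tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
    model = RobertaForMaskedLM.from_pretrained("roberta-base")

    # "sec_filings.txt" is a hypothetical plain-text corpus of SEC filings.
    corpus = load_dataset("text", data_files={"train": "sec_filings.txt"})

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    train_set = corpus["train"].map(tokenize, batched=True, remove_columns=["text"])

    # Mask 15% of tokens at random, the standard masked language modeling setup.
    collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="roberta-fin", num_train_epochs=1),
        train_dataset=train_set,
        data_collator=collator,
    )
    trainer.train()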
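The mixed-data idea (enhancing numerical tabular features with full text rather than with text-derived scores alone) can likewise be sketched as a classifier that concatenates a RoBERTa document embedding with tabular features. This PyTorch sketch is illustrative only; the layer sizes, feature count, and head architecture are assumptions rather than the paper's specification.

    import torch
    import torch.nn as nn
    from transformers import RobertaModel

    class TextTabularClassifier(nn.Module):
        """Concatenates a text embedding with tabular features for classification."""

        def __init__(self, n_tabular: int, n_classes: int):
            super().__init__()
            # In the paper's setting this would be the domain-adapted encoder.
            self.encoder = RobertaModel.from_pretrained("roberta-base")
            hidden = self.encoder.config.hidden_size  # 768 for roberta-base
            self.head = nn.Sequential(
                nn.Linear(hidden + n_tabular, 256),
                nn.ReLU(),
                nn.Linear(256, n_classes),
            )

        def forward(self, input_ids, attention_mask, tabular):
            out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
            # First-token (<s>) embedding serves as the document representation.
            doc_vec = out.last_hidden_state[:, 0, :]
            return self.head(torch.cat([doc_vec, tabular], dim=-1))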

Original language: English (US)
Pages (from-to): 52-66
Number of pages: 15
Journal: Journal of Financial Data Science
Volume: 3
Issue number: 3
DOIs:
State: Published - Jun 1 2021

Bibliographical note

Publisher Copyright:
© 2021 With Intelligence Ltd.

Keywords

  • Big data/machine learning
  • Information providers/credit ratings
  • Legal/regulatory/public policy
  • Quantitative methods
