TY - JOUR
T1 - Context, Language Modeling, and Multimodal Data in Finance
AU - Das, Sanjiv
AU - Goggins, Connor
AU - He, John
AU - Karypis, George
AU - Krishnamurthy, Sandeep
AU - Mahajan, Mitali
AU - Prabhala, Nagpurnanand
AU - Slack, Dylan
AU - van Dusen, Rob
AU - Yue, Shenghua
AU - Zha, Sheng
AU - Zheng, Shuai
N1 - Publisher Copyright:
© 2021 With Intelligence Ltd.
PY - 2021/6/1
Y1 - 2021/6/1
N2 - The authors enhance pretrained language models with Securities and Exchange Commission filings data to create better language representations for features used in a predictive model. Specifically, they train RoBERTa class models with additional financial regulatory text, which they denote as a class of RoBERTa-Fin models. Using different datasets, the authors assess whether there is material improvement over models that use only text-based numerical features (e.g., sentiment, readability, polarity), which is the traditional approach adopted in academia and practice. The RoBERTa-Fin models also outperform generic bidirectional encoder representations from transformers (BERT) class models that are not trained with financial text. The improvement in classification accuracy is material, suggesting that full text and context are important in classifying financial documents and that the benefits from the use of mixed data (i.e., enhancing numerical tabular data with text) are feasible and fruitful in machine learning models in finance.
AB - The authors enhance pretrained language models with Securities and Exchange Commission filings data to create better language representations for features used in a predictive model. Specifically, they train RoBERTa class models with additional financial regulatory text, which they denote as a class of RoBERTa-Fin models. Using different datasets, the authors assess whether there is material improvement over models that use only text-based numerical features (e.g., sentiment, readability, polarity), which is the traditional approach adopted in academia and practice. The RoBERTa-Fin models also outperform generic bidirectional encoder representations from transformers (BERT) class models that are not trained with financial text. The improvement in classification accuracy is material, suggesting that full text and context are important in classifying financial documents and that the benefits from the use of mixed data (i.e., enhancing numerical tabular data with text) are feasible and fruitful in machine learning models in finance.
KW - Big data/machine learning
KW - Information providers/credit ratings
KW - Legal/regulatory/public policy
KW - Quantitative methods
UR - http://www.scopus.com/inward/record.url?scp=85126543423&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85126543423&partnerID=8YFLogxK
U2 - 10.3905/jfds.2021.1.063
DO - 10.3905/jfds.2021.1.063
M3 - Article
AN - SCOPUS:85126543423
SN - 2640-3943
VL - 3
SP - 52
EP - 66
JO - Journal of Financial Data Science
JF - Journal of Financial Data Science
IS - 3
ER -