SEDiff: scope-aware differential fuzzing to test internal function models in symbolic execution

Penghui Li, Wei Meng, Kangjie Lu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Symbolic execution has become a foundational program analysis technique. Symbolic execution unavoidably encounters internal functions (e.g., library functions) that provide basic operations such as string processing. To address scalability and compatibility concerns, many symbolic execution engines construct internal function models that abstract function behaviors. Because constructing such models is highly complex, developers intentionally summarize only partial behaviors of a function, namely the modeled functionalities, in the models. The correctness of the internal function models is critical because it impacts all applications of symbolic execution, e.g., bug detection and model checking. A naive solution for testing the correctness of internal function models is to cross-check whether the behaviors of the models comply with those of their corresponding original function implementations. However, such a solution would mostly report an overwhelming number of inconsistencies concerning unmodeled functionalities, which are out of the scope of the models and thus considered false reports. We argue that a reasonable testing approach should target only the functionalities that developers intend to model. While necessary, automatically identifying the modeled functionalities, i.e., the scope, is a significant challenge. In this paper, we propose a scope-aware differential testing framework, SEDiff, to tackle this problem. We design a novel algorithm to automatically map the modeled functionalities to the code in the original implementations. SEDiff then applies scope-aware grey-box differential fuzzing to the relevant code in the original implementations. It is also equipped with a new scope-aware input generator and a tailored bug checker that efficiently and correctly detect erroneous inconsistencies. We extensively evaluated SEDiff on several popular real-world symbolic execution engines targeting binary, web, and kernel code. Our manual investigation shows that SEDiff precisely identifies the modeled functionalities and detects 46 new bugs in the internal function models used in these symbolic execution engines.
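To make the abstract's core idea concrete, the following is a minimal Python sketch of scope-aware differential testing. The atoi model, the scope filter, and the fuzzing harness are hypothetical illustrations of the approach, not SEDiff's actual implementation; a real engine would cross-check a symbolic function model (e.g., an angr SimProcedure) against the compiled original implementation, and would derive the scope automatically rather than hard-coding it.

```python
import random
import string

def libc_atoi_reference(s: str) -> int:
    """Simplified reference behavior of the original libc atoi:
    skip leading whitespace, parse an optional sign, then digits."""
    i = 0
    while i < len(s) and s[i] in " \t\n\r\v\f":
        i += 1
    sign = 1
    if i < len(s) and s[i] in "+-":
        sign = -1 if s[i] == "-" else 1
        i += 1
    val = 0
    while i < len(s) and s[i].isdigit():
        val = val * 10 + int(s[i])
        i += 1
    return sign * val

def atoi_model(s: str) -> int:
    """Hypothetical internal function model: it deliberately summarizes
    only the plain digits-only case (its modeled scope) and ignores
    leading whitespace and signs (unmodeled functionality)."""
    val = 0
    for c in s:
        if not c.isdigit():
            break
        val = val * 10 + int(c)
    return val

def in_modeled_scope(s: str) -> bool:
    """Scope filter: accept only inputs the model intends to handle.
    SEDiff identifies this scope automatically; here it is hard-coded."""
    return s[:1].isdigit()

def differential_fuzz(rounds: int = 20000) -> None:
    alphabet = string.digits + "+- \t" + string.ascii_letters
    naive_reports = scoped_reports = 0
    for _ in range(rounds):
        s = "".join(random.choices(alphabet, k=random.randint(0, 12)))
        if atoi_model(s) != libc_atoi_reference(s):
            naive_reports += 1       # naive cross-check: flags every divergence
            if in_modeled_scope(s):
                scoped_reports += 1  # scope-aware check: only in-scope bugs
    print(f"naive inconsistencies: {naive_reports}")
    print(f"scope-aware inconsistencies: {scoped_reports}")

if __name__ == "__main__":
    differential_fuzz()
```

Because this toy model is correct within its scope, the scope-aware count stays at zero while the naive cross-check floods with false reports about whitespace and sign handling, which is exactly the problem the abstract argues a scope-aware approach must avoid.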

Original language: English (US)
Title of host publication: ESEC/FSE 2022 - Proceedings of the 30th ACM Joint Meeting European Software Engineering Conference and Symposium on the Foundations of Software Engineering
Editors: Abhik Roychoudhury, Cristian Cadar, Miryung Kim
Publisher: Association for Computing Machinery, Inc
Pages: 57-69
Number of pages: 13
ISBN (Electronic): 9781450394130
DOIs
State: Published - Nov 7 2022
Event: 30th ACM Joint Meeting European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ESEC/FSE 2022 - Singapore, Singapore
Duration: Nov 14 2022 - Nov 18 2022

Publication series

Name: ESEC/FSE 2022 - Proceedings of the 30th ACM Joint Meeting European Software Engineering Conference and Symposium on the Foundations of Software Engineering

Conference

Conference: 30th ACM Joint Meeting European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ESEC/FSE 2022
Country/Territory: Singapore
City: Singapore
Period: 11/14/22 - 11/18/22

Bibliographical note

Funding Information:
The authors would like to thank our shepherd and the anonymous reviewers for their helpful suggestions. The work described in this paper was supported in part by a grant from the Research Grants Council of the Hong Kong SAR, China (Project No.: CUHK 14210219). Kangjie Lu was supported in part by the NSF awards CNS-1931208 and CNS-2045478.

Publisher Copyright:
© 2022 ACM.

Keywords

  • Differential Testing
  • Internal Function Models
  • Symbolic Execution
