The ability to explore and visualize clinical information is important to clinicians reviewing and cognitively synthesizing the electronic clinical documents that electronic health record (EHR) systems contain for new patients. In this study, we explore the use of language models for detecting new and potentially relevant information within an individual patient's collection of clinical documents, using an expert-based reference standard for evaluation. We achieved good accuracy with a heterogeneous system based on a modified n-gram language model combining statistically derived and classic stop word removal, lexical normalization, and heuristic rules. This technique also identified relevant new information not captured by the expert-based reference standard. These methods appear promising as an automated means of improving clinicians' use of electronic documents.
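To make the general approach concrete, the following is a minimal sketch of n-gram-based new-information detection, not the paper's actual system: a small stop word list and regex tokenization stand in for the statistically derived stop word removal and lexical normalization described above, and a simple unseen-n-gram ratio stands in for the full heterogeneous model and heuristic rules. All function names, the stop word list, and the novelty threshold are illustrative assumptions.

```python
import re

# Illustrative stand-in for classic plus statistically derived stop words.
STOP_WORDS = {"the", "a", "an", "of", "and", "to", "is", "in", "with", "for", "on"}

def normalize(text):
    """Lowercase, strip punctuation, and drop stop words (a simple
    stand-in for lexical normalization and stop word removal)."""
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    return [t for t in tokens if t not in STOP_WORDS]

def ngrams(tokens, n=2):
    """Return the set of n-grams over a token sequence."""
    return set(zip(*(tokens[i:] for i in range(n))))

def novelty_score(sentence, prior_ngrams, n=2):
    """Fraction of the sentence's n-grams unseen in prior documents:
    1.0 means entirely new, 0.0 means fully redundant."""
    grams = ngrams(normalize(sentence), n)
    if not grams:
        return 0.0
    return len(grams - prior_ngrams) / len(grams)

def find_new_sentences(prior_docs, new_doc, n=2, threshold=0.5):
    """Flag sentences in a new document whose n-gram novelty against
    the patient's prior documents exceeds a threshold (assumed 0.5)."""
    prior = set()
    for doc in prior_docs:
        prior |= ngrams(normalize(doc), n)
    sentences = re.split(r"(?<=[.!?])\s+", new_doc)
    return [s for s in sentences if novelty_score(s, prior, n) >= threshold]
```

In use, a sentence repeating a prior note scores near 0 and is suppressed, while a sentence introducing unseen content (e.g. a new diagnosis) scores near 1 and is surfaced to the clinician.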