Close reading for visualization evaluation

Annie Bares, Daniel F. Keefe, Francesca Samsel

Research output: Contribution to journal › Article › peer-review


Visualizations produced by collaborations between artists, scientists, and visualization experts lay claim to being not only more effective in delivering information but also more effective in eliciting qualities like human connection. However, as prior work in the visualization community has demonstrated, it is difficult to evaluate these claims because characteristics associated with human connection are not easily measured quantitatively. In this Visualization Viewpoints piece, we address this problem in the context of our work to develop methods of evaluating visualizations created by Sculpting Visualization, a multidisciplinary project that incorporates art and design theory and practice into the process of scientific visualization. We present the design and results of a study in which we used close reading, a formal methodology used by humanities scholars, to elicit reactions and analyses from evaluation participants viewing an image created using Sculpting Visualization. In addition to specific suggestions about how to improve future iterations of the visualization, we discuss key findings of the evaluation related to contextual information, visual perspective, and associations that individual viewers brought to bear on their experience with the visualization.

Original language: English (US)
Article number: 9117084
Pages (from-to): 84-95
Number of pages: 12
Journal: IEEE Computer Graphics and Applications
Issue number: 4
State: Published - Jul 1 2020

PubMed: MeSH publication types

  • Journal Article
  • Research Support, N.I.H., Extramural
  • Research Support, U.S. Gov't, Non-P.H.S.
