The term graphical perception refers to the part played by visual perception in analyzing graphs. Computer graphics have stimulated interest in the perceptual pros and cons of different formats for displaying data. One way of evaluating the effectiveness of a display is to measure the efficiency (as defined by signal-detection theory) with which an observer extracts information from the graph. We measured observers' efficiencies in detecting differences in the means or variances of pairs of data sets sampled from Gaussian distributions. Sample size ranged from 1 to 20 for viewing times of 0.3 or 1 sec. The samples were displayed in three formats: numerical tables, scatterplots, and luminance-coded displays. Efficiency was highest for the scatterplots (≅60% for both means and variances) and was only weakly dependent on sample size and exposure time. The pattern of results suggests parallel perceptual computation in which a constant proportion of the available information is used. Efficiency was lowest for the numerical tables and depended more strongly on sample size and viewing time. The results suggest serial processing in which a fixed amount of the available information is processed in a given time.
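The efficiency measure mentioned above has a standard form in signal-detection theory: the squared ratio of the observer's sensitivity (d') to that of the ideal observer. For discriminating the means of two Gaussian samples of size n with known standard deviation, the ideal observer compares sample means, whose difference has standard deviation σ√(2/n). A minimal sketch of this computation (the function names and the illustrative numbers are assumptions for exposition, not values from the study):

```python
import math

def d_prime_ideal_mean(delta_mu: float, sigma: float, n: int) -> float:
    """Ideal-observer d' for discriminating the means of two independent
    Gaussian samples of size n each. The difference of the two sample
    means has standard deviation sigma * sqrt(2 / n)."""
    return delta_mu / (sigma * math.sqrt(2.0 / n))

def efficiency(d_prime_observed: float, d_prime_ideal: float) -> float:
    """Statistical efficiency: squared ratio of observed to ideal d'."""
    return (d_prime_observed / d_prime_ideal) ** 2

# Illustrative (hypothetical) numbers: a 1-sigma difference in means,
# sample size 20, and an observer who achieves d' = 2.45.
ideal = d_prime_ideal_mean(delta_mu=1.0, sigma=1.0, n=20)
eta = efficiency(2.45, ideal)
print(f"ideal d' = {ideal:.2f}, efficiency = {eta:.2f}")
```

Under these assumed numbers the efficiency comes out near 0.6, matching the order of magnitude the abstract reports for scatterplots; the √n growth of the ideal d' also shows why near-constant efficiency across sample sizes implies that a constant proportion of the available information is being used.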