Evaluating texts is an important activity associated with teaching statistics. Surprisingly, the statistical education literature offers little guidance on how these evaluations should be conducted. This lack of guidance may be at least partly responsible for the fact that published evaluations of statistics texts almost invariably employ evaluation criteria that lack any theory-based rationale. This failing is typically compounded by a lack of empirical evidence supporting the usefulness of the criteria. This article describes the construction and piloting of instruments for evaluating statistics texts that are grounded in the statistical education and text evaluation literatures. The study is an initial step in a line of research that we hope will result in the establishment and maintenance of a database of evaluations of statistics texts. Evaluative information of this kind should assist instructors wrestling with text selection decisions and individuals charged with performing evaluations, such as journal reviewers, and should ultimately benefit the direct consumers of these texts: the students.