Does 'authority' mean quality? Predicting expert quality ratings of Web documents

Brian Amento, Loren Terveen, Will Hill

Research output: Contribution to journal › Conference article › peer-review

166 Scopus citations


For many topics, the World Wide Web contains hundreds or thousands of relevant documents of widely varying quality. Users face a daunting challenge in identifying a small subset of documents worthy of their attention. Link analysis algorithms have received much interest recently, in large part for their potential to identify high-quality items. We report here on an experimental evaluation of this potential. We evaluated a number of link- and content-based algorithms using a dataset of web documents rated for quality by human topic experts. Link-based metrics did a good job of picking out high-quality items: precision at 5 is about 0.75, and precision at 10 is about 0.55, in a dataset where 0.32 of all documents were of high quality. Surprisingly, a simple content-based metric performed nearly as well: ranking documents by the total number of pages on their containing site.
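The precision-at-k measure quoted in the abstract is the fraction of the top-k ranked documents that the experts rated high quality. A minimal sketch, using hypothetical document IDs and relevance judgments rather than the paper's actual data:

```python
def precision_at_k(ranked_ids, relevant_ids, k):
    """Fraction of the top-k ranked documents that are in the relevant set."""
    top_k = ranked_ids[:k]
    hits = sum(1 for doc in top_k if doc in relevant_ids)
    return hits / k

# Hypothetical ranking of ten documents; experts judged five of them high quality.
ranking = ["d1", "d2", "d3", "d4", "d5", "d6", "d7", "d8", "d9", "d10"]
high_quality = {"d1", "d2", "d4", "d5", "d7"}

print(precision_at_k(ranking, high_quality, 5))   # 0.8
print(precision_at_k(ranking, high_quality, 10))  # 0.5
```

In this toy example the base rate of high-quality documents is 0.5, so a useful ranker must push precision@k well above that; the paper's reported baseline rate is 0.32.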

Original language: English (US)
Pages (from-to): 296-303
Number of pages: 8
Journal: Unknown Journal
State: Published - 2000
Event: Proceedings of the 23rd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2000) - Athens, Greece
Duration: Jul 24, 2000 - Jul 28, 2000

