A single node of a modern scalable multiprocessor consists of several ASICs comprising tens of millions of gates. This level of integration and complexity places an enormous burden on the verification process. A variety of tools, ranging from discrete-event logic simulation to formal model checking, can be used to attack this problem. Unfortunately, conventional simulation techniques, with their primitive interface to the hardware (i.e., test vectors), are inadequate tools for reasoning about the correctness of complex architectural features such as cache coherence protocols and memory consistency models. Similarly, model checkers offer very limited utility on such large designs. We have previously proposed a novel verification framework, called Raven, that addresses many of these challenges. In this paper we examine the performance implications of verifying systems at higher levels of abstraction. A detailed performance analysis compares this higher-level approach against an equivalent Verilog test bench. We establish lower and upper bounds on the performance of the Raven environment executing on a single processor, on a set of distributed processors, and on a shared-memory multiprocessor.