The cost of computing a join with a join index in a sequential system with limited buffer space depends primarily on the page access sequence used to fetch the pages of the base relations. In this paper, we introduce a graph-partitioning model that minimizes the length of the page access sequence, and thus the redundant I/O, for a given fixed buffer size. Experiments with Sequoia 2000 data sets show that the graph-partitioning method outperforms existing methods based on sorting and online clustering, particularly when the number of buffer pages is small and the join selectivity is high.
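The effect the abstract describes can be illustrated with a small simulation. The sketch below is not the paper's algorithm; it is a minimal, hypothetical model in which a join index is a list of (R-page, S-page) pairs, pages are fetched through an LRU buffer, and a simple clustering stand-in (sorting the join-index entries so pairs sharing pages become adjacent) shortens the effective page access sequence and reduces fetches.

```python
from collections import OrderedDict

def count_fetches(access_sequence, buffer_size):
    """Simulate an LRU page buffer; return the number of page fetches (I/Os)."""
    buf = OrderedDict()
    fetches = 0
    for page in access_sequence:
        if page in buf:
            buf.move_to_end(page)      # buffer hit: refresh LRU position
        else:
            fetches += 1               # buffer miss: one I/O
            if len(buf) >= buffer_size:
                buf.popitem(last=False)  # evict least recently used page
            buf[page] = True
    return fetches

def pages_of(join_index):
    """Expand join-index entries into the page access sequence they induce."""
    return [p for r, s in join_index for p in (("R", r), ("S", s))]

# Hypothetical join index: (R page, S page) pairs holding matching tuples.
# Two natural clusters (pages 0-1 and pages 2-3), interleaved in the raw order.
ji = [(0, 0), (2, 2), (0, 1), (2, 3), (1, 0), (3, 2), (1, 1), (3, 3)]

naive = count_fetches(pages_of(ji), 3)
clustered = count_fetches(pages_of(sorted(ji)), 3)
print(f"naive: {naive}  clustered: {clustered}")  # prints "naive: 16  clustered: 10"
```

With only three buffer pages, the interleaved order thrashes the buffer (16 fetches for 8 distinct pages), while the clustered order cuts the redundant I/O to 10 fetches; the paper's graph-partitioning model targets this same effect with a principled partitioning rather than a sort.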
Original language: English (US)
Number of pages: 7
Journal: Proceedings of the IEEE Symposium on Reliable Distributed Systems
State: Published - Dec 1 1998