The rapid increase in the number of World Wide Web users and the development of services with high bandwidth requirements
have caused a substantial increase in response times for users on the Internet. Web latency would be significantly
reduced if browser, proxy, or Web server software could predict the pages that a user is
most likely to request next, while the user is viewing the current page, and prefetch their content.
In this paper we study predictive prefetching on a new Web system architecture: a system
that provides two levels of caching before information reaches the clients. This work analyses prefetching on a
Wide Area Network (WAN) with the above characteristics. We first provide a structured overview of predictive
prefetching and show its wide applicability to various computer systems. The WAN that we refer to is the GRNET
academic network in Greece. We rely on log files collected at the network's transparent cache (the primary caching
point), located at GRNET's edge connection to the Internet. We present the parameters that are most important for
prefetching on GRNET’s architecture and provide preliminary results of an experimental study, quantifying the
benefits of prefetching on the WAN. Our experimental study includes the evaluation of two prediction algorithms:
an "n most popular documents" algorithm and a variation of the PPM (Prediction by Partial Matching) prediction
algorithm. Our analysis clearly shows that predictive prefetching can improve Web response times inside the
GRNET WAN without a substantial increase in network traffic due to prefetching.
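To make the two families of predictors concrete, the following is a minimal sketch of each: a popularity-based predictor that simply ranks documents by request frequency, and an order-1 context model as a deliberately simplified stand-in for PPM (full PPM blends contexts of several orders). The function names, the toy request stream, and the order-1 restriction are illustrative assumptions, not the paper's actual implementation or data.

```python
from collections import Counter, defaultdict

def top_n_predictor(access_log, n):
    """Popularity predictor: rank documents by request frequency
    in the log and return the n most popular as prefetch candidates."""
    counts = Counter(access_log)
    return [doc for doc, _ in counts.most_common(n)]

def order1_predict(history, current, k):
    """Order-1 context model (a simplified PPM variant): from past
    request pairs, predict the k documents most often requested
    immediately after `current`."""
    transitions = defaultdict(Counter)
    for prev, nxt in zip(history, history[1:]):
        transitions[prev][nxt] += 1
    return [doc for doc, _ in transitions[current].most_common(k)]

# Toy request stream (hypothetical URLs, not from the GRNET logs).
log = ["/index", "/news", "/index", "/mail", "/news", "/index"]
print(top_n_predictor(log, 2))          # ['/index', '/news']
print(order1_predict(log, "/index", 1)) # ['/news']
```

A real deployment would train such models on the transparent cache's access logs and prefetch the predicted documents into the cache before clients request them.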