Pure Demand-Fetch


Suppose a user arrives unexpectedly at an ISR site. If we wish to keep t4 as short as possible, we can use a pure demand-fetch policy that amortizes the cost of retrieving the VM disk state over t5. In this policy, only the virtual memory state (the file) is retrieved during t4; the transfer of disk state (the 256 KB chunks corresponding to files) is deferred. As soon as the virtual memory state has arrived, the VM is launched. During t5, disk accesses by the VM may then result in Coda cache misses, which cause slowdown.

Demand-Fetch with Lookaside

The performance of pure demand-fetch can be improved by using LKA in a number of different ways. If a user is willing to wait briefly at suspend, the virtual memory state file and an LKA index for it can be written to his dongle. He can then remove the dongle and carry it with him.

At the resume site, LKA can use the dongle to reduce t4. If read-only or read-write media holding partial VM state are available at the resume site, LKA can use them to reduce the cost of cache misses during t5, which reduces slowdown after resume. Combined with the use of a dongle, this can improve both resume latency and slowdown.

Experimental Evaluation

Benchmark

ISR is intended for interactive workloads typical of laptop environments. We have developed a benchmark called the Common Desktop Application (CDA) that models an interactive Windows user. CDA uses Visual Basic scripting to drive Microsoft Office applications such as Word, Excel, PowerPoint, Access, and Internet Explorer. The operations mimic typical actions that might be performed by an office worker. CDA pauses between operations to emulate think time. The pause is typically 10 seconds, but is 1 second for a few quick-response operations.
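The pacing described above can be expressed as a simple driver loop. This is a Python sketch only, assuming a list of (operation, is_quick) pairs; CDA itself is Visual Basic scripting driving Office applications.

```python
import time

def run_scripted_workload(operations, think_time=10.0, quick_time=1.0):
    """Replay scripted operations with emulated think time.

    operations is a list of (callable, is_quick) pairs. The driver
    pauses think_time seconds after each operation, or quick_time
    after quick-response ones, mirroring CDA's 10 s / 1 s pauses.
    Returns the total pause time inserted.
    """
    total_pause = 0.0
    for op, is_quick in operations:
        op()  # perform the scripted action
        pause = quick_time if is_quick else think_time
        time.sleep(pause)
        total_pause += pause
    return total_pause
```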

Methodology

Our experimental infrastructure consists of 2.0 GHz Pentium 4 clients connected to a 1.2 GHz Pentium III Xeon server through 100 Mb/s Ethernet. All machines have 1 GB of RAM and run RedHat Linux. Clients use VMware Workstation 3.1 and have an 8 GB Coda file cache. The VM is configured with 256 MB of RAM and a 4 GB disk, and runs Windows XP as the guest OS. We use the NISTNet network emulator to control available bandwidth.

Without ISR support, the benchmark time on our experimental setup is 1071 seconds. In this configuration, the files used by VMware are on the local file system rather than on Coda. The effects of Fauxide, Vulpes, and Coda are thus completely eliminated, but the effect of VMware is included. The figure of 1071 seconds is a lower bound on the benchmark time achievable by any state transfer policy in our experiments.

Results: Baseline

Relative to the metrics described in Section, we expect the baseline policy to exhibit poor resume latency because all state transfer takes place during the resume step. We also expect network bandwidth to be the dominant factor in determining this quantity. The column of Figure 4 labelled “Baseline” confirms this intuition. At 100 Mb/s, the resume latency is about 40 minutes. When bandwidth drops to 10 Mb/s, resume latency roughly doubles.

The reason it does not increase by a factor of ten (to match the drop in bandwidth) is that the data transfer rate at 100 Mb/s is limited by Coda rather than by the network. Only below 10 Mb/s does the network become the limiting factor. The results in Figure 4 show that the baseline policy is only viable at LAN speeds, and even then only for a limited number of usage scenarios.

In contrast to resume latency, we expect slowdown to be negligible with the baseline policy because no ISR network accesses should be necessary once execution resumes. The “Baseline” column of Figure 5 confirms that slowdown is negligible at 100 Mb/s. The total running time of the benchmark increases from 1071 seconds to 1105 seconds. This translates to a slowdown of about 3.2%, where slowdown is defined as (Tbw − Tnoisr) / Tnoisr, with Tbw being the benchmark running time at the given bandwidth and Tnoisr its running time in VMware without ISR. As bandwidth drops below 100 Mb/s, the “Baseline” column of Figure 5 shows that slowdown grows slightly: it is about 9.2% at 10 Mb/s, 18.8% at 1 Mb/s, and 31.6% at 100 Kb/s. This slight dependence on bandwidth is due
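The slowdown metric is easy to check against the reported numbers: a one-line Python sketch of the definition above, applied to the 100 Mb/s figures.

```python
def slowdown(t_bw, t_noisr):
    """Slowdown as defined in the text: (Tbw - Tnoisr) / Tnoisr,
    where t_bw is the benchmark time at a given bandwidth and
    t_noisr is the benchmark time in VMware without ISR."""
    return (t_bw - t_noisr) / t_noisr

# (1105 - 1071) / 1071 gives roughly 0.032, the 3.2% reported at 100 Mb/s
```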
