Results: Fully Proactive

With a fully proactive policy, one expects resume latency to be small and bandwidth-independent, because all necessary files are already cached at the resume site. The “Fully Proactive” column of Figure 4 confirms this intuition: resume latency is only 10–11 seconds at all bandwidths. Post-resume ISR execution under a fully proactive policy is indistinguishable from the baseline policy; the user experience, including slowdown, is identical.

Clearly, the fully proactive policy is very attractive from the viewpoint of resume latency and slowdown. What is the minimum travel time for a fully proactive policy to be feasible? This duration corresponds to t2 + t3 in Figure 3. There are two extreme cases to consider.

In the best case, the resume site is known well in advance and its cache has been closely tracking the cache state at the suspend site. All that needs to be transferred is the residual dirty state at suspend, the same state that is transferred to servers during t2. For our experimental configuration, we estimate this state to be about 47 MB at the mid-point of benchmark execution. Using observed throughput values in our prototype, this translates to a minimum best-case travel time of about 45 seconds with a 100 Mb/s network, and about 90 seconds with a 10 Mb/s network. Both of these are credible bandwidths, and both times are plausible minimum walking times between collaborating workers on a university campus, corporate campus, or factory today. At lower bandwidths, we estimate the best-case travel time to be at least 800 seconds (roughly 14 minutes) at 1 Mb/s, and 8000 seconds (roughly 2 hours and 15 minutes) at 100 Kb/s. The 14-minute travel time is shorter than many commutes between home and work, and bandwidths close to 1 Mb/s are available to many homes today. Over time, network infrastructure will improve, but travel times are unlikely to decrease.

In the worst case, the resume site has a completely cold cache and is identified only at the moment of suspend. In that case, t3 must be long enough to transfer the entire state of the VM.
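The best-case arithmetic above can be checked with a quick lower-bound calculation. The sketch below is an illustration, not part of the prototype: it simply divides the quoted 47 MB of residual dirty state by the nominal link rate. The prototype's observed times (about 45 s at 100 Mb/s, about 800 s at 1 Mb/s) are higher because effective throughput falls well below the nominal bandwidth.

```python
# Lower-bound travel-time estimate for the fully proactive policy:
# time to ship the residual dirty state (about 47 MB, per the text)
# at the nominal link rate. Observed prototype times are higher,
# since effective throughput is below the nominal bandwidth.

def transfer_time_s(size_mb: float, bandwidth_mbps: float) -> float:
    """Seconds to move size_mb megabytes over a bandwidth_mbps link."""
    return size_mb * 8 / bandwidth_mbps  # 8 bits per byte

DIRTY_STATE_MB = 47

for bw in (100, 10, 1, 0.1):  # link rates in Mb/s (0.1 = 100 Kb/s)
    t = transfer_time_s(DIRTY_STATE_MB, bw)
    print(f"{bw:>6} Mb/s: at least {t:,.0f} s")
```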

From the baseline resume latencies in Figure 4 and the value of t2 above, we estimate the minimum worst-case travel time to be 2550 seconds (roughly 43 minutes) for a 100 Mb/s network, and 5250 seconds (88 minutes) for a 10 Mb/s network.

Results: Pure Demand-Fetch

In the pure demand-fetch policy, state transfer begins only at resume. However, in contrast to the baseline policy, only a very small amount of state is transferred. In our prototype, this corresponds to the compressed memory image of the VM at suspend (roughly 41 MB). The transfer time for this file is a lower bound on resume latency for pure demand-fetch at any bandwidth. As the “Pure Demand-Fetch” column of Figure 4 shows, resume latency rises from well under a minute at LAN speeds of 100 Mb/s and 10 Mb/s to well over an hour at 100 Kb/s.
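The lower bound mentioned above can be made concrete with a small calculation; the sketch below is an illustration rather than the prototype's code, and the explanation of why measured latencies exceed the floor (per-resume work beyond the raw transfer) is our assumption.

```python
# Floor on pure demand-fetch resume latency: the time to transfer the
# ~41 MB compressed memory image (size quoted in the text) at the
# nominal link rate. Measured latencies sit above this floor.

IMAGE_MB = 41

def resume_floor_s(bandwidth_mbps: float) -> float:
    """Lower bound (seconds) on resume latency at the given link rate."""
    return IMAGE_MB * 8 / bandwidth_mbps

for label, bw in [("100 Mb/s", 100), ("10 Mb/s", 10),
                  ("1 Mb/s", 1), ("100 Kb/s", 0.1)]:
    print(f"{label:>8}: at least {resume_floor_s(bw):,.0f} s")
```

At 100 Kb/s the floor alone is about 3280 seconds, consistent with the measured resume latency of 4301 seconds reported below.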

We expect the slowdown for a pure demand-fetch policy to be very sensitive to workload. The “Pure Demand-Fetch” column of Figure 5 confirms this intuition. The total benchmark time rises from 1071 seconds without ISR to 1160 seconds at 100 Mb/s, a slowdown of about 8.3%. As bandwidth drops, the slowdown rises to 30.1% at 10 Mb/s, 340.9% at 1 Mb/s, and well over an order of magnitude at 100 Kb/s. The slowdowns below 100 Mb/s will undoubtedly be noticeable to a user, but this must be balanced against the potential improvement in user productivity from being able to resume work anywhere, even from unexpected locations.

Results: Demand-Fetch with Lookaside

As discussed in Section 3.4, the use of transportable storage can reduce both the resume latency and the slowdown of a demand-fetch state transfer policy. Our experiments show that these reductions can be substantial.
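The lookaside (LKA) idea referenced here can be sketched as a content-hash check: before fetching a missing chunk of VM state from the distant server, see whether a bit-identical copy already sits on the portable device. The function below is a minimal illustration under assumed conventions (a lookaside directory storing chunks named by their SHA-1 hash); it is not the prototype's actual code or on-disk format.

```python
import hashlib
import tempfile
from pathlib import Path

def fetch_chunk(chunk_hash: str, lookaside_dir: Path, fetch_from_server) -> bytes:
    """Return a chunk's contents, preferring a local lookaside copy.

    The data is re-hashed before use, so a stale or corrupt copy on the
    lookaside device silently falls back to a server fetch.
    """
    candidate = lookaside_dir / chunk_hash
    if candidate.is_file():
        data = candidate.read_bytes()
        if hashlib.sha1(data).hexdigest() == chunk_hash:
            return data  # verified local hit: no network transfer needed
    return fetch_from_server(chunk_hash)  # miss or verification failure

# Tiny demonstration with a throwaway lookaside directory.
demo_dir = Path(tempfile.mkdtemp())
chunk = b"vm-state-chunk"
digest = hashlib.sha1(chunk).hexdigest()
(demo_dir / digest).write_bytes(chunk)

local_hit = fetch_chunk(digest, demo_dir, lambda h: b"<from server>")
server_fallback = fetch_chunk("0" * 40, demo_dir, lambda h: b"<from server>")
```

Because correctness is guarded by the hash check, the lookaside device only ever saves network traffic; it can never introduce stale data.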

The “Dongle LKA” column of Figure 4 presents our results for the case where a dongle is updated with the compressed virtual memory image at suspend and used as a lookaside device at resume. Comparing the “Dongle LKA” and “Pure Demand-Fetch” columns of Figure 4, we see that the improvement is noticeable below 100 Mb/s and dramatic at 100 Kb/s. A resume time of just 12 seconds, rather than 317 seconds (at 1 Mb/s) or 4301 seconds (at 100 Kb/s), can make a world of difference to a user with a few minutes of time in a coffee shop or a waiting room.

To explore the impact of LKA on slowdown, we constructed a DVD with the VM state captured after installation of Windows XP and the Microsoft Office suite, but before any user-specific or benchmark-specific customizations. We used this DVD as a lookaside device for LKA during the running of the benchmark. The “DVD LKA” column of Figure 5 presents our results. Comparing the “DVD LKA” and “Pure Demand-Fetch” columns of Figure 5, we see that benchmark time is reduced at all bandwidths. The reduction is most noticeable at lower bandwidths.

to the mid-1980’s. Both location transparency and client caching in AFS were motivated by this consideration. To quote a 1990 AFS paper [5]: “User mobility is supported: A user can walk up to any workstation and access any file in the shared name space. A user’s workstation is ‘personal’ only in the sense that he owns it.”

This capability falls short of ISR in two ways. First, only persistent state is saved and restored; volatile state, such as the size and placement of windows, is not preserved. Second, the user sees the native operating system and application environment of the client, which in many cases may not be his preferred environment.

ISR bears a close resemblance to process migration. The key difference lies in the level of abstraction at which the two mechanisms are implemented: ISR operates as a hardware-level abstraction, while process migration operates as an OS-level abstraction. In principle, this would seem to put ISR at a disadvantage because hardware state is much larger. In practice, the implementation complexity and software engineering concerns of process migration have proved to be the greater challenge. Although successful implementations of process migration have been demonstrated, no OS in widespread use today supports it as a standard capability.