Sunteți pe pagina 1din 9

Performance of Disk Scheduling Polices

for Desktop Audio Players

Alexander K. Ames
Computer Science Department
University of California at Santa Cruz
sasha@cse.ucsc.edu

very severe. In contrast to this case, the


Abstract result is merely poor audio performance.
This paper examines disk scheduling for the Nonetheless, in order to enable desktop audio
purpose of providing adequate performance to be an enjoyable part of a system and
for a desktop audio player. Workloads are contribute to the users' livelihoods, it should
created from traces of audio data requests and receive some consideration for proper
performance is measured using a disk performance within the greater overall system
simulator. Simulation results are compared that may contain all sorts of varying demands
for different disk scheduling policies available and priorities.
in the simulator. This paper has the following organization.
Section 2 describes background work in disk
scheduling and more recent related work.
1 Introduction
Section 3 presents the implementation needed
to set up a simulated experiment using
The use of spinning magnetic disks for workloads of real-time audio data. Section 4
data storage has introduced some interesting shows the specific experiments and their
problems that have greatly challenged results. Section 4 contains speculation on
computer systems researchers. One such future work in this area, and section 6
problem is finding methods of retrieving or concludes.
storing groups of data requests to certain
locations on a disk given its peculiar
geometry and the nature of having only a 2 Background and Related Work
single head to pass back and forth over the
tracks on the disk containing the blocks of Disk scheduling policy research initially
data. This area of research is known as disk focused on the comparison of several different
scheduling. Different approaches to how one scheduling policies. An early study [Teor72]
may read or write from the disk are called examined a number of policies that
disk scheduling policies or algorithms. researchers would continue to receive
Changing uses for computers have altered consideration for use despite the fact that the
the initial needs for disk usage, showing that characteristics of the disk drives differed then
there is no one correct policy to read data by huge orders of magnitude in physical size,
from a disk. In this paper we will explore the capacity, rotational and arm speeds.
use of different policies in reading audio data Performance tests were performed done
from a simulated disk. Audio data has the between First come First Served (FCFS),
special characteristic of being real-time, i.e. it Shortest Seek Time First (SSTF), SCAN,
must be read at precise intervals to produce a Eschenbach, C-SCAN, and an optimized
continuous playback that can be appreciated version of the latter. From the results they
by the user of the system. This type of concluded that two named variants of
application is in the domain of soft real-time, SCAN/C-SCAN, called LOOK and C-LOOK
as opposed to hard real-time where the provide the best observable performance for a
scheduling requirements are very strict such small number or large number of request
that the penalty for missed deadlines could be respectively.
Fifteen years later Geist and Daniel have considered adding the idea of lookahead
[Geis87] would formulate the notion of the to the CSCAN and SATF, that being
"Continuum" of algorithms between SSFT and respectively reordering some number of
SCAN. They would test their continuum with policies to minimize the sum of the service
some varied values of the continuum times, or to consider a number of requests in
algorithm (exactly how much between SSFT determining the request with the shortest
and SCAN) and increasing arrival times with time. Additionally, they considered some
at first a simulation and later an actual driver different variants of prioritizing reads before
implementation within UNIX (probably writes including an extreme case of waiting
4.2BSD). They show results of their for the read queue to be empty before
continuum set closer to SSFT than SCAN, but performing any writes, but their findings for
outperforming both. Additionally they can these techniques seems somewhat
conclude that all will outperform FCFS, which inconclusive.
at the time of their work still was widely Many investigations have also been
used. conducted in examining the problems of
After that study, a group at Berkeley dealing with scheduling policies for
would do some more important research in multimedia workloads. These studies differ
disk scheduling [Selt90] in which they would from many of the prior ones because these
consider some new algorithms. They note requests have deadlines that can be inferred if
that earlier studies all consider short queue they are not explicitly given as input. These
lengths due to the fact that these studies were requests could be audio or video. Thus the
using much older hardware with more comparison of Earliest Deadline First (EDF), a
limited memory and CPU processing power. policy known more in real-time CPU
Thus these researchers would deal with larger scheduling is added to consideration with the
queues given better hardware, and that other policies. One study [Chan00.1]
would allow them to test algorithms that may [Chan00.2] would try to develop a policy
have seemed impractical before. Two such more efficient than the SCAN-EDF hybrid
new algorithms to be considered were policy by placing requests into seek groups.
Grouped Shortest Time First and Weighted Other groups focused on the specific issue
Shortest Time First. of having a combination of real-time based
The work of another group at Michigan and non real-time workloads to consider in
[Wort94] several years later examined disk determining an acceptable scheduling policy
scheduling, accounting for improvements in to use [Romp98] [Park99]. The first compared
modern disk drive technologies. They major the use of one pass and two pass schedulers
differences they note that leave room for in how the schedulers treated queued
improvement are the use of logical block requests. The second would propose a
number to physical block mappings and the scheduling policy in which non real-time
addition of onboard caching to allow for the workloads are sorted by their relative position
prefetching of data. With these as if they would be retrieved by the next disk
improvements they consider several new head sweep like under the SCAN policy,
scheduling policies, and believe that Aged while the real-time requests should be
Shortest Positioning (w/Cache) Time First serviced in best order to meet their deadlines.
(ASPCTF) will eventually outperform C- A very recent work in this area [Ghan03]
LOOK, although C-LOOK is still performs examines a disk scheduling policy for
competitively as the number of requests multimedia based on using cost, a penalty for
grows very large. Furthermore, these missed deadlines.
researchers developed a very robust simulator Another interesting non real-time study
that will be valued by future researchers and [Iyer01] in disk scheduling looked into the
examined later in this work. problem of deceptive idleness where these
Research in scheduling policies is ongoing. researchers have found that over-scheduling
Some researchers [Thom01.1] [Thom01.2] the disk can actually lead to worse
performance when there are fewer present used for compilation tasks which would not
requests than expected. They propose the have soft real-time deadlines. The disk by
concept of anticipatory scheduling in which default will have no knowledge of deadlines
heuristics are applied to conventional so the disk scheduler would treat each the
scheduling policies that have problems with same or use a set priority if such existed. If in
deceptive idleness. The heuristics will a particularly loaded system, a result of poor
determine whether or not to wait to service a audio performance would make it useless for
service the request based on timings and the user.
access patterns. Additionally, they My ideal experiment would be to use a
acknowledge that perhaps their findings with simulated disk so I could modify the
some modifications may be applicable to soft scheduling policy for comparative analysis
real-time workloads, but they do not have a directly with the audio player. The player
present investigation of actual deceptive would be modified to read from a simulated
idleness affecting such workloads. file on the disk. The disk could then exist
possibly as a separate process, having streams
3 Simulating an audio workload opened to it for read requests from other
running processes. Thus the timings of the
reads would be based on the scheduling
Standard uncompressed CD quality audio
policy of the scheduler as opposed to that
has a sample rate of 44.1 KHz, stereo
fixed by the disk. We would have to consider
channels, and 16 bits per sample. This will
some amount of overhead would factor in to
yield 10MB/min or a throughput of 1.3 Mbps.
the use of a simulation.
This speed is very easily attainable.
Moreover, compressing audio further reduces Additionally, the audio player could be
the need of disk bandwidth. The average modified to write its uncompressed audio
throughput rate for the majority of desktop output to an input stream buffer instead of an
audio is 128Kbps, which is a factor of 10 actual audio device. The stream buffer could
smaller than uncompressed. While people measure if the incoming audio blocks were
may focus on the commonly known benefits making the deadlines for good quality audio,
of compression, those being faster download or missing by various amounts of time. The
rates and more economical usage of local advantage of doing this besides providing a
storage, the throughput rates from disk may metric would be to allow for multiple
not be considered. As disks become bigger instances of the audio player to be measured
and conserving space becomes less importing, while this would be difficult on a single
users may wish to store larger audio files with machine with only one audio output device.
less compression resulting in higher audio To conduct experiments I chose to use the
quality. Additionally faster local network DiskSim disk simulator [Gang02] developed
speeds means that the use of larger audio files by the team of researchers in Michigan
does not use up too much network bandwidth mentioned above. DiskSim supports a
when files are opened on remote machines. number of disks to be simulated, and also
However, when the files are loaded from a different controllers, drivers and busses. The
central file server, we are still bound by the simulator has a very large number of
potential throughput from the disk on that configuration options for each, including the
server. This will become a more costly issue choice of different scheduling policies that I
as the number of audio files loaded from a will examine. Moreover, it provides detailed
single server at once with soft-real time performance results of the simulation
deadlines increases. Also requests from other containing response timing data statistics.
processes become an additional factor in Simulations can be derived from either
many systems where one may not dedicate so synthetic workloads or workloads described
many resources toward maintaining a file in text-based trace files that will be discussed
server solely for desktop audio, for example in somewhat more detail. I have used version
the audio server may be shared with code 2.0 of DiskSim, to which the authors have
added enhanced features beyond the original I needed to then choose another audio
version that they created for their initial player that could be modifiable so I may
research. traces but not have cumbersome components
One unfortunate drawback to using that are not apparent and requiring special
DiskSim is that it does not do perform the one configuration to build. I had just heard that
aspect to process that I require to fully test the other researchers have used mpg123 [Mpg03]
real-time performance of the simulation for testing audio data in soft real-time
directly with the audio player. DiskSim scheduling research [Bana02]. I thought then
models performance based on the request perhaps I could modify it as well if it was also
characteristics of the workload. However, it open source, and fortunately for me it is. The
does not allow you to read or write actual code was not as well organized as the code in
data from the disk. The authors claim that it Zinf. I had to make one somewhat significant
would be possible to modify DiskSim to do so modification to have it deliver the trace data
perhaps in a larger system simulation. While the way I wanted, and that is to place the
such modifications sound feasible, the work timing data with the individual reads.
seemed much too ambitious for the scope of The player supported to read modes:
this work, and thus I would need to try a stream read and memory mapped I/O. The
different approach. default for the Linux based build was to use
Perhaps I could use DiskSim as the authors memory mapped I/O. I found it necessary to
have stated they intended it to be used: to make my modification to the I/O stream read
generate the performance statistics based on sections of code, and so I had to build a
workload traces. I could gather some different version for Linux using that. If one
information then about how the audio players were to compare actual performances of the
used the disk by providing the simulator with two versions they many find differences in
the timings of when the audio player would results due to such differences in how they
require additional blocks of data and the read from the disk.
number of blocks for each. All this would The trace was not delivering raw
mean would be modifying an audio player to uncompressed audio data but smaller chunks
trace the audio file reads and building input of compressed data that would then be
traces to the simulator from those. buffered after each read. This obviously
My original plan was to use the audio requires not even close a to real-time read of
application Zinf [Zinf02] to create the audio data from the disks. Every other read is a 4
data read timings. Zinf was a good choice bytes followed by a larger chunk read.
because: 1) it is a very popular desktop player Chunk sizes varied by the amount of
for Linux. This was due to good archival compression applied to the audio.
support for music collections. 2) I had prior To properly use the traces that audio
knowledge that its open source code was player would generate with the simulator, I
easily modifiable since an associate of mine was required to deliver the proper ASCII
had done it. When I tried to verify this for trace format specified for DiskSim. The ASCII
myself, I would notice the same. The code format requires one provide requests
was very logically structured with the data sequentially with the time of the request since
reading occurring in one single place, and this the start of the trace followed by the device
would be trivial to modify. number, the block number, number of blocks
Unfortunately, because Zinf has so many and request type (read or write). In all cases I
complex components that would needed to have chosen to read from a single device, thus
link with its main body of code to make it leaving block number and the number of
such a robust desktop player, I found myself blocks to be read.
struggling with building my own version and The complexity of this task increased from
made the decision to indefinitely postpone simply outputting the properly formatted
my use of Zinf. trace from the player instance since I wished
to consider numerous requests coming from
multiple audio players simultaneously. For compression rate: 32 Kbps, 128 Kbps, and 320
this reason I chose to develop a small Kbps. I wanted to use a fully uncompressed, i.e.
program to convert from the raw trace format CD quality 16 bit audio file, but I soon discovered
to DiskSim’s, as that would not be possible that mpg123 wouldn't support that format. The
directly with a single player instance running. workloads would then consist of groups of
requests from 1 to 10 files of each compression
To create a sense of somewhat varied
rate with the data blocks in either ascending or
temporal distribution of multiple audio
random order.
player usages on different hosts like one may
The audio tests ran on a DELL system with a
find on an actual network with a shared file
2.24GHz Pentium 4 processor with 512MB of
server, the beginning times of each file read memory running Red Hat Linux version 8. The
were staggered by a random value between 0 files themselves would be read from the file server
and 30 seconds after the last instance’s start storing the user accounts for the local network.
time. For the experiment, I chose to use the simulation
How much if any new data that needs to of the HP_C2249A, which is the default for
be read to fulfill the requested amount DiskSim. This disk is tiny in capacity by the
determines the number of blocks specified to standards of the moment (approx 1 GB), but
be read in the trace file. The means one new suitable for a comparative analysis as it is 5400
block would need to be read only if the data rpm like many drives still on the market. The
from the previously read block were disk has 2054864 blocks on 2051 cylinders and
exhausted or two if the chunk of data 13 surfaces. It is clocked with a 1.10 average
happens to be larger than a single block in seek time (m). It can perform read ahead
size. This is separate from the audio buffering caching of up to 128 blocks.
that is performed by the player. Groups of ten sample runs with the
Block numbers for each request in the number of files providing requests increasing
workload would either be random or in each were run in the simulator with five
sequential. Though not very realistic, this different scheduling policies. I chose to
represents the two possible extremes of data compare FCFS, SSTF, C-LOOK, Weighted
organization for the files. In cases where a shortest Position Time First (WPTF, similar to
workload generated contained requests from WSTF as mentioned earlier) and ASPCTF as
various files (or perhaps the same file in tested by DiskSim's authors in their paper. I
different locations on the disk) each would decided to keep FCFS, although already
have a random starting block number but known for having the poorest performance,
increase linearly from that point. The locality for comparative purposes as my predecessors
of these starting points is not checked, have done.
meaning it may be possible to generate a In considering acceptable disk
workload with overlapping blocks. performance for each compression rate, we
Nonetheless there would still be enough of should be acquainted with how soft real-time
difference in locality between requested deadlines may vary for increasing file sizes
blocks, and that I believe to be sufficient for (meaning decreasing amounts of compression
the purposes of my experimentation. With or higher audio quality). 32Kbps would
these three parameters filled, I would be able require 8 512KB blocks per second or one per
to have various trace scripts runnable within 125 ms. 128Kbps would require one per 31 ms
DiskSim. and 320Kbs would be every 12.5 ms but more
than one block may be read at once, and I
4 Experimentation and Results have calculated the average to be 2.041
meaning 25.5 ms response time needed. Thus
My samples from which to generate the as lower compression yields larger files that
experimental workloads to run the simulation were will increase the volume of requests on the
files of the same approximately 4 minute long disk scheduler, the tightness of the deadlines
song. I chose 3 compressed versions as they have also increases. This means even greater
different streaming requirements forcing more or performance requirements.
less work on the disk depending on the
FCFS Response Times SSTF Response Times
100000 200
32Kbs

Response Time (ms)


Response Time (ms)

10000 32Kbps
nonrandom 150 nonrandom
128Kps 128Kbps
1000
nonrandom 100 nonrandom
320 Kbps 320Kbps
100
nonrandom nonrandom
32 Kbps 50 32Kbps
10 random
random
0
1
1 3 5 7 9 1 3 5 7 9

# of Req Files # of Req Files

Figure 1 Figure 2

WPTF Response Times


C-LOOK Response Times

200 100000
32Kbs

Response Time (ms)


Response Time (ms)

32Kbs 10000 nonrandom


150 nonrandom 128Kps
128Kps 1000 nonrandom
100 nonrandom 320 Kbps
320 Kbps 100
nonrandom
50 nonrandom 32 Kbps
32 Kbps 10
random
random 1
0
1 3 5 7 9 1 3 5 7 9

# of Req Files # of Req Files

Figure 4
Figure 3

ASPCTF Response Times initially increases, then level off with some
120 variation, and finally increases as the
100 32Kbs requesting file count increases to 10.
Response Time (ms)

nonrandom
80 128Kps WTPF seems to exhibit the most interesting
nonrandom
60 320 Kbps performance behavior. It appears to degrade
40 nonrandom
32 Kbps
in each group of trials like FCFS, but as the
20 random requests increase for the 320Kbps trials it
0 actually shows improved performance. It is
1 3 5 7 9 uncertain if the improvement of WPTF would
# of Req Files eventually reach performance comparable to
the other policies that generally have
Figure 5
exhibited better performance as there is
Figure 1 shows the experimental trials apparently close to an order of magnitude
using the FCFS policy. We can see that there difference between them at the greatest
seems to be some points at which the number of requests, but perhaps this would
performance degrades, however it is be worthy of further investigation.
interesting to note that for some of the 320Kbs
Figures 6-8 show comparisons of the
trials, the performance improves for a number
various policies at the different rates of
of data points before degrading again. This is
compression. We can see that FCFS and
probably due to the varying locality of blocks
WPTF performance will degrade like in the
within the different data sets.
other examples as the number of requesting
SSFT, SCAN, and ASPCTF show very files increases, while the others retain close
similar performances in each of their trials values, with ASPCTF having the generally
(Figures 2, 3 & 5). All seem to increase their shortest response times.
response time as the number of requests
320Kbps Sample Performance 128Kbps Sample Performance

100000 100000
C-LOOK C-LOOK

Response Time (ms)


10000
Response Time (ms)

10000
SSFT SSFT
1000 1000 FCFS
FCFS
WPTF
WPTF 100
100 ASPCTF
ASPCTF
10
10
1
1
1 3 5 7 9
1 3 5 7 9
# of Req File s # of Req Files

Figure 6
Figure 7
32Kbps Sample Performance
32Kbps random block performance
60
100000
50 C-LOOK
Response Time (ms)

C-LOOK

Response Time (ms)


40 SSFT 10000
SSFT
FCFS
30 1000 FCFS
WPTF
20 WPTF
ASPCTF 100
ASPCTF
10
10
0
1 3 5 7 9 1
# of Req Files 1 3 5 7 9
# of Req Files

Figure 8 Figure 9
My findings in this area are really only the
Figure 9 is a comparison of the random block beginning of additional work that could be
requests just showing that FCFS and WPTF done in this area. The DiskSim environment
seem to degrade in orders of magnitude offers many more disks, configuration
worse than the others which exhibit close parameters and 17 additional scheduling
performance much like the cases with the policies that could be used to expand the
linearly increasing blocks. experiments I have already conducted. The
Now given these times and our estimated experiment itself could be made slightly more
soft real-time deadlines mentioned, we can complex by running the simulator with
see that in this model for 128Kbps our audio workloads containing data from files that
performance may become questionable when have different compression rates rather than
we have more than seven instances placing all the same in a given workload. Moreover,
requests, and with 320Kbps files open, we one could attempt to generate a block
may at greater than two. Of course, distribution that is in between being
additional improvements are possible with completely linear and completely random.
advances in hardware, such as faster rotation It might be worthwhile to attempt to create
and seek times within the disk itself, but a test environment like what I mentioned
shown in this demonstration that ASPCTF has early in section 3 in which data could be read
slightly better performance than the others, it from the simulator directly to the player and
may be the best policy to use to optimize then in turn writes to a simulated audio
request times for this type of data. device to detect if deadlines are being missed.
This is important since aggressive buffering of
the data may yield adequate performance at
5 Future Work
the cost of some overhead in a startup delay.
The overhead may be noticeable to the user
but he or she may consider that a fair tradeoff see value in continued experimentation such
to suffering poor real-time performance. that we can continue to meet changing needs.
Also to be considered might be more In this case we may find that the demands
complex data sources with differing real-time from desktop audio may continue to grow,
requirements. Video should have differing but we already will have innovative solutions
disk usage behavior, as the way in which to approaching the problem.
video compression works leads to less
consistent block sizes over a constant period Acknowledgements
of time. A further, much more complex
workload to be considered would be real-time
My thanks go out Scott Brandt for his initial
audio mixing. These may combine multiple
suggestion in pursuing this particular topic
non-repeating audio streams, random single
and guidance, to Scott Banachowski and Feng
events, and repeated events of varying length.
Wang for their time and assistance.
Furthermore, various events may require
differing amounts of CPU utilization at
different times given particular requirements References
for real-time signal processing added to the
mix. [Bana02] Scott Banachowski and Scott
Perhaps most importantly, the ultimate Brandt. "The BEST Scheduler for Integrated
goal of doing these comparative analyses is to Processing of Best-effort and Soft Real-time
work towards formulating a new scheduling Processes." Multimedia Computing and
policy that can best satisfy the needs of the Networking 2002 (MMCN '02), January 2002.
workloads that we set forth. It could be
possible to consider a related area in [Chan00.1] Chang, Ray-I, Shih, Wei-Kuan
developing such a new policy. One such area and Chang, Ruei-Chuan. ``Multimedia Real-
is real-time CPU scheduling. Consider a Time Disk Scheduling for Hybrid
dynamic CPU scheduler implementation to Local/Global Seek-Optimizing Approaches.''
benefit soft-real time application that does Parallel and Distributed Systems, 2000.
period detection and has heuristics to guess if Proceedings. Seventh International Conference.
the periodic requests are due to a soft-real July 2000. pp. 323-330.
time deadline [Bana02]. Perhaps this could be
applied to disk scheduling by combining with
[Chan00.2] Chang, Hsung-Pin, et al.
an existing policy for disk scheduling like
``Enlarged-Maximum-Scannable-Groups for
SCAN or SSFT. This would similar to
Real-Time Disk Scheduling in a Multimedia
combining concepts from SCAN and EDF that
System.'' Computer Software and Applications
gave us SCAN-EDF.
Conference, 2000. COMPSAC 2000. The 24th
Annual International. Oct. 2000 pp. 383-388.
6 Conclusion
[Gang02] Ganger, Greg, Worthington, Bruce
We may already know that the problem of and Patt, Yale. The DiskSim Simulation
disk scheduling has been shown to be Environment.
reducible to the NP-Complete problem of the http://www.ece.cmu.edu/~ganger/disksim
traveling salesperson, thus we presently may / 05/09/2002 02:21:47 PM
be unable to find an optimal solution for our
problem that will run in reasonable time.
[Geis87] Geist, Robert and Daniel, Stephen.
However, there exist many solutions today --
``A Continuum of Disk Scheduling
our disk scheduling policies -- that may
Algorithms.'' ACM Transactions on Computer
perform sufficiently enough to meet our
Systems. Vol. 5, No 1, February 1987 pp 77-92.
current needs. Given that we may notice
some differences in performance when we
test the various policies, we should continue
[Ghan03] Ghandeharizadeh, Shahram,
Huang, LiGuo and Kamel, Ibrahim. ``A Cost [Wort94] Worthington, Bruce L., Ganger,
Driven Scheduling Algorithm for Multimedia Gregory R., and Patt, Yale N. ``Scheduling
Object Retrieval.'' IEEE Transactions on Algorithms for Modern Disk Drives.'' Santa
Multimedia. Vol. 5, No. 2, June 2003. pp. 186- Clara: Sigmetrics (ACM) 1994. pp 241-251.
196.
[Mpg03] http://www.mpg123.com/
[Iyer01] Iyer, Sitaram and Drushel, Peter. Mpg123, Fast MP3 Player for Linux and UNIX
``Anticipatory Scheduling: A disk scheduling systems. Official site, 2003.
framework to overcome deceptive idleness in
synchronous I/O.'' ACM Operating Systems
[Zinf02] http://www.zinf.com/ The Zinf
Review (Acm), vol.35, no.5, Dec. 2001. 117-130.
audio player official site.

[Park99] Park, Eunjeong et al. ``Dynamic


Disk Scheduling for Multimedia Storage
Servers.'' TENCON 99. Proceedings of the IEEE
Region 10 Conference. Volume: 2, 15-17 Sept.
1999 pp: 1483-1486 vol. 2.

[Romp98] Rompogiannakis, V. et al. ``Disk


Scheduling for Mixed-Media Workloads in a
Multimedia Server.'' Proceedings of the sixth
ACM international conference on Multimedia
September 1998. pp 297-302.

[Selt90] Seltzer, Margo, Chen, Peter, and


Ousterhost, John. ``Disk Scheduling
Revisited.'' Washington: Winter proceedings of
USENIX. January, 1990.

[Teor72] Teorey, Toby J. and Pinkerton, Tad


B. ``A Comparative Analysis of Disk
Scheduling Policies.'' Communication of the
ACM. Vol. 15, No.3 March, 1972. pp. 177-184.

[Thom01.1] Thomasian, Alexander and Lui,


Chang. ``Some new Disk Scheduling Policies
and Their performance.'' ACM SIGMETRICS
Performance Evaluation Review , Proceedings
of the 2002 ACM SIGMETRICS international
conference on Measurement and modeling of
computer systems, Volume 30 Issue 1 June
2002 pp. 266-7.

[Thom01.2] Thomasian, Alexander and Lui,


Chang. ``Disk Scheduling Policies with
Lookahead'' ACM SIGMETRICS Performance
Evaluation Review. Volume 30 Issue 2
September 2002. pp. 31-40.