Abstract
A TCP performance evaluation tool for the network simulator NS2 has been developed. This
document describes how to install and use this tool.
1 Introduction
Researchers frequently use the network simulator NS2 to evaluate the performance of their protocols
in the early stages of design. One particular area of recent interest is congestion control protocols
(a.k.a. TCP alternatives) for high-speed, long-delay networks. There is significant overlap among
(but no community-agreed set of) the topologies, traffic, and metrics used by many researchers
in the evaluation of TCP alternatives: effort could be saved by starting research from an existing
framework. We therefore developed a TCP performance evaluation tool. The tool includes several
typical topologies and traffic models; it measures some of the most important metrics commonly
used in TCP evaluation; and it can automatically generate simulation statistics and graphs ready
for inclusion in LaTeX and HTML documents. The tool is easy to use and provides an extensible
open-source framework.
This tool can be used not only for high-speed TCP protocols, but also for other proposed changes
to congestion control mechanisms, such as adding ECN to SYN/ACK packets, making small transfers
more robust, changing RTO estimation, and distinguishing between loss due to congestion and loss
due to corruption.
This simulation tool is not intended to be final. Instead, it serves as a starting point. We invite
community members to contribute to the project by helping to extend this tool toward a
widely accepted, well-defined set of NS2 TCP evaluation benchmarks.
Below we describe how to install and use this tool for TCP performance evaluation.
2 Installation
This tool builds upon a set of previous work. There are two ways to install it: (1) install
all the required components one by one, or (2) apply an “all-in-one” patch that includes all the
needed components. We recommend approach (2), but first describe approach (1) for clarity.
2.1 Install the Components One-by-One
First, you need to install NS2. Our tool has been tested with ns-2.29, ns-2.30, and ns-2.31, but we
recommend the most recent version. Suppose you install the ns-allinone-2.31 package (available at
http://www.isi.edu/nsnam/ns/ns-build.html) under the directory $HOME/ns-allinone-2.31.
Second, you need to install the RPI NS2 Graphing and Statistics package from http://www.ecse.
rpi.edu/~harrisod/graph.html, which provides a set of classes for generating commonly used
graphs and gathering important statistics.
Third, you need the PackMime-HTTP Web Traffic Generator from http://dirt.cs.unc.edu/
packmime/. This package was implemented in NS2 by researchers at UNC-Chapel Hill, based on
a model developed by the Internet Traffic Research group at Bell Labs. It generates synthetic web
traffic in NS2 based on recent Internet traffic traces.
Fourth, to test the high-speed TCP protocols you have to install them, e.g.,
* FAST, designed by S. Low, C. Jin, and D. Wei, implemented in NS2 by T. Cui and L.
Andrew, downloadable from http://www.cubinlab.ee.mu.oz.au/ns2fasttcp;
The ns-allinone-2.31 distribution includes other TCP protocols such as Reno, SACK, HSTCP (S.
Floyd), and XCP (D. Katabi and M. Handley). You can also add other protocols according to
your needs.
Finally, install our tool from http://labs.nec.com.cn/tcpeval.htm. Just unpack the package
to the NS2 root directory $HOME/ns-allinone-2.31.
> cd $HOME/ns-allinone-2.31
> tar zxvf tcpeval-0.1.tar.gz
This creates a directory called eval under $HOME/ns-allinone-2.31. The eval directory contains
all the scripts and documents of our tool. To use the tool, an environment variable TCPEVAL
must be defined; you can define it in the file $HOME/.bash_profile to avoid setting it repeatedly.
Now rebuild the NS2 package. First, configure the environment settings: in the file
$HOME/.bash_profile, set NS to the directory containing the NS2 package, NSVER to the
NS2 version, and TCPEVAL to the directory of the TCP evaluation tool scripts. After rebuilding,
you can try the example simulations provided with the tool.
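For example, the $HOME/.bash_profile settings might look like the following. The exact paths are an assumption based on the ns-allinone-2.31 layout used above, with the tool unpacked into the eval directory; adjust them to your installation.

```shell
# Assumed locations; adjust to match where you unpacked the packages.
export NS=$HOME/ns-allinone-2.31/ns-2.31      # directory containing the NS2 package
export NSVER=2.31                             # NS2 version
export TCPEVAL=$HOME/ns-allinone-2.31/eval    # TCP evaluation tool directory
```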
Then configure the RPI Graphing and Statistics package and rebuild NS2.
> ns $HOME/ns-allinone-2.31/ns-2.31/tcl/rpi/configure.tcl
> cd $HOME/ns-allinone-2.31/ns-2.31
> ./configure
> make depend
> make
Figure 1: The architecture of our tool.
3 Tool Components
Figure 1 shows the architecture of our tool. It primarily comprises the following components:
network topologies, traffic models, and performance evaluation metrics, plus the result statistics
and graphs generated after a simulation is done.
Figure 2: A dumb-bell topology (Src and Sink nodes attach through two access routers to a single
bottleneck link between the core routers).
3.2 Traffic Models
The tool attempts to apply the typical traffic settings. The applications involved include four
common traffic types.
• Throughput
For long-lived FTP traffic, the tool measures the transmitted traffic during specified intervals,
in bits per second.
For short-lived web traffic, the PackMime HTTP model collects request/response goodput
and response time to measure web traffic performance.
Voice and video traffic are different from the above: their performance is affected by packet
delay, delay jitter, and packet loss rate as well as by goodput. Their goodput is therefore
measured as the rate of transmitted packets, excluding lost packets and packets delayed beyond
a predefined delay threshold.
• Delay
We use the bottleneck queue size as an indication of queuing delay at the bottlenecks. Besides
mean and max/min queue size statistics, we also use percentile queue sizes to characterize the
typical queue length.
FTP traffic is not affected much by packet transmission delay.
For web traffic, we report the response time, defined as the duration between the client sending
a request and receiving the response from the server.
For streaming and interactive traffic, packet delay is a one-way measurement, defined as the
duration between a packet being sent and received at the end nodes.
• Jitter
Delay jitter is quite important for delay-sensitive traffic, such as voice and video. Large jitter
requires a much larger buffer at the receiver side and may cause high loss rates under strict
delay requirements. We use the standard deviation of packet delay to show jitter for interactive
and streaming traffic.
• Loss Rate
To obtain network statistics, we measure the bottleneck queue loss rate.
We do not collect loss rates for FTP and web traffic because they are less affected by this
metric.
For interactive and streaming traffic, high packet loss rates prevent the receiver from decoding
packets. In this tool, loss rates are measured during specified intervals; a received packet is
also counted as lost if its delay exceeds a predefined threshold.
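The jitter metric above can be sketched as the standard deviation of per-packet one-way delays. This is a generic illustration of the formula, not the tool's actual code; the proc name is ours.

```tcl
# Population standard deviation of per-packet one-way delays (ms).
proc delay_stddev {delays} {
    set n [llength $delays]
    set sum 0.0
    foreach d $delays { set sum [expr {$sum + $d}] }
    set mean [expr {$sum / $n}]
    set var 0.0
    foreach d $delays {
        set var [expr {$var + ($d - $mean) * ($d - $mean)}]
    }
    return [expr {sqrt($var / $n)}]
}
# e.g. delay_stddev {20 20 20} -> 0.0 (no jitter)
```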
3.3.3 Fairness and Convergence
In this tool, the fairness measurement uses Jain's fairness index to measure the fair sharing of
bandwidth among end-to-end FTP flows that traverse the same route.
Convergence time is the time it takes multiple flows to move from an unfair share of the link
bandwidth to a fair state. It is quite important for environments with high-bandwidth, long-delay
flows. This tool includes scenarios to test convergence performance.
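For reference, Jain's fairness index for n flow throughputs x_1..x_n is (sum x_i)^2 / (n * sum x_i^2); it equals 1.0 when all flows share equally. A generic Tcl sketch (not the tool's actual implementation; the proc name is ours):

```tcl
# Jain's fairness index over a list of per-flow throughputs.
proc jain_index {rates} {
    set n [llength $rates]
    set sum 0.0
    set sumsq 0.0
    foreach x $rates {
        set sum   [expr {$sum + $x}]
        set sumsq [expr {$sumsq + $x * $x}]
    }
    return [expr {($sum * $sum) / ($n * $sumsq)}]
}
# e.g. jain_index {10 10 10 10} -> 1.0 (perfectly fair)
```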
4 Usage Details
Before using this tool, you should have some experience with NS2. All the examples shown below
are commonly used in TCP performance evaluation.
The main body of this tool consists of three files in the $HOME/ns-allinone-2.31/eval/tcl
directory: create_topology.tcl, create_traffic.tcl, and create_graph.tcl. As their names
indicate, create_topology.tcl implements the three common network topologies discussed in
Section 3.1, create_traffic.tcl defines the traffic model parameters used in the simulation (see
Section 3.2), and create_graph.tcl generates simulation statistics (see Section 3.3.1) and plots
graphs at the end of each simulation.
Three example scripts are provided in the $HOME/ns-allinone-2.31/eval/ex directory:
test_dumb_bell.tcl, test_parking_lot.tcl, and test_network_1.tcl, one for each of the
topologies discussed above. Their parameter definitions are in def_dumb_bell.tcl,
def_parking_lot.tcl, and def_network_1.tcl, respectively.
Here, we take the dumb-bell topology simulation as an example; simulations for the other topologies
are similar.
> cd $TCPEVAL/ex
> ns test_dumb_bell.tcl
This runs the dumb-bell topology simulation with the default parameters defined in
def_dumb_bell.tcl. The results can be reviewed by opening /tmp/index100.html.
The output format is explained in Section 4.4 below. To write your own examples, incorporate
the following code into your Tcl script:
source $TCPEVAL/tcl/create_topology.tcl
source $TCPEVAL/tcl/create_traffic.tcl
source $TCPEVAL/tcl/create_graph.tcl
For example, to add a new TCP variant such as VCP, download its NS2 implementation from
http://networks.ecse.rpi.edu/~xiay/vcp.html
Then the configuration parameters for VCP need to be set in the procedure get_tcp_params of
create_topology.tcl.
if { $scheme == "VCP" } {
    set SRC TCP/Reno/VcpSrc      ;# VCP source
    set SINK VcpSink             ;# VCP sink
    set QUEUE DropTail2/VcpQueue ;# bottleneck queue
    ...
}
To simplify this process, the all-in-one patch includes the implementations and settings of six
other TCP variants: STCP, HTCP, BIC, CUBIC, FAST, and VCP. Refer to Section 2.1 for their
implementations and typical settings.
in the simulation, such as the number of FTP flows, which high-speed TCP protocol the FTP flows
employ, whether AQM is used, and how long the simulation runs. Finally, you choose the performance
statistics to be generated (e.g., bottleneck utilization, packet loss rate) and the graphs to be
displayed (e.g., queue length variation over time) after the simulation is done. The meaning of
each item in the file is explained there.
For example, in the topology settings, per sets the static packet error rate on the bottlenecks.
Setting the packet error rate to 0.01 means that roughly one of every 100 packets sent on the
link is corrupted; if it is set to 0, no packet errors occur on the link.
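The corresponding line in the parameter file would look like the following sketch, assuming per is set as a plain Tcl variable:

```tcl
# Static packet error rate on the bottleneck links:
# 0.01 corrupts roughly 1 in 100 packets; 0 disables packet errors.
set per 0.01
```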
Currently, there are four traffic models in this tool: long-lived FTP, short-lived web, interactive
voice, and streaming video; these are explained in Section 3.2. For example, to use XCP for the
FTP traffic, simply change the TCP scheme setting in def_dumb_bell.tcl.
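A sketch of the change, assuming the scheme is selected through the same scheme variable tested in get_tcp_params:

```tcl
# Select the TCP variant for the FTP flows; scheme names follow
# get_tcp_params in create_topology.tcl.
set scheme XCP
```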
To generate the bottleneck statistics and graphs when the simulation finishes, set the
corresponding flag in def_dumb_bell.tcl to 1; if it is set to 0, graphs of bottleneck statistics
are not shown after the simulation. Other parameters can be set in a similar way.
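A sketch of the corresponding setting, assuming show_bottleneck_stats is a plain Tcl variable in def_dumb_bell.tcl:

```tcl
# 1 = generate bottleneck statistics and graphs after the run; 0 = skip them.
set show_bottleneck_stats 1
```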
Here $show_bottleneck_stats is set in def_dumb_bell.tcl as discussed above. The following
command then runs a dumb-bell simulation.
> ns test_dumb_bell.tcl
4.4 Example 4: Multiple Output Formats
All the simulation results are stored in /tmp/expX, where X is the simulation sequence number.
The data sub-directory contains the trace files and plot scripts used in the simulation; the
figure sub-directory stores the generated graphs. There are three kinds of output formats for
the simulation results: text, HTML, and EPS. The selection is controlled by the def_dumb_bell.tcl
settings, roughly as follows:
if { $verbose == 0 } {
    # output text statistics
}
if { $verbose == 1 && $html_index != -1 } {
    # output indexN.html in the /tmp directory,
    # where N is html_index in def_dumb_bell.tcl
}
# in either case, output EPS graphs
Table 1: Text Output Columns and Meanings for the Dumb-Bell and Parking-Lot Topologies
1. TCP scheme                          2. Number of bottlenecks
3. Bottleneck bandwidth (Mbps)         4. Rttp (ms)
5. Num. of forward FTP flows           6. Num. of reverse FTP flows
7. HTTP generation rate (/s)           8. Num. of voice flows
9. Num. of forward streaming flows     10. Num. of reverse streaming flows
11. Bottleneck no.                     12. Bottleneck utilization
13. Mean bottleneck queue length       14. Bottleneck buffer size
15. Percent of mean queue length       16. Percent of max queue length
17. Num. of dropped packets            18. Packet drop rate
(columns 11-18 repeat once per bottleneck, followed by the elapsed time)
Table 2: Text Output Columns and Meanings for the Network Topology
1. TCP scheme                               2. Number of transit nodes
3. Bandwidth of core links (Mbps)           4. Delay of core links (ms)
5. Bandwidth of transit links (Mbps)        6. Delay of transit links (ms)
7. Bandwidth of stub links (Mbps)           8. Delay of stub links (ms)
9. Num. of FTP flows                        10. HTTP generation rate (/s)
11. Num. of voice flows                     12. Num. of streaming flows
13. Core link no.                           14. Core link utilization
15. Mean core link queue length             16. Core link buffer size
17. Percent of mean core link queue length  18. Percent of max core link queue length
19. Num. of core link dropped packets       20. Packet drop rate in the core link
(columns 13-20 repeat once per core link, followed by the same statistics for the transit
links and, finally, the elapsed time)
[Figure: forward bottleneck no. 1 utilization vs. time (0-100 s, interval = 1.0 s), and
throughput (bps) vs. time over the same period.]
The total simulation time of this scenario is 1000 seconds. It has 5 reverse FTP flows that start
at the beginning of the simulation, and 5 forward flows, one starting every 200 seconds. When the
simulation is done, the forward FTP throughput shown in Figure 8 illustrates the convergence speed
of the employed XCP scheme (with the default parameters in def_dumb_bell.tcl).
[Figure 8: per-flow throughput (bps) of flow0-flow4 over the 1000-second simulation
(interval = 1.0 s).]
[Figure 9: bottleneck link utilization (%) vs. bandwidth (Mbps, log scale) for RENO, SACK,
HSTCP, HTCP, STCP, BICTCP, and CUBIC (each with RED), plus XCP and VCP.]
Figure 10: Average bottleneck queue length variation when capacity changes
[Figure 11: packet drop rate (%) vs. bandwidth (Mbps, log scale) for the same schemes.]
When the simulation finishes, a file named myreport.pdf is generated, which includes the
comparison graphs. For example, when the bottleneck capacity varies from 1 Mbps to 1000 Mbps
(the other parameters are fixed), Figures 9–11 illustrate how the bottleneck link utilization, the
average bottleneck queue length and the packet drop rate change accordingly.
In addition, there are many other parameters in def_dumb_bell.tcl that users can set according
to their needs. The parking-lot and simple-network simulations are similar to the dumb-bell
simulation.
5 Acknowledgements
The authors would like to thank Dr. Sally Floyd of ICIR for her encouragement and much valuable
advice. Part of David Harrison and Yong Xia's work was conducted while they were PhD
students at Rensselaer Polytechnic Institute (RPI). They thank Prof. Shivkumar Kalyanaraman
of RPI for his support and guidance.