Study Guide
Exam 199-01 for RiOS v5.0
June, 2009
Version 2.0
Table of Contents
Preface ..................................................................................................................................................................................................................... 4
Certification Overview ............................................................................................................................................................................................ 4
Benefits of Certification......................................................................................................................................................................................... 4
Exam Information.................................................................................................................................................................................................. 4
Certification Checklist ........................................................................................................................................................................................... 5
Recommended Resources for Study.................................................................................................................................................................... 5
RIVERBED CERTIFIED SOLUTIONS PROFESSIONAL STUDY GUIDE .............................................................................................................. 7
I. General Knowledge ............................................................................................................................................................................................. 7
Optimizations Performed by RiOS........................................................................................................................................................................ 7
TCP/IP ................................................................................................................................................................................................................ 12
Common Ports.................................................................................................................................................................................................... 12
RiOS Auto-discovery Process ............................................................................................................................................................................ 13
Enhanced Auto-Discovery Process .................................................................................................................................................................... 14
Connection Pooling............................................................................................................................................................................................. 15
In-path Rules ...................................................................................................................................................................................................... 15
Peering Rules ..................................................................................................................................................................................................... 16
Steelhead Appliance Models and Capabilities ................................................................................................................................................... 17
II. Deployment ....................................................................................................................................................................................................... 19
In-path................................................................................................................................................................................................................. 20
Out-of-Band (OOB) Splice .................................................................................................................................................................................. 21
Virtual In-path ..................................................................................................................................................................................................... 23
Policy-Based Routing (PBR)............................................................................................................................................................................... 23
WCCP Deployments........................................................................................................................................................................................... 24
Advanced WCCP Configuration ......................................................................................................................................................................... 27
Server-Side Out-of-Path Deployments ............................................................................................................................................................... 28
Asymmetric Route Detection .............................................................................................................................................................................. 30
Connection Forwarding....................................................................................................................................................................................... 31
Simplified Routing (SR) ...................................................................................................................................................................................... 32
Data Store Synchronization ................................................................................................................................................................................ 33
CIFS Prepopulation ............................................................................................................................................................................................ 33
Authentication and Authorization........................................................................................................................................................................ 33
SSL ..................................................................................................................................................................................................................... 34
Central Management Console (CMC) ................................................................................................................................................................ 35
Steelhead Mobile Solution (Steelhead Mobile Controller & Steelhead Mobile Client) ....................................................................................... 36
Interceptor Appliance.......................................................................................................................................................................................... 37
III. Features ............................................................................................................................................................................................................ 40
Feature Licensing ............................................................................................................................................................................................... 40
HighSpeed TCP (HSTCP) .................................................................................................................................................................................. 40
MX-TCP .............................................................................................................................................................................................................. 42
Quality of Service................................................................................................................................................................................................ 42
PFS (Proxy File Service) Deployments .............................................................................................................................................................. 45
NetFlow............................................................................................................................................................................................................... 51
IPSec .................................................................................................................................................................................................................. 53
Operation on VLAN Tagged Links...................................................................................................................................................................... 53
IV. Troubleshooting .............................................................................................................................................................................................. 54
Common Deployment Issues.............................................................................................................................................................................. 54
Reporting and Monitoring ................................................................................................................................................................................... 56
Troubleshooting Best Practices.......................................................................................................................................................................... 59
Preface
This Riverbed Certification Study Guide is intended for anyone who wants to become certified in
the Riverbed Steelhead products and Riverbed Optimization System (RiOS). The Riverbed
Certified Solutions Professional (RCSP) program is designed to validate the skills required of
technical professionals who work in the implementation of Riverbed products.
This study guide provides a combination of theory and practical experience needed for a general
understanding of the subject matter. It also provides sample questions that will help in the
evaluation of personal progress and provide familiarity with the types of questions that will be
encountered in the exam.
This publication does not replace practical experience, nor is it designed to be a stand-alone
guide for any subject. Instead, it is an effective tool that, when combined with education
activities and experience, can be a very useful preparation guide for the exam.
Certification Overview
The Riverbed Certified Solutions Professional certificate is granted to individuals who
demonstrate advanced knowledge and experience with the RiOS product suite. The typical RCSP
will have taken a Riverbed approved training class such as the Steelhead Appliance Deployment
& Management course in addition to having hands-on experience in performing deployment,
troubleshooting, and maintenance of RiOS products in small, medium, and large organizations.
While there are no set requirements prior to taking the exam, candidates who have taken a
Riverbed authorized training class and have at least six months of hands-on experience with
RiOS products have a significantly higher chance of receiving the certification. We would like to
emphasize that solely taking the class will not adequately prepare you for the exam.
To obtain the RCSP certification, you are required to pass a computerized exam available at any
Pearson VUE testing center worldwide.
Benefits of Certification
1. Establishes your credibility as a knowledgeable and capable individual in regard to
Riverbed's products and services.
2. Helps improve your career advancement potential.
3. Qualifies you for discounts and/or benefits for Riverbed sponsored events and training.
4. Entitles you to use the RCSP certification logo on your business card.
Exam Information
Exam Specifications
Exam Number: 199-01
Exam Name: Riverbed Certified Solutions Professional
Version of RiOS: Up to RiOS version 5.0 for the Steelhead appliances and the Central
Management Console, and Interceptor 2.0 and Steelhead Mobile 2.0
Number of Questions: 65
Total Time: 75 minutes for exam, 15 minutes for Survey and Tutorial (90 minutes total)
Exam Provider: Pearson VUE
Exam Language: English only. Riverbed allows a 30-minute time extension for English
exams taken in non-English speaking countries for students that request it. English speaking
countries are Australia, Bermuda, Canada, Great Britain, Ireland, New Zealand, Scotland,
South Africa, and the United States. A form will need to be completed by the candidate and
submitted to Pearson VUE.
Special Accommodations: Yes (must submit written request to Pearson VUE for ESL or
ADA accommodations; includes time extensions and/or a reader)
Offered Locations: Worldwide (over 5000 test centers in 165 countries)
Pre-requisites: None (although taking a Riverbed training class is highly recommended)
Available to: Everyone (partners, customers, employees, etc.)
Passing Score: 700 out of 1000 (70%)
Certification Expires: Every 2 years (must recertify every 2 years, no grace period)
Wait Between Failed Attempts: 72 hours. No retakes allowed on passed exams.
Cost: $150.00 (USD)
Number of Attempts Allowed: Unlimited (though statistics are kept)
Certification Checklist
As the RCSP exam is geared towards individuals who have both the theoretical knowledge and
hands-on experience with the RiOS product suite, proficiency in both areas is crucial to
passing the exam. For individuals starting out with the process, we recommend the
following steps to guide you along the way:
1. Building Theoretical Knowledge
The easiest way to become knowledgeable in deploying, maintaining, and troubleshooting
the RiOS product suite is to take a Riverbed authorized training class. To ensure the greatest
possibility of passing the exam, it is recommended that you review the RCSP Study Guide
and ensure your familiarity with all topics listed, prior to any examination attempts.
2. Gaining Hands-on Experience
While the theoretical knowledge will get you partway there, it is the hands-on knowledge
that can get you over the top and enable you to pass the exam. Since all deployments are
different, providing an exact amount of experience required is difficult. Generally, we
recommend that resellers and partners perform at least five deployments in a variety of
technologies prior to attempting the exam. For customers, and alternatively for resellers and
partners, starting from the design and deployment phase and having at least six months of
experience in a production environment would be beneficial.
3. Taking the Exam
The final step in becoming an RCSP is to take the exam at a Pearson VUE authorized testing
center. To register for any Riverbed Certification exam, please visit
http://www.pearsonvue.com/riverbed.
Publications
Recommended Reading (In No Particular Order)
This study guide
Riverbed documentation
o Steelhead Management Console User's Guide
o Steelhead Command-Line Interface Reference Guide
o Steelhead Appliance Deployment Guide
o Steelhead Appliance Installation Guide
o Bypass Card Installation Guide
o Steelhead Mobile Controller User's Guide
o Steelhead Mobile Controller Installation Guide
o Central Management Console User's Guide
o Central Management Console Installation Guide
o Interceptor Appliance User's Guide
o Interceptor Appliance Installation Guide
http://ubiqx.org/cifs/Intro.html (CIFS)
Microsoft Windows 2000 Server Administrator's Companion by Charlie Russell and Sharon
Crawford (Microsoft Press, 2000)
Common Internet File System (CIFS) Technical Reference by the Storage Networking
Industry Association (Storage Networking Industry Association, 2002)
Internet Routing Architectures (2nd Edition) by Bassam Halabi (Cisco Press, 2000)
I. General Knowledge
Optimizations Performed by RiOS
Optimization is the process of increasing data throughput and network performance over the
WAN using Steelhead appliances. An optimized connection exhibits bandwidth reduction as it
traverses the WAN. The optimization techniques RiOS utilizes are:
Data Streamlining
Transport Streamlining
Application Streamlining
Management Streamlining
You should be familiar with the differences in these streamlining techniques for the RCSP test.
This information can be found in the Steelhead Appliance Deployment Guide.
Transaction Acceleration (TA)
TA is composed of the following optimization mechanisms:
A connection bandwidth-reducing mechanism called Scalable Data Referencing (SDR)
A Virtual TCP Window Expansion (VWE) mechanism that repacks TCP payloads with
references that represent arbitrary amounts of data, thus increasing the client-data per WAN
TCP window
SDR and VWE can work independently or in conjunction with one another depending on the
characteristics and workload of the data sent across the network. The results of the optimization
vary, but often result in throughput improvements in the range of 10 to 100 times over
unaccelerated links.
Scalable Data Referencing (SDR)
Bandwidth optimization is delivered through SDR. SDR uses a proprietary algorithm to break up
TCP data streams into data chunks that are stored in the hard disk (data store) of the Steelhead
appliances. Each data chunk is assigned a unique integer label (reference) before it is sent to the
peer Steelhead appliance across the WAN. If the same byte sequence is seen again in the TCP
data stream, then the reference is sent across the WAN instead of the raw data chunk. The peer
Steelhead appliance uses this reference to reconstruct the original data in the TCP data stream.
Data and references are maintained in persistent storage in the data store within each Steelhead
appliance. Because SDR checks data chunks byte-by-byte, there are no consistency issues even in
the presence of replicated data.
How Does SDR Work?
When data is sent for the first time across a network (no commonality with any file ever sent
before), all data and references are new and are sent to the Steelhead appliance on the other side
of the network. This new data and the accompanying references are compressed using
conventional algorithms so as to improve performance, even on the first transfer.
2007-2009 Riverbed Technology, Inc. All rights reserved.
Over time, more data crosses the network (revisions of a document for example). Thereafter,
when these new requests are sent across the network, the data is compared with references that
already exist in the local data store. Any data that the Steelhead appliance determines already
exists on the far side of the network is not sent; only the references are sent across the
network.
As files are copied, edited, renamed, and otherwise changed or moved (as well as web pages
being viewed or email sent), the Steelhead appliance continually builds the data store to include
more and more data and references. References can be shared by different files and by files in
different applications if the underlying bits are common to both. Since SDR can operate on all
TCP-based protocols, data commonality across protocols can be leveraged so long as the binary
representation of that data does not change between the protocols. For example, when a file
transferred via FTP is then transferred using WFS (Windows File System), the binary
representation of the file is basically the same and thus references can be sent for that file.
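The chunk-and-reference mechanism described above can be sketched in a few lines of Python. This is a toy illustration only: the chunking scheme, labels, and store layout are invented for clarity, and RiOS uses a proprietary algorithm.

```python
# Toy illustration of SDR-style deduplication: raw chunks cross the "WAN"
# only on first sight; repeats are replaced by small integer references.

def send_stream(data, store, chunk_size=8):
    """Sender side: emit raw chunks on first sight, integer references after."""
    wire = []  # what would actually cross the WAN
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        if chunk in store:
            wire.append(("ref", store[chunk]))    # seen before: send the label only
        else:
            label = len(store)                    # assign a unique integer label
            store[chunk] = label
            wire.append(("raw", chunk, label))    # first sight: send data plus label
    return wire

def receive_stream(wire, store):
    """Peer side: rebuild the original byte stream from raw chunks and references."""
    out = bytearray()
    for item in wire:
        if item[0] == "raw":
            _, chunk, label = item
            store[label] = chunk                  # remember the chunk for later refs
            out += chunk
        else:
            out += store[item[1]]
    return bytes(out)
```

On a second transfer of the same data, every item on the wire is a reference, which is the warm-transfer behavior described above.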
Lempel-Ziv (LZ) Compression
SDR and compression are two different features and can be controlled separately. However, LZ
compression is the primary form of data reduction for cold transfers.
The Lempel-Ziv compression methods are among the most popular algorithms for lossless
storage. Compression is turned on by default. In-path rules can be used to define which
optimization features will be used for which set of traffic flowing through the Steelhead
appliance.
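As a concrete illustration of what LZ-family compression does on a cold transfer, Python's standard zlib module (DEFLATE, which is LZ77-based) shows the kind of reduction achieved on repetitive data. The payload below is an invented example, not Riverbed's implementation.

```python
import zlib

# A repetitive payload, standing in for a first ("cold") transfer with no
# SDR references available yet.
payload = b"GET /styles/site.css HTTP/1.1\r\nHost: example.com\r\n" * 100

# DEFLATE at the default-ish compression level 6.
compressed = zlib.compress(payload, 6)

print(len(payload), "->", len(compressed))  # highly repetitive data shrinks dramatically
```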
TCP Optimizations & Virtual Window Expansion (VWE)
As Steelhead appliances are designed to optimize data transfers across wide area networks, they
make extensive use of standards-based enhancements to the TCP protocol that may not be
present in the TCP stack of many desktop and server operating systems. These include improved
transport capability for networks with high bandwidth-delay products (via HighSpeed TCP or
MX-TCP), TCP Vegas for lower-bandwidth links, partial acknowledgements, and other more
obscure but throughput-enhancing and latency-reducing features.
VWE allows Steelhead appliances to repack TCP payloads with references that represent
arbitrary amounts of data. This is possible because Steelhead appliances operate at the
Application Layer and terminate TCP, which gives them more flexibility in the way they
optimize WAN traffic.
Essentially, the TCP payload is increased from its normal window size to an arbitrarily large
amount dependent on the compression ratio for the connection. Because of this increased
payload, a given application that relies on TCP performance (for example, HTTP or FTP) takes
fewer trips across the WAN to accomplish the same task. For example, consider a client-to-server connection that may have a 64KB TCP window. In the event that there is 256KB of data
to transfer, it would take several TCP windows to accomplish this in a network with high
latency. With SDR however, that 256KB of data can be potentially reduced to fit inside a single
TCP window, removing the need to wait for acknowledgements to be sent prior to sending the
next window, thus speeding the transfer.
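The arithmetic behind this example can be sketched as follows. The 64KB window and 256KB transfer come from the text above; the 100 ms round-trip time is an assumed figure for illustration.

```python
# Window-limited transfer time, ignoring bandwidth: roughly one round trip
# is paid per TCP window's worth of data.
rtt_s = 0.100      # assumed WAN round-trip time
window_kb = 64
data_kb = 256

windows_needed = -(-data_kb // window_kb)   # ceiling division: 4 windows
unoptimized_s = windows_needed * rtt_s      # wait one RTT per 64KB window
optimized_s = 1 * rtt_s                     # SDR shrinks the data into one window

print(windows_needed, unoptimized_s, optimized_s)  # 4 0.4 0.1
```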
Transaction Prediction
Application-level latency optimization is delivered through the Transaction Prediction module.
Transaction Prediction leverages an intimate understanding of protocol semantics to reduce the
chattiness that would normally occur over the WAN. By acting on foreknowledge of specific
protocol request-response mechanisms, Steelhead appliances streamline the delivery of data that
would normally be delivered in small increments through large numbers of interactions between
the client and server over the WAN. As transactions are executed between the client and server,
the Steelhead appliance intercepts each transaction, compares it to the database of past
transactions, and makes decisions about the probability of future events.
Based on this model, if a Steelhead appliance determines there is a high likelihood of a future
transaction occurring, it performs that transaction, rather than waiting for the response from the
server to propagate back to the client and then back to the server. Dramatic performance
improvements result from the time saved by not waiting for each serial transaction to arrive prior
to making the next request. Instead, the transactions are pipelined one right after the other.
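A toy model of such a transaction database might count which request has historically followed which, and predict a successor only when confidence is high. The class below is purely illustrative: the names, the frequency-count model, and the 0.8 threshold are all assumptions, not RiOS internals.

```python
from collections import Counter, defaultdict

class Predictor:
    """Toy transaction predictor: per-request counts of the next request seen."""

    def __init__(self, threshold=0.8):
        self.history = defaultdict(Counter)  # request -> Counter of following requests
        self.threshold = threshold           # minimum confidence to predict
        self.prev = None

    def observe(self, request):
        """Record a transaction as it is executed between client and server."""
        if self.prev is not None:
            self.history[self.prev][request] += 1
        self.prev = request

    def predict(self, request):
        """Return the likely next request, or None if not confident enough."""
        nexts = self.history[request]
        total = sum(nexts.values())
        if not total:
            return None
        best, count = nexts.most_common(1)[0]
        return best if count / total >= self.threshold else None
```

Feeding it a repeated open/read/close sequence makes it confidently predict "read" after "open", which is when a pipelined request could safely be issued ahead of the client.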
Of course, Steelhead appliances execute transactions ahead of the client only when it is safe to
do so. To ensure data integrity, Steelhead appliances are designed with knowledge of the
underlying protocols so they know when that is the case. Fortunately, a wide range of common
applications have very predictable behaviors and, consequently, Transaction Prediction can
enhance WAN performance significantly. When combined with SDR, Transaction Prediction can
improve WAN performance up to 100 times.
Common Internet File System (CIFS) Optimization
CIFS is a proposed standard protocol that lets programs make requests for files and services on
remote computers over the Internet. CIFS uses the client/server programming model. A client
program makes a request of a server program (usually in another computer) for access to a file or
to pass a message to a program that runs in the server computer. The server takes the requested
action and returns a response. CIFS is a public or open variation of the Server Message Block
(SMB) protocol developed and used by Microsoft.
In the Steelhead appliance, CIFS optimization is enabled by default. Typically, you would only
disable CIFS optimization to troubleshoot the system.
Overlapping Opens
Due to the way certain applications handle the opening of files, file locks are not properly
granted to the application in such a way that would allow a Steelhead appliance to optimize
access to that file using Transaction Prediction. To prevent any compromise to data integrity, the
Steelhead appliance only optimizes data to which exclusive access is available (in other words,
when locks are granted). When an opportunistic lock (oplock) is not available, the Steelhead
appliance does not perform application-level latency optimizations but still performs SDR and
compression on the data as well as TCP optimizations. The CIFS overlapping opens feature
remedies this problem by having the server-side Steelhead handle file locking operations on
behalf of the requesting application. If you disable this feature, the Steelhead appliance will still
increase WAN performance, but not as effectively.
Enabling this feature on applications that perform multiple opens of the same file to complete an
operation will result in a performance improvement (for example, CAD applications).
NOTE: For the Steelhead appliance to handle the locking properly, all transactions on the file
must be optimized by that Steelhead appliance. Therefore, if a remote user opens a file which is
optimized using the overlapping opens feature, and a second user opens the same file, they might
receive an error if the second open does not pass through the same Steelhead appliance (for
example, with certain applications that are sent over the LAN). If this
occurs, you should disable overlapping opens optimizations for those applications.
Note that overlapping opens fails if the user authentication level is set too high; in that case the
connection downgrades to SDR and TCP acceleration only, with no Transaction Prediction.
MAPI Prepopulation
Without MAPI prepopulation, if a user closes Microsoft Outlook or switches off the workstation
the TCP sessions are broken. With MAPI prepopulation, the Steelhead appliance can start acting
as if it is the mail client. If the client closes the connection, the client-side Steelhead appliance
will keep an open connection to the server-side Steelhead appliance and the server-side
Steelhead appliance will keep the connection open to the server. This allows for data to be
pushed through the data store before the user logs on to the server again. The default timer is set
to 96 hours; after that, the connection is reset.
Optimized MAPI connections are held open after the client exits (as though the client had left the
PC on); think of it as a virtual client.
No one is ever reconnected to the prepopulation session (including the original user).
HTTP Optimization
A typical web page is not a single file that is downloaded all at once. Instead, web pages are
composed of dozens of separate objects, including .jpg and .gif images, JavaScript code,
cascading style sheets, and more, each of which must be requested and retrieved separately, one
after the other. Given the presence of latency, this behavior is highly detrimental to the
performance of web-based applications over the WAN.
The higher the latency, the longer it takes to fetch each individual object and, ultimately, to
display the entire page.
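The per-object cost sketched as arithmetic: the object count and round-trip time below are assumed figures for illustration, not measurements.

```python
# Serial object fetches pay roughly one round trip each; parsing the page and
# prefetching its objects overlaps those waits.
objects = 30       # assumed number of objects on the page
rtt_s = 0.150      # assumed WAN round-trip time

serial_s = objects * rtt_s   # each object requested only after the previous arrives
prefetch_s = rtt_s           # prefetched objects share roughly one round-trip wait

print(round(serial_s, 2), prefetch_s)
```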
RiOS v5.0 and later optimizes web applications using:
Parsing and Prefetching of Dynamic Content
URL Learning
Persistent Connections
More information can be found in the Steelhead Appliance Management Console User's Guide.
NFS Optimization
You can configure Steelhead appliances to use Transaction Prediction to perform application-level latency optimization on NFS. Application-level latency optimization improves NFS
performance over high latency WANs.
NFS latency optimization optimizes TCP connections and is only supported for NFS v3.
You can configure NFS settings globally for all servers and volumes, or you can configure NFS
settings that are specific to particular servers or volumes. When you configure NFS settings for a
server, the settings are applied to all volumes on that server unless you override settings for
specific volumes.
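The global/server/volume precedence described here can be modeled as successive dictionary overrides. The setting names and values below are invented for illustration; they are not the actual RiOS NFS configuration keys.

```python
# Global defaults, overridden per server, overridden again per volume.
global_settings = {"latency_opt": True, "version": 3}

servers = {
    "nfs1": {
        "settings": {"latency_opt": True},
        "volumes": {"/archive": {"latency_opt": False}},  # volume-level override
    },
}

def effective_settings(server, volume):
    """Resolve settings: global first, then server, then volume overrides."""
    result = dict(global_settings)
    entry = servers.get(server, {})
    result.update(entry.get("settings", {}))
    result.update(entry.get("volumes", {}).get(volume, {}))
    return result
```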
NFS latency optimization also supports write-behind.
TCP/IP
General Operation
Steelhead appliances are typically placed on two ends of the WAN as close to the client and
server as possible (no additional WAN links between the end node and the Steelhead appliance).
By placing Steelhead appliances in the network, the TCP session between client and server can
be intercepted, therefore a level of control over the TCP session can be obtained. TCP sessions
have to be intercepted in order to be optimized; therefore the Steelhead appliances must see all
traffic from source to destination and back. For any given optimized session, there are three
distinct sessions. There is a TCP connection between the client and the client-side Steelhead
appliance, between the server and the server-side Steelhead appliance, and finally a connection
between the two Steelhead appliances.
Common Ports
Ports Used by RiOS

Port          Type
7             TCP Echo
23            Telnet
37            UDP/Time
107           Rtelnet
179           BGP
513           Remote Login
514           Shell
1494, 2598    Citrix
3389          RDP
5631          PC Anywhere
5900 - 5903   VNC
6000          X11
Secure Ports Commonly Passed Through by Default on Steelhead Appliances (Partial List)

Port        Type
22/TCP      ssh
49/TCP      tacacs
443/TCP     https
465/TCP     smtps
563/TCP     nntps
585/TCP     imap4-ssl
614/TCP     sshell
636/TCP     ldaps
989/TCP     ftps-data
990/TCP     ftps
992/TCP     telnets
993/TCP     imaps
995/TCP     pop3s
1701/TCP    l2tp
1723/TCP    pptp
3713/TCP    tftp over tls
RiOS Auto-discovery Process

[Figure: auto-discovery packet flow between the client (C), client-side Steelhead (SH1), server-side Steelhead (SH2), and server (S). SH1 intercepts the client's SYN (IP(C)→IP(S):SYN) and forwards it with a probe attached (IP(C)→IP(S):SYN+Probe); the probe result is cached for 10 seconds. SH1 and SH2 complete the inner handshake (IP(SH1)→IP(SH2):SYN, IP(SH2)→IP(SH1):SYN/ACK, IP(SH1)→IP(SH2):ACK) and exchange setup information, drawing on a connection pool of 20. SH2 opens the connection to the server (IP(C)→IP(S):SYN, IP(S)→IP(C):SYN/ACK) and reports the connect result; SH1 completes the handshake with the client (IP(S)→IP(C):SYN/ACK, IP(C)→IP(S):ACK). The connect result is cached until failure.]
TCP Option
The TCP option used for auto-discovery is 0x4C which is 76 in decimal format. The client-side
Steelhead appliance attaches a 10 byte option to the TCP header; the server-side Steelhead
appliance attaches a 14-byte option in return. Note that the option is used only in the initial
discovery process, not during connection setup between the Steelhead appliances or on the outer
TCP sessions.
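A sketch of what packing such an option might look like. Only the option kind (0x4C, decimal 76) and the 10-byte outbound length come from the text above; the payload layout (carrying an IPv4 address, zero-padded) is a guess for illustration, not the real probe format, which is Riverbed-proprietary.

```python
import struct

RBT_PROBE_KIND = 0x4C  # 76 decimal, per the text above

def build_probe_option(sh_ip, length=10):
    """Pack a TCP option: 1-byte kind, 1-byte length, then an ASSUMED payload
    carrying the Steelhead's IPv4 address, zero-padded to the stated length."""
    payload = bytes(int(octet) for octet in sh_ip.split("."))  # 4 address bytes
    padding = b"\x00" * (length - 2 - len(payload))
    return struct.pack("!BB", RBT_PROBE_KIND, length) + payload + padding

opt = build_probe_option("10.1.1.5")
print(len(opt), opt[0])  # 10 76
```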
Enhanced Auto-Discovery Process

[Figure: enhanced auto-discovery packet flow between the client, SH1, SH2, and the server. The labeled steps are: IP(C)→IP(S):SYN SEQ1; IP(C)→IP(S):SYN SEQ1+Probe, with the probe result cached for 10 seconds; IP(S)→IP(C):SYN/ACK carrying the acknum examined by SH1; IP(C)→IP(S):ACK with the connection result; the inner handshake IP(SH1)→IP(SH2):SYN, IP(SH2)→IP(SH1):SYN/ACK, IP(SH1)→IP(SH2):ACK, with SH2 listening on service port 7800, setup information exchanged, and a connection pool of 20; and finally IP(S)→IP(C):SYN/ACK and IP(C)→IP(S):ACK completing the client-side handshake.]
Connection Pooling
General Operation
By default, all auto-discovered Steelhead appliance peers have a connection pool of 20. The pool
size is a user-configurable value that can be set for each Steelhead
appliance peer. The purpose of connection pooling is to avoid the TCP handshake for the inner
session between the Steelhead appliances across the high latency WAN. By pre-creating these
sessions between peer Steelhead appliances, when a new connection request is made by a client,
the client-side Steelhead appliance can simply use the connections in the pool. Once a
connection is pulled from the pool, a new connection is created to take its place so as to maintain
the specified number of connections.
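The pre-create-and-replenish behavior can be sketched as a small class. This is a conceptual illustration only; `make_conn` stands in for establishing an inner TCP session to the peer, and the names are invented.

```python
import queue

class ConnectionPool:
    """Minimal sketch of inner-connection pooling: keep N ready connections,
    replacing each one as it is handed out so the pool stays full."""

    def __init__(self, make_conn, size=20):       # default pool of 20, as above
        self.make_conn = make_conn
        self.ready = queue.Queue()
        for _ in range(size):
            self.ready.put(self.make_conn())      # handshake cost paid up front

    def checkout(self):
        """Hand out a ready connection and immediately create its replacement."""
        conn = self.ready.get()
        self.ready.put(self.make_conn())
        return conn
```

A new client connection then grabs a pre-established inner session instead of waiting a WAN round trip for a fresh three-way handshake.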
In-path Rules
General Operation
In-path rules allow a client-side Steelhead appliance to determine what action to perform when
intercepting a new client connection (the first TCP SYN packet for a connection). The action
taken depends on the type of in-path rule selected and is outlined in detail below. It is important
to note that the rules are matched on source/destination IP information, destination port,
and/or VLAN, and are processed from the first rule in the list to the last (top down). Rule
processing stops at the first rule matching the specified parameters, at which point the action
selected by that rule is taken. Steelhead appliances have three passthrough rules by default,
plus a fourth implicit rule to auto-discover remote Steelhead appliances; traffic that does not
match the first three rules is therefore a candidate for optimization. The three default
passthrough rules cover port groupings for interactive traffic (e.g., Telnet, VNC, RDP),
encrypted traffic, and Riverbed protocol ports (e.g., 7800, 7810).
Different Types and Their Function
Pass Through. Pass through rules identify traffic that is passed through the network
unoptimized. For example, you may define pass through rules to exclude subnets from
optimization. Traffic is also passed through when the Steelhead appliance is in bypass mode.
(Passthrough might occur because of in-path rules, because the connection was established
before the Steelhead appliance was put in place, or before the Steelhead service was
enabled.)
2007-2009 Riverbed Technology, Inc. All rights reserved.
Fixed-Target. Fixed-target rules specify out-of-path Steelhead appliances near the target
server that you want to optimize. Determine which servers you want the Steelhead appliance
to optimize (and, optionally which ports), and add rules to specify the network of servers,
ports, port labels, and out-of-path Steelhead appliances to use. Fixed-target rules can also be
used for in-path deployments where the Steelhead appliances are not using enhanced
auto-discovery (EAD).
Discard. Packets for the connection that match the rule are dropped silently. The Steelhead
appliance filters out traffic that matches the discard rules. This process is similar to how
routers and firewalls drop disallowed packets; the connection-initiating device has no
knowledge of the fact that its packets were dropped until the connection times out.
Deny. When packets for connections match the deny rule, the Steelhead appliance actively
tries to reset the connection. With deny rules, the Steelhead appliance actively tries to reset
the TCP connection being attempted. Using an active reset process rather than a silent
discard allows the connection initiator to know that its connection is disallowed.
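The top-down, first-match evaluation can be modeled as follows (simplified to match on destination port only; real rules also match source/destination IP and VLAN, and the example port sets are illustrative):

```python
from dataclasses import dataclass
from typing import Optional, Set

@dataclass
class Rule:
    action: str                    # passthrough / fixed-target / discard / deny
    dst_ports: Optional[Set[int]]  # None matches any destination port

def pick_action(rules, dst_port: int) -> str:
    # Rules are evaluated top down; processing stops at the first match.
    for rule in rules:
        if rule.dst_ports is None or dst_port in rule.dst_ports:
            return rule.action
    return "auto-discover"  # implicit final rule

# The three default passthrough groupings, reduced to a few example ports.
rules = [
    Rule("passthrough", {23, 5900, 3389}),  # interactive (Telnet, VNC, RDP)
    Rule("passthrough", {22, 443}),         # encrypted traffic (example ports)
    Rule("passthrough", {7800, 7810}),      # Riverbed protocol ports
]

print(pick_action(rules, 3389))  # passthrough
print(pick_action(rules, 80))    # auto-discover
```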
Peering Rules
Applicability and Conditions of Use
Configuring peering rules defines what to do when a Steelhead appliance receives an auto-discovery probe from another Steelhead appliance. As such, the scope of a peering rule is limited
to a server-side Steelhead appliance (the one receiving the probe). Note that peering rules on an
intermediary Steelhead appliance (or server-side) will have no effect in preventing optimization
with a client-side Steelhead appliance if it is using a fixed-target rule designating the
intermediary Steelhead appliance as its destination (since there is no auto-discovery probe in a
fixed-target rule). The following example shows where you might wish to use peering rules:
[Figure: a client at Site A behind Steelhead1 connects across WAN 1 and WAN 2 to Server1 (on the same LAN as Steelhead2) and Server2 (on the same LAN as Steelhead3) at Sites B and C.]
Server1 is on the same LAN as Steelhead2 so connections from the client to Server1 should be
optimized between Steelhead1 and Steelhead2. Concurrently, Server2 is on the same LAN as
Steelhead3 and connections from the client to Server2 should be optimized between Steelhead1
and Steelhead3.
Add peering rules on Steelhead2 to process connections normally going to Server1 and to
pass through all other connections so that connections to Server2 are not optimized by
Steelhead2
A rule to pass through inner connections between Steelhead1 and Steelhead3 is already in
place by default (connections to destination port 7800 are covered by the RBT-Proto
port label)
If you have multiple branches that go through Steelhead2, you must add a fixed-target rule
for each of them on Steelhead1 and Steelhead3
The Primary and AUX ports cannot share the same network subnet
The Primary and In-path interfaces can share the same network subnet
You must use the Primary port on the server-side for out-of-path deployment
You cannot use the Auxiliary port for anything except management
If the Steelhead appliance is deployed between two switches, both the LAN and WAN ports
must be connected with straight-through cables
II. Deployment
Deployment Methods
Physical In-path
In a physical in-path deployment, the Steelhead appliance is physically in the direct path network
traffic will take between clients and servers. The clients and servers continue to see client and
server IP addresses and the Steelhead appliance bridges unoptimized traffic from its LAN facing
side to its WAN facing side (and vice versa). Physical in-path configurations are suitable for any
location where the total bandwidth is within the limits of the installed Steelhead appliance or
serial cluster of Steelhead appliances. It is generally one of the simplest deployment options and
among the easiest to maintain.
Logical In-path
In a logical in-path deployment, the Steelhead appliance is logically in the path between clients
and servers. In a logical in-path deployment, clients and servers continue to see client and server
IP addresses. This deployment differs from a physical in-path deployment in that a packet
redirection mechanism is used to direct packets to Steelhead appliances that are not in the
physical path of the client or server.
Commonly used technologies for redirection are: Layer-4 switches, Web Cache Communication
Protocol (WCCP), and Policy-based Routing (PBR).
Server-Side Out-of-Path
A server-side out-of-path deployment is a network configuration in which the Steelhead
appliance is not in the direct or logical path between the client and the server. Instead, the
server-side Steelhead appliance is connected through the Primary interface and listens on port
7810 for connections coming from client-side Steelhead appliances. In an out-of-path deployment, the
Steelhead appliance acts as a proxy. It does not preserve the client's IP address as in-path
deployments do (where the server sees the original client IP address); instead, it source-NATs
connections to the Primary interface address of the server-side out-of-path Steelhead
appliance. A server-side out-of-path configuration is suitable for data center locations when
physical in-path or logical in-path configurations are not possible. With server-side out-of-path,
client IP visibility is no longer available to the server (due to the NAT) and optimization initiated
from the server side is not possible (since there is no redirection of the outbound connections
packets to the Steelhead appliance).
Physical Device Cabling
Steelhead appliances have multiple physical and virtual interfaces. The Primary interface is
typically used for management purposes, data store synchronization (if applicable), and for
server-side out-of-path configurations. The Primary interface can be assigned an IP address and
connected to a switch. You would use a straight-through cable for this configuration.
The LAN and WAN interfaces are purely L1/L2; no IP addresses can be assigned to them.
Instead, a logical L3 interface is created. This is the In-path interface, and it is named on a
per-slot and per-port basis (in LAN/WAN pairs). A bypass card (or in-path card) in slot 0 with
just one LAN and one WAN interface will have a logical interface called inpath0_0. A 4-port
card in slot 1 will get the In-path interfaces inpath1_0 and inpath1_1, one per pair of
LAN/WAN ports: inpath1_0 represents LAN1_0 and WAN1_0, and inpath1_1 represents
LAN1_1 and WAN1_1.
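The naming convention can be sketched with a small helper (a hypothetical function; RiOS itself assigns these names):

```python
def inpath_names(slot: int, pair: int):
    """Return the logical in-path name and its LAN/WAN pair for a slot/pair position."""
    return (f"inpath{slot}_{pair}", f"LAN{slot}_{pair}", f"WAN{slot}_{pair}")

# Bypass card in slot 0 with a single LAN/WAN pair:
print(inpath_names(0, 0))  # ('inpath0_0', 'LAN0_0', 'WAN0_0')
# A 4-port card in slot 1 exposes two pairs:
print(inpath_names(1, 0), inpath_names(1, 1))
```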
For a physical in-path deployment, when connecting the LAN and WAN interfaces to the
network, treat both of them as router ports: use a crossover cable when connecting to a router,
host, or firewall, and a straight-through cable when connecting to a switch. The Steelhead
appliance supports auto-MDIX (medium dependent interface crossover); however, if you use
the wrong cables you risk breaking the link between the components the Steelhead appliance
is placed between, especially in bypass, because those components may not support
auto-MDIX.
For a virtual in-path deployment, only the WAN interface needs to be connected. The LAN
interface does not need to be connected and is shut down automatically as soon as the virtual
in-path option is enabled in the Steelhead appliance's configuration.
For server-side out-of-path deployments only the Primary interface needs to be connected.
In-path
In-path Networks
Physical in-path configurations are suitable for locations where the total bandwidth is within the
limits of the installed Steelhead appliance or serial cluster of Steelhead appliances.
The Steelhead appliance can be physically connected to both access ports and trunks. When the
Steelhead appliance is placed on a trunk, the In-path interface has to be able to tag its traffic with
the correct VLAN number. The supported trunking protocol is 802.1Q (Dot1Q). A tag can be
assigned via the GUI or the CLI. The CLI command for this is:
HOSTNAME (config) # in-path interface inpathx_x vlan <id>
Inter-Steelhead appliance traffic will use this VLAN (except in Full Transparent connections as
explained below).
There are several variations of the in-path deployment. Steelhead appliances can be placed in
series for redundancy. Peering rules based on the peer IP address have to be applied to both
Steelhead appliances so that they do not peer with each other. When using 4-port cards, and thus
multiple in-path IP addresses, all addresses have to be covered by such rules to avoid peering.
A serial cluster is a failover design that can be used to mitigate the risk of possible network
instabilities and outages caused by a single Steelhead appliance failure (typically caused by
excessive bandwidth as there is no longer data reduction occurring). When the maximum number
of TCP connections for a Steelhead appliance is reached, that appliance stops intercepting new
connections. This allows the next Steelhead appliance in the cluster the opportunity to intercept
the new connections, if it has not reached its maximum number of connections. The in-path
peering rules and in-path rules are used so that the Steelhead appliances in the cluster know not
to intercept connections between themselves.
Appliances in a failover deployment process the peering rules you specify in a spill-over fashion.
A keepalive method is used between two Steelhead appliances to monitor each other's status and
set a master and backup state for both Steelhead appliances. It is recommended to assign the
LAN-side Steelhead appliance to be the master due to the amount of passthrough traffic from
Steelhead to client or server. Optionally, data stores can be synchronized to ensure warm
performance in case of a failure.
In case the Steelhead appliances are deployed in parallel with each other, measures need to be
taken to prevent asymmetric traffic from being passed through without optimization. This usually
occurs when two or more routing points in the network exist where traffic is spread over the
links simultaneously. Connection Forwarding can be used to exchange flow information between
the Steelhead appliances in the parallel deployment. Multiple Steelhead appliances can be
bundled together.
WAN Visibility Modes
WAN visibility pertains to how packets traversing the WAN are addressed. RiOS v5.0 offers
three types of WAN visibility modes: correct addressing, port transparency, and full address
transparency.
You configure WAN visibility on the client-side Steelhead appliance (where the connection is
initiated). The server-side Steelhead appliance must also support the chosen WAN visibility
mode (RiOS v5.0 or later).
Correct Addressing
Correct addressing uses Steelhead appliance IP addresses and port numbers in the TCP/IP packet
header fields for optimized traffic in both directions across the WAN. This is the default setting.
This is correct as the devices which are communicating (the TCP endpoints) are the Steelhead
appliances, so their IP addresses/ports are reflected in the connection.
Port Transparency
Port address transparency preserves your server port numbers in the TCP/IP header fields for
optimized traffic in both directions across the WAN. Traffic is optimized while the server port
number in the TCP/IP header field appears to be unchanged. Routers and network monitoring
devices deployed in the WAN segment between the communicating Steelhead appliances can
view these preserved fields. Use port transparency if you want to manage and enforce QoS
policies that are based on destination ports. If your WAN router is following traffic classification
rules written in terms of client and network addresses, port transparency enables your routers to
use existing rules to classify the traffic without any changes. Port transparency enables network
analyzers deployed within the WAN (between the Steelhead appliances) to monitor network
activity and to capture statistics for reporting by inspecting traffic according to its original TCP
port number. Port transparency does not require dedicated port configurations on your Steelhead
appliances.
NOTE: Port transparency only provides server port visibility. It does not provide client and
server IP address visibility, nor does it provide client port visibility.
Full Transparency
Full address transparency preserves your client and server IP addresses and port numbers in the
TCP/IP header fields for optimized traffic in both directions across the WAN. It also preserves
VLAN tags. Traffic is optimized while these TCP/IP header fields appear to be unchanged.
Routers and network monitoring devices deployed in the WAN segment between the
communicating Steelhead appliances can view these preserved fields. If both port transparency
and full address transparency are acceptable solutions, port transparency is preferable. Port
transparency avoids potential networking risks that are inherent to enabling full address
transparency. For details, see the Steelhead Appliance Deployment Guide. However, if you must
see your client or server IP addresses across the WAN, full transparency is your only
configuration option.
connections between these peers to be optimized. If the OOB splice dies, all optimized
connections between the peer Steelhead appliances will be terminated.
The OOB connection is a single connection existing between two Steelhead appliances
regardless of the direction of flow. So if you open one or more connections in one direction, then
initiate a connection from the other direction, there will still be only one connection for the OOB
splice. This connection is made on the first connection between two peer Steelhead appliances
using their in-path IP addresses and port 7800 by default. The OOB splice is rarely of any
concern except in full transparency deployments.
Case Study
[Figure: Client 10.1.0.10 (gateway 10.1.0.1) sits behind client-side Steelhead CFE-1 (10.1.0.2) and firewall FW-1 (1.1.1.1). Across the WAN, firewall FW-2 (2.2.2.2) fronts server-side Steelhead SFE-1 (10.2.0.2), Server-1 (10.2.0.10, gateway 10.2.0.1), and Server-2 (10.3.0.10, gateway 10.3.0.1).]
Issue 1: After the inner connection is established, the client-side Steelhead appliance (CFE-1)
will try to establish an OOB connection to the server-side Steelhead appliance (SFE-1). It will
address it by the IP address reported in the probe response (10.2.0.2). Clearly, the connection to
this address will fail, since 10.2.x.x addresses are not valid outside of the firewall (FW-2).
Resolution 1: In the above example, there is one combination of address and port (IP:port) we
know about, the connection the client is destined for which is Server-1. The client should be able
to connect to Server-1. Therefore, the OOB splice creation code in sport can be changed to create
a transparent OOB connection from the Client to Server-1 if the corresponding inner connection
is transparent.
How to Configure
There are three options to address the OOB splice connection problem described in Issue 1
above.
In a default configuration, the out-of-band connection uses the IP addresses of the client-side
Steelhead and server-side Steelhead. This is known as correct addressing and is the default
behavior. This configuration fails in the network topology described above but works for the
majority of networks. The command below is the default setting in a Steelhead appliance's
configuration:
in-path peering oobtransparency mode none
In the network topology discussed in Issue 1, the default configuration does not work. There are
two oobtransparency modes that may work in establishing the peer connection: destination and
full. In destination mode, the OOB connection is addressed to the IP and port of the first server
whose connection passes through the Steelhead appliance, while the source is the client-side
Steelhead appliance's IP with a port chosen by the client-side Steelhead appliance. To change
to this configuration use the following CLI command:
in-path peering oobtransparency mode destination
In oobtransparency full mode, the IP of the first client is used as the source, together with a
pre-configured port on the client-side Steelhead appliance (port 708 by default). The destination
IP and port are the same as in destination mode, i.e., those of the server. This is the
recommended configuration when VLAN transparency is required. To change to this
configuration use the following CLI command:
in-path peering oobtransparency mode full
To change the default port used by the client-side Steelhead appliance when
oobtransparency mode full is configured, use the following CLI command:
in-path peering oobtransparency port <port>
It is important to note that these oobtransparency options are only used with full transparency. If
the first inner-connection to a Steelhead was not transparent, the OOB will always use correct
addressing.
Virtual In-path
Introduction to Virtual In-path Deployments
In a virtual in-path deployment, the Steelhead appliance is virtually in the path between clients
and servers. Traffic moves in and out of the same WAN interface. This deployment differs from
a physical in-path deployment in that a packet redirection mechanism is used to direct packets to
Steelhead appliances that are not in the physical path of the client or server.
Redirection mechanisms:
Layer-4 Switch. You enable Layer-4 switch (or server load-balancer) support when you
have multiple Steelhead appliances in your network to manage large bandwidth
requirements.
PBR (Policy-Based Routing). PBR enables you to redirect traffic to a Steelhead appliance
that is configured as a virtual in-path device. PBR allows you to define policies to redirect
packets instead of relying on routing protocols. You define policies to redirect traffic to the
Steelhead appliance and policies to avoid loop-back.
way of tracking whether the PBR next hop is available. You can enable this tracking feature in a
route map with the following Cisco router command:
set ip next-hop verify-availability
With this command, PBR attempts to verify the availability of the next hop using information
from CDP. If that next hop is unavailable, it skips the actions specified in the route map. PBR
checks availability in the following manner:
1. When PBR first attempts to send to a PBR next hop, it checks the CDP neighbor table to see
if the IP address of the next hop appears to be available. If so, it sends an Address Resolution
Protocol (ARP) request for the address, resolves it, and begins redirecting traffic to the next
hop (the Steelhead appliance).
2. After PBR has verified the next hop, it continues to send to the next hop as long as it obtains
answers from the ARP request for the next hop IP address. If the ARP request fails to obtain
an answer, it then rechecks the CDP table. If there is no entry in the CDP table, it no longer
uses the route map to send traffic. This verification provides a failover mechanism.
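The two-step check above can be sketched as follows (hypothetical helper names; the real logic lives inside Cisco IOS):

```python
def pbr_next_hop_usable(next_hop, cdp_neighbors, arp_responds):
    """Model of 'set ip next-hop verify-availability' with CDP tracking.

    cdp_neighbors: set of IPs currently visible in the CDP neighbor table.
    arp_responds:  callable(ip) -> bool, a stand-in for an ARP request.
    """
    # Step 1: the next hop must appear in the CDP neighbor table.
    if next_hop not in cdp_neighbors:
        return False  # route-map actions are skipped; normal forwarding resumes
    # Step 2: the next hop must also answer ARP before redirection continues.
    return arp_responds(next_hop)

print(pbr_next_hop_usable("10.0.0.5", {"10.0.0.5"}, lambda ip: True))  # True
print(pbr_next_hop_usable("10.0.0.5", set(), lambda ip: True))         # False
```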
In more recent versions of the Cisco IOS software, there is a feature called PBR with Multiple
Tracking Options. In addition to the old method of using CDP information, it allows methods
such as HTTP and ping to be used to determine whether the PBR next hop is available. Using
CDP allows you to run with older IOS 12.x versions.
WCCP Deployments
Introduction to WCCP
WCCP is a stateful protocol that the router and Steelhead appliance use to redirect traffic to
the Steelhead appliance for optimization. Several functions have to be covered to make it
stateful and scalable: failover, load distribution, and negotiation of connection parameters all
have to be communicated throughout the cluster that the Steelhead appliance and router form
upon successful negotiation. The protocol has four messages to cover all of these functions:
HERE_I_AM. Sent by Steelhead appliances to announce themselves.
I_SEE_YOU. Sent by WCCP enabled routers to respond to announcements.
REDIRECT_ASSIGN. Sent by the designated Steelhead appliance to determine flow
distribution.
REMOVAL_QUERY. Sent by router to check a Steelhead appliance after missed
HERE_I_AM messages.
When you configure WCCP on a Steelhead appliance:
Routers and Steelhead appliances are added to the same service group.
Steelhead appliances announce themselves to the routers.
Routers respond with their view of the service group.
One Steelhead appliance becomes the designated CE (caching engine) and tells the routers
how to redirect traffic among the Steelhead appliances in the service group.
How Steelhead Appliances Communicate with Routers
Steelhead appliances can use one of the following methods to communicate with routers:
Unicast UDP. The Steelhead appliance is configured with the IP address of each router. If
additional routers are added to the service group, they must be added on each Steelhead
appliance.
Multicast UDP. The Steelhead appliance is configured with a multicast group. If additional
routers are added, you do not need to add or change configuration settings on the Steelhead
appliances.
Redirection
By default, all TCP traffic is redirected. Optionally, a redirect-list can be defined so that only
traffic matching the redirect-list is redirected. A redirect-list in a WCCP configuration refers to
an ACL configured on the router to select the traffic that will be redirected.
Traffic is redirected using one of the following schemes:
GRE (Generic Routing Encapsulation). Each data packet is encapsulated in a GRE packet
with the Steelhead appliance IP address configured as the destination. This scheme is
applicable to any network.
L2 (Layer 2). Each packet MAC address is rewritten with a Steelhead appliance MAC
address. This scheme is possible only if the Steelhead appliance is connected to a router at
Layer 2.
Either. The either value uses L2 first; if Layer 2 is not supported, GRE is used. This is the
default setting.
You can configure your Steelhead appliance to not encapsulate return packets. This allows your
WCCP Steelhead appliance to negotiate with the router or switch as if it were going to send
gre-return packets, but to actually send l2-return packets. This configuration is optional but
recommended when connected directly at L2. The command to override WCCP packet return
negotiation is wccp l2-return enable. Be sure the network design permits this.
Load Balancing and Failover
WCCP supports unequal load balancing. Traffic is redirected based on a hashing scheme and the
weight of the Steelhead appliances. Each router uses a 256-bucket Redirection Hash Table to
distribute traffic for a Service Group across the member Steelhead appliances. It is the
responsibility of the Service Group's designated Steelhead appliance to assign each router's
Redirection Hash Table. The designated Steelhead appliance uses a
WCCP2_REDIRECT_ASSIGNMENT message to assign the routers' Redirection Hash Tables.
This message is generated following a change in Service Group membership and is sent to the
same set of addresses to which the Steelhead appliance sends WCCP2_HERE_I_AM messages.
A router will flush its Redirection Hash Table if a WCCP2_REDIRECT_ASSIGNMENT is not
received within five HERE_I_AM_T seconds of a Service Group membership change. The
hash algorithm can use several different input fields to come up with an 8-bit output (which is
the bucket value). The default input fields are the source and destination IP addresses of the
redirected packet; the source and destination TCP ports, or any combination of the four, can
also be used.
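The idea of hashing flow fields down to one of 256 buckets can be sketched as follows (the XOR-fold below is a stand-in, not the actual WCCP hash):

```python
import ipaddress

def bucket_for(src_ip: str, dst_ip: str) -> int:
    """Fold the default inputs (src/dst IP) down to an 8-bit bucket index.

    The real WCCP hash function differs, but the shape is the same:
    flow fields in, one of 256 buckets out.
    """
    key = int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    # Fold 32 bits down to 8 bits.
    key ^= key >> 16
    key ^= key >> 8
    return key & 0xFF

b = bucket_for("10.1.0.10", "10.2.0.10")
print(b, 0 <= b <= 255)
```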
The weight determines the percentage of traffic a Steelhead appliance in a cluster receives; the
hashing algorithm determines which flow is redirected to which Steelhead appliance. The default
weight is based on the Steelhead appliance model number. The weight is heavier for models that
support more connections. You can modify the default weight if desired.
With the use of weight you can also create an active/passive cluster by assigning a weight of 0 to
the passive Steelhead appliance. This Steelhead appliance will only get traffic when the active
Steelhead appliance fails.
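A proportional bucket allocation, including the active/passive case via weight 0, can be sketched as follows (illustrative arithmetic; the real assignment is made by the designated Steelhead appliance):

```python
def allocate_buckets(weights, buckets: int = 256):
    """Split the 256-bucket table across appliances in proportion to weight."""
    total = sum(weights.values())
    alloc = {name: buckets * w // total for name, w in weights.items()}
    # Hand any rounding remainder to the heaviest appliance.
    heaviest = max(weights, key=weights.get)
    alloc[heaviest] += buckets - sum(alloc.values())
    return alloc

# Unequal load balancing: the larger model takes roughly twice the buckets.
print(allocate_buckets({"SH-big": 200, "SH-small": 100}))  # {'SH-big': 171, 'SH-small': 85}
# Active/passive cluster: weight 0 means no traffic until the active peer fails.
print(allocate_buckets({"active": 100, "passive": 0}))     # {'active': 256, 'passive': 0}
```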
Assignment and Redirection Methods
The assignment method refers to how a router chooses which Steelhead appliance in a WCCP
service group to redirect packets to. There are two assignment methods: the Hash assignment
method and the Mask assignment method. Steelhead appliances support both the Hash
assignment and Mask assignment methods.
HASH
Redirection using Hash assignment is a two-stage process. In the first stage, a primary key
(defined by the Service Group) is formed from the packet and hashed to yield an index into the
Redirection Hash Table.
Each bucket in the Redirection Hash Table either holds a web-cache index, is unassigned, or is
flagged for a secondary hash. If the bucket holds an unflagged web-cache index, the packet is
redirected to that web-cache. If the bucket is unassigned, the packet is forwarded normally.
However, if the bucket is flagged, indicating a secondary hash, a secondary key is formed (as
defined by the Service Group description). This key is hashed to yield an index into the
Redirection Hash Table. If this secondary entry contains a web-cache index, the packet is
redirected to that web-cache; if the entry is unassigned, the packet is forwarded normally.
MASK
The first phase of Mask assignment is defining the mask itself. The mask can be up to seven
bits and can be applied to the source TCP port, destination TCP port, source IP address, or
destination IP address, or a combination of the four attributes, but may not exceed seven bits
in total. Depending on the number of bits selected, a different number of buckets is created and
assigned to the Steelhead appliances in the service group. As traffic traverses the router, a
bitwise AND operation is performed between the mask and the IP address and/or TCP port
fields the mask covers. The traffic is assigned to the different buckets based on the result of
the AND operation. The mask/value pairs are processed in the order they are received and
compared in turn against the masked bits.
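The bitwise-AND bucketing can be sketched as follows, with a hypothetical 6-bit mask applied entirely to the destination IP address:

```python
import ipaddress

def mask_bucket(dst_ip: str, mask: int) -> int:
    """Apply a WCCP-style mask to a destination IP to pick a bucket.

    The mask may use at most 7 set bits in total across all fields;
    here the whole budget is spent on the destination address.
    """
    assert bin(mask).count("1") <= 7, "mask assignment allows at most 7 bits"
    value = int(ipaddress.ip_address(dst_ip)) & mask
    # Compact the surviving bits into a small bucket index.
    bucket = 0
    bit_pos = 0
    for i in range(32):
        if mask >> i & 1:
            bucket |= (value >> i & 1) << bit_pos
            bit_pos += 1
    return bucket

# 0x1741 has 6 bits set, so it is a legal mask; buckets range over 2**6 values.
print(mask_bucket("10.2.0.10", 0x1741))
```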
From Internet-Draft WCCP version 2 (http://www.wrec.org/Drafts/draft-wilson-wrec-wccp-v2-00.txt):
Note that in all of the mask fields of this element a zero means "Don't care."
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                      Source Address Mask                      |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                    Destination Address Mask                   |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|        Source Port Mask       |     Destination Port Mask     |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Source Address Mask. The 32-bit mask to be applied to the source IP address of the packet.
Destination Address Mask. The 32-bit mask to be applied to the destination IP address of
the packet.
Source Port Mask. The 16-bit mask to be applied to the TCP/UDP source port field of the
packet.
Destination Port Mask. The 16-bit mask to be applied to the TCP/UDP destination port
field of the packet.
It may not be obvious from the diagram, but there is a priority bit order when using Mask
assignment. The diagram reads from most significant to least significant, bottom left to top. In
other words, the priority order of the bits is: source port, destination port, destination address,
and source address. This is helpful to know when troubleshooting which bucket a specific
resource is allocated to.
For more information regarding Hash or Mask assignment, refer to the Steelhead Appliance
Deployment Guide and the whitepaper WCCP Mask Assignment provided on the Riverbed
Partner Portal and/or Riverbed Technical Support site.
Look for WCCP status messages near the end of the output.
You can trace WCCP packets and events on the router.
Checking the Access List Configuration
On the router, at the system prompt, enter the following set of commands:
Router>en
Router#show access-lists <access_list_number>
[Figure: server-side out-of-path deployment. The client-side Steelhead appliance (LAN and WAN interfaces) connects across the WAN to the Primary (PRI) interface of the server-side Steelhead appliance; a fixed-target rule directs the connection, and the packet source IP becomes that of the server-side Steelhead (IP SRC=S-SH).]
A fixed-target rule is applied on the client-side Steelhead appliance to make sure the TCP session
is intercepted and statically sent to the out-of-path Steelhead appliance on the server side. When
enabling out-of-path on the server-side Steelhead appliance, it starts listening on port 7810 for
incoming connections from a client-side Steelhead appliance.
The Steelhead appliance can perform NAT. The server will see the IP address of the Steelhead
appliance as the source of the connection so the packets are returned to the Steelhead appliance
instead of the client. This is necessary to make sure that the bidirectional traffic is seen by the
Steelhead appliance. Also keep in mind that optimization will only occur when the TCP
connection is initiated by the client.
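The source-NAT step can be sketched as follows (a hypothetical packet representation; only the source IP changes):

```python
def rewrite_for_oop(packet: dict, primary_ip: str) -> dict:
    """Source-NAT a client packet so the server replies to the Steelhead.

    primary_ip is the Primary interface address of the out-of-path
    server-side Steelhead appliance.
    """
    natted = dict(packet)
    natted["src_ip"] = primary_ip  # the server now sees the Steelhead as the source
    return natted

pkt = {"src_ip": "10.1.0.10", "dst_ip": "10.2.0.10", "dst_port": 80}
print(rewrite_for_oop(pkt, "10.2.0.2")["src_ip"])  # 10.2.0.2
```

Because the server addresses its replies to the Steelhead appliance, the return traffic is guaranteed to pass back through it, which is what makes bidirectional optimization possible without in-path placement.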
Out-of-Path, Failover Deployment
An out-of-path, failover deployment serves networks where an in-path deployment is not an
option. This deployment is cost effective, simple to manage, and provides redundancy.
When both Steelhead appliances are functioning properly, the connections traverse the master
appliance. If the master Steelhead appliance fails, subsequent connections traverse the backup
Steelhead appliance. When the master Steelhead appliance is restored, the next connection
traverses the master Steelhead appliance. If both Steelhead appliances fail, the connection is
passed through unoptimized to the server. The way to do this is to specify multiple target
appliances in the fixed-target in-path rule on the client-side Steelhead appliance.
[Figure: hybrid design. Steelhead A and Steelhead B sit in-path between a LAN switch (with a server and clients) and the WAN router. A separate DMZ behind a Firewall/VPN contains an FTP server and a web server, served by an out-of-path Steelhead appliance connected via its Primary (PRI) interface.]
In this hybrid design, a client-side Steelhead appliance (not shown) would use the typical
auto-discovery process to optimize any data going to or coming from the clients shown. If,
however, a remote user would like to get optimization to the DMZ shown above, the standard
auto-discovery process would not function properly, since the packet flow would prevent the
auto-discovery probe from ever reaching the Steelhead appliance. To remedy this, a fixed-target
rule matching the destination address of the DMZ and targeted at the Primary (PRI) interface of
the Steelhead appliance above will ensure that the traffic reaches the Steelhead appliance and,
due to the server-side out-of-path NAT process, that it returns to the Steelhead appliance for
optimization on the return path.
dropped because during the detection process the Steelhead appliances have no way of knowing
that the connection is asymmetric.
If asymmetric routing is detected, an entry is placed in the asymmetric routing table and any
subsequent connections from that IP address pair will be passed through unoptimized. Further
connections between these hosts are not optimized until that particular asymmetric routing cache
entry times out.
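The cache behavior described above might be sketched like this; the class name, structure, and timeout value are hypothetical, illustrating only the pass-through rule, not the actual RiOS implementation:

```python
import time

# Toy sketch of the asymmetric-routing cache behavior described above.
# Class name, structure, and timeout value are hypothetical.

class AsymmetryCache:
    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.entries = {}                 # (client_ip, server_ip) -> detection time

    def mark_asymmetric(self, client_ip, server_ip):
        """Record that asymmetry was detected for this IP address pair."""
        self.entries[(client_ip, server_ip)] = time.monotonic()

    def should_pass_through(self, client_ip, server_ip):
        """New connections between a flagged IP pair are passed through
        unoptimized until the cache entry times out."""
        detected = self.entries.get((client_ip, server_ip))
        if detected is None:
            return False
        if time.monotonic() - detected > self.timeout_s:
            del self.entries[(client_ip, server_ip)]   # entry expired
            return False
        return True

cache = AsymmetryCache(timeout_s=86400)               # hypothetical timeout
cache.mark_asymmetric("10.0.0.5", "192.168.1.9")
print(cache.should_pass_through("10.0.0.5", "192.168.1.9"))  # True
print(cache.should_pass_through("10.0.0.5", "192.168.1.7"))  # False
```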
The asymmetric routing table classifies each entry by type:
Complete Asymmetry
Server-side Asymmetry
Client-side Asymmetry
Multi-SYN Retransmit
Connection Forwarding
In asymmetric networks, a client request traverses a different network path from the server
response. Although the packets traverse different paths, to optimize a connection, packets
traveling in both directions must pass through the same client and server Steelhead appliances.
If you have one path (through Steelhead-2) from the client to the server and a different path
(through Steelhead-3) from the server to the client, you need to enable in-path Connection
Forwarding and configure the Steelhead appliances to communicate with each other. These
Steelhead appliances are called neighbors and exchange connection information to redirect
packets to each other.
2007-2009 Riverbed Technology, Inc. All rights reserved.
You can configure multiple neighbors for a Steelhead appliance. Neighbors can be placed in the
same physical site or in different sites, but the latency between them should be small because the
packets traveling between them are not optimized.
When a SYN arrives on Steelhead-2, it sends a message on port 7850 to Steelhead-3, telling it to expect packets for that connection. Steelhead-3 acknowledges, and once Steelhead-2 receives this confirmation it continues with the SYN+ out to the WAN. When the SYN/ACK+ comes back, if it arrives at Steelhead-3, Steelhead-3 encapsulates the packet and forwards it back to Steelhead-2. Once the connection has been established, there is no more encapsulation between the two Steelhead appliances for that flow.
If a subsequent packet arrives on Steelhead-3, it performs a destination IP/port rewrite: the Steelhead appliance simply changes the destination IP of the packet to that of its neighbor Steelhead appliance. No encapsulation is involved later in the flow.
In WCCP deployments, Connection Forwarding can also be used to prevent outages whenever the cluster and the redirection table change. By default, when a neighbor is lost, the Steelhead appliance that lost the neighbor also passes through the connection, because it assumes asymmetric routing of traffic. In WCCP deployments this assumption does not hold, and this behavior must be avoided. The command in-path neighbor allow-failure overrides the default behavior and allows the Steelhead appliances to continue optimizing. It is recommended that you understand the implications of this command before applying it in a production environment.
Commands to enable Connection Forwarding:
in-path neighbor enable
in-path neighbor ip address <addr> [port <port>]
in-path neighbor allow-failure (optional)
For neighbors with multiple in-path interfaces, only the IP address of the first in-path interface needs to be specified.
corresponding Steelhead appliance doing the optimization. With source enabled or all, the logical IP address being used by the router is not bound to a physical interface or MAC address. By using all or source-based simplified routing (SR), the MAC address of the actual IP is learned by the Steelhead appliances, which can cause confusion about the route the packet takes.
If you have not deployed data store synchronization it is also possible to manually send the data
from one Steelhead appliance to another. The receiving Steelhead appliance will have to start a
listening process on the Primary/AUX interface. The sending Steelhead appliance will have to
push the data to the IP address of the Primary interface.
Something to note about Primary and AUX interfaces: if the connection is created from the
Steelhead appliance to some external machine (non-Steelhead device), traffic will only go out the
Primary or AUX interfaces. Therefore, TACACS+ and RADIUS will only go out the Primary or
AUX interface since they originated from the Steelhead appliance.
The commands to start this are:
HOSTNAME (config) # datastore receive port <port>
HOSTNAME (config) # datastore send addr <addr> port <port>
CIFS Prepopulation
The prepopulation operation effectively performs the first Steelhead read of the data on the
prepopulation share. Subsequently, the Steelhead appliance handles read and write requests as
effectively as with a warm data transfer. With warm transfers, only new or modified data is sent,
dramatically increasing the rate of data transfer over the WAN.
The order in which authentication is attempted is based on the order specified in the AAA
method list. The local value must always be specified in the method list.
The authentication methods list provides backup methods should a method fail to authenticate a user. If a method denies a user or is not reachable, the next method in the list is tried. If multiple servers are configured within a method (assuming the method contacts authentication servers) and a server time-out is encountered, the next server in the list is tried. If the server currently being contacted issues an authentication reject, no other servers for that method are tried and the next authentication method in the list is attempted. If no method validates a user, the user is not allowed access to the box.
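The evaluation order described above can be sketched as follows; the method and server structures are hypothetical, purely illustrative of the fallback order, not the actual RiOS implementation:

```python
# Sketch of AAA method-list evaluation (illustrative only; the method and
# server structures are hypothetical, not the RiOS implementation).

def authenticate(methods, user):
    """Try each method in order; within a method, try each server.

    A server timeout moves on to the next server in that method.
    An explicit reject skips the remaining servers and moves on to
    the next method. If no method validates the user, access is denied.
    """
    for method in methods:
        for server in method["servers"]:
            result = server(user)          # "accept", "reject", or "timeout"
            if result == "accept":
                return True
            if result == "reject":
                break                      # skip remaining servers in this method
            # "timeout": try the next server in this method
    return False                           # no method validated the user

# Example: the first RADIUS server times out, the second rejects, so the
# local method is consulted next and accepts the user.
radius = {"servers": [lambda u: "timeout", lambda u: "reject"]}
local = {"servers": [lambda u: "accept"]}
print(authenticate([radius, local], "admin"))  # True
```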
The Steelhead appliance does not have the ability to set a per interface authentication policy. The
same default authentication method list is used for all interfaces. You cannot configure
authentication methods with subsets of the RADIUS or TACACS+ servers specified (that is,
there are no server groups).
When configuring the authentication server, it is important to specify the service rbt-exec along with the appropriate custom attributes for authorization. Authorization can be based on either the admin account or the monitor user account by using local-user-name=admin or local-user-name=monitor, respectively.
Refer to the CLI Guide for the available RADIUS and TACACS+ authentication commands.
SSL
With Riverbed SSL, Steelhead appliances are configured to have a trust relationship so they can
exchange information securely over an SSL connection. SSL clients and servers communicate
with each other exactly as they do without Steelhead appliances; no changes are required for the
client and server application, nor are they for the configuration of proxies. Riverbed splits up the
SSL handshake, the sequence of message exchanges at the start of an SSL connection. This is
called split termination.
In an ordinary SSL handshake, the client and server first establish identity using public-key
cryptography, then negotiate a symmetric session key to be used for data transfer. With Riverbed
SSL acceleration, the initial SSL message exchanges take place between the client and the
server-side Steelhead appliance. Then the server-side Steelhead appliance sets up a connection to
the server, to ensure that the service requested by the client is available. In the last part of the
handshake sequence, a Steelhead-to-Steelhead process ensures that both appliances (client-side
and server-side) know the session key.
The client SSL connection logically terminates at the server but physically terminates at the client-side Steelhead appliance, just as is true for logical versus physical unencrypted TCP connections. And just as the Steelhead-to-Steelhead TCP connection over the WAN may use a
better TCP implementation than the ones used by client or server, the Steelhead-to-Steelhead
connection may be configured to use better ciphers and protocols than the client and server
would normally use.
The Steelhead appliance also contains a secure vault which stores all SSL server settings, other
certificates (that is, the CA, peering trusts, and peering certificates), and the peering private key.
The secure vault protects your SSL private keys and certificates when the Steelhead appliance is
not powered on. You set a password for the secure vault which is used to unlock it when the
Steelhead appliance is powered on. After rebooting the Steelhead appliance, SSL traffic is not
optimized until the secure vault is unlocked with the correct password.
Refer to the Steelhead Appliance Management Console User's Guide for information on configuring SSL.
Monitoring. The CMC provides both high-level status and detailed statistics of the
performance of Steelhead appliances and enables you to configure event notification for
managed Steelhead appliances.
Management. The CMC enables you to start, stop, restart, and reboot remote Steelhead
appliances. You can also schedule jobs to send software upgrades and configuration changes
to remote appliances or to collect logs from remote Steelhead appliances.
Optimization Policy. Use optimization policies to manage optimization features such as the
data store, in-path rules, and SSL settings, in addition to many others.
Security Policy. Use security policies to manage appliances in which security is a key
component.
System Settings Policy. Use system settings policies to organize and manage system setting
features such as alarms, announcements, email notifications, log settings, and others.
Each policy type is made up of particular RiOS features. For example, system settings policies
contain feature sets for common system administration settings such as alarm settings,
announcements, email notification settings, among others, while security policies contain feature
sets for encryption, authentication methods, and user permissions.
Each group or Steelhead appliance can be assigned one of each type of policy. Because the
Global group serves as the root group, or parent, to all subsequent groups and appliances, any
policies assigned to the Global group provide the default values for all groups and Steelhead
appliances.
The Override Parent feature can override the inheritance of values from the policies applied to
the parent group. It is off by default.
Steelhead Mobile Solution (Steelhead Mobile Controller & Steelhead Mobile Client)
The Steelhead Mobile Controller (SMC) enables centralized management of Steelhead Mobile
clients that deliver wide area data services for the entire mobile workforce.
The Steelhead Mobile software is deployed to laptops or desktops for mobile workers, home
users, and small branch office users. A Steelhead Mobile Controller, located in the data center, is
required for Steelhead Mobile deployment, management, and licensing control. Once deployed
and connected, the Steelhead Mobile clients connect directly with a Steelhead appliance in order to accelerate data and applications.
The Mobile Controller facilitates the essential administration tasks for your Mobile Clients:
Configuration. The Mobile Controller enables you to install, configure, and update Mobile
Clients in groups. The Mobile Controller utilizes Endpoint policies, Acceleration policies,
MSI packages, and deployment IDs (DIDs) to facilitate centralized configuration and
reporting.
Monitoring. The Mobile Controller provides both high-level status and detailed statistics on
Mobile Client performance, and enables you to configure alerts for managed Mobile Clients.
Management. The Mobile Controller enables you to schedule software upgrades and
configuration changes to groups of Mobile Clients, or to collect logs from Mobile Clients.
Endpoint Policies
Endpoint policies are used as configuration templates to configure groups of Mobile Clients that
have the same configuration requirements. For example, you might use the default endpoint
policy for the majority of your Mobile Clients and create another one for a group of users who
need to connect to a different Mobile Controller.
Acceleration Policies
Acceleration policies are used as configuration templates to configure groups of Mobile Clients
that have the same performance requirements. For example, you might use the default
acceleration policy for the majority of your Mobile Clients and create another acceleration policy
for a group of Mobile Clients that need to pass through a specific type of traffic.
Mobile Clients must have both an endpoint policy and an acceleration policy for optimization to
occur.
MSI Packages
You use Microsoft Software Installer (MSI) packages to install and update the Steelhead Mobile
Client software on each of your endpoint clients. The MSI package contains information
necessary for Mobile Clients to communicate with the Mobile Controller.
Deployment IDs
The Mobile Controller utilizes deployment IDs to link Endpoint and Acceleration policies to
your Mobile Clients. The DID governs which policies and MSI packages the Mobile Controller
provides to your clients. You can define the DIDs when you create MSI packages. You then
assign policies to the DIDs. When you deploy an MSI package, the DID becomes associated
with the endpoint client. The Mobile Controller subsequently uses the DID to identify the client
and automatically provide policy and software updates.
Firewall Requirements
If you deploy the Mobile Controller in the DMZ next to a VPN concentrator with firewalls on
each side, the client-side network firewall must have port 7801 available. The server-side
firewall must have ports 22, 80, 443, 7800, and 7870 open. If you are using application control,
you need to allow rbtdebug.exe, rbtmon.exe, rbtsport.exe, and shmobile.exe.
Please see the Steelhead Mobile Controller User's Guide for more information.
Interceptor Appliance
The Interceptor appliance extends the performance capabilities of Steelhead appliances to meet
the requirements of very large data center environments. Working with Steelhead appliances, an
Interceptor appliance can support up to 1,000,000 concurrent connections, running up to 4
gigabits per second.
Interceptor Deployment Terminology
Peer Neighbors. Steelhead 1, Steelhead 2, Steelhead 3, and Steelhead 4 are the pool of
LAN-side Steelhead appliances that are load balanced by the Interceptor appliances. In
relation to the Interceptor appliances, these Steelhead appliances are called peer neighbors.
Peer Interceptor Appliances. Interceptor 1 and Interceptor 3 are peers to each other,
connected virtually, in parallel.
Failover Buddies. Interceptor 1 and Interceptor 2 are failover buddies to each other,
connected with cables, in serial. If either Interceptor appliance goes down or requires
maintenance, its buddy handles redirection for its connections.
In-Path Rules
When the Interceptor appliance intercepts a SYN request to a server, the in-path rules you configure determine the subnets and ports for traffic that will be optimized. You can specify in-path rules to pass through, discard, or deny traffic, or to redirect and optimize it.
In the case of a data center, the Interceptor appliance intercepts SYN requests when a data center
server establishes a connection with a client that resides outside the data center.
In the connection-processing decision tree, in-path rules are processed before load-balancing
rules. Only traffic selected for redirection proceeds to load balancing rules processing.
Load Balancing
For connections selected by an in-path redirect rule, the Interceptor appliance distributes the
connection to the most appropriate Steelhead appliance based on rules you configure,
intelligence from monitoring peer neighbor Steelhead appliances, and the Riverbed connection
distribution algorithm.
Failover
You can configure a pair of Interceptor appliances as failover buddies. In the event one
Interceptor appliance goes down or requires maintenance, the failover buddy ensures
uninterrupted service.
Peer Interceptor Monitoring
Peer Interceptor appliances include both failover buddies deployed in serial configuration and
Interceptor appliances deployed in a parallel configuration to handle asymmetric routes.
Asymmetric routing can cause the response from the server to be routed along a different
physical network path from the original request, and a different Steelhead appliance may be on
each of these paths. When you deploy peer Interceptor appliances in parallel, the first Interceptor
appliance that receives a packet delays forwarding it. It requests that the other Interceptor
appliances redirect packets for the connection to it. When the other Interceptor appliances have
confirmed that they have received and accepted this request, the first Interceptor appliance
begins to redirect the connection.
Peer Neighbor Monitoring
Peer neighbor Steelhead appliances are the pool of Steelhead appliances for which the
Interceptor appliance monitors capacity and balances load. To assist in deployment tuning and
troubleshooting, you can monitor the state of neighbor Steelhead appliances.
Link State Detection and Link State Propagation
The Interceptor appliance monitors the link state of devices in its path, including routers,
switches, interfaces, and In-path interfaces. When the link state changes (for example, the link
goes down or it resumes), the Interceptor appliance propagates the change to the dynamic routing
table. Link state propagation ensures accurate and timely triggers for failover or redundancy
scenarios.
EtherChannel Deployment
The Interceptor appliance can operate within an EtherChannel. In an EtherChannel deployment,
all of the links in the channel must pass through the same Interceptor appliance.
VLAN Tagging
The Interceptor appliance supports VLAN-tagged connections on VLAN trunk links, using the 802.1Q standard.
III. Features
Feature Licensing
Certain features on Steelhead appliances require a license for operation. Licenses for all features, including platform-specific licenses, are included with the purchase of a Steelhead appliance, apart from the SSL license, which you must request separately. These licenses are factory installed; however, licenses can also be installed by the user via the CLI or Management Console.
Licenses must be installed for the base system to function, as well as for application acceleration for CIFS and MAPI. These are the Scalable Data Referencing license (base), the Windows File Servers license (CIFS), and the Microsoft Exchange (EXCH) license. Additional licensed features that are automatically included upon activating the base license, and do not require a separate license key, are the Microsoft SQL optimization module and the NFS optimization module.
All licensed features, with the exception of the Microsoft SQL optimization module, are enabled by default.
The problem of growing the Cwnd size at the rate prescribed by standard TCP is further compounded by the fact that a packet-loss event causes TCP to back off by reducing the current Cwnd size by half. This reduction is vital in allowing TCP to play nicely with other sessions sharing link bandwidth; however, in the case of high-BDP links, the time to recover from such a loss event at standard Cwnd growth rates would represent a very ineffective use of the available bandwidth.
For example, for a standard TCP connection with 1500-byte packets and a 100ms round-trip
time, achieving a steady-state throughput of 10Gbps would require an average congestion
window of 83,333 segments, and a packet drop rate of at most one congestion event every
5,000,000,000 packets (or equivalently, at most one congestion event every 1 2/3 hours). Clearly
this is not a likely possibility in real world networks, and is the basis for which HSTCP was
developed. HSTCP solves problems related to the rate at which to grow the Cwnd, as well as
how to respond when loss events occur and the Cwnd needs to be reduced. Further information
as to how this is achieved is explained in the RFCs referenced above.
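The figures in this example follow from simple bandwidth-delay arithmetic, which can be checked as follows (a back-of-the-envelope check, not an HSTCP implementation):

```python
# Rough arithmetic behind the standard-TCP example above: the average
# cwnd needed for 10 Gbps at 100 ms RTT with 1500-byte packets, and the
# implied time between congestion events.

bandwidth_bps = 10e9      # 10 Gbps target throughput
rtt_s = 0.100             # 100 ms round-trip time
packet_bytes = 1500

# Average congestion window (in segments) = bandwidth * RTT / packet size
cwnd_segments = bandwidth_bps * rtt_s / (packet_bytes * 8)
print(round(cwnd_segments))             # 83333 segments

# At most one congestion event per 5,000,000,000 packets; at 10 Gbps that
# is one event every ~6000 seconds, i.e. about 1 2/3 hours.
packets_per_s = bandwidth_bps / (packet_bytes * 8)
hours_between_losses = 5_000_000_000 / packets_per_s / 3600
print(round(hours_between_losses, 2))   # 1.67 hours
```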
The following table and graph show how a Long Fat Network (an OC-12 link) can be filled.
Test Scenario            Bandwidth   RTT Latency   Throughput
Baseline                 622 Mbps    15 ms         36 Mbps
With Steelhead (HSTCP)   622 Mbps    15 ms         600+ Mbps
Baseline                 622 Mbps    100 ms        5 Mbps
With Steelhead (HSTCP)   622 Mbps    100 ms        600+ Mbps
[Graph: throughput in Mbps (0-700) versus time in seconds (0-673), comparing a standard HSTCP connection with a Steelhead-optimized connection, both at 15 ms RTT; the Steelhead curve reaches 600+ Mbps.]
In order to achieve the maximum throughput possible for a given link with TCP, it is important
to set the send and receive buffers to a proper size. Using buffers that are too small may not
allow the Cwnd to fully open, while using buffers that are too large may overrun the receiver and
break the flow control process. When configuring the send and receive WAN buffers on a
Steelhead, it is recommended that they be set to two times the Bandwidth Delay Product.
As an example, a 45 Mb/s point-to-point connection with 100 ms of latency should have a buffer size of 1,125,000 bytes set on the WAN send (for the sending Steelhead), and the same number on the receive side of the WAN interface on the receiving Steelhead ((45,000,000 bits / 8 * 0.1 s) * 2). For a point-to-point connection such as this one, the send and receive buffers would typically be the same value.
Additionally, it is recommended that buffers on WAN routers be set to accommodate the packet influx by allocating at least one times the BDP worth of packets. As an example, considering the case of the 45 Mb/s connection above with 100 ms of latency, and given a packet size of 1500 bytes, the buffer needs to be at least 375 packets deep [(45,000,000 / 8 * 0.1) / 1500].
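The two calculations above can be restated as a short script (same example numbers, just automated):

```python
# Buffer sizing from the 45 Mb/s, 100 ms example above.

link_bps = 45_000_000     # link bandwidth in bits per second
rtt_s = 0.100             # round-trip latency in seconds
packet_bytes = 1500

# Bandwidth Delay Product in bytes
bdp_bytes = link_bps / 8 * rtt_s
print(int(bdp_bytes))                # 562500

# Recommended Steelhead WAN send/receive buffer: 2 x BDP
wan_buffer_bytes = 2 * bdp_bytes
print(int(wan_buffer_bytes))         # 1125000

# Recommended router interface buffer depth: at least 1 x BDP in packets
router_buffer_packets = bdp_bytes / packet_bytes
print(int(router_buffer_packets))    # 375
```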
MX-TCP
MX-TCP optimizes high-loss links where regular TCP would cause underutilization. With MX-TCP, the TCP congestion control algorithm is removed on the inner connections. This allows the link to be saturated much faster and eliminates the possibility of underutilizing the link. Any class that is defined on the Steelhead appliance can be MX-TCP enabled.
You can use MX-TCP to achieve high throughput rates even when the physical medium carrying
the traffic has high loss rates. For example, a common usage of MX-TCP is for ensuring high
throughput on satellite connections where no lower layer loss recovery technique is in use.
Another usage of MX-TCP is to achieve high throughput over high-bandwidth, high-latency
links, especially when intermediate routers do not have properly tuned interface buffers.
Improperly tuned router buffers cause TCP to perceive congestion in the network, resulting in
unnecessarily dropped packets, even when the network can support high throughput rates.
Quality of Service
QoS Concepts
You can configure QoS on Steelhead appliances to control the prioritization of different types of
network traffic and to ensure that Steelhead appliances give certain network traffic (for instance,
VoIP) higher priority than other network traffic.
RiOS 5.0 provides two types of QoS structures: Flat and Hierarchical.
Flat QoS. All classes are created at the same level. When all classes are on the same level,
the types of QoS policies that can be represented are limited.
Hierarchical QoS (H-QoS). Provides a way to create a hierarchical QoS structure that
supports parent and child classes. You can use a parent/child structure to segregate traffic for
remote offices based on flow source or destination. This is a way to effectively manage and
support remote sites with different bandwidth characteristics.
QoS allows you to specify priorities for various classes of traffic and properly distributes excess
bandwidth among classes.
NOTE: QoS enforcement is available only in physical in-path deployments.
Steelhead appliances use HFSC (Hierarchical Fair Service Curve) QoS operations to
simultaneously control bandwidth and latency for each QoS class. For each class, you can set a:
Priority level
Minimum guaranteed bandwidth level, which specifies the minimum amount of
bandwidth a QoS class is guaranteed to receive when there is bandwidth contention. If
unused bandwidth is available, a QoS class receives more than its minimum guaranteed
bandwidth level. The percentage of excess bandwidth each QoS class receives is relative to
the percentage of minimum guaranteed bandwidth it has been allocated. The total minimum
guaranteed bandwidth level of all QoS classes must be less than or equal to 100%.
Upper bandwidth level, which specifies the maximum amount of bandwidth a QoS class is
allowed to use, regardless of the available excess bandwidth.
Connection limit, which specifies a maximum number of connections the specified QoS
class will optimize. Connections over this limit are passed-through.
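As a rough illustration of the minimum-guarantee and excess-sharing rules above, excess bandwidth can be thought of as being split in proportion to each class's guaranteed percentage (a simplification for study purposes, not the actual HFSC scheduler):

```python
# Illustrative split of excess bandwidth in proportion to each class's
# minimum guarantee (a simplification, not the actual HFSC algorithm).

def share_bandwidth(link_kbps, guarantees_pct):
    """guarantees_pct maps class name -> minimum guaranteed percent.

    Each class gets its guarantee plus a share of any unreserved
    bandwidth proportional to its guaranteed percentage.
    """
    total_pct = sum(guarantees_pct.values())
    assert total_pct <= 100, "guarantees must not exceed 100%"
    excess = link_kbps * (100 - total_pct) / 100
    return {
        name: link_kbps * pct / 100 + excess * pct / total_pct
        for name, pct in guarantees_pct.items()
    }

# 1000 kbps link: 20% reserved for VoIP, 30% for Citrix; the remaining
# 50% is shared 20:30 between them when both classes are busy.
print(share_bandwidth(1000, {"voip": 20, "citrix": 30}))
# {'voip': 400.0, 'citrix': 600.0}
```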
Once you have defined a QoS class, you can create one or more QoS rules to apply traffic to it.
QoS rules define source subnet or port, destination subnet or port, protocol, traffic type, and
VLAN and DSCP filters for a QoS class.
IMPORTANT: Familiarity with QoS classes and rules from the CLI is required for the exam.
About QoS Class Priorities
There are five QoS priorities for Steelhead appliances. You assign a class priority when you
create a QoS class. Once you have created a QoS class, you can modify its class priority. In
descending order, class priorities are:
Realtime
Interactive
Business Critical
Normal Priority
Low Priority
Priority levels are minimum priority guarantees. If higher priority service is available, a QoS
class will receive it even if the class has been assigned a lower priority level. For example, if a
QoS class is assigned the priority level Low Priority, and QoS classes that are assigned higher
priority levels are not active, the low priority QoS class adjusts to the highest possible priority
for the current traffic patterns.
Maximum Allowable QoS Classes and Rules
The number of QoS classes and rules you can create on a Steelhead appliance depends on the
appliance model number.
Steelhead Appliance Model   Maximum Classes   Maximum Rules
—                           20                60
5x0, 1xx0                   60                180
2xx0                        80                240
3xx0                        120               360
—                           200               600
Service Ports
Service ports are the ports used for inner connections between Steelhead appliances. You can
configure multiple service ports on the server-side of the network for multiple QoS mappings.
You define a new service port and then map destination ports to that port, so that QoS
configuration settings on the router are applied to that service port. The default service ports are
7800 and 7810.
Riverbed QoS Implementation
Steelhead appliances make use of the HFSC QoS scheduling algorithm. Most traditional algorithms let you define either the priority of a packet or the amount of bandwidth that should be allocated to specific packet types (priority queuing, custom queuing). These methods suffer from problems such as starvation of lower-priority queues, or they do not allow low-bandwidth queues carrying latency-sensitive traffic to leave the device sooner than larger packets with more bandwidth allocated to them. Newer scheduling methods allow for a blend: a priority queue for latency-sensitive traffic, while other traffic is placed into a general-purpose queue with bandwidth allocations specified by the administrator for each traffic type (LLQ, or Low Latency Queuing, uses this method).
Problems with having a single priority queue, or even multiple priority queues (of the same priority, as is the case with LLQ), stem from the fact that today most networks carry traffic types that cannot be classified with such a binary system (priority queue or general queue). VoIP traffic, which is typically very latency sensitive, should clearly be placed in a high-priority queue. However, traffic such as stock quotes, video conferencing, and remote PC control (for example, Remote Desktop Protocol or PC Anywhere) is also latency sensitive, and placing it into either the same priority queue or a separate priority queue with a different bandwidth allocation still causes the same problem: two or more queues of the same priority will give latency preference to packets in the queue that has more bandwidth allocated to it. As an example, consider a case of LLQ where two priority queues are created, one for voice traffic and one for video traffic. The voice queue is allocated 10% of the bandwidth, and the video queue, which is also latency sensitive, is allocated 40% of the bandwidth. Since the router has no ability to recognize that the small voice packets should generally be allowed out before the larger video packets (up to the bandwidth limit), small voice packets may get stuck behind several larger video packets despite not fully utilizing their 10% bandwidth allocation.
HFSC solves these problems by logically separating the latency element of queuing from the bandwidth element. As such, you can define multiple queues, each with a different priority relative to the other queues, and be assured that despite more bandwidth being allocated to lower queues, the higher queues will still be serviced preferentially from a latency perspective, up to the amount of bandwidth specified for each queue. Steelhead appliances implement five queues, starting at Realtime (the highest latency priority) and ending with Low Priority, with each successive queue having lower latency priority than the one before it. The strategy imposed by HFSC lends itself particularly well to bursty traffic, as is the case with most networks.
Enforcing QoS for Active/Passive FTP
Active/Passive FTP Operation
To configure optimization policies for the FTP data channel, define an in-path rule with the
destination port 20 and set its optimization policy. Setting QoS for destination port 20 on the
client-side Steelhead appliance affects passive FTP, while setting the QoS for destination port 20
on the server-side Steelhead appliance affects active FTP.
In the case of an active FTP session, data connections originate on the server, sourced from port 20 and destined to a random port specified by the client. As such, specifying a QoS rule on the server-side Steelhead with a destination port of 20 is appropriate. With passive FTP, however, data connections initiate on the client from a random port and are destined to a random port on the server; as such, there is seemingly no simple way to apply a QoS rule based on the Layer 4 port information. To help solve this problem, the Steelhead allows you to define a client-side QoS rule with a destination port of 20 to indicate that you would like to apply this QoS rule to passive FTP data connections. The Steelhead intelligently identifies the actual ports used for the passive FTP data transfer and applies the QoS logic set forth by the class where the rule has been applied.
Converting between DSCP, IP Precedence, ToS
For the RCSP exam, you are expected to know how to convert various packet marking types.
This is important because the Steelhead appliances only understand DSCP (Differentiated
Services Code Point) values, while other network devices may support a different method of
marking or matching traffic. Various methods of converting to and from DSCP values are
defined by RFC 2474.
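For reference, the bit relationships can be sketched as follows; this is a study aid based on RFC 2474's field layout, not a Steelhead feature:

```python
# Bit-shift relationships between the ToS byte, DSCP, and IP Precedence.
# Per RFC 2474, DSCP occupies the top six bits of the old ToS octet, and
# IP Precedence the top three.

def tos_to_dscp(tos):
    return tos >> 2            # drop the two low-order (ECN/unused) bits

def dscp_to_tos(dscp):
    return dscp << 2

def dscp_to_precedence(dscp):
    return dscp >> 3           # top three bits of the DSCP field

# Example: Expedited Forwarding (EF) is DSCP 46, ToS byte 184,
# IP Precedence 5 (Critical).
print(tos_to_dscp(184))        # 46
print(dscp_to_tos(46))         # 184
print(dscp_to_precedence(46))  # 5
```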
Interpreting and Converting Common Router Policies
In addition to being able to convert to and from DSCP values for proper marking and matching
between Steelhead appliances and other network nodes on the RCSP exam, understanding how
to convert simple QoS configurations from Cisco and other popular routing platforms is required.
Generally, some familiarity with QoS configuration on routers and an understanding of how
Steelhead appliances implement QoS (see Riverbed QoS Implementation section) should make
the process of converting configurations a simpler task.
PFS Term
Description
Origin Server
The server located in the data center that hosts the origin data volumes.
IMPORTANT: The PFS share and the origin-server share name cannot contain Unicode characters. The Management Console does not support Unicode characters.
Domain Mode
In Domain mode you join the Windows domain of which the Steelhead appliance will be a member. Typically, this is the same domain as your company's domain.
Domain Controller
Specifies the domain controller name, the host that provides user login service in the domain. (Typically, with Windows Active Directory Service domains, given a domain name, the system automatically retrieves the domain controller name.)
Global Share
The data volume exported from the origin server to the remote Steelhead appliance.
Local Name
The name that you assign to a share on the Steelhead appliance. This is the name by which users identify and map a share.
Remote Path
The path to the data on the origin server, or the Universal Naming Convention (UNC) path of a share that you want to make available to PFS.
Share Synchronization
Continuous access to files in the event of WAN disruption. PFS provides support for
disconnected operations. In the event of a network disruption that prevents access over the
WAN to the origin server, files can still be accessed on the local Steelhead appliance.
Simple branch infrastructure and backup architectures. PFS consolidates file servers and
local tape backup from the branch into the data center. PFS enables a reduction in number
and size of backup windows running in complex backup architectures.
Automatic content distribution. PFS provides a means for automatically distributing new
and changed content throughout a network.
If any of these advantages can benefit your environment, then enabling PFS in the Steelhead
appliance is appropriate. However, PFS requires pre-identification of files and is not appropriate
in environments in which there is concurrent read-write access to data from multiple sites.
Pre-identification of PFS files. PFS requires that files accessed over the WAN are identified
in advance. If the data set accessed by the remote users is larger than the specified capacity of
your Steelhead appliance model or if it cannot be identified in advance, then you should have
end-users access the origin server directly through the Steelhead appliance without PFS.
(This configuration is also known as Global mode.)
Concurrent read-write data access from multiple sites. In a network environment where
users from multiple branch offices update a common set of centralized files and records over
the WAN, the Steelhead appliance without PFS is the most appropriate solution because file
locking is directed between the client and the server. The Steelhead appliance always
consults the origin server in response to a client request; it never provides a proxy response
or data from its data store without consulting the origin server.
The PFS Steelhead appliance must run the same version of the Steelhead appliance software
as the server-side Steelhead appliance.
PFS traffic to and from the Steelhead appliance travels through the Primary interface. PFS
requires that traffic originating from the Primary interface flow through both Steelhead
appliances. For physical in-path deployments, the traffic from the Primary interface must
flow through the LAN interface of the same Steelhead appliance. For logical in-path
deployments, this traffic must be redirected to the same Steelhead appliance.
The PFS share and origin-server share names cannot contain Unicode characters because the
Management Console does not support Unicode characters.
Enabling PFS does not reduce the amount of data store allocated for the SDR process performed
by a Steelhead appliance.
Version 2 vs Version 3 Setup
Version 2. Specify the server name and remote path for the share folder on the origin file server.
With v2.x, you must have the RCU service running on a Windows server; this can be
the origin file server or a separate server.
Riverbed recommends you upgrade your v2.x shares to 3.x shares so that you do not have to run
the RCU on a server.
Version 3. Specify the login, password, and remote path used to access the share folder on the
origin file server. With v3.x, the RCU runs on the Steelhead appliance; you do not need to
install the RCU service on a Windows server.
Upgrading V2.x PFS Shares
By default, when you configure PFS shares with Steelhead appliance software versions 3.x and
higher, you create v3.x PFS shares. PFS shares configured with Steelhead appliance software
v2.x are v2.x shares. V2.x shares are not upgraded when you upgrade Steelhead appliance
software.
If you have shares created with v2.x software, Riverbed recommends that you upgrade them to
v3.x shares in the Management Console. If you upgrade any v2.x shares, you must upgrade all of
them. Once you have upgraded shares to v3.x, you should only create v3.x shares.
2007-2009 Riverbed Technology, Inc. All rights reserved.
You must install and start the RCU on the origin server or on a separate Windows host with
write-access to the data PFS uses. The account that starts the RCU must have write
permissions to the folder on the origin file server that contains the data PFS uses.
NOTE: In Steelhead appliance software version 3.x and higher, you do not need to install the
RCU service on the server for synchronization purposes. All RCU functionality has been
moved to the Steelhead appliance.
You must configure domain, not workgroup, settings. Domain mode supports v2.x PFS
shares but Workgroup mode does not.
Set the owner of all files and folders in all remote paths to a domain account and not a local
account.
A DNS entry should exist for the Steelhead appliance Primary interface when using Domain
mode.
NOTE: PFS only supports domain accounts on the origin file server; PFS does not support local
accounts on the origin file server. During an initial copy from the origin file server to the PFS
Steelhead appliance, if PFS encounters a file or folder with permissions for both domain and
local accounts, only the domain account permissions are preserved on the Steelhead appliance.
Local Workgroup Mode
In Local Workgroup mode you define a workgroup and add individual users that will have
access to the PFS shares on the Steelhead appliance.
Use Local Workgroup mode in environments where you do not want the Steelhead appliance to
be a part of a Windows domain. Creating a workgroup eliminates the need to join a Windows
domain and vastly simplifies the PFS configuration process.
NOTE: If you use Local Workgroup mode, you must manage the accounts and permissions for
the branch office on the Steelhead appliance. The local workgroup account permissions might
not match the permissions on the origin file server.
PFS Share Operating Modes
PFS provides Windows file service in the Steelhead appliance at a remote site. When you
configure PFS, you specify an operating mode for each individual file share on the Steelhead
appliance. The proxy file server can export data volumes in Local mode, Broadcast mode, and
Stand-Alone mode. After the Steelhead appliance receives the initial copy of the data and ACLs,
shares can be made available to local clients. In Broadcast and Local mode only, shares on the
Steelhead appliance are periodically synchronized with the origin server at intervals you specify,
or manually if you choose. During the synchronization process the Steelhead appliance optimizes
this traffic across the WAN.
Broadcast Mode. Use Broadcast mode for environments seeking to broadcast a set of read-only
files to many users at different sites. Broadcast mode quickly transmits a read-only copy
of the files from the origin server to your remote offices. The PFS share on the Steelhead
appliance contains read-only copies of files on the origin server. The PFS share is
synchronized from the origin server according to parameters you specify when you configure
it. However, files deleted on the origin server are not deleted on the Steelhead appliance until
you perform a full synchronization. Additionally, if you regularly perform directory moves on
the origin server (for example, moving .\dir1\dir2 to .\dir3\dir2), incremental
synchronization will not reflect these directory changes. You must perform a full
synchronization frequently to keep the PFS shares in synchronization with the origin server.
Local Mode. Use Local mode for environments that need to efficiently and transparently
copy data created at a remote site to a central data center, perhaps where tape archival
resources are available to back up the data. Local mode enables read-write access at remote
offices to update files on the origin file server.
After the PFS share on the Steelhead appliance receives the initial copy from the origin
server, the PFS share copy of the data becomes the master copy. New data generated by
clients is synchronized from the Steelhead appliance copy to the origin server based on
parameters you specify when you configure the share. The folder on the origin server
essentially becomes a back-up folder of the share on the Steelhead appliance. If you use
Local mode, users must not directly write to the corresponding folder on the origin server.
NOTE: In Local mode, the Steelhead appliance copy of the data is the master copy; do not make
changes to the shared files from the origin server while in Local mode. Changes are propagated
from the remote office hosting the share to the origin server. Riverbed recommends that you do
not use Windows file shortcuts if you use PFS.
Stand-Alone Mode. Use Stand-Alone mode for network environments where it is more
effective to maintain a separate copy of files that are accessed locally by the clients at the
remote site. The PFS share also creates additional storage space. The PFS share on the
Steelhead appliance is a one-time, working copy of data mapped from the origin server. You
can specify a remote path to a directory on the origin server, creating a copy at the branch
office. Users at the branch office can read from or write to stand-alone shares but there is no
synchronization back to the origin server since a stand-alone share is an initial and one-time
only synchronization.
Lock Files
When you configure a v3.x Local mode share or any v2.x share (except a Stand-Alone share in
which you do not specify a remote path to a directory on the origin server), a text file
(._rbt_share_lock.txt) that keeps track of which Steelhead appliance owns the share is created
on the origin server. Do not remove this file.
If you remove the ._rbt_share_lock.txt file on the origin file server, PFS will not function
properly. (V3.x Broadcast and Stand-Alone shares do not create these files.)
Notes:
To join a domain, the Windows domain account must have the correct privileges to perform a
join domain operation.
The PFS share and the origin-server share name cannot contain Unicode characters. The
Management Console does not support Unicode characters.
If you have shares that were created with RiOS v2.x, the account that starts the RCU must
have write permissions to the folder on the origin file server. Also, the logon user for the
RCU server must be a member of the Administrators group, either locally on the file server
or globally in the domain.
Make sure the users are members of the Administrators group on the remote share server,
either locally on the file server (the local Administrators group) or globally in the domain
(the Domain Administrator group).
Riverbed recommends that you do not run a mixed system of PFS shares, that is, v2.x shares
and v3.0 shares.
NetFlow
Operation and Implementation
Steelhead appliances support the export of NetFlow v5 data. NetFlow can play an important role
in an organization's network by providing detailed traffic accounting between hosts. This information
can then be used for purposes such as billing, identifying top talkers, and capacity
planning, to name a few. It can also assist in troubleshooting denial-of-service attacks.
It is common to configure NetFlow on the WAN routers in order to monitor the traffic traversing
the WAN. However, when Steelhead appliances are in place, the WAN routers see only
the inner Steelhead TCP session traffic and not the real IP addresses and ports of the client and
server. By supporting NetFlow v5 on the Steelhead appliance, this becomes a non-issue
altogether. In fact, it is possible to have only the Steelhead, rather than the router, export the
NetFlow data without compromising any functionality. By doing so, the router can spend more CPU
cycles on its core functionality: routing and switching packets.
As with NetFlow on routers, NetFlow statistics are collected on the ingress
interfaces of the Steelhead appliance. Therefore, to see a complete flow or conversation between
the server and client, it is necessary to configure NetFlow on both the client-side Steelhead
appliance and the server-side Steelhead appliance. For example, to determine the amount
of CIFS traffic on the LAN between a server and client, configure NetFlow to collect on the
following interfaces:
Client-side Steelhead LAN interface (this will show pre-optimized traffic going from client
to server).
Server-side Steelhead LAN interface (this will show pre-optimized traffic going from server
to client).
Similarly, to determine the amount of CIFS traffic on the WAN between a server and client,
configure NetFlow to collect on the following interfaces:
Client-side Steelhead WAN interface (this will show optimized traffic going from server to
client).
Server-side Steelhead WAN interface (this will show optimized traffic going from client to
server).
However, bear in mind that more frequent exports could impact the performance of the Steelhead
appliance and more network bandwidth will be required to transmit the extra data.
IPSec
You configure IPSec encryption to allow data to be communicated securely between peer
Steelhead appliances. Enabling IPSec encryption makes it difficult for a third party to view your
data or pose as a machine you expect to receive data from. To enable IPSec, you must specify at
least one encryption algorithm (DES or NULL) and one authentication algorithm (MD5 or SHA-1). Only
optimized data is protected; passthrough traffic is not.
IMPORTANT: You must set IPSec support on each peer Steelhead appliance in your network
for which you want to establish a secure connection. You must also specify a shared secret on
each peer Steelhead appliance.
NOTE: If you NAT traffic between Steelhead appliances, you cannot use the IPSec channel
between the Steelhead appliances because the NAT changes the packet headers causing IPSec to
reject them.
When you specify the VLAN Tag ID for the In-path interface, all packets originating from the
Steelhead appliance (In-path interface) are tagged with that VLAN number. The subnet
specified by the VLAN for an In-path interface is the one the appliance uses to set up its
inner channel with other Steelhead appliances in your network. For passthrough traffic, the same
VLAN tag is applied to the packet when it exits the interface opposite the one it entered on. For
example, if a passthrough packet enters the LAN interface on VLAN 10, it leaves the WAN
interface on VLAN 10 as well. For optimized traffic, however, a packet may enter the LAN
interface on VLAN 10, but after the auto-discovery process the inner connection uses the VLAN
on which the In-path interface is configured. Traffic returned to the Steelhead appliance
from another appliance via the inner TCP session is placed on the correct VLAN upon
return. The VLAN Tag ID might be the same value as, or a different value than, the VLAN tag used
on the client. A zero (0) value specifies a non-tagged (native) VLAN.
When considering the use of a Steelhead appliance on a trunk link, routing is often a point of
concern due to the potentially large number of networks involved. While static in-path routes can
be used, simplified routing commonly allows for an easier deployment.
NOTE: When the Steelhead appliance communicates with a client or a server it uses the same
VLAN tag as the client or the server. If the Steelhead appliance cannot determine which VLAN
the client or server is in, it uses its own VLAN until it is able to determine that information.
IV. Troubleshooting
Common Deployment Issues
Speed and Duplex
Some symptoms of speed and duplex problems are:
Access does not speed up.
Interface counters show errors (counters on the Steelhead appliance sometimes stay low
while they increase on the surrounding network gear).
Packet captures show retransmissions, for example the following Wireshark analysis flags:
tcp.analysis.retransmission
tcp.analysis.fast_retransmission
tcp.analysis.lost_segment
tcp.analysis.duplicate_ack
A likely problem is that the router is set to 100/Full (fixed) whereas the Steelhead appliance is set
to Auto. In this case, check with a flood ping, for example ping -f -I {in-path-ip} -s 1400 {client-ip},
from the client-side Steelhead appliance to the client, or from the server-side Steelhead appliance
to the server. Do not perform this test across the WAN. Change the interface speed/duplex
settings to match.
NOTE: Ideally the WAN and LAN have the same duplex settings, otherwise the devices around
the Steelhead appliance will have a duplex mismatch when in bypass.
SMB (Server Message Block) Signing
SMB signing is a protocol add-on to protect permission distribution. It adds a cryptographic
signature to CIFS packets and authenticates endpoints to prevent man-in-the-middle attacks (or
optimization).
A symptom could be that file access does either not speed up or perhaps not as much. You
should see a log message about signed connections. Check the logs for
error=SMB_SHUTDOWN_ERR_SEC_SIG_REQUIRED messages.
A likely problem is that either both the server and the client have SMB signing enabled (1.x only),
or the server has SMB signing set to required and the client has it enabled. In this case, change the
server so that signing is not required. For the enabled/enabled case, use:
protocol cifs secure-sig-opt enable
Packet Ricochet
If network connections fail on their first attempt but succeed on subsequent attempts, it could be
due to packet ricochet. You should suspect packet ricochet if:
The Steelhead appliance on one or both sides of a network has an In-path interface that is
different from that of the local host.
There are no in-path routes defined in your network but are needed.
The WAN router drops SYN packets from the Steelhead appliance before it issues an ICMP
redirect. Note that some routers might not be able to send ICMP redirect packets, or could be
configured not to send them. ICMP redirects are on by default on most routers and are sent
whenever the router must route a packet out the same interface it arrived on and the next hop
is on the same subnet as the source IP address. ICMP redirect information is stored for five
minutes on the Steelhead appliance.
Oplocks
Exclusive oplock. Informs a client that it is the only one to have a file open. It allows the
client to perform all file operations using cached or read-ahead local information until it
closes the file, at which time the server has to be updated with any changes made to the state
of the file (contents and attributes).
Batch oplock. Informs a client that it is the only one to have a file open. It allows the client
to perform all file operations on cached or read-ahead local information (including open and
close operations).
Losing an oplock may pose a problem for several reasons, including interference from anti-virus
programs. The oplock controls the consistency of optimizations such as read-ahead. Oplock levels
are reduced when conflicting opens are made to a file. The Steelhead appliance maintains safety:
to preserve correctness, it reduces optimization when a client has shared rather than exclusive
access to a file.
Asymmetric Routing (AR)
AR occurs when the transmit path is different from the return path for packets. For a Steelhead
appliance to optimize traffic, it must see the flow bi-directionally. Traffic can flow
asymmetrically everywhere else in the network (between the Steelhead appliances).
Detecting Asymmetric Routing
Signs of AR detected by a client-side Steelhead appliance include:
A RST packet from the client with an invalid SYN number while the connection is in the
SYN_SENT state.
Receiving a SYN/ACK packet from the server with an invalid ACK number while the
connection is in the SYN_SENT state.
Receiving an ACK packet from the client while the connection is in the SYN_SENT state.
Log message severity levels, from most to least severe: Alert, Critical, Error, Warning, Notice,
Info, Debug.
Alarms
Admission Control - Whether the system connection limit has been reached. The appliance is
optimizing traffic beyond its rated capability and is unable to handle the amount of traffic
passing through the WAN link. During this event, the appliance will continue to optimize
existing connections, but new connections are passed through without optimization. The alarm
clears when the Steelhead appliance moves out of this condition.
Asymmetric Routing - Indicates OK if the system is not experiencing asymmetric traffic. If the
system does experience asymmetric traffic, this condition is detected and reported here. In
addition, the traffic is passed through, and the route appears in the Asymmetric Routing table.
Central Processing Unit (CPU) Utilization - Whether the system has reached the CPU
threshold for any of the CPUs in the Steelhead appliance. If the system has reached the CPU
threshold, check your settings. If your alarm thresholds are correct, reboot the Steelhead
appliance.
NOTE: If more than 100 MB of data is moved through a Steelhead appliance, while performing
PFS synchronization, the CPU utilization might become high and result in a CPU alarm. This
CPU alarm should not be cause for concern.
Data Store - Whether the data store is corrupt. To clear the data store of data, restart the
Steelhead service and select Clear Data Store on Next Restart.
Fan Error - Whether the system has detected a problem with the fans. Fans in 3U systems can
be replaced.
IPMI - Whether the system has encountered an Intelligent Platform Management Interface
(IPMI) error. The system will display a blinking amber LED. To clear the alarm, run the clear
hardware error-log command.
Licensing - Whether your licenses are current.
Link State - Whether the system has detected a link that is down. You are notified via SNMP
traps, email, and alarm status.
Memory Error - Whether the system has encountered a memory error.
Memory Paging - Whether the system has reached the memory paging threshold. If 100 pages
are swapped approximately every two hours the Steelhead appliance is functioning properly. If
thousands of pages are swapped every few minutes, then reboot the Steelhead appliance. If
rebooting does not solve the problem, contact Riverbed Technical Support.
Neighbor Incompatibility - Whether the system has encountered an error in reaching a
Steelhead appliance configured for Connection Forwarding.
Network Bypass - Whether the system is in bypass mode. If the Steelhead appliance is in bypass
mode, restart the Steelhead service. If restarting the service does not resolve the problem, reboot
the Steelhead appliance. If rebooting does not resolve the problem, shut down and restart the
Steelhead appliance.
NFS V2/V4 Alarm (If NFS enabled and V2/V4 used) - Whether the system has triggered a v2
or v4 NFS alarm.
Optimization Service - Whether the system has detected a software error in the Steelhead
service. The Steelhead service continues to function, but an error message appears in the logs
that you should investigate.
Prepopulation or Proxy File Service Configuration Error - Whether there has been a PFS or
prepopulation operation error. If an operation error is detected, restart the Steelhead service and
PFS.
Prepopulation or Proxy File Service Operation Failed - Whether a synchronization operation
has failed. If an operation failure is detected, attempt the operation again.
Proxy File Service Partition Full - Whether the PFS partition is full.
RAID - Whether the system has encountered RAID errors (for example, missing drives, pulled
drives, drive failures, and drive rebuilds). For drive rebuilds, if a drive is removed and then
reinserted, the alarm continues to be triggered until the rebuild is complete.
TCPDump
tcpdump <options>
The tcpdump command takes the standard Linux options:
-a Attempt to convert network and broadcast addresses to names.
-i Listen on interface. If unspecified, tcpdump searches the system interface list for the
lowest numbered, configured up interface.
-n Do not convert addresses (that is, host addresses, port numbers, and so forth) to names.
-m Load SMI MIB module definitions from file module. This option can be used several
times to load several MIB modules into tcpdump.
-q Quiet output. Print less protocol information so output lines are shorter.
-r Read packets from file (which was created with the -w option).
-s Snarf snaplen bytes of data from each packet rather than the default of 68. 68 bytes is
adequate for IP, ICMP, TCP and UDP but may truncate protocol information from name
server and NFS packets. Packets truncated because of a limited snapshot are indicated in the
output with [|proto], where proto is the name of the protocol level at which the truncation
has occurred.
-v (Slightly more) verbose output. For example, the time to live, identification, total length
and options in an IP packet are printed. Also enables additional packet integrity checks such
as verifying the IP and ICMP header checksum.
-w Write the raw packets to file rather than parsing and printing them out. They can later be
printed with the -r option. Standard output is used if file is -.
-x Print each packet (minus its link level header) in hex. The smaller of the entire packet or
snaplen bytes will be printed.
-X When printing hex, print ASCII too. Thus if -x is also set, the packet is printed in
hex/ascii. This option enables you to analyze new protocols.
Straight-through cables. Connect the Primary and LAN ports on the appliance to the LAN switch
using straight-through cables.
Speed and duplex settings. Do not assume network auto-sensing is functioning properly.
Make sure your speed and duplex settings match on the Steelhead appliance and the router or
switch. Use a ping flood to test duplex settings.
WAN/LAN connections. Ensure the WAN interface is connected to traffic egress and the
LAN interface is connected to traffic ingress.
Appliance Configuration
IP addresses. To verify the IP address has been configured correctly:
Ensure the Steelhead appliances are reachable via the IP address. For instance, use the
Steelhead CLI command ping.
Verify that the server-side Steelhead appliance is visible to the client-side Steelhead
appliance. For example, at the system prompt, enter the CLI command:
tproxytrace -i inpath0_0 server:port
Verify that the client-side Steelhead appliance is visible to the server-side Steelhead
appliance. For example, at the system prompt, enter the CLI command:
tproxytrace -i inpath0_0 client:port
In-path rules. Verify that in-path rules are configured correctly. For example, at the system
prompt, enter the CLI command:
show in-path rules
In-path routes. Verify that in-path routes are configured correctly. For example, at the system
prompt, enter the CLI command:
sh ip in-path route <interface-name>
Steelhead service. If necessary, enable the Steelhead service. For example, at the system
prompt, enter the CLI command:
service enable
In-path support. If necessary, enable in-path support. For example, at the system prompt, enter
the CLI command:
in-path enable
In-path client out-of-path support. If necessary, disable in-path client out-of-path support. For
example, at the system prompt, enter the CLI command:
no in-path oop all-port enable
Also check for any passthrough rules that could be causing some protocols to pass through the
Steelhead appliances unoptimized, and verify that the LAN and WAN cables are not inadvertently
swapped.
V. Exam Questions
Types of Questions
The RCSP exam includes a variety of question types, including single-answer multiple choice,
multiple-answer multiple choice, and fill-in-the-blank. The question distribution is heavily
weighted toward multiple-choice questions; however, fill-in-the-blank questions are used in
situations where the command is believed to be an important part of everyday Steelhead appliance
operation. Regardless of the type of question, selecting the best answer(s) in response to the
questions will yield the best score.
Sample Questions
1. How do you view the full configuration in the CLI?
a. > show con
b. > show configuration
c. > show config all
d. # show config full
e. (config) # show con
2. Under what circumstances will the NetFlow cache entries flush (be sent to the collector)?
(Select 3)
a. When inactive flows have remained for 15 seconds.
b. When inactive flows have remained for 30 minutes.
c. When active flows have remained for 30 minutes.
d. When the TCP URG bit is set.
e. When the TCP FIN bit is set.
3. The auto-discovery probe uses which TCP option number?
a. 0x4e (78 decimal)
b. 0x4c (76 decimal)
c. 0x42 (66 decimal)
d. Auto-discovery does not use TCP options
4. In order to achieve optimization using auto-discovery for traffic coming from site C and
destined to site A in the exhibit, which configuration below would be required?
a. In-path fixed-target rule on site B Steelhead pointing to Site A Steelhead
b. Peering rule on site B Steelhead passing through probes from site C
c. Peering rule on site B Steelhead passing through probe responses from site A
d. Both A and C
e. Both B and C
5. You are configuring HighSpeed TCP in an environment with an OC-12 (622Mbit/s) and 60
milliseconds of round-trip latency. The WAN router queue length is set to BDP for the link.
Assuming 1500 byte packets, the queue length for this link would be closest to:
a. 3,110 packets
b. 6,220 packets
c. 775 packets
d. 150 packets
e. 10,000 packets
6. Which of the following correctly describe the combination of cable types used in a fail-to-wire
scenario for the interconnected devices shown in the accompanying figure? Assume
Auto-MDIX is not enabled on any device.
a. Cable 1: Crossover, Cable 2: Crossover
b. Cable 1: Straight-through, Cable 2: Straight-through
c. Cable 1: Crossover, Cable 2: Straight-through
d. Cable 1: Straight-through, Cable 2: Crossover
7. In the accompanying figure, on which interfaces would you capture the NetFlow export data
for active FTP data packets when a client performs a GET operation? (Assume you are not
interested in client response packets such as acknowledgements.) (Select the best answer)
a. A and B
b. B and D
c. C and D
d. B and C
e. A and C
[Figure: SH3's lan and wan interfaces connect through an L3 switch to the FTP server on one
side of the WAN; SH4's wan and lan interfaces connect through an L2 switch to the FTP client
on the other side; capture points A through D are marked on the Steelhead interfaces.]
10. Type in the command used to show information regarding the current health (status) of a
Steelhead, the current version, the uptime, and the model number. (fill in the blank)
_______________
Answers
1d, 2ace, 3b, 4b, 5a, 6c, 7e, 8e, 9e, 10 show info
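The arithmetic behind question 5 is a straight bandwidth-delay product computation, which can be checked with a few lines (Python, used here purely for illustration; decimal megabits are assumed for the OC-12 rate):

```python
# Question 5 check: WAN router queue length set to the bandwidth-delay
# product (BDP), expressed in 1500-byte packets, for a 622 Mbit/s link
# with 60 ms of round-trip latency.
bandwidth_bps = 622_000_000   # OC-12, approximately 622 Mbit/s
rtt_seconds = 0.060           # 60 ms round-trip latency
packet_bytes = 1500

bdp_bytes = bandwidth_bps * rtt_seconds / 8   # bits -> bytes
queue_packets = bdp_bytes / packet_bytes
print(round(queue_packets))   # 3110, so (a) is the closest choice
```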
VI. Appendix
Acronyms and Abbreviations
AAA - Authentication, Authorization, and Accounting
ACL - Access Control List
ADS - Active Directory Services
AR - Asymmetric Routing
ARP - Address Resolution Protocol
BDP - Bandwidth-Delay Product
BW - Bandwidth
CA - Certificate Authority
CAD - Computer Aided Design
CDP - Cisco Discovery Protocol
CIFS - Common Internet File System
CLI - Command-Line Interface
CMC - Central Management Console
CPU - Central Processing Unit
CSV - Comma-Separated Value
DC - Domain Controller
DHCP - Dynamic Host Configuration Protocol
DID - Deployment ID (for Steelhead Mobile)
DNS - Domain Name Service
DSCP - Differentiated Services Code Point
EAD - Enhanced Auto-Discovery
FIFO - First In First Out
FTP - File Transfer Protocol
GB - Gigabytes
GRE - Generic Routing Encapsulation
GUI - Graphical User Interface
HFSC - Hierarchical Fair Service Curve
HSRP - Hot Standby Routing Protocol
HSTCP - High-Speed Transmission Control Protocol
HTTP - HyperText Transport Protocol
HTTPS - HyperText Transport Protocol Secure
ICMP - Internet Control Message Protocol
ID - Identification number
IOS - (Cisco) Internetwork Operating System
IP - Internet Protocol
IPSec - Internet Protocol Security Protocol
L2 - Layer 2
L4 - Layer 4
LAN - Local Area Network
LED - Light-Emitting Diode
LZ - Lempel-Ziv
MAC - Media Access Control
MAPI - Messaging Application Protocol Interface
MDIX - Medium Dependent Interface Crossover
MIB - Management Information Base
MOTD - Message of the Day
MS SQL - Microsoft Structured Query Language
MSI - Microsoft Software Installer
MTU - Maximum Transmission Unit
MX-TCP - Max-Speed TCP
NAS - Network Attached Storage
NAT - Network Address Translation
NFS - Network File System
NSPI - Name Service Provider Interface
NTP - Network Time Protocol
OSI - Open System Interconnection
PBR - Policy-Based Routing
PCI - Peripheral Component Interconnect
PFS - Proxy File Service
QoS - Quality of Service
RADIUS - Remote Authentication Dial-In User Service
RAID - Redundant Array of Independent Disks
RCU - Riverbed Copy Utility
SA - Security Association
SDR - Scalable Data Referencing
SFQ - Stochastic Fairness Queuing
SH - Riverbed Steelhead Appliance
SMB - Server Message Block
SMC - Steelhead Mobile Controller
SMI - Structure of Management Information
SMTP - Simple Mail Transfer Protocol
SNMP - Simple Network Management Protocol
SQL - Structured Query Language
SSH - Secure Shell or server-side Steelhead
SSL - Secure Sockets Layer
TA - Transaction Acceleration
TACACS+ - Terminal Access Controller Access Control System
TCP - Transmission Control Protocol
TCP/IP - Transmission Control Protocol/Internet Protocol
TTL - Time to Live
ToS - Type of Service
UDP - User Datagram Protocol
UNC - Universal Naming Convention
URL - Uniform Resource Locator
VLAN - Virtual Local Area Network
VoIP - Voice over IP
VWE - Virtual Window Expansion
WAN - Wide Area Network
WCCP - Web Cache Communication Protocol
WFS - Windows File System