Riverbed Certified Solutions Professional (RCSP) Study Guide

Exam 199-01 for RiOS v5.0

June, 2009

RCSP Study Guide

COPYRIGHT © 2007-2009 Riverbed Technology, Inc.

ALL RIGHTS RESERVED

All content in this manual, including text, graphics, logos, icons, and images, is the exclusive property of Riverbed Technology, Inc. (“Riverbed”) and is protected by U.S. and international copyright laws. The compilation (meaning the collection, arrangement, and assembly) of all content in this manual is the exclusive property of Riverbed and is also protected by U.S. and international copyright laws. The content in this manual may be used as a resource. Any other use, including the reproduction, modification, distribution, transmission, republication, display, or performance, of the content in this manual is strictly prohibited.

TRADEMARKS

RIVERBED TECHNOLOGY, RIVERBED, STEELHEAD, RiOS, INTERCEPTOR, and the Riverbed logo are trademarks or registered trademarks of Riverbed. All other trademarks mentioned in this manual are the property of their respective owners. The trademarks and logos displayed in this manual may not be used without the prior written consent of Riverbed or their respective owners.

PATENTS

Portions, features and/or functionality of Riverbed's products are protected under Riverbed patents, as well as patents pending.

DISCLAIMER

THIS MANUAL IS PROVIDED BY RIVERBED ON AN "AS IS" BASIS. RIVERBED MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND, EXPRESS OR IMPLIED, AS TO THE INFORMATION, CONTENT, MATERIALS, OR PRODUCTS INCLUDED OR REFERENCED IN THE MANUAL. TO THE FULL EXTENT PERMISSIBLE BY APPLICABLE LAW, RIVERBED DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT.

Although Riverbed has attempted to provide accurate information in this manual, Riverbed assumes no responsibility for the accuracy or completeness of the information. Riverbed may change the programs or products mentioned in this manual at any time without notice, but Riverbed makes no commitment to update the programs or products mentioned in this manual in any respect. Mention of non-Riverbed products or services is for information purposes only and constitutes neither an endorsement nor a recommendation.

RIVERBED WILL NOT BE LIABLE UNDER ANY THEORY OF LAW, FOR ANY INDIRECT, INCIDENTAL, PUNITIVE OR CONSEQUENTIAL DAMAGES, INCLUDING, BUT NOT LIMITED TO, LOSS OF PROFITS, BUSINESS INTERRUPTION, LOSS OF INFORMATION OR DATA OR COSTS OF REPLACEMENT GOODS, ARISING OUT OF THE USE OR INABILITY TO USE THIS MANUAL OR ANY RIVERBED PRODUCT OR RESULTING FROM USE OF OR RELIANCE ON THE INFORMATION PRESENT, EVEN IF RIVERBED MAY HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

CONFIDENTIAL INFORMATION

The information in this manual is considered Confidential Information (as defined in the Reseller Agreement entered with Riverbed or in the Riverbed License Agreement currently available at www.riverbed.com/license, as applicable).


Table of Contents

Preface
    Certification Overview
    Benefits of Certification
    Exam Information
    Certification Checklist
    Recommended Resources for Study
RIVERBED CERTIFIED SOLUTIONS PROFESSIONAL STUDY GUIDE
    I. General Knowledge
        Optimizations Performed by RiOS
        TCP/IP
        Common Ports
        RiOS Auto-discovery Process
        Enhanced Auto-Discovery Process
        Connection Pooling
        In-path Rules
        Peering Rules
        Steelhead Appliance Models and Capabilities
    II. Deployment
        In-path
        Out-of-Band (OOB) Splice
        Virtual In-path
        Policy-Based Routing (PBR)
        WCCP Deployments
        Advanced WCCP Configuration
        Server-Side Out-of-Path Deployments
        Asymmetric Route Detection
        Connection Forwarding
        Simplified Routing (SR)
        Data Store Synchronization
        CIFS Prepopulation
        Authentication and Authorization
        SSL
        Central Management Console (CMC)
        Steelhead Mobile Solution (Steelhead Mobile Controller & Steelhead Mobile Client)
        Interceptor Appliance
    III. Features
        Feature Licensing
        HighSpeed TCP (HSTCP)
        MX-TCP
        Quality of Service
        PFS (Proxy File Service) Deployments
        NetFlow
        IPSec
        Operation on VLAN Tagged Links
    IV. Troubleshooting
        Common Deployment Issues
        Reporting and Monitoring
        Troubleshooting Best Practices
    V. Exam Questions
        Types of Questions
        Sample Questions
    VI. Appendix
        Acronyms and Abbreviations


Preface

This Riverbed Certification Study Guide is intended for anyone who wants to become certified in the Riverbed Steelhead products and Riverbed Optimization System (RiOS). The Riverbed Certified Solutions Professional (RCSP) program is designed to validate the skills required of technical professionals who work in the implementation of Riverbed products.

This study guide provides a combination of theory and practical experience needed for a general understanding of the subject matter. It also provides sample questions that will help in the evaluation of personal progress and provide familiarity with the types of questions that will be encountered in the exam.

This publication does not replace practical experience, nor is it designed to be a stand-alone guide for any subject. Instead, it is an effective tool that, when combined with education activities and experience, can be a very useful preparation guide for the exam.

Certification Overview

The Riverbed Certified Solutions Professional certificate is granted to individuals who demonstrate advanced knowledge and experience with the RiOS product suite. The typical RCSP will have taken a Riverbed approved training class such as the Steelhead Appliance Deployment & Management course in addition to having hands-on experience in performing deployment, troubleshooting, and maintenance of RiOS products in small, medium, and large organizations. While there are no set requirements prior to taking the exam, candidates who have taken a Riverbed authorized training class and have at least six months of hands-on experience with RiOS products have a significantly higher chance of receiving the certification. We would like to emphasize that solely taking the class will not adequately prepare you for the exam.

To obtain the RCSP certification, you are required to pass a computerized exam available at any Pearson VUE testing center worldwide.

Benefits of Certification

1. Establishes your credibility as a knowledgeable and capable individual in regard to Riverbed's products and services.

2. Helps improve your career advancement potential.

3. Qualifies you for discounts and/or benefits for Riverbed sponsored events and training.

4. Entitles you to use the RCSP certification logo on your business card.

Exam Information

Exam Specifications

Exam Number: 199-01

Exam Name: Riverbed Certified Solutions Professional

Version of RiOS: Up to RiOS version 5.0 for the Steelhead appliances and the Central Management Console, and Interceptor 2.0 and Steelhead Mobile 2.0

Number of Questions: 65

Total Time: 75 minutes for exam, 15 minutes for Survey and Tutorial (90 minutes total)

Exam Provider: Pearson VUE

Exam Language: English only. Riverbed allows a 30-minute time extension for English exams taken in non-English-speaking countries for students who request it. English-speaking countries are Australia, Bermuda, Canada, Great Britain, Ireland, New Zealand, Scotland, South Africa, and the United States. A form must be completed by the candidate and submitted to Pearson VUE.

Special Accommodations: Yes (must submit written request to Pearson VUE for ESL or ADA accommodations; includes time extensions and/or a reader)

Offered Locations: Worldwide (over 5000 test centers in 165 countries)

Pre-requisites: None (although taking a Riverbed training class is highly recommended)

Available to: Everyone (partners, customers, employees, etc.)

Passing Score: 700 out of 1000 (70%)

Certification Expires: Every 2 years (recertification required; no grace period)

Wait Between Failed Attempts: 72 hours. No retakes allowed on passed exams.

Cost: $150.00 (USD)

Number of Attempts Allowed: Unlimited (though statistics are kept)

Certification Checklist

As the RCSP exam is geared towards individuals who have both the theoretical knowledge and hands on experience with the RiOS product suite, ensuring proficiency in both areas is crucial towards passing the exam. For individuals starting out with the process, we recommend the following steps to guide you along the way:

1. Building Theoretical Knowledge The easiest way to become knowledgeable in deploying, maintaining, and troubleshooting the RiOS product suite is to take a Riverbed authorized training class. To ensure the greatest possibility of passing the exam, it is recommended that you review the RCSP Study Guide and ensure your familiarity with all topics listed, prior to any examination attempts.

2. Gaining Hands-on Experience While the theoretical knowledge will get you partway there, it is the hands-on knowledge that can get you over the top and enable you to pass the exam. Since all deployments are different, providing an exact amount of experience required is difficult. Generally, we recommend that resellers and partners perform at least five deployments in a variety of technologies prior to attempting the exam. For customers, and alternatively for resellers and partners, starting from the design and deployment phase and having at least six months of experience in a production environment would be beneficial.

3. Taking the Exam The final step in becoming an RCSP is to take the exam at a Pearson VUE authorized testing center. To register for any Riverbed Certification exam, please visit http://www.pearsonvue.com/riverbed.

Recommended Resources for Study

Riverbed Training Courses Information on Riverbed Training can be found at: http://www.riverbed.com/services/training/.

Steelhead Appliance Deployment & Management

Steelhead Appliance Operations & L1/L2 Troubleshooting

Steelhead Mobile Installation & Configuration

Central Management Console Configuration & Operations

Interceptor Appliance Installation & Configuration


Steelhead Appliance Advanced Deployment & Troubleshooting

Publications

Recommended Reading (In No Particular Order)

This study guide

Riverbed documentation
    o Steelhead Management Console User's Guide
    o Steelhead Command-Line Interface Reference Guide
    o Steelhead Appliance Deployment Guide
    o Steelhead Appliance Installation Guide
    o Bypass Card Installation Guide
    o Steelhead Mobile Controller User's Guide
    o Steelhead Mobile Controller Installation Guide
    o Central Management Console User's Guide
    o Central Management Console Installation Guide
    o Interceptor Appliance User's Guide
    o Interceptor Appliance Installation Guide

Other Reading (URLs Subject to Change)

http://www.ietf.org/rfc.html
    o RFC 793 (Original TCP RFC)
    o RFC 1323 (TCP Extensions for High Performance)
    o RFC 3649 (HighSpeed TCP for Large Congestion Windows)
    o RFC 3742 (Limited Slow-Start for TCP with Large Congestion Windows)
    o RFC 2474 (Differentiated Services Code Point)

http://www.caida.org/tools/utilities/flowscan/arch.xml (NetFlow Protocol and Record Headers)

http://ubiqx.org/cifs/Intro.html (CIFS)

Microsoft Windows 2000 Server Administrator's Companion by Charlie Russell and Sharon Crawford (Microsoft Press, 2000)

Common Internet File System (CIFS) Technical Reference by the Storage Networking Industry Association (Storage Networking Industry Association, 2002)

TCP/IP Illustrated, Volume I, The Protocols by W. R. Stevens (Addison-Wesley, 1994)

Internet Routing Architectures (2nd Edition) by Bassam Halabi (Cisco Press, 2000)


RIVERBED CERTIFIED SOLUTIONS PROFESSIONAL STUDY GUIDE

The Riverbed Certified Solutions Professional exam, and therefore this study guide, covers Riverbed products and technologies through RiOS version 5.0, Interceptor 2.0, and Steelhead Mobile 2.0 only.

I. General Knowledge

Optimizations Performed by RiOS

Optimization is the process of increasing data throughput and network performance over the WAN using Steelhead appliances. An optimized connection exhibits bandwidth reduction as it traverses the WAN. The optimization techniques RiOS utilizes are:

Data Streamlining

Transport Streamlining

Application Streamlining

Management Streamlining

You should be familiar with the differences in these streamlining techniques for the RCSP test. This information can be found in the Steelhead Appliance Deployment Guide.

Transaction Acceleration (TA) TA is composed of the following optimization mechanisms:

A connection bandwidth-reducing mechanism called Scalable Data Referencing (SDR)

A Virtual TCP Window Expansion (VWE) mechanism that repacks TCP payloads with references that represent arbitrary amounts of data, thus increasing the client-data per WAN TCP window

A latency reduction and avoidance mechanism called Transaction Prediction

SDR and Transaction Prediction (TP) can work independently or in conjunction with one another depending on the characteristics and workload of the data sent across the network. The results of the optimization vary, but often include throughput improvements in the range of 10 to 100 times over unaccelerated links.

Scalable Data Referencing (SDR) Bandwidth optimization is delivered through SDR. SDR uses a proprietary algorithm to break up TCP data streams into data chunks that are stored in the hard disk (data store) of the Steelhead appliances. Each data chunk is assigned a unique integer label (reference) before it is sent to the peer Steelhead appliance across the WAN. If the same byte sequence is seen again in the TCP data stream, then the reference is sent across the WAN instead of the raw data chunk. The peer Steelhead appliance uses this reference to reconstruct the original data in the TCP data stream. Data and references are maintained in persistent storage in the data store within each Steelhead appliance. Because SDR checks data chunks byte-by-byte there are no consistency issues even in the presence of replicated data.
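The chunk-and-reference mechanism described above can be sketched in a few lines of Python. This is a toy illustration only: real SDR uses a proprietary chunking algorithm and a persistent on-disk data store, whereas this sketch uses fixed-size chunks, SHA-256 hashes, and in-memory dictionaries (all assumptions of the example, not RiOS internals).

```python
import hashlib

class DataStore:
    """Toy data store mapping chunk contents to integer reference labels."""

    def __init__(self):
        self.ref_by_hash = {}   # chunk hash -> reference label
        self.chunk_by_ref = {}  # reference label -> raw chunk bytes
        self.next_ref = 0

    def encode(self, data, chunk_size=64):
        """Sender side: replace previously seen chunks with references."""
        out = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            h = hashlib.sha256(chunk).digest()
            if h in self.ref_by_hash:
                out.append(("ref", self.ref_by_hash[h]))  # label only crosses the WAN
            else:
                ref = self.next_ref
                self.next_ref += 1
                self.ref_by_hash[h] = ref
                self.chunk_by_ref[ref] = chunk
                out.append(("raw", ref, chunk))           # cold data: chunk + label
        return out

    def decode(self, stream):
        """Peer side: rebuild the original byte stream from references."""
        parts = []
        for item in stream:
            if item[0] == "raw":
                _, ref, chunk = item
                self.chunk_by_ref[ref] = chunk            # learn the new chunk
                parts.append(chunk)
            else:
                parts.append(self.chunk_by_ref[item[1]])  # look up known chunk
        return b"".join(parts)

sender, receiver = DataStore(), DataStore()
payload = b"A" * 128
first = sender.encode(payload)    # cold transfer: raw chunks sent
second = sender.encode(payload)   # warm transfer: references only
assert receiver.decode(first) == payload
assert all(item[0] == "ref" for item in second)
```

On the second pass every chunk is already in the data store, so only small integer references cross the WAN, which is the source of the bandwidth reduction.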

How Does SDR Work? When data is sent for the first time across a network (no commonality with any file ever sent before), all data and references are new and are sent to the Steelhead appliance on the other side of the network. This new data and the accompanying references are compressed using conventional algorithms so as to improve performance, even on the first transfer.

RCSP Study Guide

Over time, more data crosses the network (revisions of a document for example). Thereafter, when these new requests are sent across the network, the data is compared with references that already exist in the local data store. Any data that the Steelhead appliance determines already exists on the far side of the network are not sent—only the references are sent across the network.

As files are copied, edited, renamed, and otherwise changed or moved (as well as web pages being viewed or email sent), the Steelhead appliance continually builds the data store to include more and more data and references. References can be shared by different files and by files in different applications if the underlying bits are common to both. Since SDR can operate on all TCP-based protocols, data commonality across protocols can be leveraged so long as the binary representation of that data does not change between the protocols. For example, when a file transferred via FTP is then transferred using WFS (Windows File System), the binary representation of the file is basically the same and thus references can be sent for that file.

Lempel-Ziv (LZ) Compression SDR and compression are two different features and can be controlled separately. However, LZ compression is the primary form of data reduction for cold transfers.

The Lempel-Ziv compression methods are among the most popular algorithms for lossless storage. Compression is turned on by default. In-path rules can be used to define which optimization features will be used for which set of traffic flowing through the Steelhead appliance.
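The effect of LZ compression on a cold transfer can be demonstrated with Python's zlib module, which implements DEFLATE, an LZ77-derived codec. The specific codec and compression level RiOS uses are not stated here, so treat this only as an illustration of lossless LZ-family compression:

```python
import zlib

# Repetitive data compresses well even on a first ("cold") transfer,
# before any SDR references exist in the data store.
cold_payload = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n" * 50

compressed = zlib.compress(cold_payload, level=6)
assert zlib.decompress(compressed) == cold_payload  # lossless round trip
assert len(compressed) < len(cold_payload)          # bandwidth reduction
print(len(cold_payload), "->", len(compressed), "bytes")
```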

TCP Optimizations & Virtual Window Expansion (VWE) Because Steelhead appliances are designed to optimize data transfers across wide area networks, they make extensive use of standards-based enhancements to the TCP protocol that may not be present in the TCP stack of many desktop and server operating systems. These include improved transport capability via HighSpeed TCP and MX-TCP for networks with high bandwidth-delay products, TCP Vegas for lower-bandwidth links, partial acknowledgements, and other more obscure throughput-enhancing and latency-reducing features.

VWE allows Steelhead appliances to repack TCP payloads with references that represent arbitrary amounts of data. This is possible because Steelhead appliances operate at the Application Layer and terminate TCP, which gives them more flexibility in the way they optimize WAN traffic.

Essentially, the TCP payload is increased from its normal window size to an arbitrarily large amount dependent on the compression ratio for the connection. Because of this increased payload, a given application that relies on TCP performance (for example, HTTP or FTP) takes fewer trips across the WAN to accomplish the same task. For example, consider a client-to-server connection with a 64KB TCP window. If there is 256KB of data to transfer, it would take several TCP windows to accomplish this in a network with high latency. With SDR, however, that 256KB of data can potentially be reduced to fit inside a single TCP window, removing the need to wait for acknowledgements before sending the next window, thus speeding the transfer.
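The 64KB-window example above can be worked through numerically. The sketch below assumes a simplified send-and-wait model (one RTT per window, no slow start) and an illustrative 16:1 data reduction; both are assumptions of the example, not measured RiOS behavior:

```python
def round_trips(total_bytes, window_bytes):
    """Windows needed under a simplified model: a fixed window size
    and one round trip per window (slow start ignored)."""
    return -(-total_bytes // window_bytes)  # ceiling division

rtt_ms = 100          # assumed WAN round-trip time
window = 64 * 1024    # 64 KB client TCP window
data = 256 * 1024     # 256 KB to transfer

plain = round_trips(data, window)            # 256 KB / 64 KB = 4 windows
# With SDR/VWE, references may shrink the payload to fit one window;
# a 16:1 reduction (16 KB on the wire) is an illustrative figure:
optimized = round_trips(16 * 1024, window)

assert plain == 4 and optimized == 1
print(f"unoptimized: {plain * rtt_ms} ms of window waits, "
      f"optimized: {optimized * rtt_ms} ms")
```

Under these assumptions the transfer drops from four window-sized round trips to one, which is exactly the effect the paragraph describes.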

Transaction Prediction Application-level latency optimization is delivered through the Transaction Prediction module. Transaction Prediction leverages an intimate understanding of protocol semantics to reduce the chattiness that would normally occur over the WAN. By acting on foreknowledge of specific protocol request-response mechanisms, Steelhead appliances streamline the delivery of data that would normally be delivered in small increments through large numbers of interactions between the client and server over the WAN. As transactions are executed between the client and server, the Steelhead appliance intercepts each transaction, compares it to the database of past transactions, and makes decisions about the probability of future events.

Based on this model, if a Steelhead appliance determines there is a high likelihood of a future transaction occurring, it performs that transaction, rather than waiting for the response from the server to propagate back to the client and then back to the server. Dramatic performance improvements result from the time saved by not waiting for each serial transaction to arrive prior to making the next request. Instead, the transactions are pipelined one right after the other.

Of course, transactions are executed by Steelhead appliances ahead of the client only when it is safe to do so; to ensure data integrity, Steelhead appliances are designed with enough knowledge of the underlying protocols to determine when that is the case. Fortunately, a wide range of common applications have very predictable behaviors and, consequently, Transaction Prediction can enhance WAN performance significantly. When combined with SDR, Transaction Prediction can improve WAN performance up to 100 times.

Common Internet File System (CIFS) Optimization CIFS is a proposed standard protocol that lets programs make requests for files and services on remote computers over the Internet. CIFS uses the client/server programming model. A client program makes a request of a server program (usually in another computer) for access to a file or to pass a message to a program that runs in the server computer. The server takes the requested action and returns a response. CIFS is a public or open variation of the Server Message Block (SMB) protocol developed and used by Microsoft.

In the Steelhead appliance, CIFS optimization is enabled by default. Typically, you would only disable CIFS optimization to troubleshoot the system.

Overlapping Opens Due to the way certain applications handle the opening of files, file locks are not properly granted to the application in such a way that would allow a Steelhead appliance to optimize access to that file using Transaction Prediction. To prevent any compromise to data integrity, the Steelhead appliance only optimizes data to which exclusive access is available (in other words, when locks are granted). When an opportunistic lock (oplock) is not available, the Steelhead appliance does not perform application-level latency optimizations but still performs SDR and compression on the data as well as TCP optimizations. The CIFS overlapping opens feature remedies this problem by having the server-side Steelhead handle file locking operations on behalf of the requesting application. If you disable this feature, the Steelhead appliance will still increase WAN performance, but not as effectively.

Enabling this feature on applications that perform multiple opens of the same file to complete an operation will result in a performance improvement (for example, CAD applications). NOTE: For the Steelhead appliance to handle the locking properly, all transactions on the file must be optimized by that Steelhead appliance. Therefore, if a remote user opens a file that is optimized using the overlapping opens feature, and a second user opens the same file, the second user might receive an error if the request does not go through the same Steelhead appliance, or does not go through a Steelhead appliance at all (for example, with certain applications whose traffic stays on the LAN). If this occurs, you should disable overlapping opens optimization for those applications.


Messaging Application Programming Interface (MAPI) Optimization MAPI optimization is enabled by default. Only uncheck this box if you want to disable MAPI optimization. Typically, you disable MAPI optimization to troubleshoot problems with the system. For example, if you are experiencing problems with Microsoft Outlook clients connecting to Exchange, you can disable MAPI latency acceleration (while continuing to optimize with SDR for MAPI).

Read ahead on attachments

Read ahead on large emails

Write behind on attachments

Write behind on large emails

Fails if user authentication set too high (downgrades to SDR/TCP acceleration only, no Transaction Prediction)

MAPI Prepopulation

Without MAPI prepopulation, if a user closes Microsoft Outlook or switches off the workstation, the TCP sessions are broken. With MAPI prepopulation, the Steelhead appliance can start acting as if it is the mail client. If the client closes the connection, the client-side Steelhead appliance will keep an open connection to the server-side Steelhead appliance, and the server-side Steelhead appliance will keep the connection open to the server. This allows data to be pushed through the data store before the user logs on to the server again. The default timer is set to 96 hours; after that, the connection is reset.

Optimized MAPI connections held open after client exit (acts like the client left the PC on); think of it as virtual client

Keep reading mail until timeout

No one is ever reconnected to the prepopulation session (including the original user)

No need for more Client Access Licenses (CALs); no agents to deploy

Can configure frequency check and timeout or to disable it

Enables transmission during off times even in consolidated environments

The feature can be disabled independently from other MAPI optimizations

HTTP Optimization

A typical web page is not a single file that is downloaded all at once. Instead, web pages are composed of dozens of separate objects, including .jpg and .gif images, JavaScript code, cascading style sheets, and more, each of which must be requested and retrieved separately, one after the other. Given the presence of latency, this behavior is highly detrimental to the performance of web-based applications over the WAN. The higher the latency, the longer it takes to fetch each individual object and, ultimately, to display the entire page.
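The cost of fetching page objects serially can be put into rough numbers. The model below is deliberately simplistic (one round trip per object, transfer and server time ignored), and the object count, RTT, and concurrency figures are illustrative assumptions, not measurements:

```python
def page_load_ms(num_objects, rtt_ms, concurrent=1):
    """Simplified latency model: each object costs one round trip and
    objects are fetched `concurrent` at a time."""
    batches = -(-num_objects // concurrent)  # ceiling division
    return batches * rtt_ms

# A page of 40 objects over a 100 ms WAN link:
serial = page_load_ms(40, 100)                      # one object at a time
prefetched = page_load_ms(40, 100, concurrent=40)   # all prefetched together

assert serial == 4000 and prefetched == 100
print(f"serial: {serial} ms, prefetched: {prefetched} ms")
```

This is the intuition behind parsing and prefetching: fetching objects before the browser asks for them collapses many serial round trips into few.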

RiOS v5.0 and later optimizes web applications using:

Parsing and Prefetching of Dynamic Content

URL Learning


Removal of Unfetchable Objects

HTTP Metadata Responses

Persistent Connections

More information can be found in the Steelhead Appliance Management Console User’s Guide.

NFS Optimization You can configure Steelhead appliances to use Transaction Prediction to perform application-level latency optimization on NFS. Application-level latency optimization improves NFS performance over high-latency WANs.

NFS latency optimization optimizes TCP connections and is only supported for NFS v3.

You can configure NFS settings globally for all servers and volumes, or you can configure NFS settings that are specific to particular servers or volumes. When you configure NFS settings for a server, the settings are applied to all volumes on that server unless you override settings for specific volumes.

Read-ahead and read caching (checks freshness with modify date)

Write-behind

Metadata prefetching and caching

Convert multiple requests into one larger request

Special symbolic link handling

Microsoft SQL Optimization Steelhead appliance MS SQL protocol support includes the ability to perform prefetching and synthetic pre-acknowledgement of queries on database applications. By default, rules that increase optimization for Microsoft Project Enterprise Edition ship with the unit. This optimization is not enabled by default, and enabling MS SQL optimization without adding specific rules will rarely have an effect on any other applications. MS SQL packets must be carried in TDS (Tabular Data Stream) format for a Steelhead appliance to be able to perform optimization.

You can also use MS SQL protocol optimization to optimize other database applications, but you must define SQL rules to obtain maximum optimization. If you are interested in enabling the MS SQL feature for other database applications, contact Riverbed Professional Services.

Oracle Forms Optimization The Oracle Java Initiator (JInitiator) is a browser plug-in that accesses Oracle E-Business application content and Oracle Forms applications directly within a web browser.

The Steelhead appliance decrypts, optimizes, and then re-encrypts Oracle Forms native and HTTP mode traffic.

Use Oracle Forms optimization to improve Oracle Forms traffic performance. Oracle Forms does not need a separate license and is enabled by default. However, you must also set an in-path rule to enable this feature.


TCP/IP

General Operation Steelhead appliances are typically placed on two ends of the WAN as close to the client and server as possible (no additional WAN links between the end node and the Steelhead appliance). By placing Steelhead appliances in the network, the TCP session between client and server can be intercepted, therefore a level of control over the TCP session can be obtained. TCP sessions have to be intercepted in order to be optimized; therefore the Steelhead appliances must see all traffic from source to destination and back. For any given optimized session, there are three distinct sessions. There is a TCP connection between the client and the client-side Steelhead appliance, between the server and the server-side Steelhead appliance, and finally a connection between the two Steelhead appliances.

Common Ports

Ports Used by RiOS

Port   Type
7744   Data store sync port
7800   In-path port
7801   NAT port
7810   Out-of-path port
7820   Failover port for redundant appliances
7830   Exchange traffic port
7840   Exchange Director NSPI traffic port
7850   Connection Forwarding (neighbor) port
7860   Interceptor Appliance
7870   Steelhead Mobile

Interactive Ports Commonly Passed Through by Default on Steelhead Appliances (Partial List)

Port          Type
7             TCP ECHO
23            Telnet
37            UDP/Time
107           Remote Telnet Service
179           Border Gateway Protocol
513           Remote Login
514           Shell
1494, 2598    Citrix
3389          MS WBT, TS/Remote Desktop
5631          PC Anywhere
5900 - 5903   VNC
6000          X11

Secure Ports Commonly Passed Through by Default on Steelhead Appliances (Partial List)

Port        Type
22/TCP      ssh
49/TCP      tacacs
443/TCP     https
465/TCP     smtps
563/TCP     nntps
585/TCP     imap4-ssl
614/TCP     sshell
636/TCP     ldaps
989/TCP     ftps-data
990/TCP     ftps
992/TCP     telnets
993/TCP     imaps
995/TCP     pop3s
1701/TCP    l2tp
1723/TCP    pptp
3713/TCP    tftp over tls

RiOS Auto-discovery Process

Auto-discovery is the process by which the Steelhead appliance automatically intercepts and optimizes traffic on all IP addresses and ports. By default, auto-discovery is applied to all IP addresses and the ports which are not secure, interactive, or Riverbed well-known ports.

Packet Flow The following diagram shows the first-connection packet flow for traffic that is classified for optimization under the original auto-discovery protocol. The TCP SYN sent by the client is intercepted by the Steelhead appliance. A TCP option is attached in the TCP header; this allows the remote Steelhead appliance to recognize that there is a Steelhead appliance on the other side of the network. When the server-side Steelhead appliance sees the option (also known as a TCP probe), it responds by sending a TCP SYN/ACK back. After auto-discovery has taken place, the Steelhead appliances continue to set up the inner TCP session and the outer TCP sessions.


Packet flow (original auto-discovery), Client - SH1 - SH2 - Server:

  Client -> SH1     IP(C)->IP(S): SYN
  SH1 -> SH2        IP(C)->IP(S): SYN + Probe
  SH2 -> SH1        IP(S)->IP(C): SYN/ACK + Probe response (SH2)
                    (probe result cached for 10 seconds; SH2 announces its
                    service port, default = TCP port 7800)
  SH1 -> SH2        IP(SH1)->IP(SH2): SYN
  SH2 -> SH1        IP(SH2)->IP(SH1): SYN/ACK
  SH1 -> SH2        IP(SH1)->IP(SH2): ACK, followed by setup information
  SH2 -> Server     IP(C)->IP(S): SYN
  Server -> SH2     IP(S)->IP(C): SYN/ACK
  SH2 -> SH1        connect result (cached until failure)
  SH1 -> Client     IP(S)->IP(C): SYN/ACK
  Client -> SH1     IP(C)->IP(S): ACK
  SH2 -> Server     IP(C)->IP(S): ACK
                    (a connection pool of 20 inner connections is created)

TCP Option The TCP option used for auto-discovery is 0x4C which is 76 in decimal format. The client-side Steelhead appliance attaches a 10 byte option to the TCP header; the server-side Steelhead appliance attaches a 14 byte option in return. Note that this is only done in the initial discovery process and not during connection setup between the Steelhead appliances and the outer TCP sessions.
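As a sanity check on the numbers above, the snippet below converts the option kind and builds option TLVs of the stated 10- and 14-byte lengths. Only the option kind (0x4C) and the total lengths come from the text; the payload contents here are placeholder bytes, since the internal layout of the probe is not documented in this guide:

```python
import struct

RVBD_PROBE_KIND = 0x4C   # 76 in decimal, per the auto-discovery text
assert RVBD_PROBE_KIND == 76

def build_probe_option(payload: bytes) -> bytes:
    """Build a standard TCP option TLV: 1-byte kind, 1-byte total
    length, then the payload. The payload layout is a placeholder."""
    return struct.pack("BB", RVBD_PROBE_KIND, 2 + len(payload)) + payload

client_opt = build_probe_option(b"\x00" * 8)    # 10 bytes on the client SYN
server_opt = build_probe_option(b"\x00" * 12)   # 14 bytes on the SYN/ACK reply

assert len(client_opt) == 10 and len(server_opt) == 14
```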

Enhanced Auto-Discovery Process

In RiOS v4.0.x or later, enhanced auto-discovery (EAD) is available. Enhanced auto-discovery automatically discovers the last Steelhead appliance in the network path of the TCP connection. In contrast, the original auto-discovery protocol automatically discovers the first Steelhead appliance in the path. The difference is only seen in environments where there are three or more Steelhead appliances in the network path for connections to be optimized.

Enhanced auto-discovery works with Steelhead appliances running the original auto-discovery protocol. Enhanced auto-discovery ensures that a Steelhead appliance only optimizes TCP connections that are being initiated or terminated at its local site, and that a Steelhead appliance does not optimize traffic that is transiting through its site.

[Diagram: enhanced auto-discovery between Client, SH1, SH2, and Server. The probe still uses TCP option 0x4C, but two options are now carried back-to-back. SH1 attaches a probe to the client's SYN (SEQ1); an intermediary Steelhead re-probes with SEQ2 and a notification ("not the last SH") is sent back to SH1, while the server-side Steelhead answers with SYN/ACK+Probe response (S-SH). The probe result is cached for 10 seconds and the connect result until failure. SH1 then opens the inner connection (SYN, SYN/ACK, ACK) to SH2, which is listening on service port 7800, exchanges setup information, and the outer handshakes complete. A connection pool of 20 is maintained.]

Connection Pooling

General Operation By default, all auto-discovered Steelhead appliance peers have a connection pool of 20. The pool size is a user-configurable value which can be set for each Steelhead appliance peer. The purpose of connection pooling is to avoid the TCP handshake for the inner session between the Steelhead appliances across the high-latency WAN. By pre-creating these sessions between peer Steelhead appliances, when a new connection request is made by a client, the client-side Steelhead appliance can simply pull a connection from the pool. Once a connection is pulled from the pool, a new connection is created to take its place so as to maintain the specified number of connections.
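The replenish-on-checkout behavior can be sketched as follows; the class and method names are illustrative, not Riverbed's implementation:

```python
from collections import deque

class ConnectionPool:
    """Pre-creates inner connections to a peer Steelhead so a new client
    request never waits for a WAN TCP handshake. Names are illustrative."""

    def __init__(self, peer, size=20):
        self.peer = peer
        self.pool = deque(self._dial() for _ in range(size))

    def _dial(self):
        # Stand-in for establishing one inner TCP session across the WAN.
        return ("conn-to", self.peer)

    def checkout(self):
        conn = self.pool.popleft()      # hand a ready connection to the client
        self.pool.append(self._dial())  # replace it to keep the pool full
        return conn

pool = ConnectionPool("SH2", size=20)
conn = pool.checkout()
```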

In-path Rules

General Operation In-path rules allow a client-side Steelhead appliance to determine what action to perform when intercepting a new client connection (the first TCP SYN packet for a connection). The action taken depends on the type of in-path rule selected and is outlined in detail below. It is important to note that the rules are matched based on source/destination IP information, destination port, and/or VLAN, and are processed from the first rule in the list to the last (top down). Rule processing stops at the first rule matching the specified parameters, at which point the action selected by that rule is taken. Steelhead appliances have three passthrough rules by default, and a fourth implicit rule to auto-discover remote Steelhead appliances; traffic that does not match the first three rules is therefore optimized. The three default passthrough rules cover port groupings matching interactive traffic (e.g., Telnet, VNC, RDP), encrypted traffic, and Riverbed protocol ports (e.g., 7800, 7810).
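The top-down, first-match semantics can be sketched as below. The rule table is hypothetical: the excluded subnet and the exact port sets are illustrative, with 7800/7810 taken from the text:

```python
from ipaddress import ip_address, ip_network

# Illustrative rule table, processed top-down; first match wins.
RULES = [
    ("pass", "192.168.100.0/24", None),   # hypothetical excluded subnet
    ("pass", None, {23, 3389, 5900}),     # interactive: Telnet, RDP, VNC
    ("pass", None, {22, 443, 993}),       # illustrative encrypted ports
    ("pass", None, {7800, 7810}),         # Riverbed protocol ports
    ("auto", None, None),                 # implicit rule: auto-discover
]

def match_rule(dst_ip, dst_port):
    for action, subnet, ports in RULES:
        if subnet is not None and ip_address(dst_ip) not in ip_network(subnet):
            continue
        if ports is not None and dst_port not in ports:
            continue
        return action          # stop at the first matching rule
    return "auto"
```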

Different Types and Their Function

Pass Through. Pass through rules identify traffic that is passed through the network unoptimized. For example, you may define pass through rules to exclude subnets from optimization. Traffic is also passed through when the Steelhead appliance is in bypass mode. (Passthrough might occur because of in-path rules, because the connection was established before the Steelhead appliance was put in place, or before the Steelhead service was enabled.)

Fixed-Target. Fixed-target rules specify out-of-path Steelhead appliances near the target server that you want to optimize. Determine which servers you want the Steelhead appliance to optimize (and, optionally which ports), and add rules to specify the network of servers, ports, port labels, and out-of-path Steelhead appliances to use. Fixed-target rules can also be used for in-path deployments for Steelhead appliances not using EAD.

Auto Discover. Auto-discovery is the process by which the Steelhead appliance automatically intercepts and optimizes traffic on all IP addresses and ports. By default, auto-discovery is applied to all IP addresses and to the ports which are not secure, interactive, or default Riverbed ports. Defining in-path rules modifies this default setting.

Discard. Packets for the connection that match the rule are dropped silently. The Steelhead appliance filters out traffic that matches the discard rules. This process is similar to how routers and firewalls drop disallowed packets; the connection-initiating device has no knowledge of the fact that its packets were dropped until the connection times out.

Deny. When packets for a connection match a deny rule, the Steelhead appliance actively tries to reset the TCP connection being attempted. Using an active reset process rather than a silent discard allows the connection initiator to know that its connection is disallowed.

Peering Rules

Applicability and Conditions of Use

Peering Rules Configuring peering rules defines what to do when a Steelhead appliance receives an auto-discovery probe from another Steelhead appliance. As such, the scope of a peering rule is limited to the server-side Steelhead appliance (the one receiving the probe). Note that peering rules on an intermediary (or server-side) Steelhead appliance will have no effect in preventing optimization with a client-side Steelhead appliance that is using a fixed-target rule designating the intermediary Steelhead appliance as its destination (since there is no auto-discovery probe in a fixed-target rule). The following example shows where you might wish to use peering rules:

[Diagram: Site A (Client, Steelhead1) connects over WAN 1 to Site B (Steelhead2, Server1), which connects over WAN 2 to Site C (Steelhead3, Server2).]

Server1 is on the same LAN as Steelhead2 so connections from the client to Server1 should be optimized between Steelhead1 and Steelhead2. Concurrently, Server2 is on the same LAN as Steelhead3 and connections from the client to Server2 should be optimized between Steelhead1 and Steelhead3.


You do not need to define any rules on Steelhead1 or Steelhead3.

Add peering rules on Steelhead2 to process connections destined to Server1 normally and to pass through all other connections, so that connections to Server2 are not optimized by Steelhead2.

A rule to pass through inner connections between Steelhead1 and Steelhead3 is already in place by default (destination port 7800 is included in the default port label "RBT-Proto").

This configuration causes connections going to Server1 to be intercepted by Steelhead2, and connections going to anywhere else to be intercepted by another Steelhead appliance (for example, Steelhead3 for Server2).
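Steelhead2's decision can be sketched as a simple function; the Site B subnet used here is hypothetical:

```python
from ipaddress import ip_address, ip_network

# Hypothetical Site B server subnet behind Steelhead2.
SERVER1_NET = ip_network("10.2.0.0/24")

def steelhead2_peering_action(probe_dst_ip):
    """Peering decision on Steelhead2: accept probes destined for Server1's
    subnet, pass through everything else so Steelhead3 can answer them."""
    if ip_address(probe_dst_ip) in SERVER1_NET:
        return "accept"       # terminate optimization here
    return "pass"             # let the probe continue toward Steelhead3
```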

Overcoming Peering Issues Using Fixed-Target Rules

If you do not enable automatic peering or define peering rules as described in the previous sections, you must define:

A fixed-target rule on Steelhead1 to go to Steelhead3 for connections to Server2

A fixed-target rule on Steelhead3 to go to Steelhead1 for connections to servers in the same site as Steelhead1

If you have multiple branches that go through Steelhead2, you must add a fixed-target rule for each of them on Steelhead1 and Steelhead3

Steelhead Appliance Models and Capabilities

Model Specifications (subject to change)


Steelhead Appliance Ports A Steelhead appliance has Console, AUX, Primary, and paired LAN and WAN ports.

The Primary and AUX ports cannot share the same network subnet

The Primary and In-path interfaces can share the same network subnet

You must use the Primary port on the server-side for out-of-path deployment


You cannot use the Auxiliary (AUX) port for anything other than management

If the Steelhead appliance is deployed between two switches, both the LAN and WAN ports must be connected with straight-through cables

Interface Naming Conventions The interface names for the bypass cards are a combination of the slot number and the port pair. For example, if a four-port bypass card is located in slot 0 of your appliance, the interface names are lan0_0, wan0_0, lan0_1, and wan0_1 respectively. Alternatively, if the bypass card is located in slot 1 of your appliance, the interface names are lan1_0, wan1_0, lan1_1, and wan1_1 respectively.

The maximum number of copper LAN-WAN pairs (total paths) is ten: two built-in pairs, six pairs with two six-port cards, and two more pairs with a four-port card, for a maximum of ten pairs.
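The naming convention can be expressed as a small helper; the function name is illustrative:

```python
def bypass_interface_names(slot, pairs):
    """Interface names for a bypass card: lan<slot>_<pair> / wan<slot>_<pair>
    for each LAN/WAN port pair on the card."""
    names = []
    for pair in range(pairs):
        names += [f"lan{slot}_{pair}", f"wan{slot}_{pair}"]
    return names
```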


II. Deployment

Deployment Methods

Physical In-path

In a physical in-path deployment, the Steelhead appliance is physically in the direct path that network traffic takes between clients and servers. The clients and servers continue to see client and server IP addresses, and the Steelhead appliance bridges unoptimized traffic from its LAN-facing side to its WAN-facing side (and vice versa). Physical in-path configurations are suitable for any location where the total bandwidth is within the limits of the installed Steelhead appliance or serial cluster of Steelhead appliances. It is generally one of the simplest deployment options and among the easiest to maintain.

Logical In-path

In a logical in-path deployment, the Steelhead appliance is logically in the path between clients and servers, and clients and servers continue to see client and server IP addresses. This deployment differs from a physical in-path deployment in that a packet redirection mechanism is used to direct packets to Steelhead appliances that are not in the physical path of the client or server.

Commonly used technologies for redirection are: Layer-4 switches, Web Cache Communication Protocol (WCCP), and Policy-based Routing (PBR).

Server-Side Out-of-Path

A server-side out-of-path deployment is a network configuration in which the Steelhead appliance is not in the direct or logical path between the client and the server. Instead, the server-side Steelhead appliance is connected through the Primary interface and listens on port 7810 for connections coming from client-side Steelhead appliances. In an out-of-path deployment, the Steelhead appliance acts as a proxy. Unlike in-path deployments, where no NAT of the client's IP address is performed (allowing the server to see the original client IP address), the server-side out-of-path Steelhead appliance source-NATs connections to its Primary interface address. A server-side out-of-path configuration is suitable for data center locations when physical in-path or logical in-path configurations are not possible. With server-side out-of-path, client IP visibility is no longer available to the server (due to the NAT) and optimization initiated from the server side is not possible (since there is no redirection of the outbound connection's packets to the Steelhead appliance).

Physical Device Cabling Steelhead appliances have multiple physical and virtual interfaces. The Primary interface is typically used for management purposes, data store synchronization (if applicable), and for server-side out-of-path configurations. The Primary interface can be assigned an IP address and connected to a switch. You would use a straight-through cable for this configuration.

The LAN and WAN interfaces are purely L1/L2; no IP addresses can be assigned to them. Instead, a logical L3 interface is created. This is the "In-path" interface, and it is named on a per-slot and per-pair basis (in LAN/WAN pairs). A bypass card (or in-path card) in slot 0 with just one LAN and one WAN interface will have a logical interface called inpath0_0. A four-port card in slot 1 will get inpath1_0 and inpath1_1, each representing a pair of LAN/WAN ports: inpath1_0 represents lan1_0 and wan1_0, and inpath1_1 represents lan1_1 and wan1_1.


For a physical in-path deployment, when connecting the LAN and WAN interfaces to the network, treat both of them as you would a router port: use a crossover cable when connecting to a router, host, or firewall, and a straight-through cable when connecting to a switch. The Steelhead appliance supports auto-MDIX (medium-dependent interface crossover); however, using the wrong cables risks breaking the link between the devices the Steelhead appliance is placed between, especially while in bypass, since those devices may not support auto-MDIX.

For a virtual in-path deployment, only the WAN interface needs to be connected. The LAN interface does not need to be connected and is shut down automatically as soon as the virtual in-path option is enabled in the Steelhead appliance's configuration.

For server-side out-of-path deployments only the Primary interface needs to be connected.

In-path

In-path Networks Physical in-path configurations are suitable for locations where the total bandwidth is within the limits of the installed Steelhead appliance or serial cluster of Steelhead appliances.

The Steelhead appliance can be physically connected to both access ports and trunk ports. When the Steelhead appliance is placed on a trunk, the In-path interface has to be able to tag its traffic with the correct VLAN number. The supported trunking protocol is 802.1Q ("Dot1Q"). A tag can be assigned via the GUI or the CLI. The CLI command for this is:

HOSTNAME (config) # in-path interface inpathx_x vlan <id>

Inter-Steelhead appliance traffic will use this VLAN (except in Full Transparent connections as explained below).

There are several variations of the in-path deployment. Steelhead appliances can be placed in series for redundancy. Peering rules based on peer IP address must be applied to both Steelhead appliances so that they do not peer with each other. When using four-port cards, and thus multiple in-path IP addresses, all addresses must be listed to avoid peering.

A serial cluster is a failover design that can be used to mitigate the risk of possible network instabilities and outages caused by a single Steelhead appliance failure (typically caused by excessive bandwidth once data reduction is no longer occurring). When the maximum number of TCP connections for a Steelhead appliance is reached, that appliance stops intercepting new connections. This allows the next Steelhead appliance in the cluster the opportunity to intercept the new connections, if it has not reached its maximum number of connections. In-path peering rules and in-path rules are used so that the Steelhead appliances in the cluster know not to intercept connections between themselves.

Appliances in a failover deployment process the peering rules you specify in a spill-over fashion. A keepalive method is used between two Steelhead appliances to monitor each other's status and set a master and a backup state for the two Steelhead appliances. It is recommended to assign the LAN-side Steelhead appliance to be the master due to the amount of passthrough traffic from Steelhead to client or server. Optionally, data stores can be synchronized to ensure warm performance in case of a failure.

In case the Steelhead appliances are deployed in parallel with each other, measures need to be taken to prevent asymmetric traffic from being passed through without optimization. This usually occurs when two or more routing points exist in the network and traffic is spread over the links simultaneously. Connection Forwarding can be used to exchange flow information between the Steelhead appliances in the parallel deployment. Multiple Steelhead appliances can be bundled together.

WAN Visibility Modes WAN visibility pertains to how packets traversing the WAN are addressed. RiOS v5.0 offers three types of WAN visibility modes: correct addressing, port transparency, and full address transparency.

You configure WAN visibility on the client-side Steelhead appliance (where the connection is initiated). The server-side Steelhead appliance must also support multiple WAN visibility modes (RiOS v5.0 or later).

Correct Addressing Correct addressing uses Steelhead appliance IP addresses and port numbers in the TCP/IP packet header fields for optimized traffic in both directions across the WAN. This is the default setting. This is “correct” as the devices which are communicating (the TCP endpoints) are the Steelhead appliances, so their IP addresses/ports are reflected in the connection.

Port Transparency Port address transparency preserves your server port numbers in the TCP/IP header fields for optimized traffic in both directions across the WAN. Traffic is optimized while the server port number in the TCP/IP header field appears to be unchanged. Routers and network monitoring devices deployed in the WAN segment between the communicating Steelhead appliances can view these preserved fields. Use port transparency if you want to manage and enforce QoS policies that are based on destination ports. If your WAN router is following traffic classification rules written in terms of client and network addresses, port transparency enables your routers to use existing rules to classify the traffic without any changes. Port transparency enables network analyzers deployed within the WAN (between the Steelhead appliances) to monitor network activity and to capture statistics for reporting by inspecting traffic according to its original TCP port number. Port transparency does not require dedicated port configurations on your Steelhead appliances. NOTE: Port transparency only provides server port visibility. It does not provide client and server IP address visibility, nor does it provide client port visibility.

Full Transparency Full address transparency preserves your client and server IP addresses and port numbers in the TCP/IP header fields for optimized traffic in both directions across the WAN. It also preserves VLAN tags. Traffic is optimized while these TCP/IP header fields appear to be unchanged. Routers and network monitoring devices deployed in the WAN segment between the communicating Steelhead appliances can view these preserved fields. If both port transparency and full address transparency are acceptable solutions, port transparency is preferable. Port transparency avoids potential networking risks that are inherent to enabling full address transparency. For details, see the Steelhead Appliance Deployment Guide. However, if you must see your client or server IP addresses across the WAN, full transparency is your only configuration option.
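The three modes differ only in which addresses appear on the WAN, which can be sketched as below. EPHEMERAL stands in for a source port chosen by the client-side Steelhead appliance (the exact port is not specified in the text); 7800 is the default service port:

```python
def wan_headers(mode, client, server, client_sh_ip, server_sh_ip):
    """(src_ip, src_port, dst_ip, dst_port) seen on the WAN per visibility
    mode. client and server are (ip, port) tuples for the outer connection."""
    EPHEMERAL = "ephemeral"    # placeholder for a Steelhead-chosen source port
    c_ip, c_port = client
    s_ip, s_port = server
    if mode == "correct":      # Steelhead IPs and ports on the wire (default)
        return (client_sh_ip, EPHEMERAL, server_sh_ip, 7800)
    if mode == "port":         # server port preserved, Steelhead IPs kept
        return (client_sh_ip, EPHEMERAL, server_sh_ip, s_port)
    if mode == "full":         # client and server IPs and ports preserved
        return (c_ip, c_port, s_ip, s_port)
    raise ValueError(mode)
```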

Out-of-Band (OOB) Splice

What is the OOB Splice? An OOB splice is an independent, separate TCP connection, made on the first connection between two peer Steelhead appliances, used to transfer version, licensing, and other OOB data between the peers. An OOB connection must exist between two peers for connections between these peers to be optimized. If the OOB splice dies, all optimized connections on the peer Steelhead appliances will be terminated.

The OOB connection is a single connection existing between two Steelhead appliances regardless of the direction of flow. So if you open one or more connections in one direction, then initiate a connection from the other direction, there will still be only one connection for the OOB splice. This connection is made on the first connection between two peer Steelhead appliances using their in-path IP addresses and port 7800 by default. The OOB splice is rarely of any concern except in full transparency deployments.

Case Study

In the example below, the Client is trying to establish a connection to Server-1:

[Diagram: the Client (10.1.0.10) and client-side Steelhead CFE-1 (10.1.0.2) sit behind router 10.1.0.1 and firewall FW-1 (1.1.1.1). Across the WAN is firewall FW-2 (2.2.2.2), behind which router 10.2.0.1 leads to server-side Steelhead SFE-1 (10.2.0.2) and Server-1 (10.2.0.10), and router 10.3.0.1 leads to SFE-2 (10.3.0.2) and Server-2 (10.3.0.10).]
Issue 1: After establishing the inner connection, the client side will try to establish an OOB connection to the server-side Steelhead (SFE-1). It will address it by the IP address reported in the probe response (10.2.0.2). Clearly, the connection to this address will fail, since 10.2.x.x addresses are invalid outside of the firewall (FW-2).

Resolution 1: In the above example, there is one combination of address and port (IP:port) we know about: the connection's destination, Server-1, and the client is able to connect to Server-1. Therefore, the OOB splice creation code in sport (the optimization service) can be changed to create a transparent OOB connection from the Client to Server-1 if the corresponding inner connection is transparent.

How to Configure There are three options to address the OOB splice connection problem described in Issue 1 above.

In a default configuration the out-of-band connection uses the IP addresses of the client-side Steelhead and server-side Steelhead. This is known as correct addressing and is our default behavior. However, this configuration will fail in the network topology described above but works for the majority of networks. The command below is the default setting in a Steelhead appliance’s configuration.

in-path peering oobtransparency mode none

In the network topology discussed in Issue 1, the default configuration does not work. There are two oobtransparency modes that may work in establishing the peer connection: destination and full. In destination mode, the OOB splice is addressed to the IP and port of the first server connection to go through the Steelhead appliance, while the source is the client-side Steelhead's IP and a port chosen by the client-side Steelhead appliance. To change to this configuration use the following CLI command:


in-path peering oobtransparency mode destination

In oobtransparency full mode, the IP of the first client is used as the source, together with a pre-configured port on the client-side Steelhead appliance (port 708 by default). The destination IP and port are the same as in destination mode, i.e., that of the server. This is the recommended configuration when VLAN transparency is required. To change to this configuration use the following CLI command:

in-path peering oobtransparency mode full

To change the default port used by the client-side Steelhead appliance when oobtransparency mode full is configured, use the following CLI command:

in-path peering oobtransparency port &lt;port&gt;

It is important to note that these oobtransparency options are only used with full transparency. If the first inner-connection to a Steelhead was not transparent, the OOB will always use correct addressing.
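The endpoint selection for the three modes can be sketched as follows; the function and field names are illustrative, and 7800 and 708 are the defaults mentioned in the text:

```python
def oob_endpoints(mode, client_sh_ip, server_sh_ip,
                  first_client_ip, first_server, oob_port=708):
    """Endpoint selection for the OOB splice per oobtransparency mode.
    first_server is the (ip, port) of the first server connection that
    passed through the client-side Steelhead appliance."""
    if mode == "none":
        # Default correct addressing: Steelhead in-path IPs, service port 7800.
        return {"src_ip": client_sh_ip, "dst": (server_sh_ip, 7800)}
    if mode == "destination":
        # Destination transparency: address the OOB splice to the server.
        return {"src_ip": client_sh_ip, "dst": first_server}
    if mode == "full":
        # Full transparency: source is the first client's IP, port 708.
        return {"src_ip": first_client_ip, "src_port": oob_port,
                "dst": first_server}
    raise ValueError(mode)
```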

Virtual In-path

Introduction to Virtual In-path Deployments In a virtual in-path deployment, the Steelhead appliance is virtually in the path between clients and servers. Traffic moves in and out of the same WAN interface. This deployment differs from a physical in-path deployment in that a packet redirection mechanism is used to direct packets to Steelhead appliances that are not in the physical path of the client or server.

Redirection mechanisms:

Layer-4 Switch. You enable Layer-4 switch (or server load-balancer) support when you have multiple Steelhead appliances in your network to manage large bandwidth requirements.

PBR (Policy-Based Routing). PBR enables you to redirect traffic to a Steelhead appliance that is configured as a virtual in-path device. PBR allows you to define policies to redirect packets instead of relying on routing protocols. You define policies to redirect traffic to the Steelhead appliance and policies to avoid loop-back.

WCCP (Web Cache Communication Protocol). WCCP was originally implemented on Cisco routers, multi-layer switches, and web caches to redirect HTTP requests to local web caches (version 1). Version 2, which is supported on Steelhead appliances, can redirect any type of connection from multiple routers or web caches and different ports.

Policy-Based Routing (PBR)

Introduction to PBR PBR is a router configuration that allows you to define policies to route packets instead of relying on routing protocols. It is enabled on a per-interface basis, and packets coming into a PBR-enabled interface are checked to see if they match the defined policies. If they match, the packets are routed according to the rule defined for the policy; if they do not match, packets are routed based on the usual routing table. The rules can redirect the packets to a specific IP address.

To avoid an infinite loop, PBR must be enabled on the interfaces where the client traffic is arriving and disabled on the interfaces corresponding to the Steelhead appliance. The common best practice is to place the Steelhead appliance on a separate subnet.

One of the major issues with PBR is that it can black hole traffic (drop all TCP connections to a destination) if the device it is redirecting to fails. To avoid black holing traffic, PBR must have a way of tracking whether the PBR next hop is available. You can enable this tracking feature in a route map with the following Cisco router command:

set ip next-hop verify-availability

With this command, PBR attempts to verify the availability of the next hop using information from CDP. If that next hop is unavailable, it skips the actions specified in the route map. PBR checks availability in the following manner:

1. When PBR first attempts to send to a PBR next hop, it checks the CDP neighbor table to see if the IP address of the next hop appears to be available. If so, it sends an Address Resolution Protocol (ARP) request for the address, resolves it, and begins redirecting traffic to the next hop (the Steelhead appliance).

2. After PBR has verified the next hop, it continues to send to the next hop as long as it obtains answers from the ARP request for the next hop IP address. If the ARP request fails to obtain an answer, it then rechecks the CDP table. If there is no entry in the CDP table, it no longer uses the route map to send traffic. This verification provides a failover mechanism.

In more recent versions of the Cisco IOS software, there is a feature called PBR with Multiple Tracking Options. In addition to the old method of using CDP information, it allows methods such as HTTP and ping to be used to determine whether the PBR next hop is available. Using CDP allows you to run with older IOS 12.x versions.
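The CDP-then-ARP verification logic from the two steps above can be sketched as a decision function (names are illustrative):

```python
def pbr_forward(next_hop, cdp_neighbors, arp_responds):
    """Decide whether to redirect a packet to the PBR next hop (the
    Steelhead appliance) under 'set ip next-hop verify-availability'."""
    if next_hop not in cdp_neighbors:   # no CDP entry: stop using the route map
        return "route-normally"
    if not arp_responds(next_hop):      # ARP failed: fall back to normal routing
        return "route-normally"
    return "redirect-to-next-hop"
```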

WCCP Deployments

Introduction to WCCP

WCCP is a stateful protocol that the router and Steelhead appliance use to redirect traffic to the Steelhead appliance for optimization. Several functions must be covered to make it stateful and scalable: failover, load distribution, and negotiation of connection parameters all have to be communicated throughout the cluster that the Steelhead appliance and router form upon successful negotiation. The protocol has four messages to cover all of the above functions:

HERE_I_AM. Sent by Steelhead appliances to announce themselves.

I_SEE_YOU. Sent by WCCP enabled routers to respond to announcements.

REDIRECT_ASSIGN. Sent by the designated Steelhead appliance to determine flow distribution.

REMOVAL_QUERY. Sent by the router to check a Steelhead appliance after missed HERE_I_AM messages.

When you configure WCCP on a Steelhead appliance:

Routers and Steelhead appliances are added to the same service group.

Steelhead appliances announce themselves to the routers.

Routers respond with their view of the service group.

One Steelhead appliance will be the designated CE (caching engine) and tells the routers how to redirect traffic among the Steelhead appliances in the service group.

How Steelhead Appliances Communicate with Routers Steelhead appliances can use one of the following methods to communicate with routers:

Unicast UDP. The Steelhead appliance is configured with the IP address of each router. If additional routers are added to the service group, they must be added on each Steelhead appliance.

Multicast UDP. The Steelhead appliance is configured with a multicast group. If additional routers are added, you do not need to add or change configuration settings on the Steelhead appliances.


Redirection By default, all TCP traffic is redirected; optionally, a redirect-list can be defined so that only the traffic matching the redirect-list is redirected. A redirect-list in a WCCP configuration refers to an ACL configured on the router to select the traffic that will be redirected.

Traffic is redirected using one of the following schemes:

GRE (Generic Routing Encapsulation). Each data packet is encapsulated in a GRE packet with the Steelhead appliance IP address configured as the destination. This scheme is applicable to any network.

L2 (Layer 2). Each packet MAC address is rewritten with a Steelhead appliance MAC address. This scheme is possible only if the Steelhead appliance is connected to a router at Layer 2.

Either. The either value uses L2 first; if Layer 2 is not supported, GRE is used. This is the default setting. You can configure your Steelhead appliance not to encapsulate return packets. This allows your WCCP Steelhead appliance to negotiate with the router or switch as if it were going to send gre-return packets, but to actually send l2-return packets. This configuration is optional but recommended when connected directly at L2. The command to override WCCP packet return negotiation is wccp l2-return enable. Be sure the network design permits this.

Load Balancing and Failover WCCP supports unequal load balancing. Traffic is redirected based on a hashing scheme and the weight of the Steelhead appliances. Each router uses a 256-bucket Redirection Hash Table to distribute traffic for a Service Group across the member Steelhead appliances. It is the responsibility of the Service Group's designated Steelhead appliance to assign each router's Redirection Hash Table. The designated Steelhead appliance uses a WCCP2_REDIRECT_ASSIGNMENT message to assign the routers' Redirection Hash Tables. This message is generated following a change in Service Group membership and is sent to the same set of addresses to which the Steelhead appliance sends WCCP2_HERE_I_AM messages.

A router will flush its Redirection Hash Table if a WCCP2_REDIRECT_ASSIGNMENT is not received within five HERE_I_AM_T seconds of a Service Group membership change. The hash algorithm can use several different input fields to produce an 8-bit output (the bucket value). The default input fields are the source and destination IP addresses of the redirected packet; source and destination TCP ports, or any combination of these fields, can also be used.

The weight determines the percentage of traffic a Steelhead appliance in a cluster gets; the hashing algorithm determines which flow is redirected to which Steelhead appliance. The default weight is based on the Steelhead appliance model number. The weight is heavier for models that support more connections. You can modify the default weight if desired.

With the use of weight you can also create an active/passive cluster by assigning a weight of 0 to the passive Steelhead appliance. This Steelhead appliance will only get traffic when the active Steelhead appliance fails.
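A rough sketch of how a weight-proportional split of the 256 buckets might work (illustrative only, not Riverbed's actual assignment algorithm; the appliance names are hypothetical):

```python
def assign_buckets(weights, num_buckets=256):
    """Illustrative: divide the router's hash buckets among cluster
    members in proportion to their weights. A passive appliance with
    weight 0 receives no buckets while any active peer remains."""
    total = sum(weights.values())
    if total == 0:
        return {name: 0 for name in weights}
    counts = {name: num_buckets * w // total for name, w in weights.items()}
    # hand out any remainder buckets, heaviest appliances first
    leftover = num_buckets - sum(counts.values())
    for name in sorted(weights, key=weights.get, reverse=True):
        if leftover == 0:
            break
        counts[name] += 1
        leftover -= 1
    return counts
```

With weights {2, 1, 0}, the weight-0 appliance gets no buckets and sees traffic only after the designated Steelhead appliance reassigns the table following a failure.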

Assignment and Redirection Methods The assignment method refers to how a router chooses which Steelhead appliance in a WCCP service group to redirect packets to. There are two assignment methods: the Hash assignment method and the Mask assignment method. Steelhead appliances support both the Hash assignment and Mask assignment methods.

HASH


Redirection using Hash assignment is a two-stage process. In the first stage, a primary key is formed from the packet (as defined by the Service Group) and hashed to yield an index into the Redirection Hash Table.

Each bucket in the Redirection Hash Table either holds an unflagged web-cache index, is unassigned, or is flagged. If the bucket holds an unflagged web-cache index, the packet is redirected to that web-cache. If the bucket is unassigned, the packet is forwarded normally. If the bucket is flagged, indicating a secondary hash, a secondary key is formed (as defined by the Service Group description) and hashed to yield a second index into the Redirection Hash Table. If this secondary entry contains a web-cache index, the packet is redirected to that web-cache; if it is unassigned, the packet is forwarded normally.

MASK

The first phase of Mask assignment is defining the mask itself. The mask can be applied to the source TCP port, destination TCP port, source IP address, destination IP address, or any combination of the four attributes, but may not exceed seven bits in total. Depending on the number of bits selected, a different number of buckets is created and assigned to the Steelhead appliances in the service group. As traffic traverses the router, a bitwise AND operation is performed between the mask and the IP address/TCP port fields the mask is defined on. The traffic is assigned to the different buckets based on the result of the AND operation.

Masked IP address/TCP port values are processed in the order they are received and compared, in turn, against the up-to-seven mask bits.
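The bitwise AND step can be sketched as follows. This is an illustration only; the field chosen (destination IP) and the mask value are hypothetical examples, and real routers compact the masked bits in hardware:

```python
import ipaddress

def mask_bucket(dst_ip, dst_mask):
    """Illustrative: AND a WCCP mask (here applied to the destination
    IP only) against the packet, then compact the surviving bits into
    a bucket index. An n-bit mask yields 2**n buckets (n <= 7)."""
    assert bin(dst_mask).count("1") <= 7, "mask may not exceed seven bits"
    masked = int(ipaddress.IPv4Address(dst_ip)) & dst_mask
    # collect the masked bit positions, low to high, into a compact index
    bucket, out_bit = 0, 0
    for bit in range(32):
        if dst_mask >> bit & 1:
            bucket |= ((masked >> bit) & 1) << out_bit
            out_bit += 1
    return bucket
```

For example, a 3-bit mask of 0x7 on the destination address buckets traffic by the low three bits of the last octet, giving 8 buckets.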

From Internet-Draft WCCP version 2 (http://www.wrec.org/Drafts/draft-wilson-wrec-wccp-v2-00.txt):

Note that in all of the mask fields of this element a zero means "Don't care".

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                      Source Address Mask                      |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                    Destination Address Mask                   |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|        Source Port Mask       |     Destination Port Mask     |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Source Address Mask. The 32-bit mask to be applied to the source IP address of the packet.

Destination Address Mask. The 32-bit mask to be applied to the destination IP address of the packet.

Source Port Mask. The 16-bit mask to be applied to the TCP/UDP source port field of the packet.

Destination Port Mask. The 16-bit mask to be applied to the TCP/UDP destination port field of the packet.

It may not be obvious from the details here, but there is a priority bit order when using Mask. The above diagram reads from most significant to least significant, bottom left to top. In other words, the priority order of the bits is source port, destination port, destination address, and source address. Knowing this helps, when troubleshooting, to determine which bucket a specific resource is allocated to.


For more information regarding Hash or Mask assignment, refer to the Steelhead Appliance Deployment Guide and the whitepaper “WCCP Mask Assignment” provided on the Riverbed Partner Portal and/or Riverbed Technical Support site.

Advanced WCCP Configuration

Using Multicast Groups If you add multiple routers and Steelhead appliances to a service group, you can configure them to exchange WCCP protocol messages through a multicast group. Configuring a multicast group is advantageous because if a new router is added, it does not need to be explicitly added on each Steelhead appliance.

Multicast addresses must be between 224.0.0.0 and 239.255.255.255.

Configuring Multicast Groups on the Router On the router, at the system prompt, enter the following set of commands:

Router> enable
Router# configure terminal
Router(config)# ip wccp 90 group-address 224.0.0.3
Router(config)# interface fastEthernet 0/0
Router(config-if)# ip wccp 90 redirect in
Router(config-if)# ip wccp 90 group-listen
Router(config-if)# end
Router# write memory


Configuring Multicast Groups on the Steelhead Appliance On the WCCP Steelhead appliance, at the system prompt, enter the following set of commands:

WCCP Steelhead > enable
WCCP Steelhead # configure terminal
WCCP Steelhead (config) # wccp enable
WCCP Steelhead (config) # wccp mcast-ttl 10
WCCP Steelhead (config) # wccp service-group 90 routers 224.0.0.3
WCCP Steelhead (config) # write memory
WCCP Steelhead (config) # exit

Limiting Redirection by TCP Port By default all TCP ports are redirected, but you can configure the WCCP Steelhead appliance to tell the router to redirect only certain TCP source or destination ports. You can specify up to a maximum of seven ports per service group.

Using Access Lists for Specific Traffic Redirection If redirection is based on traffic characteristics other than ports, you can use ACLs on the router to define what traffic is redirected.

ACL considerations:

ACLs are processed in order, from top to bottom. As soon as a particular packet matches a statement, it is processed according to that statement and the packet is not evaluated against subsequent statements. Therefore, the order of your access-list statements is very important.


If no port information is explicitly defined, all ports are assumed.

By default all lists include an implied deny all entry at the end, which ensures that traffic that is not explicitly included is denied. You cannot change or delete this implied entry.
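The first-match-wins behavior and the implied deny-all can be sketched as follows (illustrative only; real IOS ACL entries match on wildcards, protocols, and more than is modeled here):

```python
def evaluate_acl(acl, src_ip, port):
    """Illustrative first-match ACL evaluation: 'acl' is a list of
    (action, source_ip, port) entries checked top to bottom; a port of
    None means all ports. An implied deny-all entry ends every list."""
    for action, entry_ip, entry_port in acl:
        if entry_ip == src_ip and entry_port in (None, port):
            return action   # first match wins; later entries are skipped
    return "deny"           # the implied deny-all entry
```

Because evaluation stops at the first match, a permit for port 80 placed above a broader deny still lets that port through, which is why statement order matters.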

Access Lists: Best Practice To avoid requiring the router to do extra work, Riverbed recommends that you create an ACL that redirects only TCP traffic to the Steelhead appliance. When a WCCP-configured Steelhead appliance receives UDP, GRE, ICMP, and other non-TCP traffic, it returns the traffic to the router.

Verifying and Troubleshooting WCCP Configuration

Checking the Router Configuration On the router, at the system prompt, enter the following set of commands:

Router>en
Router#show ip wccp
Router#show ip wccp 90 detail
Router#show ip wccp 90 view

Verifying WCCP Configuration on an Interface On the router, at the system prompt, enter the following set of commands:

Router>en
Router#show ip interface

Look for WCCP status messages near the end of the output.

You can trace WCCP packets and events on the router.

Checking the Access List Configuration On the router, at the system prompt, enter the following set of commands:

Router>en
Router#show access-lists <access_list_number>

Tracing WCCP Packets and Events on the Router On the router, at the system prompt, enter the following set of commands:

Router>en
Router#debug ip wccp events
WCCP events debugging is on
Router#debug ip wccp packets
WCCP packet info debugging is on
Router#term mon

Server-Side Out-of-Path Deployments

Out-of-path Networks An out-of-path deployment is a network configuration in which the Steelhead appliance is not in the direct physical or logical path between the client and the server. In an out-of-path deployment, the Steelhead appliance acts as a proxy. An out-of-path configuration is suitable for data center locations where physical in-path or virtual in-path configurations are not possible.


In an out-of-path deployment, the client-side Steelhead appliance is configured as an in-path device, and the server-side Steelhead appliance is configured as an out-of-path device.

The command to enable server-side out-of-path is:

HOSTNAME (config) # out-of-path enable

[Figure: Server-side out-of-path deployment. A fixed-target rule on the client-side (in-path) Steelhead appliance directs traffic across the WAN to the Primary (PRI) interface of the server-side Steelhead appliance; after NAT, the server sees IP SRC = server-side Steelhead.]

A fixed-target rule is applied on the client-side Steelhead appliance to make sure the TCP session is intercepted and statically sent to the out-of-path Steelhead appliance on the server side. When out-of-path is enabled on the server-side Steelhead appliance, it starts listening on port 7810 for incoming connections from a client-side Steelhead appliance.

The Steelhead appliance can perform NAT. The server will see the IP address of the Steelhead appliance as the source of the connection so the packets are returned to the Steelhead appliance instead of the client. This is necessary to make sure that the bidirectional traffic is seen by the Steelhead appliance. Also keep in mind that optimization will only occur when the TCP connection is initiated by the client.

Out-of-Path, Failover Deployment An out-of-path, failover deployment serves networks where an in-path deployment is not an option. This deployment is cost effective, simple to manage, and provides redundancy.

When both Steelhead appliances are functioning properly, the connections traverse the master appliance. If the master Steelhead appliance fails, subsequent connections traverse the backup Steelhead appliance. When the master Steelhead appliance is restored, the next connection traverses the master Steelhead appliance. If both Steelhead appliances fail, the connection is passed through unoptimized to the server. The way to do this is to specify multiple target appliances in the fixed-target in-path rule on the client-side Steelhead appliance.
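The per-connection failover behavior amounts to trying the targets in the order listed in the fixed-target rule. A minimal sketch (illustrative only; the appliance names and reachability callback are hypothetical):

```python
def pick_target(targets, is_reachable):
    """Illustrative fixed-target failover: try the appliances listed in
    the in-path rule in order (master first, then backup); if none
    responds, the connection is passed through to the server unoptimized."""
    for target in targets:
        if is_reachable(target):
            return target
    return None  # all targets failed: pass through unoptimized
```

Because the list is re-evaluated per connection, new connections return to the master as soon as it is reachable again, matching the behavior described above.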


[Figure: Out-of-path failover deployment. In the data center LAN, Steelhead A (master) and Steelhead B (backup) connect to the switch behind the WAN router, alongside the server.]

Hybrid Mode: In-Path and Server-Side Out-of-Path Deployment

A hybrid mode deployment serves offices with one WAN routing point and users, and where the Steelhead appliance must be referenced from remote sites as an out-of-path device (for example, to avoid mistaken auto-discovery or to bypass intermediary Steelhead appliances).

The following figure illustrates the client-side of the network where the Steelhead appliance is configured as both an in-path and server-side out-of-path device.

[Figure: Hybrid in-path and server-side out-of-path deployment. The Steelhead appliance sits in-path between the switch and the firewall/VPN, with its Primary (PRI) interface on the LAN; the firewall's DMZ hosts an FTP server and a web server.]

In this hybrid design, a client-side Steelhead appliance (not shown) would use the typical auto-discovery process to optimize any data going to or coming from the clients shown. If, however, a remote user would like optimization to the DMZ shown above, the standard auto-discovery process would not function properly, since the packet flow would prevent the auto-discovery probe from ever reaching the Steelhead appliance. To remedy this, a fixed-target rule matching the destination address of the DMZ and targeted at the Primary (PRI) interface of the Steelhead appliance ensures that the traffic reaches the Steelhead appliance and, due to the server-side out-of-path NAT process, returns to the Steelhead appliance for optimization on the return path.

Asymmetric Route Detection

Asymmetric auto-detection enables Steelhead appliances to detect the presence of asymmetry within the network. Asymmetry is detected by the client-side Steelhead appliances. Once detected, the Steelhead appliance passes asymmetric traffic through unoptimized, allowing the TCP connections to continue to work. The first TCP connection for a pair of addresses might be dropped because, during the detection process, the Steelhead appliances have no way of knowing that the connection is asymmetric.

If asymmetric routing is detected, an entry is placed in the asymmetric routing table and any subsequent connections from that IP address pair will be passed through unoptimized. Further connections between these hosts are not optimized until that particular asymmetric routing cache entry times out.
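The cache behavior described above can be sketched as a timed pass-through table. This is an illustrative model only, not RiOS internals; the timeout value is a placeholder, and the real table also records a reason code such as "bad RST" or "no SYN/ACK":

```python
import time

class AsymmetricRoutingCache:
    """Illustrative sketch: remember IP pairs detected as asymmetric and
    pass their connections through unoptimized until the entry times out."""
    def __init__(self, timeout_s=86400):   # placeholder timeout
        self.timeout_s = timeout_s
        self.entries = {}                  # (client_ip, server_ip) -> detection time

    def mark_asymmetric(self, client_ip, server_ip):
        self.entries[(client_ip, server_ip)] = time.monotonic()

    def should_pass_through(self, client_ip, server_ip):
        seen = self.entries.get((client_ip, server_ip))
        if seen is None:
            return False
        if time.monotonic() - seen > self.timeout_s:
            del self.entries[(client_ip, server_ip)]   # entry timed out
            return False
        return True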

Asymmetric Routing Table and Log Entries

Complete Asymmetry. Packets traverse both Steelhead appliances going from the client to the server but bypass both Steelhead appliances on the return path.
  Asymmetric Routing Table: bad RST
  Log: Sep 5 11:16:38 gen-sh102 kernel: [intercept.WARN] asymmetric routing between 10.11.111.19 and 10.11.25.23 detected (bad RST)

Server-side Asymmetry. Packets traverse both Steelhead appliances going from the client to the server but bypass the server-side Steelhead appliance on the return path.
  Asymmetric Routing Table: bad SYN/ACK
  Log: Sep 7 16:17:25 gen-sh102 kernel: [intercept.WARN] asymmetric routing between 10.11.25.23:5001 and 10.11.111.19:33261 detected (bad SYN/ACK)

Client-side Asymmetry. Packets traverse both Steelhead appliances going from the client to the server but bypass the client-side Steelhead appliance on the return path.
  Asymmetric Routing Table: no SYN/ACK
  Log: Sep 7 16:41:45 gen-sh102 kernel: [intercept.WARN] asymmetric routing between 10.11.111.19:33262 and 10.11.25.23:5001 detected (no SYN/ACK)

Multi-SYN Retransmit. There are two types of Multi-SYN Retransmit. Probe-filtered occurs when the client-side Steelhead appliance sends out multiple SYN+ frames and does not get a response. SYN-rexmit occurs when the client-side Steelhead appliance receives multiple SYN retransmits from a client and does not see a SYN/ACK packet from the destination server.
  Asymmetric Routing Table: probe-filtered(not-AR)
  Log: Sep 13 20:59:16 gen-sh102 kernel: [intercept.WARN] it appears as though probes from 10.11.111.19 to 10.11.25.23 are being filtered. Passing through connections between these two hosts.

Connection Forwarding

In asymmetric networks, a client request traverses a different network path from the server response. Although the packets traverse different paths, to optimize a connection, packets traveling in both directions must pass through the same client and server Steelhead appliances.

If you have one path (through Steelhead-2) from the client to the server and a different path (through Steelhead-3) from the server to the client, you need to enable in-path Connection Forwarding and configure the Steelhead appliances to communicate with each other. These Steelhead appliances are called neighbors and exchange connection information to redirect packets to each other.

You can configure multiple neighbors for a Steelhead appliance. Neighbors can be placed in the same physical site or in different sites, but the latency between them should be small because the packets traveling between them are not optimized.

When a SYN arrives on Steelhead-2, it sends a message on port 7850 to Steelhead-3, telling it that Steelhead-2 is expecting packets for that connection. Steelhead-3 acknowledges, and once Steelhead-2 gets the confirmation from Steelhead-3, it continues with the SYN+ out to the WAN. When the SYN/ACK+ comes back, if it arrives at Steelhead-3, Steelhead-3 encapsulates that packet and forwards it back to Steelhead-2. Once the connection has been established, there is no more encapsulation between the two Steelhead appliances for that flow.

If a subsequent packet arrives on Steelhead-3, it will perform the destination IP/port rewrite. The Steelhead appliance simply changes the destination IP of the packet to that of the neighbor Steelhead appliance. No encapsulation is involved later on in the flow.
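The established-flow redirect is just a header rewrite. A minimal sketch, treating the packet as a plain dictionary (illustrative; real forwarding happens on raw frames in the kernel):

```python
def neighbor_redirect(packet, owner_inpath_ip):
    """Illustrative: after a flow is established, a connection-forwarding
    neighbor redirects packets by rewriting the destination IP to the
    owning Steelhead appliance's in-path address; no encapsulation."""
    redirected = dict(packet)          # leave the original packet untouched
    redirected["dst_ip"] = owner_inpath_ip
    return redirected
```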

In WCCP deployments, Connection Forwarding can also be used to prevent outages whenever the cluster and the redirection table change. The default behavior of Connection Forwarding is that when a neighbor is lost, the Steelhead appliance that lost the neighbor also passes through the connection, since it assumes asymmetric routing of traffic. In WCCP deployments this is not the case, and this behavior has to be avoided. The command in-path neighbor allow-failure overrides the default behavior and allows the Steelhead appliances to continue optimizing. Understanding the implications of applying this command prior to configuring it in a production environment is recommended.

Commands to enable Connection Forwarding:

in-path neighbor enable
in-path neighbor ip address <addr> [port <port>]

in-path neighbor allow-failure (optional)

For neighbors with multiple in-path interfaces, only the IP address of the first in-path interface has to be specified.

Simplified Routing (SR)

Simplified routing collects the IP address for the next hop MAC address from each packet it receives to use in addressing traffic. Enabling simplified routing eliminates the need to add static routes when the Steelhead appliance is in a different subnet from the client and the server.

Without simplified routing, if a Steelhead appliance is installed in a different subnet from the client or server, you must define one router as the default gateway and optionally define static routes for the other subnets.

Without having static routes or other forms of ‘routing’ intelligence, packets can end up flowing through the Steelhead appliance twice causing packet ricochet. This could potentially lead to broken QoS models, firewalls blocking packets, and a performance decrease. Enabling simplified routing eliminates these issues.

Using destination-only (dest-only) simplified routing is recommended in certain asymmetric environments. Hot Standby Router Protocol (HSRP), for example, introduces asymmetry into networks. Normally this is not a problem, but when using WAN accelerators it is crucial for all traffic to return to the corresponding Steelhead appliance doing the optimization. With source or all enabled, the logical IP address used by the router is not bound to a physical interface or MAC address; with all or source-based SR, the MAC of the actual IP is learned by the Steelhead appliances, which could cause confusion in the route that the packet takes.
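A minimal sketch of the learning behavior (illustrative only; the actual RiOS learning rules, and what dest-only excludes, differ in detail):

```python
class SimplifiedRoutingTable:
    """Illustrative sketch of simplified routing: learn an IP-to-MAC
    mapping from frames the appliance observes, instead of relying on
    static routes, falling back to the default gateway when unknown."""
    def __init__(self):
        self.next_hop_mac = {}     # destination IP -> learned next-hop MAC

    def learn(self, ip, mac):
        # record the L2 next hop observed for this IP
        self.next_hop_mac[ip] = mac

    def mac_for(self, dst_ip, default_gateway_mac):
        # use the learned next hop; otherwise the configured default gateway
        return self.next_hop_mac.get(dst_ip, default_gateway_mac)
```

Sending straight to the learned next-hop MAC is what avoids the packet ricochet through the default gateway described above.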

Data Store Synchronization

In a serial failover scenario the data stores are not synchronized by default. When the master Steelhead appliance fails, the backup Steelhead appliance takes over, but users experience cold performance again. Data store synchronization can be turned on to exchange data store content, via either the Primary or the AUX interface. The synchronization process runs on port 7744, and the reconnect timer is set to 30 seconds. Data store synchronization can only occur between the same Steelhead appliance models and can only be used in pairs.

The commands to enable automatic data store synchronization are:

HOSTNAME (config) # datastore sync peer-ip "x.x.x.x"
HOSTNAME (config) # datastore sync port "7744"
HOSTNAME (config) # datastore sync reconnect "30"
HOSTNAME (config) # datastore sync master
HOSTNAME (config) # datastore sync enable

If you have not deployed data store synchronization it is also possible to manually send the data from one Steelhead appliance to another. The receiving Steelhead appliance will have to start a listening process on the Primary/AUX interface. The sending Steelhead appliance will have to push the data to the IP address of the Primary interface.

Something to note about Primary and AUX interfaces: if the connection is created from the Steelhead appliance to some external machine (non-Steelhead device), traffic will only go out the Primary or AUX interfaces. Therefore, TACACS+ and RADIUS will only go out the Primary or AUX interface since they originated from the Steelhead appliance.

The commands to start this are:

HOSTNAME (config) # datastore receive port <port>
HOSTNAME (config) # datastore send addr <addr> port <port>

CIFS Prepopulation

The prepopulation operation effectively performs the first Steelhead read of the data on the prepopulation share. Subsequently, the Steelhead appliance handles read and write requests as effectively as with a warm data transfer. With warm transfers, only new or modified data is sent, dramatically increasing the rate of data transfer over the WAN.

Authentication and Authorization

Authentication

The Steelhead appliance can use a RADIUS or TACACS+ authentication system for logging in administrative and monitor users. The following methods for user authentication are provided with the Steelhead appliance:

Local

RADIUS

TACACS+


The order in which authentication is attempted is based on the order specified in the AAA method list. The local value must always be specified in the method list.

The authentication methods list provides backup methods should a method fail to authenticate a user. If a method denies a user or is not reachable, the next method in the list is tried. If multiple servers are configured within a method (assuming the method contacts authentication servers) and a server timeout is encountered, the next server in the list is tried. If the server currently being contacted issues an authentication reject, no other servers for that method are tried and the next authentication method in the list is attempted. If no method validates the user, the user is not allowed access to the appliance.
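The timeout-vs-reject distinction is the subtle part of this behavior. A hedged sketch (illustrative only; each "server" here is a hypothetical callable returning 'accept', 'reject', or 'timeout'):

```python
def authenticate(methods, user, password):
    """Illustrative AAA method-list behavior: methods are tried in the
    configured order. Within a method, a server timeout moves on to the
    next server, but an explicit reject skips the rest of that method's
    servers and falls through to the next method."""
    for method in methods:
        for server in method:
            result = server(user, password)   # 'accept', 'reject', 'timeout'
            if result == "accept":
                return True
            if result == "reject":
                break      # no other servers for this method are tried
            # 'timeout': try the next server in this method
    return False           # no method validated the user: access denied
```

This is why local must always appear in the method list: it is the last method that can still accept the user when every remote server rejects or times out.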

The Steelhead appliance does not have the ability to set a per interface authentication policy. The same default authentication method list is used for all interfaces. You cannot configure authentication methods with subsets of the RADIUS or TACACS+ servers specified (that is, there are no server groups).

When configuring the authentication server, it is important to specify the service rbt-exec along with the appropriate custom attributes for authorization. Authorization can be based on either the admin account or the monitor user account by using local-user-name=admin or local-user-name=monitor, respectively.

Refer to the CLI Guide for the available RADIUS and TACACS+ authentication commands.

SSL

With Riverbed SSL, Steelhead appliances are configured with a trust relationship so they can exchange information securely over an SSL connection. SSL clients and servers communicate with each other exactly as they do without Steelhead appliances; no changes are required to the client or server application, nor to the configuration of proxies. Riverbed splits up the SSL handshake, the sequence of message exchanges at the start of an SSL connection. This is called split termination.

In an ordinary SSL handshake, the client and server first establish identity using public-key cryptography, then negotiate a symmetric session key to be used for data transfer. With Riverbed SSL acceleration, the initial SSL message exchanges take place between the client and the server-side Steelhead appliance. Then the server-side Steelhead appliance sets up a connection to the server, to ensure that the service requested by the client is available. In the last part of the handshake sequence, a Steelhead-to-Steelhead process ensures that both appliances (client-side and server-side) know the session key.

The client SSL connection logically terminates at the server but physically terminates at the client-side Steelhead appliance—just as is true for logical versus physical unencrypted TCP connections. And just as the Steelhead-to-Steelhead TCP connection over the WAN may use a better TCP implementation than the ones used by client or server, the Steelhead-to-Steelhead connection may be configured to use better ciphers and protocols than the client and server would normally use.

The Steelhead appliance also contains a secure vault which stores all SSL server settings, other certificates (that is, the CA, peering trusts, and peering certificates), and the peering private key. The secure vault protects your SSL private keys and certificates when the Steelhead appliance is not powered on. You set a password for the secure vault which is used to unlock it when the Steelhead appliance is powered on. After rebooting the Steelhead appliance, SSL traffic is not optimized until the secure vault is unlocked with the correct password.


Refer to the Steelhead Appliance Management Console User’s Guide for information on configuring SSL.

Central Management Console (CMC)

Introduction The CMC facilitates the essential administration tasks for the Riverbed system:

Configuration. The CMC enables you to automatically configure new Steelhead appliances or to send configuration settings to appliances in remote offices. The CMC utilizes policies and groups to facilitate centralized configuration and reporting.

Monitoring. The CMC provides both high-level status and detailed statistics of the performance of Steelhead appliances and enables you to configure event notification for managed Steelhead appliances.

Management. The CMC enables you to start, stop, restart, and reboot remote Steelhead appliances. You can also schedule jobs to send software upgrades and configuration changes to remote appliances or to collect logs from remote Steelhead appliances.

CMC Configuration Objects The CMC utilizes appliance policies and appliance groups to facilitate centralized configuration and reporting of remote Steelhead appliances. Groups are comprised of Steelhead appliances or sub-groups of Steelhead appliances; all groups and Steelhead appliances are contained in the root default Global group.

Policies are sets of common configuration options that can be shared among different Steelhead appliances independently or via group membership. You should be familiar with the policies and the features that each policy manages.

The following policy types are available:

Optimization Policy. Use optimization policies to manage optimization features such as the data store, in-path rules, and SSL settings, in addition to many others.

Networking Policy. Use networking policies to manage networking features such as asymmetric routing, DNS settings, host settings, QoS settings, and others.

Security Policy. Use security policies to manage appliances in which security is a key component.

System Settings Policy. Use system settings policies to organize and manage system setting features such as alarms, announcements, email notifications, log settings, and others.

Each policy type is made up of particular RiOS features. For example, system settings policies contain feature sets for common system administration settings such as alarm settings, announcements, email notification settings, among others, while security policies contain feature sets for encryption, authentication methods, and user permissions.

Each group or Steelhead appliance can be assigned one of each type of policy. Because the Global group serves as the root group, or parent, to all subsequent groups and appliances, any policies assigned to the Global group provide the default values for all groups and Steelhead appliances.

The Override Parent feature can override the inheritance of values from the policies applied to the parent group. It is off by default.
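The inheritance rule can be sketched per setting. This is an illustrative model of the described behavior, not the CMC implementation; the setting names are hypothetical:

```python
def effective_value(setting, parent_policy, child_policy, override_parent=False):
    """Illustrative CMC policy inheritance: the Global/parent group
    supplies the default value for a setting; a child group's own
    policy takes effect only when Override Parent is enabled."""
    if override_parent and setting in child_policy:
        return child_policy[setting]
    # inherit from the parent; fall back to the child if the parent
    # does not define the setting at all
    return parent_policy.get(setting, child_policy.get(setting))
```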


Steelhead Appliance Auto-Registration Steelhead appliances must be registered with the CMC so that you can monitor and manage them with the CMC.

Steelhead appliances are designed to send a registration request periodically to the CMC, either to an IP address or hostname you specify when you run the Steelhead appliance installation wizard, or to a default CMC hostname. In order for auto-registration with the default hostname to work, you must configure your DNS server to map the hostname riverbedcmc to the IP address of the CMC.

The steps you take to register Steelhead appliances with the CMC depend on the order in which you install the products.

You can alternatively add Steelhead appliances manually to be managed by the CMC.

Secure Vault on the CMC Initially the secure vault is keyed with a default password known only to the RiOS software. This allows the system to automatically unlock the vault during system start up. You can change the password, but the secure vault does not automatically unlock on start up. To optimize SSL connections or to use data store encryption, the secure vault must be unlocked.

Please see the Central Management Console User’s Guide for more information.

Steelhead Mobile Solution (Steelhead Mobile Controller & Steelhead Mobile Client)

The Steelhead Mobile Controller (SMC) enables centralized management of Steelhead Mobile clients that deliver wide area data services for the entire mobile workforce.

The Steelhead Mobile software is deployed to laptops or desktops for mobile workers, home users, and small branch office users. A Steelhead Mobile Controller, located in the data center, is required for Steelhead Mobile deployment, management, and licensing control. Once deployed and connected, the Steelhead Mobile clients connect directly with a Steelhead appliance in order to accelerate data and applications.

The Mobile Controller facilitates the essential administration tasks for your Mobile Clients:

Configuration. The Mobile Controller enables you to install, configure, and update Mobile Clients in groups. The Mobile Controller utilizes Endpoint policies, Acceleration policies, MSI packages, and deployment IDs (DIDs) to facilitate centralized configuration and reporting.

Monitoring. The Mobile Controller provides both high-level status and detailed statistics on Mobile Client performance, and enables you to configure alerts for managed Mobile Clients.

Management. The Mobile Controller enables you to schedule software upgrades and configuration changes to groups of Mobile Clients, or to collect logs from Mobile Clients.

Endpoint Policies Endpoint policies are used as configuration templates to configure groups of Mobile Clients that have the same configuration requirements. For example, you might use the default endpoint policy for the majority of your Mobile Clients and create another one for a group of users who need to connect to a different Mobile Controller.

Acceleration Policies Acceleration policies are used as configuration templates to configure groups of Mobile Clients that have the same performance requirements. For example, you might use the default


acceleration policy for the majority of your Mobile Clients and create another acceleration policy for a group of Mobile Clients that need to pass-through a specific type of traffic.

Mobile Clients must have both an endpoint policy and an acceleration policy for optimization to occur.

MSI Packages You use Microsoft Software Installer (MSI) packages to install and update the Steelhead Mobile Client software on each of your endpoint clients. The MSI package contains information necessary for Mobile Clients to communicate with the Mobile Controller.

Deployment IDs The Mobile Controller utilizes deployment IDs to link Endpoint and Acceleration policies to your Mobile Clients. The DID governs which policies and MSI packages the Mobile Controller provides to your clients. You can define the DIDs when you create MSI packages. You then assign policies to the DIDs. When you deploy an MSI package, the DID becomes associated with the endpoint client. The Mobile Controller subsequently uses the DID to identify the client and automatically provide policy and software updates.
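The DID linkage is essentially a lookup table keyed by deployment ID. A minimal sketch (illustrative only; the class and policy names are hypothetical stand-ins for Mobile Controller state):

```python
class MobileControllerState:
    """Illustrative: the deployment ID (DID) baked into an MSI package
    links an endpoint client to its Endpoint and Acceleration policies."""
    def __init__(self):
        self.policies_by_did = {}

    def assign(self, did, endpoint_policy, acceleration_policy):
        # both policies are required for optimization to occur
        self.policies_by_did[did] = (endpoint_policy, acceleration_policy)

    def policies_for_client(self, did):
        # an unknown DID gets no policies (and therefore no optimization)
        return self.policies_by_did.get(did)
```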

Firewall Requirements

If you deploy the Mobile Controller in the DMZ next to a VPN concentrator, with firewalls on each side, the client-side firewall must have port 7801 open. The server-side firewall must have ports 22, 80, 443, 7800, and 7870 open. If you are using application control, you must also allow the executables rbtdebug.exe, rbtmon.exe, rbtsport.exe, and shmobile.exe.
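A quick way to sanity-check the server-side firewall rules above is to attempt a TCP connection to each required port. The sketch below assumes a hypothetical Mobile Controller hostname; a successful connect only proves reachability from where the script runs, not the full firewall policy.

```python
# Hedged sketch: probe the server-side ports the Mobile Controller needs
# (22, 80, 443, 7800, 7870). The hostname is a placeholder, not a real device.
import socket

MOBILE_CONTROLLER = "smc.example.com"   # hypothetical Mobile Controller address
SERVER_SIDE_PORTS = [22, 80, 443, 7800, 7870]

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in SERVER_SIDE_PORTS:
    state = "open" if port_open(MOBILE_CONTROLLER, port) else "blocked/unreachable"
    print(f"{MOBILE_CONTROLLER}:{port} -> {state}")
```

Run the same check against port 7801 from the client side to verify the client-side firewall.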

For more information, see the Steelhead Mobile Controller User's Guide.

Interceptor Appliance

The Interceptor appliance extends the performance capabilities of Steelhead appliances to meet the requirements of very large data center environments. Working with Steelhead appliances, an Interceptor appliance can support up to 1,000,000 concurrent connections at up to 4 gigabits per second.

Interceptor Deployment Terminology



Peer Neighbors. Steelhead 1, Steelhead 2, Steelhead 3, and Steelhead 4 are the pool of LAN-side Steelhead appliances that are load balanced by the Interceptor appliances. In relation to the Interceptor appliances, these Steelhead appliances are called peer neighbors.

Peer Interceptor Appliances. Interceptor 1 and Interceptor 3 are peers to each other, connected virtually, in parallel.

Failover Buddies. Interceptor 1 and Interceptor 2 are failover buddies to each other, connected with cables, in serial. If either Interceptor appliance goes down or requires maintenance, its buddy handles redirection for its connections.

In-Path Rules

When the Interceptor appliance intercepts a SYN request to a server, the in-path rules you configure determine the subnets and ports for traffic that will be optimized. You can specify in-path rules to pass-through, discard, or deny traffic, or to redirect and optimize it.

In the case of a data center, the Interceptor appliance intercepts SYN requests when a data center server establishes a connection with a client that resides outside the data center.

In the connection-processing decision tree, in-path rules are processed before load-balancing rules. Only traffic selected for redirection proceeds to load balancing rules processing.
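The processing order above can be sketched as a small decision function: the ordered in-path rules are evaluated first, and only connections that match a redirect rule proceed to load balancing. The rule fields and action names below are illustrative only, not RiOS rule syntax.

```python
# Hedged sketch of the connection-processing decision tree described above.
from ipaddress import ip_network, ip_address

# Ordered in-path rules: (subnet, port, action); first match wins.
# A port of None means "any port". Rules and addresses are hypothetical.
IN_PATH_RULES = [
    (ip_network("10.1.0.0/16"), 443, "pass-through"),
    (ip_network("10.0.0.0/8"), None, "redirect"),
]

def classify(src_ip: str, dst_port: int) -> str:
    """Return the action for a new SYN: pass-through, discard, deny, or redirect."""
    for subnet, port, action in IN_PATH_RULES:
        if ip_address(src_ip) in subnet and (port is None or port == dst_port):
            return action
    return "pass-through"   # assumed default when no rule matches

def process_syn(src_ip: str, dst_port: int) -> str:
    action = classify(src_ip, dst_port)
    if action == "redirect":
        return "load-balance"   # only redirected traffic reaches load balancing
    return action

print(process_syn("10.1.2.3", 443))    # pass-through: first rule matches
print(process_syn("10.2.3.4", 8080))   # load-balance: redirect rule matches
```

Note how the more specific pass-through rule must precede the broader redirect rule, since evaluation stops at the first match.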

Load Balancing

For connections selected by an in-path redirect rule, the Interceptor appliance distributes the connection to the most appropriate Steelhead appliance based on rules you configure, intelligence from monitoring peer neighbor Steelhead appliances, and the Riverbed connection distribution algorithm.
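Riverbed's actual distribution algorithm is not documented here; the sketch below only illustrates the general idea of combining monitored neighbor capacity with current load when choosing a target. All names, fields, and numbers are hypothetical.

```python
# Hedged sketch: picking a peer neighbor Steelhead for a redirected connection
# based on monitored capacity and current load. Illustrative only.
from __future__ import annotations
from dataclasses import dataclass
from typing import Optional

@dataclass
class Neighbor:
    name: str
    capacity: int      # admissible concurrent connections (from monitoring)
    connections: int   # current optimized connections

def pick_neighbor(neighbors: list[Neighbor]) -> Optional[Neighbor]:
    """Return the neighbor with the most spare capacity, or None if all are full."""
    available = [n for n in neighbors if n.connections < n.capacity]
    if not available:
        return None    # no capacity left in the pool
    return max(available, key=lambda n: n.capacity - n.connections)

pool = [
    Neighbor("steelhead1", 6000, 5900),
    Neighbor("steelhead2", 6000, 1200),
    Neighbor("steelhead3", 2300, 2300),   # at capacity: excluded
]
print(pick_neighbor(pool).name)   # steelhead2 has the most headroom
```

A real deployment would also honor configured load-balancing rules before falling back to a capacity-based choice.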

Failover

You can configure a pair of Interceptor appliances as failover buddies. In the event one Interceptor appliance goes down or requires maintenance, the failover buddy ensures uninterrupted service.

Peer Interceptor Monitoring

Peer Interceptor appliances include both failover buddies deployed in a serial configuration and Interceptor appliances deployed in a parallel configuration to handle asymmetric routes.

Asymmetric routing can cause the response from the server to be routed along a different physical network path from the original request, and a different Steelhead appliance may be on each of these paths. When you deploy peer Interceptor appliances in parallel, the first Interceptor appliance that receives a packet delays forwarding it. It requests that the other Interceptor appliances redirect packets for the connection to it. When the other Interceptor appliances have confirmed that they have received and accepted this request, the first Interceptor appliance begins to redirect the connection.
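The coordination step described above can be sketched as a simple confirm-then-forward handshake: the first Interceptor holds the packet for a new connection until every peer confirms it will redirect that connection's packets to it. The function and message names below are invented for illustration; they are not Riverbed protocol messages.

```python
# Hedged sketch of the peer-coordination handshake for asymmetric routes.
# `confirm(peer, conn_id)` stands in for a "redirect this connection to me"
# request/acknowledgment exchange with a peer Interceptor (hypothetical).
from typing import Callable

def setup_redirect(conn_id: str, peers: list[str],
                   confirm: Callable[[str, str], bool]) -> str:
    """Begin redirecting only once every peer has confirmed the request."""
    if all(confirm(peer, conn_id) for peer in peers):
        return "redirecting"   # all peers agreed: forward the delayed packet
    return "holding"           # keep delaying until confirmations arrive

peers = ["interceptor2", "interceptor3"]
print(setup_redirect("conn-42", peers, lambda peer, conn: True))   # redirecting
```

Until the handshake completes, the connection's first packet stays queued, which is why all peers must confirm before any traffic is forwarded.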

Peer Neighbor Monitoring

Peer neighbor Steelhead appliances are the pool of Steelhead appliances for which the Interceptor appliance monitors capacity and balances load. To assist in deployment tuning and troubleshooting, you can monitor the state of neighbor Steelhead appliances.

Link State Detection and Link State Propagation

The Interceptor appliance monitors the link state of devices in its path, including routers, switches, and in-path interfaces. When the link state changes (for example, the link goes down or resumes), the Interceptor appliance propagates the change to the dynamic routing table. Link state propagation ensures accurate and timely triggers for failover or redundancy scenarios.

EtherChannel Deployment

The Interceptor appliance can operate within an EtherChannel. In an EtherChannel deployment, all of the links in the channel must pass through the same Interceptor appliance.

VLAN Tagging

The Interceptor appliance supports 802.1Q VLAN-tagged connections on VLAN trunk links.


III. Features

Feature Licensing

Certain features on Steelhead appliances require a license to operate. Licenses for all features, including platform-specific licenses, are included with the purchase of a Steelhead appliance, apart from the SSL license, which you must request separately. These licenses are factory installed; however, a user can also install licenses via the CLI or the Management Console. Licenses must be installed for the base system to function, as well as for application acceleration of CIFS and MAPI: the Scalable Data Referencing license (base), the Windows File Servers license (CIFS), and the Microsoft Exchange (EXCH) license. Additional licensed features that are automatically included when the base license is activated, and do not require a separate license key, are the Microsoft SQL optimization module and the NFS optimization module.

All licensed features, with the exception of the Microsoft SQL optimization module, are enabled by default.

HighSpeed TCP (HSTCP)

Applicability and Considerations

To better utilize links that have both high bandwidth and high latency, such as GigE WAN links, OCx/STMx circuits, or any other link that can be classified as a large bandwidth delay product (BDP) link, consider enabling HSTCP. HSTCP is a feature you can enable on Steelhead appliances to reduce the WAN data transfer inefficiencies caused by limitations of standard TCP. Enabling HSTCP allows for more complete utilization of these "long fat pipes." HSTCP is an IETF-defined standard (RFC 3649 and RFC 3742) and has been shown to provide significant performance improvements in networks with high BDP values.
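The "large BDP" motivation above can be made concrete with a quick calculation: the bandwidth delay product is the number of bytes that must be in flight to keep a link full. The figures below are illustrative examples, not measurements from any specific deployment.

```python
# Hedged sketch: why a standard TCP window struggles on a "long fat pipe".
def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Bytes in flight needed to fill the link: bandwidth (bits/s) x RTT / 8."""
    return bandwidth_bps * rtt_s / 8

bandwidth, rtt = 1e9, 0.100   # example: 1 Gbps link with a 100 ms round trip

bdp = bdp_bytes(bandwidth, rtt)
print(f"BDP: {bdp / 1e6:.1f} MB")                        # 12.5 MB must be in flight
print(f"A 64 KB window fills {64e3 / bdp:.2%} of the pipe")
```

With a classic 64 KB TCP window, the sender can keep only a fraction of one percent of such a link busy, which is the inefficiency HSTCP's more aggressive window growth is designed to overcome.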