ibm.com/redbooks
Front cover
HiperSockets
Implementation Guide
Bill White
Roy Costa
Michael Gamble
Franck Injey
Giada Rauti
Karan Singh
Discussing architecture, functions, and
operating systems support
Planning and implementation
Setting up examples for z/OS,
z/VM and Linux on System z
International Technical Support Organization
HiperSockets Implementation Guide
March 2007
SG24-6816-01
Copyright International Business Machines Corporation 2002, 2006
Note to U.S. Government Users - Documentation related to restricted rights - Use, duplication or disclosure is subject to restrictions set
forth in GSA ADP Schedule Contract with IBM Corp.
Second Edition (March 2007)
This edition applies to HiperSockets on IBM System z, for use with z/OS V1R8, z/VM V5R2, and Linux
on System z.
Comments may be addressed to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
When you send information to IBM, you grant IBM a non-exclusive right to use or distribute the information in
any way it believes appropriate without incurring any obligation to you.
Take Note! Before using this information and the product it supports, be sure to read the general
information in Notices on page vii.
Contents
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
The team that wrote this IBM Redbooks publication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Become a published author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .x
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .x
Chapter 1. Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.1 HiperSockets benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.2 Installation planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Server integration with HiperSockets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 HiperSockets mode of operation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3.1 HiperSockets usage example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4 HiperSockets functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.4.1 Broadcast support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.4.2 Multicast support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.4.3 IP Version 6 support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.4.4 Hardware assists . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.4.5 VLAN support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.4.6 HiperSockets Network Concentrator on Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.4.7 DYNAMICXCF and Sysplex subplexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.4.8 HiperSockets Accelerator on z/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.5 Operating system support summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.6 Test configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Chapter 2. Hardware definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.1 System configuration considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.2 HCD definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.2.1 Channel Path definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.2.2 Control unit definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.2.3 I/O device definitions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.3 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Chapter 3. z/OS support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.1.1 z/OS implementation tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.2 Hardware definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.3 VTAM and TCP/IP started task JCL procedures. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.3.1 Locating the TCP/IP profile dataset from the TCP/IP JCL procedure. . . . . . . . . . 43
3.3.2 Locating the VTAM start options dataset from the VTAM JCL procedure . . . . . . 43
3.4 HiperSockets implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.4.1 HiperSockets implementation environment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.4.2 Implementation steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.4.3 VTAM customization for HiperSockets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.4.4 TCP/IP profile customization for HiperSockets . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.4.5 Verification of the HiperSockets configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.5 DYNAMICXCF HiperSockets implementation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.5.1 DYNAMICXCF implementation environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.5.2 Implementation steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.5.3 VTAM configuration for DYNAMICXCF. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.5.4 TCP/IP configuration for DYNAMICXCF. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.5.5 Verification of the DYNAMICXCF configuration . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.6 VLAN HiperSockets implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.6.1 VLAN HiperSockets environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.6.2 Implementation steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.6.3 VTAM customization for VLAN HiperSockets . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.6.4 TCP/IP profile customization for VLAN HiperSockets. . . . . . . . . . . . . . . . . . . . . . 59
3.6.5 Verify VLAN implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.7 TCP/IP Sysplex subplex over HiperSockets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
3.7.1 Subplex implementation environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
3.7.2 Implementation steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
3.7.3 VTAM configuration setup for Sysplex subplex. . . . . . . . . . . . . . . . . . . . . . . . . . . 66
3.7.4 TCP/IP configuration setup for sysplex Subplex. . . . . . . . . . . . . . . . . . . . . . . . . . 67
3.7.5 Verification of the IP subplex over HiperSockets . . . . . . . . . . . . . . . . . . . . . . . . . 68
3.8 HiperSockets Accelerator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
3.8.1 HiperSockets Accelerator implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
3.8.2 HiperSockets Accelerator implementation steps. . . . . . . . . . . . . . . . . . . . . . . . . . 75
3.8.3 VTAM configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3.8.4 TCP/IP configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3.8.5 HiperSockets Accelerator verification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
3.9 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Chapter 4. z/VM support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
4.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
4.2 z/VM HiperSockets support. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
4.2.1 Implementation steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
4.2.2 z/VM definitions for guest systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
4.3 HiperSockets network definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
4.3.1 TCP/IP definitions for z/VM host system. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
4.3.2 z/VM guest system network definitions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
4.4 VLAN. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
4.4.1 VLAN definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
4.5 Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
4.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Chapter 5. Linux support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
5.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
5.1.1 Software requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
5.1.2 Linux configuration example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
5.2 Setup for Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
5.2.1 z/VM definitions when running Linux guest systems. . . . . . . . . . . . . . . . . . . . . . . 96
5.2.2 Linux I/O definitions - initial install of Linux system. . . . . . . . . . . . . . . . . . . . . . . . 96
5.2.3 Linux I/O definitions - adding to an existing Linux system . . . . . . . . . . . . . . . . . . 97
5.2.4 Permanent Linux definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
5.3 VLAN. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
5.4 HiperSockets Network Concentrator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
5.5 Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
5.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
IBM Redbooks publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
Other resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
Referenced Web sites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
How to get IBM Redbooks publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
IBM Redbooks publications collections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not give you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.
Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
developerWorks
HiperSockets
IBM
MVS
Redbooks
Redbooks (logo)
System z
System z9
Tivoli
VSE/ESA
VTAM
z/OS
z/VM
z9
The following terms are trademarks of other companies:
SAP, and SAP logos are trademarks or registered trademarks of SAP AG in Germany and in several other
countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
Preface
This IBM Redbook discusses the System z HiperSockets function. It offers a broad
description of the architecture, functions, and operating systems support.
This IBM Redbooks publication will help you plan and implement System z HiperSockets. It
provides information about the definitions needed to configure HiperSockets for the
supported operating systems.
This IBM Redbooks publication is intended for system programmers, network planners, and
system engineers who will plan and install HiperSockets. A solid background in networking and
TCP/IP is assumed.
The team that wrote this IBM Redbooks publication
This IBM Redbooks publication was produced by a team of specialists from around the world
working at the International Technical Support Organization, Poughkeepsie Center.
Bill White is a Project Leader and Senior Networking Specialist at the International
Technical Support Organization, Poughkeepsie Center.
Roy Costa is an Advisory Systems Programmer at the International Technical Support
Organization, Poughkeepsie Center. He has over 20 years of experience in z/VM systems
programming. Roy has worked with Linux on System z for more than five years and has
provided technical advice and support to numerous IBM Redbooks publications for the past
10 years.
Michael Gamble is a Systems Management specialist with over 40 years of experience in
programming, real-time environments, and system support. He has been involved with VM
since 1979 and Linux on System z since 2000. He has written many utilities and tools for use
within the VM and Linux environments to ease and automate support work. He currently
works in the Integrated Technology Delivery Linux team to support over 250 SLES servers
under several z/VM systems in the USA and Canada.
Franck Injey is an I/T Architect at the International Technical Support Organization,
Poughkeepsie Center.
Giada Rauti is an Advisory I/T Specialist working at the IT Services Tivoli Lab in Rome,
Italy. She holds a Laurea degree in Physics from La Sapienza University in Rome. She has
been working at IBM for twenty years and has 17 years of experience in LAN and WAN
networking. Her areas of expertise include SNA/APPN and TCP/IP in z/OS and z/VM
environments.
Karan Singh is a systems programmer for IBM Global Services with 10 years of experience
in z/OS systems operation.
Thanks to the following people for their contributions to this project:
Alexandra Winter
IBM Systems and Technology Group, Development, Boeblingen
Bob Haimowitz
International Technical Support Organization, Raleigh Center
Thanks to the authors of the first edition:
Rama Ayyar
Global Technology Services, West-Pennant Hills, NSW Australia.
Velibor Uskokovic
Global Technology Services, Toronto, Ontario Canada
Become a published author
Join us for a two- to six-week residency program! Help write an IBM Redbooks publication
dealing with specific products or solutions, while getting hands-on experience with
leading-edge technologies. You'll team with IBM technical professionals, Business Partners
and clients.
Your efforts will help increase product acceptance and customer satisfaction. As a bonus, you
will develop a network of contacts in IBM development labs, and increase your productivity
and marketability.
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our Redbooks to be as helpful as possible. Send us your comments about this or
other Redbooks in one of the following ways:
Use the online Contact us review IBM Redbooks publication form found at:
ibm.com/redbooks
Send your comments in an Internet note to:
redbook@us.ibm.com
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Chapter 1. Overview
This chapter provides a high-level overview of System z HiperSockets, and also an
introduction to the HiperSockets configuration that we used while writing this IBM
Redbooks publication.
The topics covered in this chapter are:
Overview
Server integration with HiperSockets
HiperSockets mode of operation
HiperSockets functions
Operating system support summary
Test configuration
1.1 Overview
HiperSockets is a technology that provides high-speed Transmission Control Protocol/Internet
Protocol (TCP/IP) connectivity between servers within a System z. This technology eliminates
the requirement for any physical cabling or external networking connection among these
virtual servers. It works like an internal Local Area Network (LAN). HiperSockets is
particularly useful when a large volume of data flows among these virtual servers.
HiperSockets uses internal Queued Direct Input/Output (iQDIO) at memory speeds to pass traffic
among these virtual servers.
HiperSockets is a Licensed Internal Code (LIC) function that emulates the Logical Link
Control (LLC) layer of an OSA-Express QDIO interface.
The following operating systems support HiperSockets:
z/OS
z/VM
Linux on System z
VSE/ESA
1.1.1 HiperSockets benefits
The following is a list of HiperSockets benefits:
Cost saving
You can use HiperSockets to communicate among consolidated servers in a single
processor. Therefore, you can eliminate all the hardware boxes running these separate
servers. With HiperSockets, there are zero external components or cables to pay for, to
replace, to maintain, or to wear out. The more servers you consolidate, the greater your
potential savings in costs associated with external servers and their associated
networking components.
Simplicity
HiperSockets is part of z/Architecture technology, including QDIO and advanced adapter
interrupt handling. The data transfer itself is handled much like a cross address space
memory move, using the memory bus. HiperSockets is application transparent and
appears as a typical TCP/IP device. Its configuration is simple, making installation easy. It
is supported by existing, known management and diagnostic tools.
Availability
With HiperSockets, there are no network hubs, routers, adapters, or wires to break or
maintain. The reduced number of network external components greatly improves
availability.
High performance
Consolidated servers that have to access corporate data residing on the System z can do
so at memory speeds with latency close to zero, by bypassing all the network overhead
and delays.
Also, you can customize HiperSockets to accommodate varying traffic sizes. With
HiperSockets, you can define a maximum frame size according to the traffic
characteristics for each HiperSockets. In contrast, LANs such as Ethernet and Token Ring
have a maximum frame size predefined by their architecture.
Priority
Priority queuing is a capability supported by the QDIO architecture and introduced for
the z/OS environment only. It sorts outgoing IP message traffic according to the service policy
that you have set up for the specific priority assigned in the IP header. It is used by the
HiperSockets Accelerator function.
Security
Because there is no server-to-server traffic outside the System z, HiperSockets has no
external components, and therefore it provides a very secure connection. For security
purposes, you can connect servers to different HiperSockets. All security features, such as
firewall filtering, are available for HiperSockets interfaces in the same way as they are with
other TCP/IP network interfaces. In a Sysplex environment, subplexing allows you to
define security zones. Thus, only members within the same security zone may
communicate with each other.
VLAN support
A virtual LAN allows you to divide a physical network administratively into separate logical
networks. These logical networks operate as though they are physically independent of
each other. This allows for traffic flow over HiperSockets and between HiperSockets and
OSA-Express features. Inside each single HiperSockets LAN, you can define multiple
VLAN connections (up to a maximum of four).
Sysplex connection improvement
HiperSockets can also improve TCP/IP communications within a sysplex environment
when the DYNAMICXCF facility is used.
1.1.2 Installation planning
The following are the steps needed to implement HiperSockets:
Apply the OS maintenance level that provides HiperSockets support.
Define the HiperSockets CHPIDs and I/O devices to your configuration (see the IOCP sketch below).
Update the TCP/IP configuration with the parameters that support HiperSockets.
HiperSockets is a LIC function, which may require EC level maintenance. Check with your
local service representative to ensure your System z has the required EC level installed. There
is no extra charge for HiperSockets.
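To illustrate the hardware definition step, the following IOCP-style statements are a minimal sketch of how a shared IQD CHPID, its control unit, and its I/O devices might be coded. The CHPID number (F4), the device numbers, the partition names, and the CHPARM value are assumptions chosen for this example, and the CHPARM encoding of the maximum frame size should be verified against your server's IOCP documentation. Chapter 2, Hardware definitions, covers the actual HCD definitions used in this book.

   CHPID PATH=(CSS(0),F4),SHARED,PARTITION=((A23,A24,A25),(=)),TYPE=IQD,CHPARM=40
   CNTLUNIT CUNUMBR=E800,PATH=((CSS(0),F4)),UNIT=IQD
   IODEVICE ADDRESS=(E800,016),CUNUMBR=(E800),UNIT=IQD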
HiperSockets connectivity
HiperSockets supports:
Up to sixteen independent HiperSockets.
Up to 12288 I/O devices across all 16 HiperSockets.
VLAN support, with a maximum of four VLANs for each defined HiperSockets.
Spanned channel support, which allows sharing of HiperSockets across multiple Logical
Channel SubSystems (LCSS).
Up to 4096 TCP/IP stack connections across all HiperSockets. For z/OS, z/VM and Linux,
and VSE/ESA, the maximum number of TCP/IP stacks or HiperSockets communication
queues that can concurrently connect on a single z9 EC, z9 BC, z990, or z890 server is
4096.
Up to 16000 IP addresses across all 16 HiperSockets, which means that a total of 16000
IP addresses can be kept for the 16 possible IP address lookup tables. These IP
addresses include the HiperSockets interface, and also Virtual IP addresses (VIPA) and
dynamic Virtual IP Addresses (DVIPA) that are defined to the TCP/IP stack.
z/OS allows the operation of multiple TCP/IP stacks within a single image. The read
control and write control I/O devices are required only once per image, and are controlled
by VTAM. Each TCP/IP stack within the same z/OS image requires one I/O device for
data exchange.
If you run one TCP/IP stack per logical partition, z/OS requires three I/O devices (as do
z/VM and Linux). Each additional TCP/IP stack in a z/OS logical partition requires only
one additional I/O device for data exchange. The I/O device addresses can be shared
between z/OS systems running in different logical partitions. Therefore, the number of I/O
devices is not a limitation for z/OS.
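As a preview of the software side of these definitions, the following is a minimal sketch of the z/OS TCP/IP profile statements for one HiperSockets interface. It assumes CHPID F4 and devices E800-E802 and uses the IUTIQDxx device naming convention; the link name and IP address are illustrative only. Chapter 3, z/OS support, shows the tested definitions.

   DEVICE IUTIQDF4 MPCIPA              ; HiperSockets device for IQD CHPID F4
   LINK   HIPERLF4 IPAQIDIO IUTIQDF4   ; iQDIO link on that device
   HOME   192.0.1.4 HIPERLF4           ; this stack's address on the internal LAN
   START  IUTIQDF4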
1.2 Server integration with HiperSockets
Many data center environments today consist of multi-tiered server applications, with a variety of
middle-tier servers surrounding the System z data and transaction server. Interconnecting
this multitude of servers adds the cost and complexity of many networking connections
and components. The performance and availability of the inter-server communication is
dependent on the performance and stability of the set of connections.
involved, the greater the number of network connections and complexity to install, administer,
and maintain.
Figure 1-1 shows two configurations.
The configuration on the left shows a server farm surrounding a System z server, with its
corporate data and transaction servers. This configuration is very complex, involving the
backup of the servers and network connections. It is also very expensive and has a high
administrative cost.
The configuration on the right consolidates the mid-tier workload onto multiple Linux virtual
servers running on a System z server in a very reliable, high-speed network that
HiperSockets provides, over which these servers can communicate. In addition, these
consolidated servers also have direct high-speed access to database and transaction servers
running under z/OS on the same System z server. The external network connection for all
servers is concentrated over a few high-speed OSA-Express interfaces.
Figure 1-1 Server consolidation
1.3 HiperSockets mode of operation
HiperSockets implementation is based on the OSA-Express Queued Direct Input/Output
(QDIO) protocol, hence HiperSockets is called internal QDIO (iQDIO). The LIC emulates the
link control layer of an OSA-Express QDIO interface. Typically, before you can transport a
packet on an external LAN, you have to build a LAN frame, and insert the MAC address of the
destination host or router on that LAN into the frame. HiperSockets do not use LAN frames,
destination hosts, or routers. TCP/IP stacks are addressed by inbound data queue addresses
instead of MAC addresses.
The System z LIC maintains a lookup table of IP addresses for each HiperSockets. This table
represents an internal LAN. When a TCP/IP stack starts a HiperSockets device, the device is
registered in the IP address lookup table with its IP address and its input and output data
queue pointers. If a TCP/IP device is stopped, the entry for this device is deleted from the
IP address lookup table.
HiperSockets copies data synchronously from the output queue of the sending TCP/IP device
to the input queue of the receiving TCP/IP device, using the memory bus to copy the data
through an I/O instruction.
The I/O processing performed by the controlling operating system is identical to that for
OSA-Express in QDIO mode. The data transfer time is similar to a cross-address space memory
move, with latency close to zero. To get the total elapsed time of a data move, you have to add
the operating system I/O processing time to the LIC data move time.
HiperSockets operations are executed on the CP where the I/O request is initiated.
HiperSockets starts read or write operations. The completion of a data move is indicated by
the sending side to the receiving side with a Signal Adapter (SIGA) instruction. Optionally, the
receiving side can use dispatcher polling instead of handling SIGA interrupts. The I/O
processing is performed on the initiating CP, reducing demand on the System Assist Processor
(SAP). This implementation is also called thin interrupt.
The data transfer itself is handled much like a cross-address space memory transfer using the
memory bus, not the server I/O bus. HiperSockets does not contend with other system I/O
activity and it does not use CP cache resources. See Figure 1-2.
Figure 1-2 HiperSockets basic operation
The HiperSockets operational flow consists of five steps:
1. Each TCP/IP stack (image) registers its IP addresses in the HiperSockets server-wide
Common Address Lookup table. There is one lookup table for each HiperSockets internal
LAN. The scope of each LAN is the logical partitions that are defined to share the
HiperSockets IQD CHPID.
2. The addresses of the TCP/IP stack's receive buffers are then appended to the HiperSockets
queues.
3. When data is being transferred, the send operation of HiperSockets performs a table
lookup for the addresses of the sending and receiving TCP/IP stacks and their associated
send and receive buffers.
4. The sending processor copies the data from its send buffers into the target processor's
receive buffers (System z9 server memory).
5. The sending processor optionally delivers an interrupt to the target TCP/IP stack. This
optional interrupt uses the thin interrupt support function of the System z server, which
means the receiving host looks ahead, detecting and processing inbound data.
This technique reduces the frequency of real I/O or external interrupts.
HiperSockets TCP/IP devices are configured similarly to OSA-Express QDIO devices. Each
HiperSockets requires the definition of a channel path identifier (CHPID), similar to any other
I/O interface. A CHPID is not allocated to a HiperSockets until it is defined, and HiperSockets
does not take an I/O cage slot. Customers who have used all the available CHPIDs on the server
cannot enable HiperSockets; therefore, you must include HiperSockets in the customer's overall
channel I/O planning. The CHPID type for HiperSockets is IQD, and the CHPID number must
be in the range from hex 00 to hex FF. No other I/O interface can use a CHPID number
defined for a HiperSockets, even though HiperSockets does not occupy any physical I/O
connection position.

Note: You must define the source and destination interfaces to the same HiperSockets.
We recommend assigning the CHPID addresses starting at the high end of the CHPID
addressing range (xFF, FE, FD, and FC...) to minimize possible addressing conflicts with real
channels. This is similar to the approach used when defining other internal channels.
Real LANs have a maximum frame size limit defined by their protocol. The maximum frame
size for Ethernet is 1492 bytes, and for Gigabit Ethernet there is the jumbo frame option for a
maximum frame size of 9 kilobytes (KB). The maximum frame size for a HiperSockets is
assigned when the HiperSockets CHPID is defined. You can select frame sizes of 16 KB,
24 KB, 40 KB, and 64 KB. The default maximum frame size is 16 KB. The selection depends
on the data characteristics transported over a HiperSockets, which is also a trade-off between
performance and storage allocation. The MTU size used by the TCP/IP stack for the
HiperSockets interface is also determined by the maximum frame size. See Table 1-1.
Table 1-1 Maximum frame size and MTU size

  Maximum frame size   Maximum Transmission Unit size
  16 KB                8 KB
  24 KB                16 KB
  40 KB                32 KB
  64 KB                56 KB

The maximum frame size is defined in the hardware configuration, which is displayed in IOCP
as CHPARM.

An IP address is registered with its HiperSockets interface by the TCP/IP stack at the time at
which the TCP/IP device is started. IP addresses are removed from an IP address lookup
table when a HiperSockets device is stopped. Under operating system control, you can
reassign IP addresses to other HiperSockets interfaces on the same HiperSockets LAN. This
allows flexible backup of TCP/IP stacks.

Note: Reassignment is only possible within the same HiperSockets LAN. A HiperSockets
is one network or subnetwork. Reassignment is only possible for the same operating
system type. For example, an IP address originally assigned to a Linux TCP/IP stack can
only be reassigned to another Linux TCP/IP stack, a z/OS dynamic VIPA can only be
reassigned to another z/OS TCP/IP stack, and a z/VM TCP/IP VIPA can only be reassigned
to another z/VM TCP/IP stack. The LIC performs the reassignment in force mode. It is up
to the operating system's TCP/IP stack to control this change.

1.3.1 HiperSockets usage example
Each HiperSockets is identified by a Channel Path Identifier (CHPID) number. As for all other
input/output operations, operating systems address a HiperSockets interface via device
numbers specified during the CHPID definition process.
There are many possibilities for applying HiperSockets technology. Figure 1-3 shows the use
of three possible HiperSockets in a System z.
Figure 1-3 HiperSockets usage example
The three HiperSockets illustrated in Figure 1-3 are used as follows:
HiperSockets with CHPID FD
Connected to this HiperSockets are all servers in the System z, which are:
The multiple Linux servers running under z/VM in LPAR-1
The z/VM TCP/IP stack running in LPAR-1
All z/OS servers in sysplex A (logical partitions 5 to 7) for non-sysplex traffic
All z/OS servers in sysplex B (logical partitions 8 to 10) for non-sysplex traffic
HiperSockets with CHPID FE
This is the connection used by sysplex A (logical partitions 5 to 7) to transport TCP/IP
user-data traffic among the three sysplex logical partitions.
If the following prerequisites are met, HiperSockets is automatically used within a single
sysplex environment:
XCF dynamics are defined to the TCP/IP stacks.
HiperSockets is available to the TCP/IP stacks.
HiperSockets with CHPID FF
This is the connection used by sysplex B (logical partitions 8 to 10) to transport TCP/IP
data traffic among the three sysplex logical partitions.
Note: SNA/APPN traffic is supported over HiperSockets in conjunction with Enterprise
Extender.
1.4 HiperSockets functions
The functions supported by HiperSockets are discussed in the following sections:
1.4.1, Broadcast support on page 9
1.4.2, Multicast support on page 9
1.4.3, IP Version 6 support on page 9
1.4.4, Hardware assists on page 9
1.4.5, VLAN support on page 10
1.4.6, HiperSockets Network Concentrator on Linux on page 10
1.4.7, DYNAMICXCF and Sysplex subplexing on page 12
1.4.8, HiperSockets Accelerator on z/OS on page 14
1.4.1 Broadcast support
Broadcasts are now supported across HiperSockets on Internet Protocol Version 4 (IPv4) for
applications. Now, applications that use the broadcast function can propagate the broadcast
frames to all TCP/IP applications that are using HiperSockets. This support is applicable to
Linux, z/OS, and z/VM environments.
1.4.2 Multicast support
Multicast is now supported across HiperSockets on Internet Protocol Version 4 (IPv4) for
applications. Now, applications that use the multicast function can propagate the multicast
frames to all TCP/IP applications that are using HiperSockets. This support is applicable to
Linux, z/OS, and z/VM environments.
1.4.3 IP Version 6 support
HiperSockets supports Internet Protocol Version 6 (IPv6). IPv6 is the protocol designed by
the Internet Engineering Task Force (IETF) to replace Internet Protocol Version 4 (IPv4) to
help satisfy the demand for additional IP addresses.
The support of IPv6 on HiperSockets (CHPID type IQD) is exclusive to System z9, and is
supported by z/OS and z/VM. IPv6 support is currently available on the OSA-Express2 and
OSA-Express features in the z/OS, z/VM, and Linux on System z9 environments.
HiperSockets support of IPv6 (CHPID type IQD) on System z9 requires, at a minimum, the
following:
z/OS V1.7.
z/VM V5.2 with PTF in APAR VM63952. Support of guests is expected to be transparent to
z/VM if the device is directly connected to the guest (pass through).
1.4.4 Hardware assists
A complementary virtualization technology is available for z9 EC, z9 BC, z990, and z890,
which includes:
QDIO Enhanced Buffer-State Management (QEBSM), which provides two new hardware
instructions designed to help eliminate the overhead of hypervisor interception.
Host Page-Management Assist (HPMA) is an interface to the z/VM main storage
management function designed to allow the hardware to assign, lock, and unlock page
frames without z/VM hypervisor assistance.
These hardware assists allow a cooperating guest operating system to initiate QDIO
operations directly to the applicable channel, without interception by z/VM, thereby helping to
provide additional performance improvements. The z990 and z890 servers require MCL
updates. Support is integrated in the z9 EC and z9 BC LIC.
1.4.5 VLAN support
Virtual Local Area Networks (VLANs), IEEE standard 802.1q, is being offered for
HiperSockets in a Linux on System z environment. VLANs can reduce overhead by allowing
networks to be organized by traffic patterns rather than physical location. This enhancement
permits traffic flow on a VLAN connection both over HiperSockets and between HiperSockets
and an OSA-Express GbE, 1000BASE-T Ethernet, or Fast Ethernet feature.
VLANs facilitate easy administration of logical groups of servers that can communicate as
though they were on the same LAN. They also facilitate easier administration of moves, adds,
and changes in members of these groups. VLANs are also designed to provide a degree of
low-level security to provide a greater degree of isolation.
With servers where multiple TCP/IP stacks exist, sharing one or more VLANs can
provide a greater degree of flexibility. You can group servers in the same VLAN for
different types of applications that exchange a high volume of data, or use VLANs for security
reasons to separate different lines of business. See Figure 1-4.
Figure 1-4 HiperSockets VLAN
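As a sketch of how a VLAN ID could be assigned to a HiperSockets interface on Linux with the tooling of that era, the commands below add VLAN 11 on top of a HiperSockets interface. The interface name hsi0, the VLAN ID, and the IP address are assumptions for illustration only; 5.3, VLAN on page 102 shows the definitions we actually used.

   modprobe 8021q
   vconfig add hsi0 11
   ifconfig hsi0.11 10.10.11.5 netmask 255.255.255.0 up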
1.4.6 HiperSockets Network Concentrator on Linux
Traffic between HiperSockets and OSA-Express can be transparently bridged using the
HiperSockets Network Concentrator, without requiring intervening network routing overhead,
thus increasing performance and simplifying the network configuration. This is achieved by
configuring a connector Linux system that has HiperSockets and OSA-Express connections
defined. The HiperSockets Network Concentrator registers itself with HiperSockets as a
special network entity to receive data packets destined for an IP address on the external LAN
via an OSA-Express port. The HiperSockets Network Concentrator also registers IP
addresses to the OSA-Express on behalf of the TCP/IP stacks using HiperSockets, hence
providing inbound and outbound connectivity.
HiperSockets Network Concentrator support is performed using the next-hop-IP-address in
the Queued Direct Input/Output (QDIO) header, instead of using a Media Access Control
(MAC) address. Therefore, VLANs in a switched Ethernet fabric are not supported. TCP/IP
stacks that use only HiperSockets to communicate among each other, with no external
network connection, see no difference (the HiperSockets support for them and their
networking characteristics are unchanged).
The HiperSockets Network Concentrator, shown in Figure 1-5 on page 12, is a mechanism to
connect systems with HiperSockets interfaces to the external network using the same subnet.
In other words, the connected systems appear as though they are directly connected to the
physical network. A Linux system acts as a forwarder for traffic between the OSA interface
and the internal HiperSockets connected systems (z/VM, z/OS, VSE, and Linux on z). Refer
to Linux on System z, Device Drivers, Features, and Commands, SC33-8281, for detailed
information.
HiperSockets Network Concentrator can be a useful solution if you run Linux on System z
(in a native logical partition or as a guest under z/VM), have heavy traffic among servers inside
the System z, and also require high-speed communications to the external network. It provides
a bridging function, without routing, and it does not consume additional subnets.
In addition, HiperSockets Network Concentrator allows you to migrate systems from the LAN
into a System z environment, without changing IP address and network routing. Thus,
HiperSockets Network Concentrator helps to simplify network configuration and
administration.
We recommend that you always have backup connections.
Note: IP fragmentation does not work for multicast bridging. The MTU of the HiperSockets
link and the OSA link must be the same size. Multicast packets not fitting in the link MTU are
discarded.
Figure 1-5 represents an example of a HiperSockets Network Concentrator (HSNC) on Linux
using OSA-Express.

Figure 1-5 HiperSockets Network Concentrator on Linux
To exploit HiperSockets Network Concentrator unicast and multicast support, a Linux
distribution including the qeth driver (dated 2003-10-31 or later) from the June 2003 stream
is required. This is for Kernel 2.4 or later.
See the developerWorks Web site at:
http://www-128.ibm.com/developerworks/linux/linux390/index.html
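As a rough sketch of the connector setup, assuming a 2.6 kernel with sysfs, a HiperSockets qeth device with bus ID 0.0.e808, and the start_hsnc.sh script shipped with s390-tools (the sysfs path, attribute value, and script name must be checked against your distribution), the connector Linux system might be configured along these lines:

   # mark the HiperSockets interface as the connector for unicast traffic
   echo primary_connector > /sys/devices/qeth/0.0.e808/route4
   # enable IP forwarding and start the network concentrator
   echo 1 > /proc/sys/net/ipv4/ip_forward
   start_hsnc.sh &

Section 5.4, HiperSockets Network Concentrator on page 106, documents the configuration used in our tests.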
1.4.7 DYNAMICXCF and Sysplex subplexing
HiperSockets can also improve TCP/IP communications within a sysplex environment when
the DYNAMICXCF facility is used. When a DYNAMICXCF HiperSockets device and link are
activated, a subnetwork route is created across the HiperSockets link. The subnetwork is
created by using the DYNAMICXCF IP address and mask. This allows any logical partition
within the same server to be reached, even ones that are not within the sysplex. A logical
partition that is outside of the sysplex environment must define at least one IP address for the
HiperSockets endpoint that is within the subnetwork defined by the DYNAMICXCF IP address
and mask.
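For reference, dynamic XCF is enabled with the DYNAMICXCF parameter of the IPCONFIG statement in the TCP/IP profile. The address, mask, and cost metric below are illustrative values only; 3.5, DYNAMICXCF HiperSockets implementation on page 49 shows the tested setup.

   IPCONFIG
      DYNAMICXCF 192.0.4.4 255.255.255.0 1   ; XCF address, subnet mask, cost metric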
z/OS Communications Server now allows you to subdivide a sysplex network into multiple
subplex scopes from a sysplex networking function perspective. For example, some VTAM
and TCP/IP instances in a sysplex may belong to one subplex, while other VTAM or TCP/IP
instances in the same sysplex belong to different subplexes.
With subplexing, you are able to build security zones. Thus, only members within the same
security zone may communicate with each other. Subplex members are VTAM nodes and
TCP/IP stacks that are grouped in security zones to isolate communication.
A subplex is a subset of a Sysplex that consists of selected members. These members are
connected and communicate through dynamic cross-system coupling facility (XCF) groups to
each other, using the following methods:
XCF links (for cross-system IP and VTAM connections)
IUTSAMEH (for IP connections within a logical partition)
HiperSockets (IP connections cross-logical partitions in the same server)
Subplexes do not communicate with members outside the subset of the Sysplex. For example
in Figure 1-6, TCP/IP stacks with connectivity to the internal network can be isolated from
TCP/IP stacks connected to an external network using subplexing.
TCP/IP stacks are defined as members of a subplex group with a defined group ID. For
example, in Figure 1-6, TCP/IP stacks within Subplex 1 are able to communicate only with
stacks within the same subplex group. They are not able to communicate with stacks in
Subplex 2.
In an environment where a single logical partition has access to internal and external
networks through two TCP/IP stacks, those stacks are assigned to two different subplex
group IDs. Even though IUTSAMEH is the communication method, it is controlled
automatically through the association of subplex group IDs, thus creating two separate
security zones within the logical partition.
Figure 1-6 Subplexing multiple security zone
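The subplex group IDs themselves are assigned through the VTAM start options and the TCP/IP profile. The following is a minimal sketch with arbitrary group ID and VLAN ID values; 3.7, TCP/IP Sysplex subplex over HiperSockets on page 62 shows the configuration used in this book.

   In the VTAM start options (ATCSTRxx):
      XCFGRPID=21
   In the TCP/IP profile:
      GLOBALCONFIG XCFGRPID 11 IQDVLANID 1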
1.4.8 HiperSockets Accelerator on z/OS
HiperSockets Accelerator is supported by z/OS. It allows a z/OS TCP/IP router stack to
efficiently route IP packets from an OSA-Express (QDIO) interface to a HiperSockets (iQDIO)
interface and vice versa. The routing is done by the z/OS Communications Server device
drivers at the lowest possible software data link control level. IP packets do not have to be
processed at the higher level TCP/IP stack routing function, hence reducing the path-length
and improving performance.
If a TCP/IP router stack is required, the selection must be based on the following
considerations:
Performance.
Availability: A backup is required for the OSA-Express network connection, and also for
the TCP/IP router stack.
System and administrative overhead: How many additional logical partitions, operating
systems, and TCP/IP stacks are required? How many different operating systems are on
the path to the application?
Figure 1-7 represents an example of an HiperSockets Accelerator routing stack with four
OSA-Express interfaces in a single System z that has multiple logical partitions. These logical
partitions could be running z/OS, z/VM, or Linux on System z, and also z/VM with numerous
guest systems.
Figure 1-7 HiperSockets Accelerator on z/OS: routing stack implementation
Figure 1-8 illustrates how HiperSockets Accelerator works. The solid line connecting TCP/IP1
and TCP/IP3 represents the normal path through the TCP/IP stack's routing function, while
the dotted line represents the accelerated path through the VTAM device driver.
Figure 1-8 HiperSockets Accelerator flow
You can activate the HiperSockets Accelerator by configuring the IQDIO routing option in the
TCP/IP profile using the IPCONFIG statement. The TCP/IP stack automatically detects IP
packets that are routed across a HiperSockets Accelerator eligible route. Eligible routes are from
OSA-Express (QDIO) to HiperSockets (iQDIO), and from HiperSockets (iQDIO) to
OSA-Express (QDIO).
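As a sketch, the routing stack's TCP/IP profile enables the accelerator roughly as follows; datagram forwarding must also be on, and path MTU discovery is shown because fragmented packets are not accelerated (see the discussion later in this section). The exact statements used in our tests are in 3.8, HiperSockets Accelerator on page 73.

   IPCONFIG
      DATAGRAMFWD            ; IP forwarding must be enabled
      IQDIOROUTING           ; activate the HiperSockets Accelerator
      PATHMTUDISCOVERY       ; helps avoid fragmentation, which is not accelerated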
Figure 1-8 shows what happens when a TCP/IPA sends something to the TCP/IPX. The
process is explained as follows:
1. The first packet is processed by the routing function of the TCP/IP routing stack TCP/IPH,
which creates IQDIO routing entries for the source TCP/IPA, the destination TCP/IPX, and the
gateway for the external network. These entries are added to the IQDIO routing table
(Chapter 3, z/OS support on page 39 shows an example of this). The destination stack
TCP/IPX must be reachable through HiperSockets.
2. Starting with the second packet, all subsequent packets for the same destination take the
optimized device driver path, and do not traverse the routing function of the TCP/IP routing
stack. No change is required for target stacks. There is a timer built into the HiperSockets
Accelerator function. Based on this timer, if a specific IQDIORouting entry is not used for
90 seconds, it is deleted from this table.
Therefore, only the first packet that a TCP/IP host sends toward a new destination involves the
routing function of the TCP/IP routing stack and creates the entries in the IQDIORouting table.
The IP packets that follow this first packet are routed through the VTAM device driver.
Restriction: HiperSockets Accelerator cannot be enabled if IPSECURITY (or FIREWALL
prior to z/OS V1.8) or NODATAGRAMFWD are specified in the IPCONFIG statement.
If any IP packets have to be fragmented in order to be routed between QDIO and iQDIO (or
vice versa), then they are not accelerated and the normal path through the TCP/IP stack
routing function is taken. You can prevent IP fragmentation conflicts by using path MTU
discovery (PATHMTUDISCOVERY in IPCONFIG), or by coding the appropriate MTU size in
the static route statement (if static routes are used). For more details on defining MTU
discovery and MTU sizes, refer to z/OS Communications Server, IP Configuration Reference,
SC31-8776.
The HiperSockets Accelerator is very useful when you have a large amount of traffic inside
HiperSockets that requires high availability, load balancing, and high performance (a z/OS
image is required for the routing stack). Figure 1-7 on page 14 shows a single TCP/IP stack
that has multiple direct physical connections to the OSA LANs, acting as the HiperSockets
router. However, you can have more TCP/IP stacks to provide a redundant path in case one of
the z/OS HiperSockets Accelerator images suffers an outage. This routing stack connects, using
HiperSockets connectivity, to all the remaining TCP/IP stacks in other images within the
System z that require connectivity to the OSA LANs. Remember that HiperSockets Accelerator
works at the Data Link Control layer only when it is not providing additional functions such as
fragmentation.
1.5 Operating system support summary
All HiperSockets functions supported on System z are summarized in Table 1-2 with
information about the minimum release and maintenance levels required.
Table 1-2 Summary of HiperSockets supported functions
Shared, spanned CHPID.
   z/OS: V1.5

VLAN: Allows networks to be organized by traffic patterns rather than physical location. This
enhancement permits traffic flow on a VLAN connection both over HiperSockets and between
HiperSockets and an OSA-E GbE, 1000BASE-T Ethernet, or Fast Ethernet feature.
   z/OS: V1.5; z/VM: 5.1 +ptfs or 5.2 +ptfs; Linux on System z: 2.4 kernel

Network Concentrator: Traffic between HiperSockets and OSA-Express can be transparently
bridged without requiring network routing overhead.
   z/OS: n/a; z/VM: n/a; Linux on System z: 2.4 kernel

Broadcast in IPv4.
   z/OS: V1.5; z/VM: 5.1; Linux on System z: 2.4 kernel

Multicast in IPv4.
   z/OS: V1.5; z/VM: 5.1; Linux on System z: 2.4 kernel

IPv6.
   z/OS: V1.7; z/VM: 5.2 +ptfs; Linux on System z: No

DYNAMIC XCF: Connects all images within the same Sysplex through a dynamic XCF
connection, created by the DYNAMICXCF definition in the TCP/IP profile.
   z/OS: V1.5; z/VM: n/a; Linux on System z: n/a

Sysplex Subplexing: Supports multiple security zones in a Sysplex.
   z/OS: V1.8; z/VM: n/a; Linux on System z: n/a

HiperSockets Accelerator: Allows a z/OS TCP/IP router stack to efficiently route IP packets
from an OSA-Express (QDIO) interface to a HiperSockets (iQDIO) interface and vice versa.
   z/OS: V1.5; z/VM: n/a; Linux on System z: n/a

QDIO Enhanced Buffer State Management (QEBSM) and Host Page-Management Assist (HPMA)
interface to the z/VM main storage management.
   z/OS: n/a; z/VM: 5.2 +ptfs; Linux on System z: 2.6.16 kernel
1.6 Test configuration
Figure 1-9 shows the HiperSockets base configuration used throughout the examples in this
IBM Redbooks publication. Based on this configuration, we implement the new functions and
place the definitions for each system in the specific section. Additional setup for specific
scenarios, such as DYNAMICXCF, HiperSockets Accelerator, and so on, is documented in
the relevant sections.
Figure 1-9 HiperSockets base configuration scenario
We used four logical partitions to set up and verify HiperSockets support with z/OS, z/VM,
and Linux. Because HiperSockets are shared among logical partitions, you must define the
CHPIDs as shared in the hardware definitions.
For HiperSockets F4, we used the IP network address 192.0.1.0/24. Our configuration does
not include Virtual IP Addresses (VIPA), which are also supported.
For a HiperSockets connection, three I/O devices are required: one device for read control (even numbered), one device for write control (odd numbered), and one device for data exchange.
The logical partitions are configured as follows:
1. In logical partition A12, two Linux systems, one Red Hat (LNXRH2) and one SUSE (LNXSU2), run as guests under z/VM, along with the z/VM system itself (VMLINUX7). The Linux guests use DEDICATE statements and control their HiperSockets connections directly. Each of the two systems has one interface to HiperSockets through CHPID F4.
For LNXRH2, we allocated real addresses E804-E806, which map virtual unit addresses 7000-7002 to the three I/O devices (see the sketch after this list). For LNXSU2, we used the next available real addresses in the same way, in ascending order.
2. The logical partition-A23 runs a z/OS image (SC30) and is part of a sysplex. This image
connects to HiperSockets F4 by E800-E802 devices.
3. The logical partition-A24 runs a z/OS image (SC31) and is part of a sysplex. This image
connects to HiperSockets F4 by E800-E802 devices.
4. The logical partition-A25 runs a z/OS image (SC32) and is part of a sysplex. This image
connects to HiperSockets F4 by E800-E802 devices.
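The DEDICATE approach mentioned in item 1 can be sketched as follows (DEDICATE vdev rdev is the standard z/VM user directory statement; the exact directory entries for our guests are not shown here). For LNXRH2, the virtual HiperSockets devices could be mapped to the real devices like this:

DEDICATE 7000 E804
DEDICATE 7001 E805
DEDICATE 7002 E806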
Table 1-3 shows the details of the test HiperSockets configuration.
Table 1-3 Test HiperSockets configuration

LP name   Environment        System name   CHPID   Device address   IP address
A12       z/VM               VMLINUX7      F4      E800-E802        192.0.1.1
A12       Linux under z/VM   LNXRH2        F4      E804-E806        192.0.1.2
A12       Linux under z/VM   LNXSU2        F4      E808-E80A        192.0.1.3
A23       z/OS sysplex       SC30          F4      E800-E802        192.0.1.4
A24       z/OS sysplex       SC31          F4      E800-E802        192.0.1.5
A25       z/OS sysplex       SC32          F4      E800-E802        192.0.1.6

Note: Each logical partition can use the same unit addresses. When multiple TCP/IP stacks are present in a single logical partition, the unit addresses must be unique to each TCP/IP stack; see, for example, logical partition A12 in Table 1-3.

In order to operate a HiperSockets connection, all required devices must first be online to the operating system. When a TCP/IP stack starts a HiperSockets device, the stack's HiperSockets interface information is registered in an IP address lookup table. One IP address lookup table is maintained per HiperSockets LAN (CHPID).
As the HiperSockets devices for our configuration are started, two internal tables are created. Table 1-4 on page 19 is the lookup table for CHPID F4. This table represents all TCP/IP stacks connected to HiperSockets F4. The I/O queue pointer is a real storage address that is set when the TCP/IP stack brings up the connection and is directly associated with the data exchange device address.
We now look at a data transfer operation. For example, Linux LNXRH2 wants to send a
packet to z/OS SC30 over HiperSockets. According to our configuration in Figure 1-3 on
page 8, both systems are connected to HiperSockets F4.
The steps executed for the send operation are:
1. Linux LNXRH2 performs a send operation (SIGA instruction), passing the destination IP
address.
2. The HiperSockets searches in the IP address lookup table F4 (see Table 1-4 on page 19) for the routing destination IP address 192.0.1.4, which is the IP address for z/OS SC30.
The IP address lookup table F4 represents the HiperSockets that the sending TCP/IP
stack is connected to. Direct routing across different HiperSockets is not supported.
3. The HiperSockets finds the entry for IP address 192.0.1.4 in the IP address lookup table
F4.
4. The hardware copies the data from the Linux LNXRH2 send queue to the z/OS (SC30)
receive queue.
5. Optionally, the hardware initiates a Program Controlled Interrupt (PCI) to inform the
destination (SC30) that data has arrived. In this case, optionally means that the
hardware can either deliver an interrupt to the receiving side or the operating system on
the receiving side works with dispatcher polling. This option is negotiated between the
hardware and the operating system at the time the HiperSockets interface is started.
Table 1-4 shows the IP address lookup table for HiperSockets CHPID F4.
Table 1-4 IP address lookup table for HiperSockets CHPID F4

IP address   Logical partition name   Device addresses   Input/Output queue pointer
192.0.1.1    A12                      E800-E801          E802
192.0.1.2    A12                      E804-E805          E806
192.0.1.3    A12                      E808-E809          E80A
192.0.1.4    A23                      E800-E801          E802
192.0.1.5    A24                      E800-E801          E802
192.0.1.6    A25                      E800-E801          E802

Note: The routing destination IP address is the next-hop IP address in the link header. In our example, the next-hop IP address and the destination IP address in the IP packet header are identical, because the next-hop is the target host. In a case where the next-hop is a router stack, the next-hop IP address in the link header and the destination IP address in the IP packet header are not identical.

Note: If an entry is not found, the packet is discarded. This is considered an error condition.
Chapter 2. Hardware definitions
In this chapter, we describe how to update your system hardware configuration with CHPID
and I/O device definitions to support HiperSockets. We discuss various configuration
considerations, and then go through the step-by-step Hardware Configuration Definition
(HCD) procedure we used to configure our environment.
This chapter contains the following:
System configuration considerations
HCD definitions
2.1 System configuration considerations
HiperSockets is defined as a channel connection with a channel path type IQD. Even though
there is no physical attachment associated with a HiperSockets CHPID, the CHPID numbers
cannot be used for other I/O connections.
These are the hardware configuration rules for z9 EC, z9 BC, z990, and z890:
HiperSockets requires the definition of a CHPID defined as type=IQD. This CHPID is
treated like any other CHPID, and is counted as one of the available channels within the z9
EC, z9 BC, z990, and z890 servers.
With the introduction of the new channel subsystem, transparent sharing of HiperSockets
is possible with the extension to the Multiple Image facility (MIF). HiperSockets channels
can be configured to multiple Logical Channel Subsystems (LCSS). They are
transparently shared by any or all of the configured logical partitions without regard to the
LCSS to which the partition is configured.
A HiperSockets channel can be defined as spanned in HCD and has no PCHID
association.
Up to 64 control units can be defined on each IQD CHPID.
If more than one control unit is defined for an IQD CHPID, a logical address is required for each control unit. Control unit logical addresses can range from X'00' to X'3F'.
Up to 256 I/O devices can be connected to an IQD control unit.
Each TCP/IP connection to a HiperSockets requires three devices: one control read, one
control write, and one data exchange device. See 1.3, HiperSockets mode of operation
on page 5 for more details.
The total number of all HiperSockets I/O devices may not exceed 12288.
When you define an IQD CHPID, you have the option to specify the maximum frame size
to be used by the HiperSockets. This is done through the CHPARM parameter. Valid
CHPARM parameter values and their resulting maximum frame sizes are shown in
Table 2-1. The selected maximum frame size for a HiperSockets is used by the TCP/IP
stacks to define the Maximum Transmission Unit (MTU) size for the interface.
Table 2-1 IQD CHPID maximum frame size and MTU size
Note: With ICP IOCP for z9 EC, z9 BC, z990 and z890, the optional CHPARM keyword
replaces optional keyword OS. Although the OS keyword is currently accepted, IOCP
will disallow it in the future.
CHPARM=value   Maximum frame size   Maximum Transmission Unit size
00 (default)   16 KB                8 KB
40             24 KB                16 KB
80             40 KB                32 KB
C0             64 KB                56 KB
2.2 HCD definitions
As with all channel-attached devices, an IQD CHPID must be defined by a channel path, a
control unit, and I/O devices in your system configuration.
This section shows the steps needed to define an IQD CHPID, using the z/OS Hardware
Configuration Definition (HCD) tool. We have included examples of the following definitions:
Spanned Channel Path
Control Unit
Devices
2.2.1 Channel Path definitions
The process of defining a HiperSockets channel, control unit, and devices is similar to defining any other set of channel, control unit, and devices on z/OS using the Hardware Configuration Definition (HCD) ISPF application. The differences are:
During channel definition, a screen will be displayed to set the maximum frame size for the
HiperSockets channel.
A HiperSockets channel has no associated PCHID.
A minimum of three IQD devices must be defined for an IQD control unit.
Follow your installation's procedure to access the HCD main screen and enter the appropriate
IODF file to begin the definition process.
1. Starting from the HCD main menu screen, select 1, as displayed in Figure 2-1.
Figure 2-1 HCD main menu screen
z/OS V1.7 HCD
Command ===> ________________________________________________________________

Hardware Configuration

Select one of the following.

1 1. Define, modify, or view configuration data
2. Activate or process configuration data
3. Print or compare configuration data
4. Create or view graphical configuration report
5. Migrate configuration data
6. Maintain I/O definition files
7. Query supported hardware and installed UIMs
8. Getting started with this dialog
9. What's new in this release

For options 1 to 5, specify the name of the IODF to be used.

I/O definition file . . . 'SYS6.IODF65.WORK' +
F1=Help F2=Split F3=Exit F4=Prompt F9=Swap F12=Cancel F22=Command
2. On the next screen, entitled Define, Modify, or View Configuration Data, select Option 3 -
Processors, as shown in Figure 2-2.
Figure 2-2 Define, Modify, or View Configuration Data panel
3. Figure 2-3 shows the Processor List defined in the IODF data set. Select the processor to
update and press Enter, as shown in Figure 2-3.
Figure 2-3 Processor List
4. On a System z processor, this will display the Channel Subsystem list. Select a channel
subsystem where the HiperSockets channel will be defined, as shown in Figure 2-4 on
page 25.
z/OS V1.7 HCD
+______________Define, Modify, or View Configuration Data ______________+
| |
| Select type of objects to define, modify, or view data. |
| |
| 3_ 1. Operating system configurations |
| consoles |
| system-defined generics |
| EDTs |
| esoterics |
| user-modified generics |
| 2. Switches |
| ports |
| switch configurations |
| port matrix |
| 3. Processors |
| channel subsystems |
| partitions |
| channel paths |
| 4. Control units |
| 5. I/O devices |
| F1=Help F2=Split F3=Exit F9=Swap F12=Cancel |
+_______________________________________________________________________+
Goto Filter Backup Query Help
------------------------------------------------------------------------------
Processor List Row 1 of 7 More: >
Command ===> _______________________________________________ Scroll ===> CSR

Select one or more processors, then press Enter. To add, use F11.
/ Proc. ID Type + Model + Mode+ Serial-# + Description
_ ISGSYN 2064 1C7 LPAR __________ ________________________________
_ ISGS11 2064 1C7 LPAR __________ ________________________________
_ P000STP1 2084 C24 LPAR 01534A2084 ________________________________
_ P000STP2 2094 S08 LPAR 0BAD4E2094 ________________________________
s SCZP101 2094 S18 LPAR 02991E2094 ________________________________
_ SCZP801 2064 1C7 LPAR 010ECB2064 ________________________________
_ SCZP901 2084 C24 LPAR 026A3A2084 ________________________________
Figure 2-4 Channel Subsystem List
This displays the Channel Path List screen (see Figure 2-5).
Figure 2-5 Channel Path List
5. Press F11 to add a channel path.
Goto Backup Query Help
------------------------------------------------------------------------------
Channel Subsystem List Row 1 of 3
Command ===> _______________________________________________ Scroll ===> CSR

Select one or more channel subsystems, then press Enter. To add, use F11.

Processor ID . . . : SCZP101

CSS Devices in SS0 Devices in SS1
/ ID Maximum + Actual Maximum + Actual Description
_ 0 65280 14022 0 0 ________________________________
_ 1 65280 14096 0 0 ________________________________
s 2 65280 14041 65535 1 ________________________________
******************************* Bottom of data ********************************
Goto Filter Backup Query Help
------------------------------------------------------------------------------
Channel Path List Row 1 of 138 More: >
Command ===> _______________________________________________ Scroll ===> CSR

Select one or more channel paths, then press Enter. To add use F11.

Processor ID . . . . : SCZP101
Configuration mode . : LPAR
Channel Subsystem ID : 2

DynEntry Entry +
/ CHPID Type+ Mode+ Switch + Sw Port Con Mngd Description
_ 00 OSD SPAN __ __ __ No 1000BaseT
_ 01 OSD SPAN __ __ __ No 1000BaseT
6. Enter all the required information, as shown in Figure 2-6.
In our scenario, we set:
Channel path ID to F4.
Channel path type to IQD. (required for a HiperSockets channel.)
Operation mode to SPAN, because the IQD CHPID is shared among logical partitions
across Channel Subsystems.
All other parameters to default. These are not relevant to IQD CHPIDs.
Figure 2-6 Add a channel path
7. When the definitions are completed, press Enter. If the definition process has been started from a production IODF, a screen appears allowing you to create a work IODF; enter the appropriate information to create it. The next screen that appears is Specify Maximum Frame Size, as shown in Figure 2-7 on page 27; select a maximum frame size.
8. Press F4 for a list of the four possible options. We chose the default size of 16 KB. Because this is the default, no CHPARM (OS) value appears in the generated IOCP statements; Example 2-2 on page 37 shows the CHPARM values generated for maximum frame sizes other than 16 KB. Press Enter.
Add Channel Path

Specify or revise the following values.

Processor ID . . . . : SCZP101
Configuration mode . : LPAR
Channel Subsystem ID : 2

Channel path ID . . . . F4 + PCHID . . . ___
Number of CHPIDs . . . . 1
Channel path type . . . IQD +
Operation mode . . . . . SPAN +
Managed . . . . . . . . No (Yes or No) I/O Cluster ________ +
Description . . . . . . ________________________________

Specify the following values only if connected to a switch:
Dynamic entry switch ID __ + (00 - FF)
Entry switch ID . . . . __ +
Entry port . . . . . . . __ +
Important: The maximum frame size is directly related to the Maximum Transmission Unit size used by TCP/IP. See Table 2-1 on page 22 for the corresponding values.
Figure 2-7 Define the maximum frame size
9. Complete the Access List for the partitions sharing the channel and press Enter.
In our example, we defined the IQD CHPID as shared by one logical partition on Channel
Subsystem 1 (see Figure 2-8) and three logical partitions on channel subsystem 2 (see
Figure 2-9 on page 28).
Figure 2-8 Define Access List - screen 1
+___________ Specify Maximum Frame Size ____________+
| |
| |
| Specify or revise the value below. |
| |
| Maximum frame size |
| in KB . . . . . . . 16+ |
| |
| F1=Help F2=Split F3=Exit F4=Prompt |
| F5=Reset F9=Swap F12=Cancel |
+___________________________________________________+
Define Access List
Row 21 of 45
Command ===> _________________________________________ Scroll ===> CSR

Select one or more partitions for inclusion in the access list.

Channel subsystem ID : 2
Channel path ID . . : F4 Channel path type . : IQD
Operation mode . . . : SPAN Number of CHPIDs . . : 1

/ CSS ID Partition Name Number Usage Description
_ 1 A1F F CF Trainer Sysplex FACIL06
_ 1 A11 1 OS Testplex SC75
/ 1 A12 2 OS VMLINUX7
_ 1 A13 3 OS Trainer Sysplex #@$2
_ 1 A14 4 OS Trainer Sysplex #@$3
_ 1 A15 5 OS SC58
_ 1 A16 6 OS
_ 1 A17 7 OS VMLINUX1
_ 1 A18 8 OS WTSCPLX1 SC53
_ 1 A19 9 OS WTSCPLX1 SC47
Figure 2-9 Define Access List - screen 2
Now we return to Channel Path List, with CHPID F4 defined, as shown in Figure 2-10.
Figure 2-10 Channel Path List after the CHPID is defined
10.Now press F20 to scroll to the right to verify the access list (see Figure 2-11 on page 29
and Figure 2-12 on page 29).
Define Access List
Row 36 of 45
Command ===> _________________________________________ Scroll ===> CSR

Select one or more partitions for inclusion in the access list.

Channel subsystem ID : 2
Channel path ID . . : F4 Channel path type . : IQD
Operation mode . . . : SPAN Number of CHPIDs . . : 1

/ CSS ID Partition Name Number Usage Description
_ 2 A2F F CF WTSCPLX5 CF39
_ 2 A21 1 OS SC76
_ 2 A22 2 OS VMLINUX3
/ 2 A23 3 OS WTSCPLX5 SC30
/ 2 A24 4 OS WTSCPLX5 SC31
/ 2 A25 5 OS WTSCPLX5 SC32
_ 2 A26 6 OS SC60
_ 2 A27 7 OS
_ 2 A28 8 OS WTSCPLX1 SC69
_ 2 A29 9 OS WTSCPLX1 SCxx
Channel Path List Row 129 of 138 More: >
Command ===> _______________________________________________ Scroll ===> CSR

Select one or more channel paths, then press Enter. To add use F11.

Processor ID . . . . : SCZP101
Configuration mode . : LPAR
Channel Subsystem ID : 2

DynEntry Entry +
/ CHPID Type+ Mode+ Switch + Sw Port Con Mngd Description
_ F2 IQD SPAN __ __ __ No ________________________________
_ F3 IQD SPAN __ __ __ No ________________________________
_ F4 IQD SPAN __ __ __ No ________________________________
_ F5 IQD SPAN __ __ __ No ________________________________
_ F6 IQD SPAN __ __ __ No ________________________________
_ F7 IQD SPAN __ __ __ No ________________________________
_ FC IQD SHR __ __ __ No ________________________________
_ FD IQD SHR __ __ __ No ________________________________
_ FE IQD SHR __ __ __ No ________________________________
_ FF IQD SHR __ __ __ No ________________________________
Figure 2-11 Verify the Channel Path Access List screen 1
Figure 2-12 Verify the Channel Path Access List screen 2
Channel Path List Row 131 of 138 More: < >
Command ===> _______________________________________________ Scroll ===> CSR

Select one or more channel paths, then press Enter. To add, use F11.

Channel Subsystem ID : 2
1=A21 2=A22 3=A23 4=A24 5=A25
6=A26 7=A27 8=A28 9=A29 A=A2A
B=A2B C=A2C D=A2D E=A2E F=A2F
I/O Cluster --------- Partitions 2x -----
/ CHPID Type+ Mode+ Mngd Name + 1 2 3 4 5 6 7 8 9 A B C D E F PCHID
_ F4 IQD SPAN No ________ _ _ a a a _ _ _ _ _ _ _ _ _ _ ___
_ F5 IQD SPAN No ________ _ _ a a a _ _ _ _ _ _ _ _ _ _ ___
_ F6 IQD SPAN No ________ _ _ a a a _ _ _ _ _ _ _ _ _ _ ___
_ F7 IQD SPAN No ________ _ _ a a a _ _ _ _ _ _ _ _ _ _ ___
_ FC IQD SHR No ________ a a a a a a a a a a a a _ _ _ ___
_ FD IQD SHR No ________ a a a a a a a a a a a a _ _ _ ___
_ FE IQD SHR No ________ a a a a a a a a a a a a _ _ _ ___
_ FF IQD SHR No ________ a a a a a a a a a a a a _ _ _ ___
Channel Path List Row 131 of 138 More: <
Command ===> _______________________________________________ Scroll ===> CSR

Select one or more channel paths, then press Enter. To add, use F11.

Channel Subsystem ID : 2
1=A11 2=A12 3=A13 4=A14 5=A15
6=A16 7=A17 8=A18 9=A19 A=A1A
B=A1B C=A1C D=A1D E=A1E F=A1F
I/O Cluster --------- Partitions 1x -----
/ CHPID Type+ Mode+ Mngd Name + 1 2 3 4 5 6 7 8 9 A B C D E F PCHID
_ F4 IQD SPAN No ________ _ a _ _ _ _ _ _ _ _ _ _ _ _ _ ___
_ F5 IQD SPAN No ________ _ a _ _ _ _ _ _ _ _ _ _ _ _ _ ___
_ F6 IQD SPAN No ________ _ a _ _ _ _ _ _ _ _ _ _ _ _ _ ___
_ F7 IQD SPAN No ________ _ a _ _ _ _ _ _ _ _ _ _ _ _ _ ___
_ FC IQD SHR No ________ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ___
_ FD IQD SHR No ________ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ___
_ FE IQD SHR No ________ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ___
_ FF IQD SHR No ________ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ___
Note: Figure 2-11 and Figure 2-12 show that no PCHID association is required for an IQD CHPID.
2.2.2 Control unit definitions
Starting at the Channel Path List screen, select the CHPID to get to the control unit list, as
shown in Figure 2-13.
Figure 2-13 Select a control unit list
11.Press F11 to add a control unit when the screen shown in Figure 2-14 appears.
Figure 2-14 Add a control unit
12.Enter the required information, as shown in Figure 2-15 on page 31, and then press Enter.
In our example, we set:
Control unit number to E800
Control unit type to IQD (required for a HiperSockets control unit.)
Channel Path List Row 130 of 138 More: >
Command ===> _______________________________________________ Scroll ===> CSR

Select one or more channel paths, then press Enter. To add use F11.

Processor ID . . . . : SCZP101
Configuration mode . : LPAR
Channel Subsystem ID : 2

DynEntry Entry +
/ CHPID Type+ Mode+ Switch + Sw Port Con Mngd Description
_ F3 IQD SPAN __ __ __ No ________________________________
s F4 IQD SPAN __ __ __ No ________________________________
_ F5 IQD SPAN __ __ __ No ________________________________
_ F6 IQD SPAN __ __ __ No ________________________________
_ F7 IQD SPAN __ __ __ No ________________________________
_ FC IQD SHR __ __ __ No ________________________________
_ FD IQD SHR __ __ __ No ________________________________
_ FE IQD SHR __ __ __ No ________________________________
_ FF IQD SHR __ __ __ No ________________________________
Control Unit List
Command ===> ___________________________________________ Scroll ===> CSR

Select one or more control units, then press Enter. To add, use F11.

Processor ID . . : SCZP101 CSS ID . : 2 Channel path ID : F4

/ CU Type + #CSS #MC Serial-# + Description
******************************* Bottom of data ***************************
Figure 2-15 Define a control unit
13.Attach the control unit to a processor channel subsystem. In this example, we attach the
control unit to channel subsystem 1 and 2 of the selected processor. Define CHPID F4 as
the CHPID that connects to the control unit, as shown in Figure 2-16, and then press F20.
Figure 2-16 Select a processor to the control unit
Add Control Unit


Specify or revise the following values.

Control unit number . . . . E800 +
Control unit type . . . . . IQD__________ +

Serial number . . . . . . . __________
Description . . . . . . . . ________________________________

Connected to switches . . . __ __ __ __ __ __ __ __ +
Ports . . . . . . . . . . . __ __ __ __ __ __ __ __ +

If connected to a switch:

Define more than eight ports . . 2 1. Yes
2. No
Propose CHPID/link addresses and
unit addresses . . . . . . . . . 2 1. Yes
2. No
Select Processor / CU Row 1 of 10 More: >
Command ===> _______________________________________________ Scroll ===> CSR

Select processors to change CU/processor parameters, then press Enter.

Control unit number . . : E800 Control unit type . . . : IQD

---------------Channel Path ID . Link Address + ---------------
/ Proc.CSSID 1------ 2------ 3------ 4------ 5------ 6------ 7------ 8------
_ ISGSYN _______ _______ _______ _______ _______ _______ _______ _______
_ ISGS11 _______ _______ _______ _______ _______ _______ _______ _______
_ P000STP1.0 _______ _______ _______ _______ _______ _______ _______ _______
_ P000STP2.0 _______ _______ _______ _______ _______ _______ _______ _______
_ SCZP101.0 _______ _______ _______ _______ _______ _______ _______ _______
_ SCZP101.1 F4_____ _______ _______ _______ _______ _______ _______ _______
_ SCZP101.2 F4_____ _______ _______ _______ _______ _______ _______ _______
_ SCZP801 _______ _______ _______ _______ _______ _______ _______ _______
_ SCZP901.0 _______ _______ _______ _______ _______ _______ _______ _______
_ SCZP901.1 _______ _______ _______ _______ _______ _______ _______ _______
14.Define the control unit starting address as 00 for a range of 256 devices (see Figure 2-17),
and then press Enter.
Figure 2-17 Define the control unit address range
15.Verify that the control unit has been attached to the channel path (see Figure 2-18), and
then press Enter.
Figure 2-18 Verify that the control unit is attached to the processor
Select Processor / CU Row 1 of 10 More: < >
Command ===> _______________________________________________ Scroll ===> CSR

Select processors to change CU/processor parameters, then press Enter.

Control unit number . . : E800 Control unit type . . . : IQD

CU --------------Unit Address . Unit Range + -------------
/ Proc.CSSID Att ADD+ 1----- 2----- 3----- 4----- 5----- 6----- 7----- 8-----
_ ISGSYN __ ______ ______ ______ ______ ______ ______ ______ ______
_ ISGS11 __ ______ ______ ______ ______ ______ ______ ______ ______
_ P000STP1.0 __ ______ ______ ______ ______ ______ ______ ______ ______
_ P000STP2.0 __ ______ ______ ______ ______ ______ ______ ______ ______
_ SCZP101.0 __ ______ ______ ______ ______ ______ ______ ______ ______
_ SCZP101.1 __ 00.256 ______ ______ ______ ______ ______ ______ ______
_ SCZP101.2 __ 00.256 ______ ______ ______ ______ ______ ______ ______
_ SCZP801 __ ______ ______ ______ ______ ______ ______ ______ ______
_ SCZP901.0 __ ______ ______ ______ ______ ______ ______ ______ ______
_ SCZP901.1 __ ______ ______ ______ ______ ______ ______ ______ ______
Select Processor / CU Row 1 of 10 More: < >
Command ===> _______________________________________________ Scroll ===> CSR

Select processors to change CU/processor parameters, then press Enter.

Control unit number . . : E800 Control unit type . . . : IQD

CU --------------Unit Address . Unit Range + -------------
/ Proc.CSSID Att ADD+ 1----- 2----- 3----- 4----- 5----- 6----- 7----- 8-----
_ ISGSYN __ ______ ______ ______ ______ ______ ______ ______ ______
_ ISGS11 __ ______ ______ ______ ______ ______ ______ ______ ______
_ P000STP1.0 __ ______ ______ ______ ______ ______ ______ ______ ______
_ P000STP2.0 __ ______ ______ ______ ______ ______ ______ ______ ______
_ SCZP101.0 __ ______ ______ ______ ______ ______ ______ ______ ______
_ SCZP101.1 Yes __ 00.256 ______ ______ ______ ______ ______ ______ ______
_ SCZP101.2 Yes __ 00.256 ______ ______ ______ ______ ______ ______ ______
_ SCZP801 __ ______ ______ ______ ______ ______ ______ ______ ______
_ SCZP901.0 __ ______ ______ ______ ______ ______ ______ ______ ______
_ SCZP901.1 __ ______ ______ ______ ______ ______ ______ ______ ______
******************************* Bottom of data ********************************
2.2.3 I/O device definitions
1. Starting at the Control Unit List screen, shown in Figure 2-19, select the CU and press
Enter to get to the I/O device list.
Figure 2-19 Select a device list
2. From the I/O Device List screen, as shown in Figure 2-20, press F11 to add a device.
Figure 2-20 Add a device
3. Enter the required information, as shown in Figure 2-21 on page 34, and then press Enter.
In our example, we set:
Device number to E800
Number of devices to 32
Device type to IQD (required for a HiperSockets device)
Control Unit List Row 1 of 1
Command ===> ___________________________________________ Scroll ===> CSR

Select one or more control units, then press Enter. To add, use F11.

Processor ID . . : SCZP101 CSS ID . : 2 Channel path ID : F4

/ CU Type + #CSS #MC Serial-# + Description
s E800 IQD 2 __________ ________________________________
I/O Device List
Command ===> ___________________________________________ Scroll ===> CSR

Select one or more devices, then press Enter. To add, use F11.

Control unit number : E800 Control unit type . : IQD

----------Device------ --#--- --------Control Unit Numbers + --------
/ Number Type + CSS OS 1--- 2--- 3--- 4--- 5--- 6--- 7--- 8---
******************************* Bottom of data ********************************
Note: How many devices should be defined? The answer depends on the number of TCP/IP stacks running within the logical partition. Each TCP/IP stack requires one data device, and VTAM requires two devices (read and write control). At most, eight TCP/IP stacks can be active in a z/OS logical partition, so 10 IQD devices is the maximum that can be used for a single HiperSockets channel in a single z/OS logical partition (one read control + one write control + (one data x eight TCP/IP stacks)).
A minimum of three devices must be defined to an IQD control unit and a maximum of 256 can be defined. 160 is the maximum number of IQD devices that can be online to a z/OS logical partition.
Figure 2-21 Define a device
4. Set the unit address to 00 for the first device for both Channel Subsystems, as shown in
Figure 2-22, and then press Enter.
Figure 2-22 Define the first device address
5. Select the operating system to be defined for the device, as shown in Figure 2-23 on
page 35, and then press Enter.
Add Device

Specify or revise the following values.

Device number . . . . . . . . E800 + (0000 - FFFF)
Number of devices . . . . . . 32__
Device type . . . . . . . . . IQD__________ +

Serial number . . . . . . . . __________
Description . . . . . . . . . ________________________________

Volume serial number . . . . . ______ (for DASD)

Connected to CUs . . E800 ____ ____ ____ ____ ____ ____ ____ +

F1=Help F2=Split F3=Exit F4=Prompt F5=Reset F9=Swap F12=Cancel
Device / Processor Definition
Row 1 of 2
Command ===> __________________________________________ Scroll ===> CSR

Select processors to change device/processor definitions, then press
Enter.

Device number . . : E800 Number of devices . : 32
Device type . . . : IQD

Preferred Device Candidate List
/ Proc.CSSID SS+ UA+ Time-Out STADET CHPID + Explicit Null
_ SCZP101.1 _ 00 No No __ No ___
_ SCZP101.2 _ 00 No No __ No ___
***************************** Bottom of data *****************************
Figure 2-23 Select the operating system
6. Select the operating system parameters for the device, as shown in Figure 2-24, and then
press Enter. We are using the defaults.
Figure 2-24 Select the operating system parameters
Define Device to Operating System Configuration
Row 1 of 8
Command ===> _____________________________________ Scroll ===> CSR

Select OSs to connect or disconnect devices, then press Enter.

Device number . : E800 Number of devices : 32
Device type . . : IQD

/ Config. ID Type SS Description Defined
_ ALLDEV MVS All devices
_ L06RMVS1 MVS Sysplex systems
_ MVSW1 MVS Production systems
_ OPENMVS1 MVS OpenEdition MVS
s TEST2094 MVS Sysplex systems
_ TEST3287 MVS Test 3287 devices
_ TRAINER MVS Trainer - Local Site Online
_ TRAINER2 MVS Trainer - Remote Site Online
************************** Bottom of data ***************************
Define Device Parameters / Features
Row 1 of 3
Command ===> ___________________________________________ Scroll ===> CSR

Specify or revise the values below.

Configuration ID . : ALLDEV All devices
Device number . . : E800 Number of devices : 32
Device type . . . : IQD

Parameter/
Feature Value + R Description
OFFLINE No Device considered online or offline at IPL
DYNAMIC Yes Device has been defined to be dynamic
LOCANY Yes UCB can reside in 31 bit storage
***************************** Bottom of data ******************************
We do not assign the IQD devices to an esoteric, so press Enter to bypass the Assign/Unassign Device to Esoteric screen. Select the devices to verify, as shown in Figure 2-25, and press Enter.
Figure 2-25 I/O device List
Next, the screen in Figure 2-26 appears; you can verify your definitions.
Figure 2-26 Verify the device definitions
--------------------------------------------------------------------------
I/O Device List Row 1 of 1 More: >
Command ===> ___________________________________________ Scroll ===> CSR

Select one or more devices, then press Enter. To add, use F11.

Control unit number : E800 Control unit type . : IQD

----------Device------ --#--- --------Control Unit Numbers + --------
/ Number Type + CSS OS 1--- 2--- 3--- 4--- 5--- 6--- 7--- 8---
s E800,32 IQD 2 2 E800 ____ ____ ____ ____ ____ ____ ____
******************************* Bottom of data ********************************
I/O Device List Row 1 of 32 More: >
Command ===> ___________________________________________ Scroll ===> CSR

Select one or more devices, then press Enter. To add, use F11.

Control unit number : E800 Control unit type . : IQD

----------Device------ --#--- --------Control Unit Numbers + --------
/ Number Type + CSS OS 1--- 2--- 3--- 4--- 5--- 6--- 7--- 8---
_ E800 IQD 2 2 E800 ____ ____ ____ ____ ____ ____ ____
_ E801 IQD 2 2 E800 ____ ____ ____ ____ ____ ____ ____
_ E802 IQD 2 2 E800 ____ ____ ____ ____ ____ ____ ____
_ E803 IQD 2 2 E800 ____ ____ ____ ____ ____ ____ ____
_ E804 IQD 2 2 E800 ____ ____ ____ ____ ____ ____ ____
_ E805 IQD 2 2 E800 ____ ____ ____ ____ ____ ____ ____
_ E806 IQD 2 2 E800 ____ ____ ____ ____ ____ ____ ____
_ E807 IQD 2 2 E800 ____ ____ ____ ____ ____ ____ ____
_ E808 IQD 2 2 E800 ____ ____ ____ ____ ____ ____ ____
_ E809 IQD 2 2 E800 ____ ____ ____ ____ ____ ____ ____
_ E80A IQD 2 2 E800 ____ ____ ____ ____ ____ ____ ____
_ E80B IQD 2 2 E800 ____ ____ ____ ____ ____ ____ ____
_ E80C IQD 2 2 E800 ____ ____ ____ ____ ____ ____ ____
_ E80D IQD 2 2 E800 ____ ____ ____ ____ ____ ____ ____
_ E80E IQD 2 2 E800 ____ ____ ____ ____ ____ ____ ____
_ E80F IQD 2 2 E800 ____ ____ ____ ____ ____ ____ ____
_ E810 IQD 2 2 E800 ____ ____ ____ ____ ____ ____ ____
_ E811 IQD 2 2 E800 ____ ____ ____ ____ ____ ____ ____
The IOCP statements generated by these HCD steps are shown in Example 2-1.
Example 2-1 IOCP statements
CHPID PATH=(CSS(1,2),F4),SHARED, *
PARTITION=((CSS(1),(A12),(=)),(CSS(2),(A23,A24,A25),(=))*
),TYPE=IQD
CNTLUNIT CUNUMBR=E800,PATH=((CSS(1),F4),(CSS(2),F4)),UNIT=IQD
IODEVICE ADDRESS=(E800,032),CUNUMBR=(E800),UNIT=IQD
The CHPARM value for maximum frame sizes other than the default of 16 KB is shown in
Example 2-2.
Example 2-2 CHPARM values for maximum frame sizes other than 16 KB
CHPID PATH=(CSS(1,2),F5),SHARED, *
PARTITION=((CSS(1),(A12),(=)),(CSS(2),(A23,A24,A25),(=))*
),CHPARM=40,TYPE=IQD
CHPID PATH=(CSS(1,2),F6),SHARED, *
PARTITION=((CSS(1),(A12),(=)),(CSS(2),(A23,A24,A25),(=))*
),CHPARM=80,TYPE=IQD
CHPID PATH=(CSS(1,2),F7),SHARED, *
PARTITION=((CSS(1),(A12),(=)),(CSS(2),(A23,A24,A25),(=))*
),CHPARM=C0,TYPE=IQD
2.3 References
System z9 Stand-Alone Input/Output Configuration Program Users Guide, SB10-7152
z/OS Hardware Configuration Definition (HCD) Planning, GA22-7525
z/OS Hardware Configuration Definition (HCD) User's Guide, SC33-7988
Chapter 3. z/OS support
In this chapter, we focus on the HiperSockets support provided by z/OS. We show how to set
up HiperSockets for z/OS, DYNAMICXCF between z/OS systems running in a sysplex
environment, Sysplex subplexing between IP stacks using HiperSockets, and HiperSockets
Accelerator.
3.1 Overview
z/OS implementation of HiperSockets consists of defining HiperSockets using HCD and then
configuring VTAM and TCP/IP to support the type of HiperSockets configuration required. A
HiperSockets connection requires you to specify all configuration statements (DEVICE, LINK,
HOME, BSDROUTINGPARMS, BEGINROUTES, and START) for the HiperSockets
connection. z/OS systems within a sysplex can use cross system coupling facility (XCF)
services to create a DYNAMICXCF HiperSockets connection. For the DYNAMICXCF
HiperSockets connection, the required statements are dynamically created. Once defined,
the HiperSockets connection can be configured to support VLAN and Sysplex subplex (which
requires DYNAMICXCF).
A z/OS TCP/IP stack does not support HiperSockets and DYNAMICXCF traffic over the same
HiperSockets CHPID. TCP/IP stacks that have both types of traffic must use two separate
CHPIDs.
z/OS V1R5 and above support all functions of the System z HiperSockets.
3.1.1 z/OS implementation tasks
The following steps summarize the tasks required to enable HiperSockets support:
1. Define the HiperSockets channel, control unit, and devices using HCD. Generate an
IOCDS for the hardware and IODF for the operating system. Activate the hardware
configuration (IOCDS) and software configuration (IODF) on the System z server. Verify
that the HiperSockets devices are online to the target logical partitions.
2. Configure VTAM if DYNAMICXCF HiperSockets devices are to be used.
3. Configure the TCP/IP stacks for HiperSockets support.
We cover these requirements in the following sections.
For our test environment, we examine the following configurations:
HiperSockets
DYNAMICXCF HiperSockets
Sysplex subplex using HiperSockets
VLAN with HiperSockets
3.2 Hardware definitions
The first step in implementing HiperSockets on z/OS is to define HiperSockets to the
hardware using HCD. Regardless of whether or not DYNAMICXCF devices are to be used, the hardware definition is the same. In the following section, we review our hardware implementation environment, which is used for all our configuration examples. Once the hardware definitions are completed and successfully activated, configuration of VTAM and TCP/IP can be started. As the software configuration will vary depending on intended
use, we cover several examples beginning with 3.4, HiperSockets implementation on
page 44.
HCD and IOCP definitions
To implement HiperSockets, we first define the IQD (HiperSockets) channel, control unit, and
devices. See Chapter 2, Hardware definitions on page 21 for details on how to use HCD to
define the required definitions. From these HCD definitions, we produce the following
hardware definitions (IOCP).
Example 3-1 shows the IOCP definitions we used for HiperSockets (CHPID F4), for
DYNAMICXCF (CHPID F5) and Sysplex subplex (CHPID F7).
Example 3-1 IOCP statements for the z/OS environment
CHPID PATH=(CSS(1,2),F4),SHARED, *
PARTITION=((CSS(1),(A12),(=)),(CSS(2),(A23,A24,A25),(=))*
),TYPE=IQD
CHPID PATH=(CSS(1,2),F5),SHARED, *
PARTITION=((CSS(1),(A12),(=)),(CSS(2),(A23,A24,A25),(=))*
),TYPE=IQD
CHPID PATH=(CSS(1,2),F7),SHARED, *
PARTITION=((CSS(1),(A12),(=)),(CSS(2),(A23,A24,A25),(=))*
),TYPE=IQD
CNTLUNIT CUNUMBR=E800,PATH=((CSS(1),F4),(CSS(2),F4)),UNIT=IQD
IODEVICE ADDRESS=(E800,032),CUNUMBR=(E800),UNIT=IQD
CNTLUNIT CUNUMBR=E900,PATH=((CSS(1),F5),(CSS(2),F5)),UNIT=IQD
IODEVICE ADDRESS=(E900,032),CUNUMBR=(E900),UNIT=IQD
CNTLUNIT CUNUMBR=EB00,PATH=((CSS(1),F7),(CSS(2),F7)),UNIT=IQD
IODEVICE ADDRESS=(EB00,032),CUNUMBR=(EB00),UNIT=IQD
CHPIDs F4, F5, and F7 were defined as spanned in HCD and are shared between the three z/OS logical partitions on LCSS 2 and a z/VM logical partition on LCSS 1. I/O devices E800-E81F were defined to CHPID F4 via control unit E800 for HiperSockets. I/O devices E900-E91F were defined to CHPID F5 via control unit E900 for DYNAMICXCF. I/O devices EB00-EB1F were defined to CHPID F7 via control unit EB00 for Sysplex subplex.
CHPID F4, F5, and F7 do not have the CHPARM parameter defined. It defaults to
CHPARM=00, resulting in a maximum frame size of 16 KB, and an MTU size of 8 KB. The
CHPARM parameter values are listed in Table 2-1 on page 22.
Verify IOCP definitions
The display CHPID command (see Example 3-2) can be used to verify that CHPIDs F4, F5,
and F7 are defined as TYPE=24, which is the CHPID type for IQD. It can also verify that the
addresses E800-E81F, E900-E91F, and EB00-EB1F are online, as indicated by the plus (+)
sign.
Example 3-2 Display IQD CHPIDs
D M=CHP(F4,F5,F7)
IEE174I 14.11.12 DISPLAY M 744
CHPID F4: TYPE=24, DESC=INTERNAL QUEUED DIRECT COMM, ONLINE
CHPID F5: TYPE=24, DESC=INTERNAL QUEUED DIRECT COMM, ONLINE
CHPID F7: TYPE=24, DESC=INTERNAL QUEUED DIRECT COMM, ONLINE
DEVICE STATUS FOR CHANNEL PATH F4
0 1 2 3 4 5 6 7 8 9 A B C D E F
0E80 + + + + + + + + + + + + + + + +
0E81 + + + + + + + + + + + + + + + +
SWITCH DEVICE NUMBER = NONE
DEVICE STATUS FOR CHANNEL PATH F5
0 1 2 3 4 5 6 7 8 9 A B C D E F
0E90 + + + + + + + + + + + + + + + +
0E91 + + + + + + + + + + + + + + + +
SWITCH DEVICE NUMBER = NONE
DEVICE STATUS FOR CHANNEL PATH F7
0 1 2 3 4 5 6 7 8 9 A B C D E F
0EB0 + + + + + + + + + + + + + + + +
0EB1 + + + + + + + + + + + + + + + +
SWITCH DEVICE NUMBER = NONE
************************ SYMBOL EXPLANATIONS ************************
+ ONLINE @ PATH NOT VALIDATED - OFFLINE . DOES NOT EXIST
* PHYSICALLY ONLINE $ PATH NOT OPERATIONAL
3.3 VTAM and TCP/IP started task JCL procedures
Configuring HiperSockets on z/OS requires updates to the TCP/IP profile dataset and the
VTAM start options member (for DYNAMICXCF setup). These files can be located by
examining the JCL procedures used to start the tasks, as shown in the following examples.
Note: Once the 160th IQD device is brought online to a logical partition, subsequent
attempts to vary an IQD device online will fail and generate the following message:
IOS577I IQD INITIALIZATION FAILED, COMPLETION TABLE FULL
3.3.1 Locating the TCP/IP profile dataset from the TCP/IP JCL procedure
Our TCP/IP stack used for most implementation examples is called TCPIPF. We created a
JCL procedure in SYS1.PROCLIB called TCPIPF, which is used to start the TCP/IP address
space, as shown in Figure 3-1. We use this same procedure for each logical partition. We
also create JCL procedures for two additional IP stacks called TCPIPD and TCPIPE.
The TCP/IP profile is pointed to by the PROFILE DD statement in the TCP/IP JCL procedure.
In our environment, the profile name resolves to PROFF30 for logical partition A23, PROFF31 for logical partition A24, and PROFF32 for logical partition A25.
Figure 3-1 TCPIPF started task JCL procedure
3.3.2 Locating the VTAM start options dataset from the VTAM JCL procedure
We use a single VTAM JCL procedure to start the VTAM nodes for all logical partitions in the sysplex. This member is in SYS1.PROCLIB and is called NET, as shown in Figure 3-2 on page 43. The dataset containing the VTAM start options member is pointed to by the
VTAMLST DD statement. The member name containing the start options is ATCSTRxx. For
our implementation, we use member ATCSTR00.
Figure 3-2 VTAM started task JCL procedure
//TCPIPF PROC PARMS='CTRACE(CTIEZB00),IDS=00',
// PROFILE=PROFF&SYSCLONE.,TCPDATA=DATA&SYSCLONE.
//TCPIPF EXEC PGM=EZBTCPIP,REGION=0M,TIME=1440,
// PARM=('&PARMS',
// 'ENVAR("RESOLVER_CONFIG=//''TCPIPF.TCPPARMS(&TCPDATA)''")')
//SYSPRINT DD SYSOUT=*,DCB=(RECFM=VB,LRECL=132,BLKSIZE=136)
//ALGPRINT DD SYSOUT=*,DCB=(RECFM=VB,LRECL=132,BLKSIZE=136)
//CFGPRINT DD SYSOUT=*,DCB=(RECFM=VB,LRECL=132,BLKSIZE=136)
//SYSOUT DD SYSOUT=*,DCB=(RECFM=VB,LRECL=132,BLKSIZE=136)
//CEEDUMP DD SYSOUT=*,DCB=(RECFM=VB,LRECL=132,BLKSIZE=136)
//SYSERROR DD SYSOUT=*
//PROFILE DD DISP=SHR,DSN=TCPIPF.TCPPARMS(&PROFILE)
//SYSTCPD DD DSN=TCPIPF.TCPPARMS(&TCPDATA),DISP=SHR
//NET PROC
//NET EXEC PGM=ISTINM01,REGION=2048K,DPRTY=(15,12)
//STEPLIB DD DSN=NCP.SSPLIB,DISP=SHR
//VTAMLST DD DSN=SYS1.VTAMLST,DISP=SHR
//VTAMLIB DD DSN=SYS1.LOCAL.VTAMLIB,DISP=SHR
// DD DSN=SYS1.VTAMLIB,DISP=SHR
//NCPLOAD DD DSN=NCPUSER.LOADLIB,DISP=SHR
//NCPDUMP DD DSN=NCPUSER.NCPDUMP,DISP=SHR
//CSPDUMP DD DSN=NCPUSER.CDUMP,DISP=SHR
//MOSSDUMP DD DSN=NCPUSER.MDUMP,DISP=SHR
//SISTCLIB DD DSN=SYS1.SISTCLIB,DISP=SHR
//TRSDB DD DISP=SHR,DSN=SYS1.VTAM.&SYSNAME..TRSDB
//DSDBCTRL DD DISP=SHR,DSN=SYS1.VTAM.&SYSNAME..DSDBCTRL
//DSDB1 DD DISP=SHR,DSN=SYS1.VTAM.&SYSNAME..DSDB1
//DSDB2 DD DISP=SHR,DSN=SYS1.VTAM.&SYSNAME..DSDB2
//VTMNCP DD DUMMY
3.4 HiperSockets implementation
Here we discuss the HiperSockets implementation.
3.4.1 HiperSockets implementation environment
Our first implementation configured a HiperSockets channel. We configured one TCP/IP
stack on each logical partition (A23, A24, and A25) to use HiperSockets channel F4, as
shown in Figure 3-3. We configured the environment according to the following requirements:
The HiperSockets must be defined as CHPID type IQD to the server using HCD or IOCP.
This CHPID must be defined as shared to all logical partitions that will be part of the
HiperSockets internal LAN.
The HiperSockets device must be defined to the TCP/IP stacks using a device name of
IUTIQDxx, where xx is the CHPID number.
The IQD CHPID being used for the HiperSockets device cannot be the same CHPID used for a DYNAMICXCF device. To avoid a conflict, the VTAM start option IQDCHPID is used to identify which IQD CHPID will be used by DYNAMICXCF when multiple HiperSockets are used (a sketch follows this list).
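As a sketch only (IQDCHPID is an existing VTAM start option; the value shown simply matches the IQD CHPID we use for DYNAMICXCF in this book), the option could be added to the ATCSTRxx start options member as follows:

IQDCHPID=F5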
Figure 3-3 HiperSockets implementation
3.4.2 Implementation steps
We took the following steps to implement HiperSockets:
Defined the HiperSockets channel, control unit, and device.
Made the required configuration changes to the TCP/IP profile.
Started the TCP/IP stacks.
We used the same definitions shown in Example 3-1 on page 41. The HiperSockets channel
and devices must be online prior to starting the first TCP/IP stack configured to use the
devices. For our example, CHPID F4 and device numbers E800, E801, and E802 must be
active to z/OS prior to starting the first TCP/IP stack configured to use HiperSockets channel
F4. We have verified that the CHPID and devices are online in 3.2, Hardware definitions on
page 40.
3.4.3 VTAM customization for HiperSockets
No VTAM setup is required because VTAM dynamically creates the necessary Transport
Resource List Element (TRLE). When the first TCP/IP stack is started, VTAM will build a
single MultiPath Channel (MPC) group using the subchannel devices associated with the IQD
CHPID. VTAM will use two subchannel devices for the read and write control devices, and one
to eight subchannel devices for the data device.
3.4.4 TCP/IP profile customization for HiperSockets
HiperSockets devices have to be defined in the TCP/IP profile with a DEVICE and LINK
statement. Additionally, HOME, BEGINROUTES, and START statements should be specified.
If required, BSDROUTINGPARMS can be specified (we do not specify it in our
implementation example).
We update the profile member for each TCP/IP stack following the configuration rules. Our customization for TCPIPF on logical partition A23 is shown in Example 3-3.
Example 3-3 TCPIPF profile for logical partition A23 for HiperSockets
DEVICE IUTIQDF4 MPCIPA 1
LINK HIPERLF4 IPAQIDIO IUTIQDF4 2

HOME
192.0.1.4 HIPERLF4 3
BEGINROUTES
ROUTE 192.0.1.0/24 = HIPERLF4 MTU 8192 4
ENDROUTES

START IUTIQDF4 5
1 The DEVICE statement is in the form DEVICE device_name MPCIPA. The device_name
must be in the form IUTIQDxx. The prefix IUTIQD must be specified. The xx is the hexadecimal value of the IQD CHPID that was defined in HCD. MPCIPA is required and
specifies that the device belongs to the MPC family and uses the IP Assist based interface.
Since the first six characters of the device name must be IUTIQD and the last two characters
of the device name must be the CHPID number, for our example the device name must be
IUTIQDF4. Based on this name, at TCP/IP device start time, a VTAM Transport Resource List
Element (TRLE) is dynamically built.
2 The LINK statement is specified as LINK link_name IPAQIDIO device_name. The link_name maximum length is 16 characters. IPAQIDIO is required and indicates that the link uses the IP
Assist based interface, belongs to the QDIO family of interfaces, and uses the HiperSockets
protocol. The device_name must be the same as the device_name specified on the DEVICE
statement.
Since the link_name is user defined, we have used HIPERLF4. As this LINK statement
applies to our HiperSockets device, we use the device name specified on our DEVICE
statement, IUTIQDF4.
3 The HOME statement is specified as HOME internet_addr link_name. The internet_addr
should specify a valid IP address for the host in dotted decimal format. The link_name must
match the link_name specified on the LINK statement for the associated IPAQIDIO link.
When a TCP/IP device is started, the IP address contained in the TCP/IP stack's HOME list is
registered in the IP address lookup table, as shown in Table 1-4 on page 19. The z/OS
TCP/IP stack becomes part of HiperSockets F4 with the IP address 192.0.1.4.
4 The ROUTE entry of the BEGINROUTES statement is specified as ROUTE destination
gateway_addr link_name MTU mtu_size. The destination should specify a valid host, network
or subnetwork. Specifying = for the gateway_addr means packets are routed directly to
destinations on that network or host. The link_name must match the link_name specified on
the LINK statement.
Since we are using static routes in our environment, we have to define a Route statement. If
you are using dynamic routing (RIP or OSPF), omit this statement. Also note that we have
specified an MTU size of 8 KB to accommodate the maximum frame size of the IQD CHPID F4, which we defined in HCD with the default value (16 KB).
5 The START statement is specified as: START device_name. The START statement is used
to start a device. The device_name must match the device_name specified on the DEVICE
statement for the HiperSockets CHPID.
Optionally, a VARY TCPIP,tcpipproc,START,devicename command can be issued or the
START statement can be used in a VARY TCPIP,tcpipproc,OBEYFILE,datasetname
command to start the desired device.
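For example, starting our HiperSockets device manually from the console looks like the following (using the stack and device names from this chapter):

VARY TCPIP,TCPIPF,START,IUTIQDF4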
We also updated the TCP/IP profile for TCPIPF on logical partition A24 and logical partition
A25. All parameters are the same as defined for logical partition A23, except for the HOME
statement IP address, as shown in Example 3-4 for logical partition A24 and Example 3-5 for
logical partition A25.
Example 3-4 A24 TCPIPF HOME parameter profile configuration
HOME
192.0.1.5 HIPERLF4
Example 3-5 A25 TCPIPF HOME parameter profile configuration
HOME
192.0.1.6 HIPERLF4
3.4.5 Verification of the HiperSockets configuration
This section shows the display commands we used to verify our HiperSockets configuration.
TCP/IP startup
We start TCP/IP stack TCPIPF on logical partition A23 (SC30). Since we specified a START parameter in the tcp profile dataset for the HiperSockets device IUTIQDF4, the initialization
will occur when we start the TCPIPF stack. Successful initialization of the HiperSockets
device can be verified by checking the SYSLOG messages when the IP stack is started, as
shown in Example 3-6.
Example 3-6 TCPIPF start up messages for HiperSockets device
EZZ4313I INITIALIZATION COMPLETE FOR DEVICE IUTIQDF4
We also start the TCPIPF stacks on logical partitions A24 and A25.
Verify TCP/IP device and link status
To verify the status of devices and links defined to the TCP/IP stack, we use the D
TCPIP,procname,NETSTAT,DEVLINKS command to request NETSTAT information. The
output for our example is shown in Example 3-7.
We verify that the device name and type match our tcp profile definitions and that the device
is in a ready status 1. We verify that the link name and type match our tcp profile definitions
and that the link is in a ready status 2. We also verify that the MTU definition matches what
was defined in the tcp profile 3.
Example 3-7 Verify device and link information HiperSockets
D TCPIP,TCPIPF,NETSTAT,DEVL
EZD0101I NETSTAT CS V1R8 TCPIPF 329
DEVNAME: IUTIQDF4 DEVTYPE: MPCIPA 1
DEVSTATUS: READY
LNKNAME: HIPERLF4 LNKTYPE: IPAQIDIO LNKSTATUS: READY 2
NETNUM: N/A QUESIZE: N/A
IPBROADCASTCAPABILITY: NO
CFGROUTER: NON ACTROUTER: NON
ARPOFFLOAD: YES ARPOFFLOADINFO: YES
ACTMTU: 8192
READSTORAGE: GLOBAL (2048K)
SECCLASS: 255 MONSYSPLEX: NO
BSD ROUTING PARAMETERS:
MTU SIZE: 8192 3 METRIC: 00
DESTADDR: 0.0.0.0 SUBNETMASK: 255.255.255.0
MULTICAST SPECIFIC:
MULTICAST CAPABILITY: YES
GROUP REFCNT
----- ------
224.0.0.1 0000000001
Verify VTAM TRLE
VTAM will automatically create a TRLE (IUTIQDF4) 1 for the HiperSockets channel F4 as part
of the transport resource list major node (ISTTRL) 2, as shown in the output of the D NET,TRL
command in Example 3-8.
Example 3-8 D NET,TRL display for HiperSockets example (partial output)
D NET,TRL
IST097I DISPLAY ACCEPTED
IST350I DISPLAY TYPE = TRL 013
IST924I -------------------------------------------------------------
IST1954I TRL MAJOR NODE = ISTTRL 2
IST1314I TRLE = IUTIQDF4 STATUS = ACTIV CONTROL = MPC 1
For further information about the TRLE, we issued the D NET,TRL,TRLE=IUTIQDF4 command. The results are shown in Example 3-9.
The display shows that for the IUTIQDF4 TRLE, devices E800 and E801 are used for read
and write control 1. For TCPIPF data, device E802 is assigned 2. When a datapath device is
active, it indicates which TCP/IP stack is using it in message IST1717I (ULPID = jobname) 3.
For example, TCPIPF (jobname) is using device E802. Note that if we had multiple TCP/IP
stacks running in the same logical partition and using CHPID F4, then datapath devices E803
through E809 would be used 4. No additional read/write devices would be needed.
Example 3-9 Verify VTAM TRLE for HiperSockets
D NET,TRL,TRLE=IUTIQDF4
IST097I DISPLAY ACCEPTED
IST075I NAME = IUTIQDF4, TYPE = TRLE
IST1954I TRL MAJOR NODE = ISTTRL
IST486I STATUS= ACTIV, DESIRED STATE= ACTIV
IST087I TYPE = LEASED , CONTROL = MPC , HPDT = YES
IST1715I MPCLEVEL = QDIO MPCUSAGE = SHARE
IST1716I PORTNAME = LINKNUM = 0 OSA CODE LEVEL = ...(
IST1577I HEADER SIZE = 4096 DATA SIZE = 16384 STORAGE = ***NA***
IST1221I WRITE DEV = E801 STATUS = ACTIVE STATE = ONLINE 1
IST1577I HEADER SIZE = 4092 DATA SIZE = 0 STORAGE = ***NA***
IST1221I READ DEV = E800 STATUS = ACTIVE STATE = ONLINE 1
IST1221I DATA DEV = E802 STATUS = ACTIVE STATE = N/A 2
IST1724I I/O TRACE = OFF TRACE LENGTH = *NA*
IST1717I ULPID = TCPIPF 3
IST1815I IQDIO ROUTING DISABLED
IST1918I READ STORAGE = 2.0M(126 SBALS)
IST1757I PRIORITY1: UNCONGESTED PRIORITY2: UNCONGESTED
IST1757I PRIORITY3: UNCONGESTED PRIORITY4: UNCONGESTED
IST1801I UNITS OF WORK FOR NCB AT ADDRESS X'0E794010'
IST1802I P1 CURRENT = 0 AVERAGE = 1 MAXIMUM = 1
IST1802I P2 CURRENT = 0 AVERAGE = 1 MAXIMUM = 1
IST1802I P3 CURRENT = 0 AVERAGE = 0 MAXIMUM = 0
IST1802I P4 CURRENT = 0 AVERAGE = 0 MAXIMUM = 0
IST1221I DATA DEV = E803 STATUS = RESET STATE = N/A 4
IST1724I I/O TRACE = OFF TRACE LENGTH = *NA*
IST1221I DATA DEV = E804 STATUS = RESET STATE = N/A
IST1724I I/O TRACE = OFF TRACE LENGTH = *NA*
IST1221I DATA DEV = E805 STATUS = RESET STATE = N/A
IST1724I I/O TRACE = OFF TRACE LENGTH = *NA*
IST1221I DATA DEV = E806 STATUS = RESET STATE = N/A
IST1724I I/O TRACE = OFF TRACE LENGTH = *NA*
IST1221I DATA DEV = E807 STATUS = RESET STATE = N/A
IST1724I I/O TRACE = OFF TRACE LENGTH = *NA*
IST1221I DATA DEV = E808 STATUS = RESET STATE = N/A
IST1724I I/O TRACE = OFF TRACE LENGTH = *NA*
IST1221I DATA DEV = E809 STATUS = RESET STATE = N/A
IST1724I I/O TRACE = OFF TRACE LENGTH = *NA*
IST314I END
Verify HiperSockets connections
We use the ping command to verify the HiperSockets connections between the logical partitions. If multiple stacks are active, you must direct the ping command to the correct stack. In our example, TCPIPF is using the HiperSockets link, so we specify (tcp tcpipf) as a ping command option 1. The results are shown in Example 3-10.
Example 3-10 HiperSockets connectivity test
Ping from TCPIPF on LPAR A23 to LPAR A24
ping 192.0.1.5 (tcp tcpipf) 1
CS V1R8: Pinging host 192.0.1.5
Ping #1 response took 0.000 seconds.
Ping from TCPIPF on LPAR A23 to LPAR A25
ping 192.0.1.6 (tcp tcpipf)
CS V1R8: Pinging host 192.0.1.6
Ping #1 response took 0.000 seconds.
Ping from TCPIPF on LPAR A24 to LPAR A23
ping 192.0.1.4 (tcp tcpipf)
CS V1R8: Pinging host 192.0.1.4
Ping #1 response took 0.000 seconds.
Ping from TCPIPF on LPAR A24 to LPAR A25
ping 192.0.1.6 (tcp tcpipf)
CS V1R8: Pinging host 192.0.1.6
Ping #1 response took 0.000 seconds.
Ping from TCPIPF on LPAR A25 to LPAR A23
ping 192.0.1.4 (tcp tcpipf)
CS V1R8: Pinging host 192.0.1.4
Ping #1 response took 0.000 seconds.
Ping from TCPIPF on LPAR A25 to LPAR A24
ping 192.0.1.5 (tcp tcpipf)
CS V1R8: Pinging host 192.0.1.5
Ping #1 response took 0.000 seconds.
3.5 DYNAMICXCF HiperSockets implementation
From an IP topology perspective, DYNAMICXCF establishes fully meshed IP connectivity to
all other z/OS TCP/IP stacks in the Sysplex. We only need one endpoint specification in each
stack for fully meshed connectivity to all other stacks in the Sysplex. When a new stack gets
started, Dynamic XCF connectivity is automatically established.
Dynamic XCF uses Sysplex Sockets support, allowing the stacks to communicate with each
other and exchange information, such as VTAM CPNAMEs, MVS SYSCLONE value, and
IP addresses. The dynamic XCF definition is activated by coding the IPCONFIG
DYNAMICXCF parameter in the TCP/IP profile.
Dynamic XCF creates definitions for DEVICE, LINK, HOME, and BSDROUTINGPARMS
statements and the START statement dynamically. When activated, the dynamic XCF devices
and links appear to the stack as though they had been defined in the TCP/IP profile. They can
be displayed using standard commands, and they can be stopped and started.
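For illustration (a sketch using our TCPIPF stack name, not output captured from our test), the dynamically created HiperSockets device can be displayed, stopped, and started with the standard operator commands:

D TCPIP,TCPIPF,NETSTAT,DEV
V TCPIP,TCPIPF,STOP,IUTIQDIO
V TCPIP,TCPIPF,START,IUTIQDIO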
Note: Only one dynamic XCF network is supported per Sysplex.
During TCP/IP initialization, the stack joins the XCF group, ISTXCF, through VTAM. When
other stacks in the group discover the new stack, the definitions are created automatically, the
links are activated, and the remote IP address for each link is added to the routing table. After
the remote IP address has been added, IP traffic can flow across one of the following
interfaces:
IUTSAMEH (within the same logical partition)
HiperSockets (within the same server)
XCF signaling (different server, either using the coupling facility link or a CTC connection)
Figure 3-4 shows the Dynamic XCF support implementation.
Figure 3-4 Dynamic XCF support
HiperSockets DYNAMICXCF connectivity
z/OS images within the same server with DYNAMICXCF coded will use the HiperSockets
DYNAMICXCF connectivity instead of the standard XCF connectivity, under these conditions:
The TCP/IP stacks must be on the same server.
For the DYNAMICXCF HiperSockets device (IUTIQDIO), the stacks must be using the
same IQD CHPID, even with different channel subsystems (spanning).
The stacks must be configured (HCD) to use HiperSockets.
For IPv6 HiperSockets connectivity, both stacks must be, at least, at the z/OS V1R7 level.
The initial HiperSockets activation must complete successfully.
When an IPv4 DYNAMICXCF HiperSockets device and link are created and successfully
activated, a subnetwork route is created across the HiperSockets link. The subnetwork is
created by using the DYNAMICXCF IP address and mask. This allows any logical partition
within the same server to be reached, even ones that are not within the sysplex. To do that,
the logical partition that is outside of the sysplex environment must define at least one IP
address for the HiperSockets endpoint that is within the subnetwork defined by the
DYNAMICXCF IP address and mask.
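For example (an illustration only; this address is not part of our test setup), a Linux logical partition on the same server but outside the sysplex could define a HiperSockets interface on the same IQD CHPID with an address such as 192.0.2.9 in the 192.0.2.0/24 subnetwork, and it would then be reachable from the sysplex stacks through the dynamically created subnetwork route.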
When multiple stacks reside within the same logical partition that supports HiperSockets,
both IUTSAMEH and HiperSockets links or interfaces will coexist. In this case, it is possible to
transfer data across either link. Because IUTSAMEH links have better performance, it is
always better to use them for stack-to-stack communication within the same logical partition. A host route will be created by
DYNAMICXCF processing across the IUTSAMEH link, but not across the HiperSockets link.
3.5.1 DYNAMICXCF implementation environment
For our test implementation, we configure a single TCP/IP stack on each of the logical
partitions A23, A24, and A25 for DYNAMICXCF connectivity using CHPID F5, as shown in
Figure 3-5. We configured the test environment based on the following requirements:
All z/OS hosts must belong to the same sysplex.
VTAM must have XCF communications enabled by specifying XCFINIT=YES or
XCFINIT=DEFINE as a startup parameter or by activating the VTAM XCF local SNA major
node, ISTLSXCF. For details about configuration, refer to z/OS Communications Server:
SNA Network Implementation, SC31-8777.
DYNAMICXCF must be specified in the TCP/IP profile of each stack.
The IQD CHPID used for the DYNAMICXCF device should not be the same IQD CHPID
used for our standard HiperSockets implementation. To ensure this, the VTAM start option
IQDCHPID can be used to identify which IQD CHPID will be used by DYNAMICXCF.
Figure 3-5 DYNAMICXCF implementation environment
Table 3-1 is the lookup table for this configuration.
Table 3-1 IP address lookup table for HiperSockets DYNAMICXCF in CHPID F5

IP address    Logical partition    Device addresses    Input/Output queue pointer
192.0.2.4     A23                  E900-E901           E902
192.0.2.5     A24                  E900-E901           E902
192.0.2.6     A25                  E900-E901           E902

All sysplex systems can connect to each other.
3.5.2 Implementation steps
We took the following steps to implement DYNAMICXCF HiperSockets:
Define the HiperSockets channel, control unit, and device.
Configure VTAM to specify XCFINIT=YES and to specify the IQDCHPID parameter.
Add the DYNAMICXCF statement to the TCP/IP profile.
Start the TCP/IP stacks.
We used the same definitions shown in Example 3-1 on page 41. The HiperSockets channel
and devices must be online prior to starting the first TCP/IP stack configured to use the
devices. For our example, CHPID F5 and device numbers E900, E901, and E902 must be
active to z/OS prior to starting the first TCP/IP stack configured to use HiperSockets channel
F5. We have verified that the CHPID and devices are online in 3.2, Hardware definitions on
page 40.
3.5.3 VTAM configuration for DYNAMICXCF
To enable DYNAMICXCF over HiperSockets, the VTAM start options XCFINIT=YES and
IQDCHPID should be specified. We modify the VTAM start options (ATCSTRxx) to add the
required parameters.
We add the XCFINIT=YES line to the VTAM start options 1, as shown in Example 3-11.
Example 3-11 VTAM XCFINIT statement for DYNAMICXCF
XCFINIT=YES 1
This tells VTAM to dynamically build the ISTLSXCF link station major node and establish
connections to other VTAMs in the sysplex.
We add the IQDCHPID statement to the VTAM start options 2, as shown in Example 3-12.
Example 3-12 VTAM IQDCHPID statement for DYNAMICXCF
IQDCHPID=F5 2
This tells VTAM to use CHPID F5 as the DYNAMICXCF link for the sysplex. In our
configuration, we have multiple HiperSockets CHPIDs defined (F4, F5, and F7). For this
reason, the IQDCHPID definition is mandatory to ensure that the appropriate IQD CHPID is
selected for iQDIO. Otherwise, if multiple IQD CHPIDs are defined, VTAM will use the first acceptable
HiperSockets CHPID detected (that is, one having at least three subchannel devices defined). We
recommend specifying the IQDCHPID statement to reserve the IQD channel for
DYNAMICXCF use when multiple HiperSockets CHPIDs are defined. VTAM will dynamically create an
IUTIQDIO TRLE for the DYNAMICXCF HiperSockets link.
In our configuration, the VTAM START definitions are the same for logical partition A24 and
logical partition A25.
3.5.4 TCP/IP configuration for DYNAMICXCF
In order for a TCP/IP stack to use the DYNAMICXCF HiperSockets connection, the
DYNAMICXCF parameter must be specified on the IPCONFIG statement in the tcp profile.
The DYNAMICXCF parameter is specified as DYNAMICXCF ipv4_address subnet_mask
cost_metric.
Specifying DYNAMICXCF will allow for the creation of the DEVICE, LINK, HOME,
BSDROUTINGPARMS, and the START (for IUTIQDIO device) statements.
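For illustration only (these statements are generated internally by the stack and are never coded in the profile), the dynamically created definitions are conceptually similar to the following sketch, where xxxxxxxx is the hexadecimal form of the DYNAMICXCF IP address:

DEVICE IUTIQDIO MPCIPA
LINK   IQDIOLNKxxxxxxxx IPAQIDIO IUTIQDIO
HOME   ipv4_address IQDIOLNKxxxxxxxx
START  IUTIQDIO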
We modified the IPCONFIG statement to add the DYNAMICXCF parameter with an IP
address, a subnet mask, and a cost metric in the TCP/IP profile (PROFF30) for A23, as
shown in Example 3-13 1.
Example 3-13 IPCONFIG statement for DYNAMICXCF connection for logical partition A23
IPCONFIG DYNAMICXCF 192.0.2.4 255.255.255.0 1 1
Our IP address (192.0.2.4) is the interface to HiperSockets CHPID F5. This address will be
automatically appended to the TCP/IP profile HOME list. At TCP/IP stack initialization time,
the stack is registered with its IP address 192.0.2.4 in the IP address lookup table, as shown
in Table 3-1 on page 52.
The definitions for logical partition A24 and logical partition A25 are identical to the definition
for A23, except for the IP address of (192.0.2.5) for A24, as shown in Example 3-14, and
(192.0.2.6) for A25 as shown in Example 3-15.
Example 3-14 IPCONFIG statement for DYNAMICXCF connection for logical partition A24
IPCONFIG DYNAMICXCF 192.0.2.5 255.255.255.0 1
Example 3-15 IPCONFIG statement for DYNAMICXCF connection for logical partition A25
IPCONFIG DYNAMICXCF 192.0.2.6 255.255.255.0 1
3.5.5 Verification of the DYNAMICXCF configuration
In this section, we show the display commands we used to verify our DYNAMICXCF
configuration.
Verify the VTAM setup prior to starting the IP stack
Prior to starting a TCP/IP stack to use the DYNAMICXCF connection we verify that the VTAM
configuration is active. We used the display VTAM options command to make sure that
IQDCHPID=F5 1 and XCFINIT=YES 2 were defined. In Example 3-16, only the messages
related to DYNAMICXCF are shown. We issued the command to each VTAM node in our
sysplex to verify that the required parameters were correct.
Example 3-16 Verify VTAM options (partial output)
D NET,VTAMOPTS
...
IST1189I IQDCHPID = F5 1 IQDIOSTG = 7.8M(126 SBALS)
...
IST1189I XCFGRPID = ***NA*** XCFINIT = YES 2
...
TCP/IP startup
We start the TCP/IP stack TCPIPF on logical partition A23. SYSLOG messages issued
during the start of the TCP/IP stack indicate if DYNAMICXCF is enabled and if the VTAM
TRLE definition was generated. To ensure DYNAMICXCF was enabled, we checked the
system log for the EZZ0624I message 1, which is generated at TCP/IP startup time. We also
verify that the dynamically defined VTAM TRLE IUTIQDIO device was started 2.
Example 3-17 TCP/IP startup messages for DYNAMICXCF configuration
EZZ0624I DYNAMIC XCF DEFINITIONS ARE ENABLED 1
EZZ4313I INITIALIZATION COMPLETE FOR DEVICE IUTIQDIO 2
We also start the TCP/IP stacks on logical partitions A24 and A25, which were configured for
DYNAMICXCF.
Verify TCP/IP DEVICE and LINK status
To verify the status of devices and links defined to the TCP/IP stack, we use the D
TCPIP,procname,NETSTAT,DEVLINKS command to request NETSTAT information. The
output is shown in Example 3-18.
Example 3-18 Verify device and link status for DYNAMICXCF
D TCPIP,TCPIPF,NETSTAT,DEV
...
DEVNAME: IUTIQDIO 1 DEVTYPE: MPCIPA
DEVSTATUS: READY
LNKNAME: IQDIOLNKC0000204 LNKTYPE: IPAQIDIO LNKSTATUS: READY 2
NETNUM: N/A QUESIZE: N/A
IPBROADCASTCAPABILITY: NO
CFGROUTER: NON ACTROUTER: NON
ARPOFFLOAD: YES ARPOFFLOADINFO: YES
ACTMTU: 8192
READSTORAGE: GLOBAL (2048K)
SECCLASS: 255
BSD ROUTING PARAMETERS:
MTU SIZE: 8192 METRIC: 01
DESTADDR: 0.0.0.0 SUBNETMASK: 255.255.255.0
MULTICAST SPECIFIC:
MULTICAST CAPABILITY: YES
GROUP REFCNT
----- ------
224.0.0.1 0000000001
...
TCP/IP will automatically generate a DEVICE IUTIQDIO 1 with LINK name
IQDIOLNKxxxxxxxx, where xxxxxxxx is the IP address in hexadecimal format. In our case,
the IP address was 192.0.2.4 for A23. The LINK name created was IQDIOLNKC0000204 2.
Table 3-2 shows the hexadecimal value in the LINK name converted to an IP address.
Table 3-2 LINK name-to-IP address conversion
LINK name     C0000204     C0    00    02    04
IP address    192.0.2.4    192   0     2     4
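Each pair of hexadecimal digits in the LINK name maps to one decimal octet of the IP address; for example, X'C0' = (12 x 16) + 0 = 192 and X'04' = 4, which yields 192.0.2.4.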
Note: The LINK name generated by TCP/IP can be used in conjunction with static routes.
However, you must first start the stack, then issue the VARY TCPIP command to add static
routes. Also be aware that the LINK name will change whenever the IP address defined in
the DYNAMICXCF statement changes.
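As an illustration only (the data set name, member name, and addresses are hypothetical, not part of our test setup), a static route that uses the generated LINK name could be added after the stack is started by issuing an OBEYFILE command:

V TCPIP,TCPIPF,OBEYFILE,USER1.TCPIP.OBEY(ADDROUTE)

where member ADDROUTE contains statements such as:

BEGINROUTES
ROUTE 10.1.1.0/24 192.0.2.9 IQDIOLNKC0000204 MTU 8192
ENDROUTES

Keep in mind that a BEGINROUTES block processed through an OBEYFILE typically replaces the existing static routes, so the complete set of required routes should be included.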
Verify TRLE definitions
The first stack within a logical partition to initialize dynamic XCF support causes VTAM to
generate a TRLE with a name of IUTIQDIO 1. We used the D NET,TRL,TRLE=trle_name
command, as shown in Example 3-19, to verify that the TRLE was active 2.
We see from the display that the IUTIQDIO MPC group has been assigned E900 for the read
control device 3 and E901 for the write control device 4. TCPIPF, our stack configured to use
DYNAMICXCF 5, is assigned data device E902 6. Note that if we had multiple TCP/IP stacks
running in the same logical partition and using CHPID F5, then datapath devices E903
through E909 would be used. No additional read and write control devices are required.
The IUTIQDIO device is assigned a PORTNAME of IUTIQDxx, where xx is the IQD CHPID that
was specified on the IQDCHPID VTAM start option. For our example, the PORTNAME is
IUTIQDF5 7, as we specified IQDCHPID=F5 in our VTAM start options.
Example 3-19 Verify TRLE
D NET,TRL,TRLE=IUTIQDIO
IST097I DISPLAY ACCEPTED
IST075I NAME = IUTIQDIO, TYPE = TRLE 1
IST1954I TRL MAJOR NODE = ISTTRL
IST486I STATUS= ACTIV, DESIRED STATE= ACTIV 2
IST087I TYPE = LEASED , CONTROL = MPC , HPDT = YES
IST1715I MPCLEVEL = QDIO MPCUSAGE = SHARE
IST1716I PORTNAME = IUTIQDF5 7 LINKNUM = 0 OSA CODE LEVEL = ...(
IST1577I HEADER SIZE = 4096 DATA SIZE = 16384 STORAGE = ***NA***
IST1221I WRITE DEV = E901 STATUS = ACTIVE STATE = ONLINE 4
IST1577I HEADER SIZE = 4092 DATA SIZE = 0 STORAGE = ***NA***
IST1221I READ DEV = E900 STATUS = ACTIVE STATE = ONLINE 3
IST1221I DATA DEV = E902 STATUS = ACTIVE STATE = N/A 6
IST1724I I/O TRACE = OFF TRACE LENGTH = *NA*
IST1717I ULPID = TCPIPF 5
IST1815I IQDIO ROUTING DISABLED
IST1918I READ STORAGE = 2.0M(126 SBALS)
IST1757I PRIORITY1: UNCONGESTED PRIORITY2: UNCONGESTED
IST1757I PRIORITY3: UNCONGESTED PRIORITY4: UNCONGESTED
IST1801I UNITS OF WORK FOR NCB AT ADDRESS X'0EB50010'
IST1802I P1 CURRENT = 0 AVERAGE = 0 MAXIMUM = 0
IST1802I P2 CURRENT = 0 AVERAGE = 0 MAXIMUM = 0
IST1802I P3 CURRENT = 0 AVERAGE = 0 MAXIMUM = 0
IST1802I P4 CURRENT = 0 AVERAGE = 0 MAXIMUM = 0
IST1221I DATA DEV = E903 STATUS = RESET STATE = N/A
IST1724I I/O TRACE = OFF TRACE LENGTH = *NA*
IST1221I DATA DEV = E904 STATUS = RESET STATE = N/A
IST1724I I/O TRACE = OFF TRACE LENGTH = *NA*
IST1221I DATA DEV = E905 STATUS = RESET STATE = N/A
IST1724I I/O TRACE = OFF TRACE LENGTH = *NA*
IST1221I DATA DEV = E906 STATUS = RESET STATE = N/A
IST1724I I/O TRACE = OFF TRACE LENGTH = *NA*
IST1221I DATA DEV = E907 STATUS = RESET STATE = N/A
IST1724I I/O TRACE = OFF TRACE LENGTH = *NA*
IST1221I DATA DEV = E908 STATUS = RESET STATE = N/A
IST1724I I/O TRACE = OFF TRACE LENGTH = *NA*
IST1221I DATA DEV = E909 STATUS = RESET STATE = N/A
IST1724I I/O TRACE = OFF TRACE LENGTH = *NA*
IST314I END
If multiple TCP/IP stacks are running in the same z/OS logical partition, the DYNAMICXCF
connection creates a SAMEHOST TRLE named IUTSAMEH 1. DYNAMICXCF will provide
connectivity between stacks within the same logical partition by using the SAMEHOST device
rather than the IUTIQDIO HiperSockets connection. Since we have additional TCP/IP stacks
active, which are not a part of our configuration example, we used the D
NET,TRL,TRLE=trle_name command, as shown in Example 3-20, to verify the TRLE status.
Example 3-20 Display IUTSAMEH TRLE
D NET,TRL,TRLE=IUTSAMEH
IST097I DISPLAY ACCEPTED
IST075I NAME = IUTSAMEH, TYPE = TRLE 1
IST1954I TRL MAJOR NODE = ISTTRL
IST486I STATUS= ACTIV, DESIRED STATE= ACTIV
IST087I TYPE = LEASED , CONTROL = MPC , HPDT = YES
IST1715I MPCLEVEL = HPDT MPCUSAGE = SHARE
IST1717I ULPID = TCPIPF
IST314I END
Verify DYNAMICXCF HiperSockets connections
We use the ping command to verify the HiperSockets connection between the logical
partitions. If multiple stacks are active, you must route the ping command to the correct stack.
In our example, TCPIPF is using the DYNAMICXCF HiperSockets link, so we specify (tcp
tcpipf) as a ping command option 1. We use the ping command and specify the TCP/IP stack
that we wish to use. The results are shown in Example 3-21.
Example 3-21 DYNAMICXCF HiperSockets connectivity test
Ping from TCPIPF on LPAR A23 to LPAR A24
ping 192.0.2.5 (tcp tcpipf) 1
CS V1R8: Pinging host 192.0.2.5
Ping #1 response took 0.000 seconds.
Ping from TCPIPF on LPAR A23 to LPAR A25
ping 192.0.2.6 (tcp tcpipf)
CS V1R8: Pinging host 192.0.2.6
Ping #1 response took 0.000 seconds.
Ping from TCPIPF on LPAR A24 to LPAR A23
ping 192.0.2.4 (tcp tcpipf)
CS V1R8: Pinging host 192.0.2.4
Ping #1 response took 0.000 seconds.
Ping from TCPIPF on LPAR A24 to LPAR A25
ping 192.0.2.6 (tcp tcpipf)
CS V1R8: Pinging host 192.0.2.6
Ping #1 response took 0.000 seconds.
Ping from TCPIPF on LPAR A25 to LPAR A23
ping 192.0.2.4 (tcp tcpipf)
CS V1R8: Pinging host 192.0.2.4
Ping #1 response took 0.000 seconds.
Ping from TCPIPF on LPAR A25 to LPAR A24
ping 192.0.2.5 (tcp tcpipf)
CS V1R8: Pinging host 192.0.2.5
Ping #1 response took 0.000 seconds.
3.6 VLAN HiperSockets implementation
HiperSockets supports VLANs, which means you can logically subdivide the internal LAN for
a HiperSockets channel into multiple virtual LANs. Two TCP/IP stacks that configure the
same VLAN ID can communicate over HiperSockets, while two stacks that configure a
different VLAN cannot communicate.
3.6.1 VLAN HiperSockets environment
For this example, we create two VLANs across a single HiperSockets channel. Logical
partition A24 is defined to VLAN 1 and logical partitions A23 and A25 are defined to VLAN 3,
as shown in Figure 3-6.
Additionally, a z/VM logical partition and Linux system are configured to VLAN 1. Another
Linux system is configured to VLAN 3. The z/VM and Linux configuration details are covered
in Chapter 4, z/VM support on page 81 and Chapter 5, Linux support on page 93.
We configured the TCP/IP stacks using the following configuration rules:
The VLANID parameter must be specified on the LINK statement for the HiperSockets
device in the tcp profile data set.
For TCP/IP stacks communicating over the same HiperSockets channel, each stack must
specify the same VLANID parameter value.
Figure 3-6 HiperSockets VLAN configuration
Although the example described in the following pages uses IPv4, VLAN for HiperSockets is
also supported in IPv6.
3.6.2 Implementation steps
We took the following steps to implement VLAN on a HiperSockets channel:
Use the configuration detailed in 3.4, HiperSockets implementation on page 44.
Add the VLANID to the LINK statement in the tcp profile data set for each TCP/IP stack
participating in the VLAN. Assign each VLAN a separate subnet address.
Start the TCP/IP stacks.
We used the same definitions shown in Example 3-1 on page 41.
3.6.3 VTAM customization for VLAN HiperSockets
No additional VTAM setup is required.
3.6.4 TCP/IP profile customization for VLAN HiperSockets
The VLANID keyword must be added to the LINK statement associated with the HiperSockets
device. For our example, we add the VLANID keyword with an assigned value to TCPIPF's profile on each
logical partition.
For A23, we specify VLANID 3 1 and assign an address 2 in a subnet 3 for the VLAN, as
shown in Example 3-22.
Example 3-22 VLAN configuration for TCPIPF on logical partition A23
DEVICE IUTIQDF4 MPCIPA
LINK HIPERLF4 IPAQIDIO IUTIQDF4 VLANID 3 1

HOME
192.0.3.4 HIPERLF4 2

BEGINROUTES
ROUTE 192.0.3.0/24 = HIPERLF4 MTU 8192 3
ENDROUTES

START IUTIQDF4
Since logical partition A25 is to participate in VLAN 3 (with logical partition A23), we configure
its TCPIPF stack to use the same VLANID as logical partition A23 1 and assign it an address 2
in the same subnet 3 as TCPIPF on logical partition A23, as shown in Example 3-23.
Example 3-23 VLAN configuration for TCPIPF on logical partition A25
DEVICE IUTIQDF4 MPCIPA
LINK HIPERLF4 IPAQIDIO IUTIQDF4 VLANID 3 1

HOME
192.0.3.6 HIPERLF4 2

BEGINROUTES
ROUTE 192.0.3.0/24 = HIPERLF4 MTU 8192 3
ENDROUTES

START IUTIQDF4
We configure the TCP/IP stack on logical partition A24 to use a different VLANID and subnet
than logical partitions A23 and A25. Although they are all using the same HiperSockets
channel, F4, they will not be able to communicate because of the different VLANID. For A24,
we specify VLANID 1 1 and assign an address 2 in a subnet 3 for the VLAN, as shown in
Example 3-24.
Example 3-24 VLAN configuration for TCPIPF on logical partition A24
DEVICE IUTIQDF4 MPCIPA
LINK HIPERLF4 IPAQIDIO IUTIQDF4 VLANID 1 1

HOME
192.0.1.5 HIPERLF4 2

BEGINROUTES
ROUTE 192.0.1.0/24 = HIPERLF4 MTU 8192 3
ENDROUTES

START IUTIQDF4
3.6.5 Verify VLAN implementation
In this section, we show the display commands we used to verify our configuration.
TCP/IP startup
There are no messages issued during the TCP/IP stack initialization to indicate that the IP
stack is part of a VLAN.
Verify the device and link
After the TCP/IP stack has started, issue the D TCPIP,procname,NETSTAT,DEV command to
verify the VLANID 1 for the HiperSockets device; in our example, it is IUTIQDF4 2. For A23,
the display command output is shown in Example 3-25.
Example 3-25 Verify VLANID for A23 (partial output)
D TCPIP,TCPIPF,NETSTAT,DEV
DEVNAME: IUTIQDF4 2 DEVTYPE: MPCIPA
DEVSTATUS: READY
LNKNAME: HIPERLF4 LNKTYPE: IPAQIDIO LNKSTATUS: READY
NETNUM: N/A QUESIZE: N/A
IPBROADCASTCAPABILITY: NO
CFGROUTER: NON ACTROUTER: NON
ARPOFFLOAD: YES ARPOFFLOADINFO: YES
ACTMTU: 8192
VLANID: 3 1 VLANPRIORITY: DISABLED
DYNVLANREGCFG: NO DYNVLANREGCAP: NO
READSTORAGE: GLOBAL (2048K)
SECCLASS: 255 MONSYSPLEX: NO
For A24, the output of the D TCPIP,procname,NETSTAT,DEV command is shown in
Example 3-26.
Example 3-26 Verify VLANID for A24 (partial output)
DEVNAME: IUTIQDF4 DEVTYPE: MPCIPA
DEVSTATUS: READY
LNKNAME: HIPERLF4 LNKTYPE: IPAQIDIO LNKSTATUS: READY
NETNUM: N/A QUESIZE: N/A
IPBROADCASTCAPABILITY: NO
CFGROUTER: NON ACTROUTER: NON
ARPOFFLOAD: YES ARPOFFLOADINFO: YES
ACTMTU: 8192
VLANID: 1 VLANPRIORITY: DISABLED
DYNVLANREGCFG: NO DYNVLANREGCAP: NO
READSTORAGE: GLOBAL (2048K)
SECCLASS: 255 MONSYSPLEX: NO
For A25, the output of the D TCPIP,procname,NETSTAT,DEV command is shown in
Example 3-27.
Example 3-27 Verify VLANID for A25 (partial output)
DEVNAME: IUTIQDF4 DEVTYPE: MPCIPA
DEVSTATUS: READY
LNKNAME: HIPERLF4 LNKTYPE: IPAQIDIO LNKSTATUS: READY
NETNUM: N/A QUESIZE: N/A
IPBROADCASTCAPABILITY: NO
CFGROUTER: NON ACTROUTER: NON
ARPOFFLOAD: YES ARPOFFLOADINFO: YES
ACTMTU: 8192
VLANID: 3 VLANPRIORITY: DISABLED
DYNVLANREGCFG: NO DYNVLANREGCAP: NO
READSTORAGE: GLOBAL (2048K)
SECCLASS: 255 MONSYSPLEX: NO
VLAN connectivity test
To verify that there is no connectivity between the two VLANs, we attempt to ping across the
VLANs. There is no connectivity between VLAN 1 and VLAN 3, as shown in Example 3-28:
Example 3-28 Ping from A23 (VLAN 3) to A24 (VLAN 1)
===> ping 192.0.1.5 (tcp tcpipf)
CS V1R8: Pinging host 192.0.1.5
sendto(): EDC8130I Host cannot be reached.
***
There is connectivity between the two systems on the same VLAN (A23 and A25 are in VLAN
3), as shown in Example 3-29.
Example 3-29 Ping from A23 (VLAN 3) to A25 (VLAN 3)
===> ping 192.0.3.6 (tcp tcpipf)
CS V1R8: Pinging host 192.0.3.6
Ping #1 response took 0.000 seconds.
***
3.7 TCP/IP Sysplex subplex over HiperSockets
Before subplexing, VTAM and TCP/IP Sysplex functions were deployed Sysplex-wide and
users had to implement complex resource controls and disable many of the dynamic XCF and
routing functions to support multiple security zones. For example, as shown in Figure 3-7,
TCP/IP stacks access different networks with diverse security requirements within the same
Sysplex:
In the top configuration, the TCP/IP stacks in the left two logical partitions access an internal
network, while the TCP/IP stacks in the right two logical partitions access the external
network. Presumably, the security requirements would include isolating external traffic
from the internal network. However, all the TCP/IP stacks in the Sysplex can dynamically
establish connectivity with all the other TCP/IP stacks in the Sysplex.
In the bottom configuration, TCP/IP stacks in the same logical partition have different
security requirements. The first stack in each logical partition connects to the internal
network and the second stack connects to the external network. Through the IUTSAMEH
connection, the two stacks in each logical partition can dynamically establish connectivity
with each other, possibly violating security policies.
Figure 3-7 Sysplex connectivity - examples
With subplexing, you are able to build security zones. Thus, only members within the same
security zone may communicate with each other. Subplex members are VTAM nodes and
TCP/IP stacks that are grouped in security zones to isolate communication.
Concept of subplexing
As cited, a subplex is a subset of a Sysplex that consists of selected members. Those
members are connected to each other and communicate through dynamic cross-system
coupling facility (XCF) groups, using the following methods:
XCF links (for cross-system IP and VTAM connections)
IUTSAMEH (for IP connections within a logical partition)
HiperSockets (IP connections cross-logical partition in the same server)
Subplexes do not communicate with members outside the subset of the Sysplex. For
example, in Figure 3-8, TCP/IP stacks with connectivity to the internal network can be
isolated from TCP/IP stacks connected to the external network by using subplexing.
Figure 3-8 Subplexing multiple security zones
TCP/IP stacks are defined as members of a subplex group with a defined group ID. For
example, in Figure 3-8, TCP/IP stacks within subplex 1 are able to communicate only with
stacks within the same subplex group. They are not able to communicate with stacks in
subplex 2.
In an environment where a single logical partition has access to internal and external
networks via two TCP/IP stacks, those stacks are assigned to two different subplex group IDs.
Even though IUTSAMEH is the communication method, it would be controlled automatically
through the association of subplex group IDs, thus creating two separate security zones
within the logical partition.
3.7.1 Subplex implementation environment
We configured our test environment to enable IP subplexing using DYNAMICXCF and
HiperSockets, as shown in Figure 3-9 on page 65. We configure stacks TCPIPD on logical
partition A23 and TCPIPD on logical partition A24 to be part of subplex 11. We configure stacks
TCPIPE on logical partition A24 and TCPIPE on logical partition A25 to be part of subplex 22.
To implement subplexing, we configured our environment according to the following
guidelines:
Since a subplex uses DYNAMICXCF, the VTAM IQDCHPID and XCFINIT parameters
must be specified.
The XCFGRPID and IQDVLANID parameters of the GLOBALCONFIG statement must be
specified for each TCP/IP stack participating in the subplex using the HiperSockets
connection.
All TCP/IP stacks in the same subplex must have the same TCP/IP XCFGRPID.
TCP/IP stacks in the same subplex using the same HiperSockets connection should have
the same VLANID in order to establish connectivity.
When defining only a TCP/IP subplex, a default VTAM subplex is defined automatically.
A TCP/IP subplex uses VTAM XCF support for DYNAMICXCF connectivity; therefore, a
TCP/IP stack cannot span different VTAM subplexes. In our environment, the
automatically created VTAM subplex consists of all the nodes in our sysplex, so our
TCP/IP subplexes do not span different VTAM subplexes.
Recommendation: Network connectivity provided through an OSA port in a multiple
security zone environment should not be shared across different subplex groups. The OSA
ports should be physically isolated or logically separated using firewall and VLAN
technologies.
Figure 3-9 Sysplex subplex
XCF group names
Basically, XCF group names for subplexes are created through the XCFGRPID parameter for
the VTAM and TCP/IP environment, for example:
To define a VTAM subplex, use the XCFGRPID parameter in the VTAM start option. For
detailed information about group and structure names, refer to SNA Network Implementation
Guide, SC31-8777.
To define a TCP/IP subplex, use the XCFGRPID parameter on the GLOBALCONFIG
statement in the TCP/IP profile.
For TCP/IP, both the VTAM group ID suffix and the TCP group ID suffix will be used to build
the TCP/IP group name. This group name is also used to join the Sysplex. Remember, when
starting TCP/IP under Sysplex Autonomics control in previous z/OS releases, the stack joined
the Sysplex group with the name EZBTCPCS. You may check this using the D XCF,GROUP
command.
EZBTCPCS is the default TCP/IP group name. Actually, the format of this group name is
EZBTvvtt. The meaning of the last four characters are as follows:
vv is a 2-digit VTAM group ID suffix, specified on the VTAM XCFGRPID start option. The
default is CP if not specified.
tt is a 2-digit TCP group ID suffix, specified on the XCFGRPID parameter of the
GLOBALCONFIG statement. The default is CS if not specified.
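As a summary derived from these rules, the group name is composed as follows:

VTAM XCFGRPID      TCP/IP XCFGRPID     Resulting XCF group name
(not specified)    (not specified)     EZBTCPCS
(not specified)    11                  EZBTCP11
02                 11                  EZBT0211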
In our test case, we defined XCFGRPID 11 for TCP/IP and we did not define an XCFGRPID
for VTAM. The result was an XCF group name of EZBTCP11.
You may recognize that both XCFGRPIDs are important in creating the subplex group name.
Be aware that changing the VTAM XCFGRPID will change the XCF group name for the
TCP/IP stack. Hence, the stack will no longer be a member of the previous TCP/IP subplex
group.
For example, in our environment, no VTAM XCFGRPID was defined and XCFGRPID 11 was
specified for TCP/IP. Therefore, the XCF group name was dynamically built as EZBTCP11. If
we add XCFGRPID=02 to the VTAM start option, then the new XCF group name will be
EZBT0211. Although nothing has been changed in the TCP/IP profile definitions in this
example, the TCP/IP stack with the new subplex group name is no longer a member of the
previous subplex (EZBTCP11). Thus, the TCP/IP stack will lose the connectivity to the
subplex.
3.7.2 Implementation steps
We took the following steps to implement Sysplex subplex over HiperSockets:
Define the HiperSockets channel, control unit, and device.
Configure VTAM to specify XCFINIT=YES and to specify the IQDCHPID parameter.
Add the DYNAMICXCF statement to the TCP/IP profile.
Add the XCFGRPID and IQDVLANID to the GLOBALCONFIG statement in the tcp profile.
Start the TCP/IP stacks.
We used the same definitions as shown in Example 3-1 on page 41. No additional changes
were required. We verified that CHPID F7 and addresses in the range EB00-EB1F were
online.
3.7.3 VTAM configuration setup for Sysplex subplex
Enabling subplexing requires VTAM to be configured to use XCF. XCFINIT=YES should be
coded as a VTAM parameter 1. The HiperSockets channel used for the TCP/IP
DYNAMICXCF connection should be specified in the IQDCHPID parameter. In our example,
we use CHPID F7 2. We specify the following values in the VTAM start options member
(ATCSTRxx) for each of the VTAM nodes, as shown in Example 3-30.
Example 3-30 VTAM configuration changes
IQDCHPID=F7, 2
XCFINIT=YES 1
The IQDCHPID value can be dynamically changed using the following command, where xx
specifies the HiperSockets channel:
V NET,VTAMOPTS,IQDCHPID=xx
We allow the VTAM XCFGRPID to default for all VTAM nodes in our sysplex. The
XCFINIT=YES statement causes the creation of a VTAM XCF group. Since we did not specify
a VTAM XCFGRPID, the default name for the VTAM XCF group is ISTXCF. This XCF group is
the default VTAM subplex. In our test environment, this default VTAM XCF group includes all
VTAM nodes in our sysplex.
3.7.4 TCP/IP configuration setup for Sysplex subplex
We configure two IP subplexes within our single VTAM subplex.
TCPIPD on logical partition A23 and TCPIPD on logical partition A24 are defined to
subplex 11.
TCPIPE on logical partition A24 and TCPIPE on logical partition A25 are defined to
subplex 22.
For IP stacks to communicate, they must be in the same subplex, so the XCFGRPID must be
the same. Additionally, since we are using HiperSockets as the link, the IQDVLANID must
also be the same.
We update the tcp profile for TCPIPD on logical partition A23, as shown in Example 3-31. We
add the XCFGRPID with a value of 11 and an IQDVLANID of 11 1. These two values do not
have to match. XCFGRPID allows values from 2 to 31, while IQDVLANID allows values from
1 to 4096. As Sysplex subplexing requires that the TCP/IP stack be configured for
DYNAMICXCF, we add the DYNAMICXCF parameter to the IPCONFIG statement with a host
address, subnet mask, and cost metric 2.
Example 3-31 Logical partition A23 TCPIPD tcp profile configuration
GLOBALCONFIG NOTCPIPSTATISTICS XCFGRPID 11 IQDVLANID 11 1
IPCONFIG DYNAMICXCF 192.0.11.4 255.255.255.0 1 2
We updated TCPIPD on logical partition A24, as shown in Example 3-32. Note that TCPIPD
on logical partition A23 and TCPIPD on logical partition A24 have matching values for
XCFGRPID, so these two stacks will be part of the same XCF group. The XCF group created is
EZBTCP11. Since these two stacks also have matching IQDVLANID parameters, they will be
able to establish communication over the HiperSockets channel defined by the VTAM
IQDCHPID statement F7.
Example 3-32 Logical partition A24 TCPIPD tcp profile configuration
GLOBALCONFIG NOTCPIPSTATISTICS XCFGRPID 11 IQDVLANID 11
IPCONFIG DYNAMICXCF 192.0.11.5 255.255.255.0 1
TCPIPE on logical partition A24 and logical partition A25 will be defined as subplex 22. We
update the tcp profile on logical partition A24 shown in Example 3-33. We add the
XCFGRPID and IQDVLANID parameters to the GLOBALCONFIG statement. We assign a
value of 22 for XCFGRPID and a value of 22 for IQDVLANID 1. We also add the required
DYNAMICXCF parameter to the IPCONFIG statement along with the host IP address, subnet
mask, and cost metric 2.
Example 3-33 Logical partition A24 TCPIPE tcp profile configuration
GLOBALCONFIG NOTCPIPSTATISTICS XCFGRPID 22 IQDVLANID 22
IPCONFIG DYNAMICXCF 192.0.22.5 255.255.255.0 1 2
We update TCPIPE's tcp profile on logical partition A25, as shown in Example 3-34 on
page 68. As TCPIPE on logical partition A24 and logical partition A25 have matching
XCFGRPIDs, they will join the same sysplex group. The XCF group created is EZBTCP22.
Since these two stacks also have matching IQDVLANID parameters, they will be able to
establish communication over the HiperSockets channel defined by the VTAM IQDCHPID
statement, F7.
Example 3-34 Logical partition A25 TCPIPE tcp profile configuration
GLOBALCONFIG NOTCPIPSTATISTICS XCFGRPID 22 IQDVLANID 22
IPCONFIG DYNAMICXCF 192.0.22.6 255.255.255.0 1
3.7.5 Verification of the IP subplex over HiperSockets
Here we discuss the verification of the IP subplex over HiperSockets.
Verify the VTAM setup prior to starting the IP stack
Prior to starting a TCP/IP stack to use the DYNAMICXCF connection, we verify that the VTAM
configuration is active. We used the display VTAM options command to make sure that
IQDCHPID=F7 1 and XCFINIT=YES 2 were defined. We issued the command to each VTAM
node in our sysplex to verify that the required parameters were correct. Note that we did not
specify a value for XCFGRPID 3.
Example 3-35 Vtamopts display (partial display)
-D NET,VTAMOPTS
IST097I DISPLAY ACCEPTED
IST1189I IQDCHPID = F7 1 IQDIOSTG = 7.8M(126 SBALS)
IST1189I XCFGRPID = ***NA*** 3 XCFINIT = YES 2
TCP/IP startup
Once the configuration changes are made and the VTAM configuration has been verified, the
IP stacks can be started. The first IP stack in a subplex that starts will cause the creation of an
XCF group.
From the SYSLOG messages generated at IP stack initialization, we can verify that
DYNAMICXCF is enabled 1, that the IP stack has joined its sysplex group 2, and that the
DYNAMICXCF HiperSockets IUTIQDIO device has successfully started 3. The SYSLOG
messages issued when TCPIPD is started on logical partition A23 are shown in Example 3-36.
Example 3-36 TCP syslog messages
...
EZZ0624I DYNAMIC XCF DEFINITIONS ARE ENABLED 1
...
EZD1176I TCPIPD HAS SUCCESSFULLY JOINED THE TCP/IP SYSPLEX GROUP
EZBTCP11 2
...
EZZ4313I INITIALIZATION COMPLETE FOR DEVICE IUTIQDIO 3
...
Verify the DYNAMICXCF device and link
To verify the status of devices and links defined to the TCP/IP stack, we use the D
TCPIP,procname,NETSTAT,DEVLINKS command to request NETSTAT information. The
Netstat device display shows the HiperSockets connection with VLANID 11 1, which was what
was specified for TCPIPDs IQDVLANID parameter, as shown in Example 3-37.
Example 3-37 Netstats device display showing IUTIQDIO (HiperSockets) VLANID
-D TCPIP,TCPIPD,NETSTAT,DEV
EZD0101I NETSTAT CS V1R8 TCPIPD 952
DEVNAME: IUTIQDIO DEVTYPE: MPCIPA
DEVSTATUS: READY
LNKNAME: IQDIOLNKC0000B04 LNKTYPE: IPAQIDIO LNKSTATUS: READY
NETNUM: N/A QUESIZE: N/A
IPBROADCASTCAPABILITY: NO
CFGROUTER: NON ACTROUTER: NON
ARPOFFLOAD: YES ARPOFFLOADINFO: YES
ACTMTU: 8192
VLANID: 11 1 VLANPRIORITY: DISABLED
DYNVLANREGCFG: NO DYNVLANREGCAP: NO
READSTORAGE: GLOBAL (2048K)
SECCLASS: 255
BSD ROUTING PARAMETERS:
MTU SIZE: 8192 METRIC: 01
DESTADDR: 0.0.0.0 SUBNETMASK: 255.255.255.0
MULTICAST SPECIFIC:
MULTICAST CAPABILITY: YES
GROUP REFCNT
----- ------
224.0.0.1 0000000001
LINK STATISTICS:
BYTESIN = 316
INBOUND PACKETS = 1
INBOUND PACKETS IN ERROR = 0
INBOUND PACKETS DISCARDED = 0
INBOUND PACKETS WITH NO PROTOCOL = 0
BYTESOUT = 316
OUTBOUND PACKETS = 1
OUTBOUND PACKETS IN ERROR = 0
OUTBOUND PACKETS DISCARDED = 0
Verify TRLE definitions
We used the D NET,TRL,TRLE=trle_name command to verify that the dynamically created
TRLE, IUTIQDIO, for the HiperSockets connection was active 1, as shown in Example 3-38.
Example 3-38 IUTIQDIO TRLE display
D NET,TRL,TRLE=IUTIQDIO
IST097I DISPLAY ACCEPTED
IST075I NAME = IUTIQDIO, TYPE = TRLE
IST1954I TRL MAJOR NODE = ISTTRL
IST486I STATUS= ACTIV, DESIRED STATE= ACTIV 1
IST087I TYPE = LEASED , CONTROL = MPC , HPDT = YES
IST1715I MPCLEVEL = QDIO MPCUSAGE = SHARE
IST1716I PORTNAME = IUTIQDF7 LINKNUM = 0 OSA CODE LEVEL = ...(
IST1577I HEADER SIZE = 4096 DATA SIZE = 16384 STORAGE = ***NA***
IST1221I WRITE DEV = EB01 STATUS = ACTIVE STATE = ONLINE
IST1577I HEADER SIZE = 4092 DATA SIZE = 0 STORAGE = ***NA***
IST1221I READ DEV = EB00 STATUS = ACTIVE STATE = ONLINE
IST1221I DATA DEV = EB02 STATUS = ACTIVE STATE = N/A
IST1724I I/O TRACE = OFF TRACE LENGTH = *NA*
IST1717I ULPID = TCPIPD
IST1815I IQDIO ROUTING DISABLED
IST1918I READ STORAGE = 2.0M(126 SBALS)
IST1757I PRIORITY1: UNCONGESTED PRIORITY2: UNCONGESTED
IST1757I PRIORITY3: UNCONGESTED PRIORITY4: UNCONGESTED
IST1801I UNITS OF WORK FOR NCB AT ADDRESS X'0EAEC010'
IST1802I P1 CURRENT = 0 AVERAGE = 0 MAXIMUM = 0
IST1802I P2 CURRENT = 0 AVERAGE = 0 MAXIMUM = 0
IST1802I P3 CURRENT = 0 AVERAGE = 0 MAXIMUM = 0
IST1802I P4 CURRENT = 0 AVERAGE = 0 MAXIMUM = 0
IST1221I DATA DEV = EB03 STATUS = RESET STATE = N/A
IST1724I I/O TRACE = OFF TRACE LENGTH = *NA*
IST1221I DATA DEV = EB04 STATUS = RESET STATE = N/A
IST1724I I/O TRACE = OFF TRACE LENGTH = *NA*
IST1221I DATA DEV = EB05 STATUS = RESET STATE = N/A
IST1724I I/O TRACE = OFF TRACE LENGTH = *NA*
IST1221I DATA DEV = EB06 STATUS = RESET STATE = N/A
IST1724I I/O TRACE = OFF TRACE LENGTH = *NA*
IST1221I DATA DEV = EB07 STATUS = RESET STATE = N/A
IST1724I I/O TRACE = OFF TRACE LENGTH = *NA*
IST1221I DATA DEV = EB08 STATUS = RESET STATE = N/A
IST1724I I/O TRACE = OFF TRACE LENGTH = *NA*
IST1221I DATA DEV = EB09 STATUS = RESET STATE = N/A
IST1724I I/O TRACE = OFF TRACE LENGTH = *NA*
IST314I END
Verify XCF groups
We verify that the expected XCF groups have been created by issuing the D XCF,GROUP
command, as shown in Example 3-39. Subplex 11 (TCPIPD on logical partition A23 and
logical partition A24) specified XCFGRPID on the tcp profile GLOBALCONFIG statement. So
we expected an XCF group name of EZBTCP11 (CP is the default if no VTAM XCFGRPID
value is specified) 1. For subplex 22 (TCPIPE on logical partition A24 and logical partition
A25), we expected a group name of EZBTCP22, since 22 was specified on the
GLOBALCONFIG XCFGRPID parameter 2.
Example 3-39 Display of XCF groups
D XCF,GROUP
IXC331I 16.40.41 DISPLAY XCF 917
GROUPS(SIZE): COFVLFNO(3) DBCDU(3) EZBTCPCS(6)
EZBTCP11(2) 1 EZBTCP22(2) 2 IDAVQUI0(3)
IGWXSGIS(6) IOEZFS(3) IRRXCF00(3)
ISTCFS01(3) ISTXCF(3) IXCLO00A(3)
IXCLO00B(3) IXCLO006(3) SYSATB01(2)
SYSATB02(2) SYSATB03(2) SYSBPX(3)
SYSCNZMG(3) SYSDAE(4) SYSENF(3)
SYSGRS(3) SYSGRS2(1) SYSIEFTS(3)
SYSIGW00(3) SYSIGW01(3) SYSIGW02(3)
SYSIGW03(3) SYSIKJBC(3) SYSIOS01(3)
SYSJES(3) SYSMCS(8) SYSMCS2(8)
SYSRMF(3) SYSTTRC(3) SYSWLM(3)
SYSXCF(3) WTSCPLX5(3) WTSC77(1)
The D XCF,GROUP,GROUPNAME command will list the members of the XCF group, as
shown for EZBTCP11 in Example 3-40.
Example 3-40 Display of XCF group EZBTCP11
D XCF,GROUP,EZBTCP11
IXC332I 16.44.32 DISPLAY XCF 930
GROUP EZBTCP11: SC30TCPIPD SC31TCPIPD
We issued the D XCF,GROUP,GROUPNAME command to verify the member list of
EZBTCP22, as shown in Example 3-41.
Example 3-41 Display of XCF group EZBTCP22
D XCF,GROUP,EZBTCP22
IXC332I 16.44.32 DISPLAY XCF 930
GROUP EZBTCP22: SC31TCPIPE SC32TCPIPE
The D TCPIP command can also be issued to display the sysplex group that a specific IP
stack belongs to, as shown in Example 3-42.
Example 3-42 Display of Sysplex group to which an IP stack belongs
-D TCPIP,TCPIPD,SYSPLEX,GROUP
EZZ8270I SYSPLEX GROUP FOR TCPIPD AT SC30 IS EZBTCP11
...
-RO SC31,D TCPIP,TCPIPD,SYSPLEX,GROUP
EZZ8270I SYSPLEX GROUP FOR TCPIPD AT SC31 IS EZBTCP11
...
-RO SC31,D TCPIP,TCPIPE,SYSPLEX,GROUP
EZZ8270I SYSPLEX GROUP FOR TCPIPE AT SC31 IS EZBTCP22
...
-RO SC32,D TCPIP,TCPIPE,SYSPLEX,GROUP
EZZ8270I SYSPLEX GROUP FOR TCPIPE AT SC32 IS EZBTCP22
Verify XCFGRPID and IQDVLANID
The Netstat config display shows the XCFGRPID and IQDVLANID for stack D, as shown in
Example 3-43. We verified that the IQDVLANID and XCFGRPID values displayed match
what was specified on the GLOBALCONFIG statement.
Example 3-43 Netstat config display with XCFGRPID and IQDVLANID for TCPIPD
-D TCPIP,TCPIPD,NETSTAT,CONFIG
EZD0101I NETSTAT CS V1R8 TCPIPD 946
GLOBAL CONFIGURATION INFORMATION:
TCPIPSTATS: NO ECSALIMIT: 0000000K POOLLIMIT: 0000000K
MLSCHKTERM: NO XCFGRPID: 11 IQDVLANID: 11
SEGOFFLOAD: YES SYSPLEXWLMPOLL: 060
SYSPLEX MONITOR:
The Netstat config display shows the XCFGRPID and IQDVLANID for stack E as shown in
Example 3-44.
Example 3-44 Netstat config display with XCFGRPID and IQDVLANID for TCPIPE
-RO SC31,D TCPIP,TCPIPE,NETSTAT,CONFIG
EZD0101I NETSTAT CS V1R8 TCPIPE 940
GLOBAL CONFIGURATION INFORMATION:
TCPIPSTATS: NO ECSALIMIT: 0000000K POOLLIMIT: 0000000K
MLSCHKTERM: NO XCFGRPID: 22 IQDVLANID: 22
SEGOFFLOAD: YES SYSPLEXWLMPOLL: 060
IP subplex connectivity test
We test connectivity between the IP stacks to verify that the stacks cannot communicate
across subplex boundaries.
We attempt a ping command using TCPIPD on logical partition A23 to logical partition A24
(same subplex 11). The results are successful, as shown in Example 3-45.
Example 3-45 Ping attempt to same subplex (subplex 11)
===> ping 192.0.11.5 (tcp tcpipd)
CS V1R8: Pinging host 192.0.11.5
Ping #1 response took 0.000 seconds.
***
We attempt a ping command using TCPIPD on logical partition A23 to TCPIPE on logical partition A24
(subplex 11 to subplex 22). The result is a failure, as shown in Example 3-46.
Example 3-46 Ping attempt to another subplex (subplex 11 to subplex 22)
===> ping 192.0.22.5 (tcp tcpipd)
CS V1R8: Pinging host 192.0.22.5
sendto(): EDC8130I Host cannot be reached.
***
We also verify that multiple stacks in the same z/OS LPAR but in different subplexes cannot
use an IUTSAMEH device to communicate. In our example, A24 has two IP stacks, TCPIPD
in subplex 11 and TCPIPE in subplex 22. We first display the status of the IUTSAMEH TRLE,
as shown in Example 3-47. We see that stacks TCPIPD and TCPIPE are defined to this link.
Example 3-47 Display of IUTSAMEH dynamic TRLE
RO SC31,D NET,TRL,TRLE=IUTSAMEH
IST097I DISPLAY ACCEPTED
IST075I NAME = IUTSAMEH, TYPE = TRLE
IST1954I TRL MAJOR NODE = ISTTRL
IST486I STATUS= ACTIV, DESIRED STATE= ACTIV
IST087I TYPE = LEASED , CONTROL = MPC , HPDT = YES
IST1715I MPCLEVEL = HPDT MPCUSAGE = SHARE
IST1717I ULPID = TCPIPE
IST1717I ULPID = TCPIPD
IST314I END
We attempt to ping between the stacks, which uses the IUTSAMEH link instead of the IUTIQDIO
(HiperSockets) link; the pings fail, as shown in Example 3-48.
Example 3-48 Ping attempts between TCPIPD and TCPIPE on logical partition A24
PING 192.0.22.5 (TCP TCPIPD)
CS V1R8: Pinging host 192.0.22.5
sendto(): EDC8130I Host cannot be reached.
READY
PING 192.0.11.5 (TCP TCPIPE)
CS V1R8: Pinging host 192.0.11.5
sendto(): EDC8130I Host cannot be reached.
READY
3.8 HiperSockets Accelerator
The Communications Server leverages the technological advances and high-performing
nature of the I/O processing offered by HiperSockets with the IBM System z servers and
OSA-Express, using the QDIO architecture. This is achieved by optimizing IP packet
forwarding processing that occurs across these two types of technologies. This function is
referred to as HiperSockets Accelerator. It is a configurable option, and is activated by
defining the IQDIORouting option on the IPCONFIG statement.
When the TCP/IP stack is configured with HiperSockets Accelerator, it allows IP packets
received from HiperSockets to be forwarded to an OSA-Express port (or vice versa) without
the need for those IP packets to be processed by the TCP/IP stack.
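A minimal sketch of the IPCONFIG options involved is shown below (our complete router-stack profile is shown later in Example 3-51); DATAGRAMFWD must be coded because NODATAGRAMFWD is the default:

IPCONFIG DATAGRAMFWD IQDIOROUTING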
When using this function, one or more logical partitions contain the routing stack, which
manages connectivity via OSA-Express ports to the LAN, while the other logical partitions
connect to the routing stack using the HiperSockets, as shown in Figure 3-10.
Figure 3-10 HiperSockets Accelerator
Only one PRIRouter can be defined to an OSA-Express port. However, a second TCP/IP
stack can be defined as SECRouter to the same OSA-Express port and serve as a backup to
the PRIRouter TCP/IP stack.
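As an illustration (a sketch; a SECRouter backup stack is not part of our test environment), the two roles are assigned on the DEVICE statement of the respective stacks:

; On the primary routing stack
DEVICE OSA06D MPCIPA PRIROUTER
; On a backup routing stack
DEVICE OSA06D MPCIPA SECROUTER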
Important: In order for a TCP/IP router stack to forward IP packets from an OSA-Express
device to a HiperSockets device, the OSA-Express port must be defined as PRIRouter on
the DEVICE statement in the TCP/IP profile. If no PRIRouter option is defined to the
OSA-Express port, IP packets are not forwarded.
Note: This example is intended purely to demonstrate IP traffic flow. We do not
recommend implementing HiperSockets Accelerator using a single logical partition.
3.8.1 HiperSockets Accelerator implementation
We implement the HiperSockets Accelerator function using a single TCP/IP stack, TCPIPF on
logical partition A23, to forward IP traffic from the HiperSockets LAN on CHPID F4 to an
OSA-Express port defined on CHPID 06. Our test environment is shown in Figure 3-11. We
configured it according to the following guidelines:
The OSA-Express port must be defined as the PRIRouter on the DEVICE statement in the
TCP/IP profile.
Only one PRIRouter can be defined to an OSA-Express port.
HiperSockets Accelerator is activated by configuring the IQDIORouting option in the
TCP/IP profile using the IPCONFIG statement.
Figure 3-11 HiperSockets Accelerator implementation environment
3.8.2 HiperSockets Accelerator implementation steps
We took the following steps to configure our HiperSockets Accelerator test:
Defined the HiperSockets and OSA-Express channel, control unit, and devices.
Configured the TCP/IP stacks to use the HiperSockets channel.
Configured a TCP/IP stack to use the OSA-EXPRESS device as a primary router.
Configured the TCP/IP stack using the OSA-EXPRESS device to allow forwarding of IP packets
from its HiperSockets interface to the OSA-EXPRESS interface.
For the HiperSockets connection, we used the same definitions shown in Example 3-1 on
page 41. The OSA-EXPRESS IOCP statements are shown in Example 3-49.
Example 3-49 OSA-EXPRESS IOCP statements
CHPID PATH=(CSS(0,1,2),06),SHARED, *
PARTITION=((CSS(0),(A01,A02,A03,A04,A05),(=)),(CSS(1),(A*
11,A12,A13,A14,A15),(=)),(CSS(2),(A21,A22,A23,A24,A25),(*
=))),PCHID=120,TYPE=OSD
CNTLUNIT CUNUMBR=2200, *
PATH=((CSS(0),06),(CSS(1),06),(CSS(2),06)),UNIT=OSA
IODEVICE ADDRESS=(2200,015),CUNUMBR=(2200),UNIT=OSA
IODEVICE ADDRESS=220F,UNITADD=FE,CUNUMBR=(2200),UNIT=OSAD
3.8.3 VTAM configuration
A VTAM TRL major node must be defined for the OSA-EXPRESS device. We defined the
VTAM TRL as member OSA2060 in our VTAMLST data set, as shown in Example 3-50.
Example 3-50 VTAM TRLE definition for OSA-EXPRESS
OSA2060 VBUILD TYPE=TRL
OSA2060P TRLE LNCTL=MPC, *
READ=2200, *
WRITE=2201, *
DATAPATH=2202, *
PORTNAME=OSA06D, *
MPCLEVEL=QDIO
We activate the OSA TRL using a V NET,ACT,ID=OSA2060 command.
3.8.4 TCP/IP configuration
In our implementation, we configure stack TCPIPF on logical partition A23 to use the
HiperSockets connection on CHPID F4 by defining the DEVICE, LINK, HOME, ROUTE, and
START statements for the HiperSockets connection (see Example 3-51 on page 77). We also
define the DEVICE, LINK, HOME, ROUTE, and START statements for the OSA-EXPRESS
connection. HiperSockets Accelerator is activated by configuring the IQDIORouting option on
the TCP/IP profile's IPCONFIG statement 1. We also specify the DATAGRAMFWD option on
the IPCONFIG statement to allow packet forwarding 2, and we configure the OSA-Express
device with the PRIRouter option 3.
Example 3-51 Configuring TCPIPF
GLOBALCONFIG NOTCPIPSTATISTICS
IPCONFIG DATAGRAMFWD 2 PATHMTUDISCOVERY IQDIOROUTING 1
TCPCONFIG TCPSENDBFRSIZE 16K TCPRCVBUFRSIZE 16K SENDGARBAGE FALSE
TCPCONFIG RESTRICTLOWPORTS
UDPCONFIG RESTRICTLOWPORTS
; PROFILE.TCPIP sep up HS SC30
; =============================

DEVICE IUTIQDF4 MPCIPA
LINK HIPERLF4 IPAQIDIO IUTIQDF4

; PRIMARY ROUTING SU CHPID 06 2
DEVICE OSA06D MPCIPA PRIR 3
LINK OSA06LNK IPAQENET OSA06D
HOME
192.0.1.4 HIPERLF4
192.0.20.4 OSA06LNK
BEGINROUTES
ROUTE 192.0.1.0/24 = HIPERLF4 MTU 1500
ROUTE 192.0.20.0/24 = OSA06LNK MTU 1500
ROUTE DEFAULT 192.0.20.254 OSA06LNK MTU 1500
ENDROUTES

START IUTIQDF4
START OSA06D
In our example, we used static routing; however, dynamic routing with RIP or OSPF is
supported when the HiperSockets Accelerator function is enabled.
3.8.5 HiperSockets Accelerator verification
Here we discuss HiperSockets Accelerator verification.
OSA-Express verification
We use the D NET,TRL,TRLE=trle_name command to verify the status of the OSA-Express
TRLE definition, as shown in Example 3-52.
Example 3-52 OSA TRLE display
D NET,TRL,TRLE=OSA2060P
IST097I DISPLAY ACCEPTED
IST075I NAME = OSA2060P, TYPE = TRLE 037
IST1954I TRL MAJOR NODE = OSA2060
IST486I STATUS= NEVAC, DESIRED STATE= INACT
IST087I TYPE = LEASED , CONTROL = MPC , HPDT = *NA*
IST1715I MPCLEVEL = QDIO MPCUSAGE = SHARE
IST1716I PORTNAME = OSA06D LINKNUM = 0 OSA CODE LEVEL = *NA*
IST1221I WRITE DEV = 2201 STATUS = RESET STATE = N/A
IST1221I READ DEV = 2200 STATUS = RESET STATE = N/A
IST1221I DATA DEV = 2202 STATUS = RESET STATE = N/A
IST314I END
Important: Since NODATAGRAMFWD is the default for IPCONFIG, you must explicitly
code DATAGRAMFWD when using the HiperSockets Accelerator.
TCP/IP startup
When TCPIPF is started on logical partition A23, SYSLOG messages allow us to verify that
IP forwarding is enabled 1, the HiperSockets Accelerator function is enabled 2, the
HiperSockets device is initialized 3, and the OSA device is initialized 4, as shown in
Example 3-53.
Example 3-53 HiperSockets Accelerator stack SYSLOG messages
EZZ0641I IP FORWARDING NOFWDMULTIPATH SUPPORT IS ENABLED
EZZ0623I PATH MTU DISCOVERY SUPPORT IS ENABLED
EZZ0688I IQDIO ROUTING IS ENABLED
EZZ4313I INITIALIZATION COMPLETE FOR DEVICE IUTIQDF4
EZZ4313I INITIALIZATION COMPLETE FOR DEVICE OSA06D
If this function cannot be enabled, you will receive the following message:
EZZ0689I CANNOT ENABLE IQDIO ROUTING
This could be due to one of the conditions described in the following messages:
IP FORWARDING IS DISABLED
FIREWALL IS ACTIVE
PROCESSOR IS NOT HIPERSOCKETS CAPABLE
We used the NETSTAT command with the CONFIG option to verify that this function is
enabled. The IP configuration table portion of the D TCPIP,procname,N,CONFIG command
output verifies that IP forwarding and IQDIO routing are enabled, as shown in Example 3-54.
Example 3-54 TCPIPF configuration display
D TCPIP,TCPIPF,N,CONFIG
...
IP CONFIGURATION TABLE:
FORWARDING: YES TIMETOLIVE: 00064 RSMTIMEOUT: 00060
IPSECURITY: NO
ARPTIMEOUT: 01200 MAXRSMSIZE: 65535 FORMAT: LONG
IGREDIRECT: NO SYSPLXROUT: NO DOUBLENOP: NO
STOPCLAWER: NO SOURCEVIPA: NO
MULTIPATH: NO PATHMTUDSC: YES DEVRTRYDUR: 0000000090
DYNAMICXCF: NO
IQDIOROUTE: YES QDIOPRIORITY: 1
TCPSTACKSRCVIPA: NO
...
Another way to verify that the HiperSockets Accelerator is enabled is to display the iQDIO
routing table. Example 3-55 shows our iQDIO routing table, which was created by
simultaneously pinging all IP addresses that participate in HiperSockets CHPID F4.
Example 3-55 IQDIO routing table
D tcpip,tcpipf,netstat,route,IQDIO
EZD0101I NETSTAT CS V1R8 TCPIPF 723
IPV4 DESTINATIONS
DESTINATION GATEWAY INTERFACE
192.0.1.2/32 192.0.1.2 HIPERLF4
192.0.10.88/32 192.0.20.254 OSA06LNK
192.0.20.99/32 192.0.20.99 OSA06LNK
3 OF 3 RECORDS DISPLAYED
END OF THE REPORT
DESTINATION in this example indicates the final destination of the IP packet routed through
the accelerator, while GATEWAY indicates the IP address of the next hop, which is to be used
in order to reach the destination.
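For example, the first entry in Example 3-55 shows that packets for 192.0.1.2 are delivered directly over the HIPERLF4 HiperSockets link (the gateway is the destination itself), while packets for 192.0.10.88 are accelerated to the next-hop router 192.0.20.254 over the OSA06LNK interface.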
Example 3-56 is the same display 90 seconds after the last pings were issued to the
HiperSockets interfaces; all entries in our iQDIO routing table have been deleted.
Example 3-56 Display iQDIO routing table 2
D TCPIP,TCPIPD,NETSTAT,ROUTE,IQDIO
EZZ2500I NETSTAT CS V1R2 TCPIPD 843
DESTINATION GATEWAY INTERFACE
0 OF 0 RECORDS DISPLAYED
The last command we used to verify the iQDIO routing function is a display of the VTAM
TRLE for the HiperSockets device (IUTIQDF4), shown in Example 3-57.
Example 3-57 Verify HiperSockets Accelerator is enabled by displaying the TRLE
-D NET,TRL,TRLE=IUTIQDF4
IST097I DISPLAY ACCEPTED
IST075I NAME = IUTIQDF4, TYPE = TRLE
IST1954I TRL MAJOR NODE = ISTTRL
IST486I STATUS= ACTIV, DESIRED STATE= ACTIV
IST087I TYPE = LEASED , CONTROL = MPC , HPDT = YES
IST1715I MPCLEVEL = QDIO MPCUSAGE = SHARE
IST1716I PORTNAME = LINKNUM = 0 OSA CODE LEVEL = ...(
IST1577I HEADER SIZE = 4096 DATA SIZE = 16384 STORAGE = ***NA***
IST1221I WRITE DEV = E801 STATUS = ACTIVE STATE = ONLINE
IST1577I HEADER SIZE = 4092 DATA SIZE = 0 STORAGE = ***NA***
IST1221I READ DEV = E800 STATUS = ACTIVE STATE = ONLINE
IST1221I DATA DEV = E802 STATUS = ACTIVE STATE = N/A
IST1724I I/O TRACE = OFF TRACE LENGTH = *NA*
IST1717I ULPID = TCPIPF
IST1814I IQDIO ROUTING ENABLED
FTP test
Example 3-58 shows the FTP transmission between two hosts across the HiperSockets Accelerator. The sender is a Linux guest under VM (IP address 192.0.1.2) and the receiver is a host (IP address 192.0.10.88) in an outside network (Ethernet LAN).
Example 3-58 FTP transmission between two hosts across the HiperSockets Accelerator
ftp> put install.log install.log.test5
local: install.log remote: install.log.test5
227 Entering Passive Mode (192,0,20,99,121,175)
150 Ok to send data.
226 File receive OK.
55654 bytes sent in 0.0026 seconds (2.1e+04 Kbytes/s)
ftp> bin
200 Switching to Binary mode.
ftp> get .fonts.cache-2 .fonts.cache-2.test5
local: .fonts.cache-2.test5 remote: .fonts.cache-2
227 Entering Passive Mode (192,0,20,99,105,141)
150 Opening BINARY mode data connection for .fonts.cache-2 (929793 bytes).
226 File send OK.
929793 bytes received in 0.028 seconds (3.2e+04 Kbytes/s)
3.9 References
Communications Server for z/OS V1R8 TCP/IP Implementation Volume 1: Base
Functions, Connectivity, and Routing, SG24-7339
IBM System z Connectivity Handbook, SG24-5444
z/OS Communications Server, IP Configuration Guide, SC31-8775
z/OS Communications Server, IP Configuration Reference, SC31-8776
z/OS Communications Server, SNA Resource Definition Reference, SC31-8778
These publications are available on the Internet at:
http://www.ibm.com/servers/eserver/zseries/zos/bkserv
Chapter 4. z/VM support
This chapter discusses HiperSockets support provided by z/VM. For each option we include
setup and verification examples.
The emphasis is on z/VM configuration definitions.
This chapter contains the following information:
Overview of the z/VM support
A list of prerequisites needed to provide HiperSockets functionality
Configuration examples
Definitions required for z/VM HiperSockets
Definitions required for z/VM VLAN
References for further information
4.1 Overview
z/VM at the currently supported releases provides the System z HiperSockets function for
high-speed TCP/IP communication between virtual machines and logical partitions within the
same System z server. The HiperSockets function can be utilized on Integrated Facility for
Linux (IFL) processors as well as standard processors. z/VM provides a range of
HiperSockets support for use by guest operating systems and z/VM's own TCP/IP virtual
machine. HiperSockets devices are fully supported and managed by z/VM in the same
manner as real devices.
The iQDIO logic required by HiperSockets is provided by LIC at the following levels:
z9 (EC and BC)
z990 / z890
As with QDIO devices, each z/VM TCP/IP HiperSockets connection requires three I/O devices: a read control device, which must have an even device number; a write control device, which must be the read control device number plus one (and is therefore odd numbered); and a data exchange device, which must be one greater than the write control device. All three I/O devices must be in a group of three contiguous device addresses. For example, devices E800 (read), E801 (write), and E802 (data), as used later in this chapter, form a valid group.
System z HiperSockets support provides communication capability within a single z/VM
logical partition and among other logical partitions on the same System z regardless of the
operating system running in those logical partitions.
This chapter shows the tasks required to set up z/VM and Linux guest systems to use the
HiperSockets network. Implementation tasks are:
Identify the software environment to confirm HiperSockets support.
Define the HiperSockets I/O configuration. This includes:
Identifying the HiperSockets CHPID in the IOCP
Making the I/O devices available to z/VM and the Linux guest systems
Verifying the I/O configuration
Configure and verify the z/VM TCP/IP network to use HiperSockets.
Configure and verify a VLAN with the HiperSockets network.
4.2 z/VM HiperSockets support
Here we discuss z/VM HiperSockets support.
4.2.1 Implementation steps
The TCP/IP component of z/VM provides full support for the System z HiperSockets Licensed Internal Code (LIC) (see 1.3, HiperSockets mode of operation on page 5). Although different HiperSockets networks cannot be directly connected together, a TCP/IP stack that has a connection to each of them can connect them through routing.
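As a hypothetical illustration of that routing point (the 192.0.2.0 network and the 192.0.1.4 first hop are ours for this sketch only), a stack on HiperSockets CHPID F4 could reach a second HiperSockets network through a system attached to both CHPIDs by adding an indirect entry in the GATEWAY statement format shown later in 4.3.1:
192.0.2.0   255.255.255.0   192.0.1.4   HIPERLF4   8192
If dynamic routing (RIP or OSPF) is used instead, no such static entry is needed.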
Note: The z/VM guest operating system must be able to support HiperSockets in order to
communicate using HiperSockets.
Software requirements
The minimum requirement is z/VM V5R1 at the base service level. Figure 4-1 shows the
configuration we used for our z/VM setup. The z/VM host system, VMLINUX7, is z/VM
Version 5 Release 2 at Recommended Service Upgrade (RSU) level 0602, in z9 logical partition A12 with two Linux guest systems. LNXRH2 is Red Hat Enterprise Linux 4 (RHEL4) at kernel level 2.6.9. LNXSU2 is Novell SUSE Linux Enterprise Server 10 (SLES 10) at kernel
level 2.6.16.
Introduced in z/VM V5R2 is enhanced performance assist for z/VM guests. This improves
performance for guest QDIO operations. This function is first utilized in Linux kernel 2.6.16. To
properly exploit this feature, z/VM APAR VM63838 needs to be installed. This APAR is
included in V5R2 RSU 0602.
Figure 4-1 Logical partition A12 z/VM with two Linux guest systems
HCD and IOCP definitions
If you use z/OS HCD to generate your IOCP definitions, see Chapter 2, Hardware
definitions on page 21. Example 2-1 on page 37 shows the IOCP definitions we used to
support HiperSockets in z/VM (logical partition A12).
Per Example 2-1 on page 37 for TCP/IP:
On z/VM, we used real device addresses E800-E802.
For the Red Hat Linux guest, we used real devices E804-E806 connected as virtual devices 7000-7002.
For the SUSE Linux guest, we used real devices E808-E80A connected as virtual devices 7000-7002.
The CHPID F4 is defined as shared with 16 devices and the default frame size of 16 KB. This
results in a TCP/IP MTU size of 8 KB. The CHPARM parameter on the IOCP CHPID
statement determines the frame size and subsequent MTU size values.
See 2.1, System configuration considerations on page 22 and Example 2-2 on page 37 for
additional information.
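The following is purely an illustrative sketch of that relationship; it is not the book's actual Example 2-1, partition access lists and other operands are omitted, and the complete syntax is in the System z9 Stand-Alone Input/Output Configuration Program User's Guide:
CHPID PATH=(CSS(0),F4),SHARED,TYPE=IQD,CHPARM=00
CNTLUNIT CUNUMBR=E800,PATH=((CSS(0),F4)),UNIT=IQD
IODEVICE ADDRESS=(E800,016),CUNUMBR=(E800),UNIT=IQD
In this sketch, CHPARM=00 (the default) selects the 16 KB maximum frame size and therefore the 8 KB MTU; larger CHPARM values select larger frame sizes, as summarized in Table 2-1 on page 22.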
(Figure 4-1 shows the base HiperSockets scenario: HiperSockets CHPID F4, subnet 192.0.1.0/24, connects the z/VM VMLinux7 TCPIP stack in LP-A12 (devices E800-E802, 192.0.1.1), its Linux guests LNXRH2 (virtual 7000-7002, real E804-E806, 192.0.1.2) and LNXSU2 (virtual 7000-7002, real E808-E80A, 192.0.1.3), and the z/OS sysplex systems SC30 in LP-A23, SC31 in LP-A24, and SC32 in LP-A25, each using E800-E802 at 192.0.1.4, 192.0.1.5, and 192.0.1.6.)
z/VM I/O verification
To verify that the path to the HiperSockets devices is in an ONLINE status, use the CP
QUERY CHPID command. See Example 4-1.
Example 4-1 Query CHPID F4
q chpid f4
Path F4 online to devices E800 E801 E802 E803 E804 E805 E806 E807
Path F4 online to devices E808 E809 E80A E80B E80C E80D E80E E80F
Path F4 online to devices E810 E811 E812 E813 E814 E815 E816 E817
Path F4 online to devices E818 E819 E81A E81B E81C E81D E81E E81F
Display the HiperSockets devices to confirm that they are available to the z/VM system and its guests. See Example 4-2.
Example 4-2 Query devices
q e800-e80f
OSA E800 FREE , OSA E801 FREE , OSA E802 FREE , OSA E803 FREE
OSA E804 FREE , OSA E805 FREE , OSA E806 FREE , OSA E807 FREE
OSA E808 FREE , OSA E809 FREE , OSA E80A FREE , OSA E80B FREE
OSA E80C FREE , OSA E80D FREE , OSA E80E FREE , OSA E80F FREE
Example 4-3 shows the status of the specific paths for the HiperSockets devices (CHPID,
availability, and status).
Example 4-3 Query paths for I/O devices
q paths e800-e802
Device E800, Status ONLINE
CHPIDs to Device E800 (PIM) : F4
Physically Available (PAM) : +
Online (LPM) : +
Device E801, Status ONLINE
CHPIDs to Device E801 (PIM) : F4
Physically Available (PAM) : +
Online (LPM) : +
Device E802, Status ONLINE
CHPIDs to Device E802 (PIM) : F4
Physically Available (PAM) : +
Online (LPM) : +
Legend + Yes - No
4.2.2 z/VM definitions for guest systems
To permanently allocate HiperSockets devices to the guest systems, we used DEDICATE statements in the z/VM user directory USER DIRECT. This would be done for any z/VM, Linux, or z/OS guest system running within a z/VM host.
The syntax for the DEDICATE statement is:
DEDICATE <virtual_address> <real_address>
The following z/VM user directory statements were used to attach the HiperSockets I/O devices permanently to the Red Hat guest LNXRH2. We recommend that DEDICATE statements be placed just before any SPOOL, LINK, SPECIAL, and MDISK directory entries:
DEDICATE 7000 E804
DEDICATE 7001 E805
DEDICATE 7002 E806
The following were used for the SUSE guest LNXSU2:
DEDICATE 7000 E808
DEDICATE 7001 E809
DEDICATE 7002 E80A
To dynamically attach the device addresses to a running guest system, use the CP ATTACH
command from a privileged z/VM user ID. This is a temporary definition that will be lost when
the guest is logged off or the z/VM system is IPLed.
The syntax for the CP ATTACH command is:
CP ATTACH <real_address> <guest_id> <virtual_address>
We used the following z/VM commands to allocate I/O devices to our running Linux guest systems:
attach E804 lnxrh2 7000
attach E805 lnxrh2 7001
attach E806 lnxrh2 7002
attach E808 lnxsu2 7000
attach E809 lnxsu2 7001
attach E80A lnxsu2 7002
Displaying the HiperSockets devices, as in Example 4-4, provides information about how the
devices are allocated to the virtual machines for TCP/IP and the two Linux guests.
Example 4-4 Query I/O devices
q e800-e80f
OSA E800 ATTACHED TO TCPIP E800 DEVTYPE HIPER CHPID F4 IQD
OSA E801 ATTACHED TO TCPIP E801 DEVTYPE HIPER CHPID F4 IQD
OSA E802 ATTACHED TO TCPIP E802 DEVTYPE HIPER CHPID F4 IQD
OSA E803 FREE
OSA E804 ATTACHED TO LNXRH2 7000 DEVTYPE HIPER CHPID F4 IQD
OSA E805 ATTACHED TO LNXRH2 7001 DEVTYPE HIPER CHPID F4 IQD
OSA E806 ATTACHED TO LNXRH2 7002 DEVTYPE HIPER CHPID F4 IQD
OSA E807 FREE
OSA E808 ATTACHED TO LNXSU2 7000 DEVTYPE HIPER CHPID F4 IQD
OSA E809 ATTACHED TO LNXSU2 7001 DEVTYPE HIPER CHPID F4 IQD
OSA E80A ATTACHED TO LNXSU2 7002 DEVTYPE HIPER CHPID F4 IQD
OSA E80B FREE , OSA E80C FREE , OSA E80D FREE , OSA E80E FREE
OSA E80F FREE
Note that the allocated devices show up as OSA-type devices. This is normal, since the device driver is the same for QDIO and iQDIO in z/VM. However, the DEVTYPE is identified as HIPER rather than OSA, as it would be for an OSA device.
4.3 HiperSockets network definitions
This section provides the TCP/IP definitions needed to support the HiperSockets environment
on the z/VM system, as shown in Figure 4-1 on page 83. By default, the TCP/IP service
machine is user ID TCPIP and the TCP/IP configuration files reside on TCPMAINT's 198 disk.
4.3.1 TCP/IP definitions for z/VM host system
The z/VM TCP/IP profile, PROFILE TCPIP, definitions for our configuration are shown in
Example 4-5. The device type on the DEVICE statement for HiperSockets is HIPERS, as
highlighted in the example. The DEVICE and LINK names, HIPERDF4 and HIPERLF4, are
arbitrary. We chose a naming convention to identify the use of HiperSockets along with the
applicable CHPID.
Example 4-5 z/VM TCP/IP definitions
...
DEVICE HIPERDF4 HIPERS E800
LINK HIPERLF4 QDIOIP HIPERDF4
...
HOME
192.0.1.1 HIPERLF4
...
GATEWAY
; Network Subnet First Link MTU
; Address Mask Hop Name Size
; ------------- --------------- ---------------- --------- -----
192.0.1.0 255.255.255.0 = HIPERLF4 8192
; ------------- --------------- ---------------- --------- ----
...
START HIPERDF4
At the time a TCP/IP HiperSockets device is started, the IP addresses in the HOME list that
correspond to HiperSockets and VIPA interfaces are loaded into the HiperSockets IP address
lookup table. Any change in the HOME list for HiperSockets and VIPA interface IP addresses
will be reflected in the IP address lookup table.
In the DEVICE statement, only the first of three I/O device addresses is defined because our
addresses are contiguous. z/VM will determine the role of each device. These addresses are
attached to the TCP/IP virtual machine via the SYSTEM DTCPARMS file.
The following SYSTEM DTCPARMS entry attaches the HiperSockets I/O devices
permanently to the z/VM logical partition:
:nick.TCPIP :type.server
:class.stack
:attach. E800-E802
If other network I/O device addresses are defined to the TCP/IP stack, then add the
HiperSockets I/O devices, for example:
:nick.TCPIP :type.server
:class.stack
:attach. C200-C203, E800-E802
Updating the PROFILE TCPIP and SYSTEM DTCPARMS files will permanently define the
device to TCPIP. To dynamically bring the new device online without having to take TCPIP
down, do the following:
1. From a privileged z/VM user ID, such as MAINT, attach the HiperSockets devices to
TCPIP, as shown in this example:
CP ATTACH E800-E802 TCPIP
2. From a user ID authorized to use the TCPIP OBEYFILE command, such as MAINT or
TCPMAINT, issue the command for TCPIP to re-read the PROFILE TCPIP file. You will
need to have access to that file before issuing the command, for example:
CP LINK TCPMAINT 198 198 rr
CP ACC 198 F
OBEYFILE PROFILE TCPIP F
If there is a link password for accessing TCPMAINT's 198 disk, then TCPIP will have to be
authorized to link to the disk or a password needs to be included in the command, for
example:
OBEYFILE PROFILE TCPIP F (read_password
3. If you wish to stop the TCPIP service, log on to the TCPIP user ID and issue:
#CP EXTERNAL
4. To restart TCPIP, issue TCPRUN or IPL CMS.
At device start, the z/VM TCP/IP stack is registered in the HiperSockets IP address lookup
table with its IP address (192.0.1.1) and becomes part of the HiperSockets network on
CHPID F4.
Since we are using static routes in our environment, we have to define a GATEWAY
statement. Also note we have specified a maximum packet size (MTU size) of 8 KB to
accommodate the maximum frame size of the IQD CHPID.
Note: If you are using dynamic routing (RIP or OSPF), omit this GATEWAY statement.
Note: z/VM can be connected to a HiperSockets network that a z/OS sysplex is using for
DynamicXCF. z/VM cannot exploit the DynamicXCF protocol, but will establish a
communication connection using the HiperSockets LIC.
In order to establish a connection, we needed to add a port name to the TCPIP PROFILE
file device statement, as in this example:
DEVICE HIPERDF5 HIPERS E900 PORTNAME IUTIQDF5
The value for the port name is the device identifier from the z/OS sysplex network
configuration.
TCP/IP verification
Example 4-6 shows information pertinent to the TCP/IP HiperSockets device, which is
displayed by using the NETSTAT DEVLINKS command. It ensures that the proper I/O device
and Maximum Frame Size (MFS) were defined to TCP/IP, and verifies that the device, HIPERDF4, and link, HIPERLF4, are in a READY status.
Example 4-6 Display TCP/IP DEVICE and LINK
NETSTAT DEVL
Device HIPERDF4 Type: HIPERS Status: Ready
Queue size: 0 CPU: 0 Address: E800 Port name: UNASSIGNED
IPv4 Router Type: NonRouter Arp Query Support: No
Link HIPERLF4 Type: QDIOIP Net number: 0
BytesIn: 4008 BytesOut: 1328
Forwarding: Disabled MTU: 8192
Maximum Frame Size : 16384
Broadcast Capability: Yes
Multicast Capability: Yes
Group Members
----- -------
224.0.0.1 1
The NETSTAT GATEWAY command, as in Example 4-7, indicates that the correct routing
table entry has been added for HiperSockets.
Example 4-7 Display routing table
NETSTAT GATE
Subnet Address Subnet Mask FirstHop Flgs PktSz Metric Link
-------------- ----------- -------- ---- ----- ------ ------
192.0.1.0 255.255.255.0 <direct> US 8192 <none> HIPERLF4
Example 4-8 shows similar information about the HiperSockets connection using the
IFCONFIG command.
Example 4-8 Display configuration
ifconfig hiperlf4
HIPERLF4 inet addr: 192.0.1.1 mask: 255.255.255.0
UP BROADCAST MULTICAST MTU: 8192
vdev: E800 rdev: E800 type: HIPERS
vlan: 1 cpu: 0 forwarding: DISABLED
RX bytes: 1040 TX bytes: 2888
For IP connectivity verification, we used ping commands. We were able to successfully ping
all other TCP/IP stacks that participate in HiperSockets CHPID F4 (192.0.1.2, 192.0.1.3,
192.0.1.4, 192.0.1.5 and 192.0.1.6).
Note: To use these TCP/IP commands, the z/VM user ID has to be linked to TCPMAINT's
592 disk.
4.3.2 z/VM guest system network definitions
After the HiperSockets devices are DEDICATEd or ATTACHed to a guest z/VM system, the
previously detailed TCP/IP definitions are performed on the guest z/VMs TCP/IP service
machine.
See Chapter 5, Linux support on page 93 for information about setting up the Linux guest
systems.
4.4 VLAN
z/VM can use VLAN within a HiperSockets network to logically subdivide the network without needing additional hardware resources or additional HiperSockets networks.
4.4.1 VLAN definitions
The VLAN is defined and started like any other network with the addition of the VLAN
parameter to the TCPIP PROFILE file LINK statement. Example 4-9 is exactly like Example 4-5 on page 86, with VLAN 1 (and NOFWD) added to the LINK statement.
Example 4-9 z/VM TCPIP definition for VLAN
...
DEVICE HIPERDF4 HIPERS E800
LINK HIPERLF4 QDIOIP HIPERDF4 NOFWD VLAN 1
...
HOME
192.0.1.1 HIPERLF4
...
GATEWAY
; Network Subnet First Link MTU
; Address Mask Hop Name Size
; ------------- --------------- ---------------- --------- -----
192.0.1.0 255.255.255.0 = HIPERLF4 8192
; ------------- --------------- ---------------- --------- ----
START HIPERDF4
...
VLAN verification
To verify the connection to the VLAN, use the NETSTAT DEVLINKS command. In
Example 4-10, the device HIPERDF4 is connected to VLAN 1. Only systems with devices
connected to VLAN 1 will be able to communicate with one another.
Example 4-10 Display TCP/IP VLAN connection
netstat devl
VM TCP/IP Netstat Level 520
Device HIPERDF4 Type: HIPERS Status: Ready
Queue size: 0 CPU: 0 Address: E800 Port name: UNASSIGNED
IPv4 Router Type: NonRouter Arp Query Support: No
Link HIPERLF4 Type: QDIOIP Net number: 0
BytesIn: 0 BytesOut: 308
Forwarding: Disabled MTU: 8192
Maximum Frame Size : 16384
VLAN ID: 1
Broadcast Capability: Yes
Multicast Capability: Yes
Group Members
----- -------
224.0.0.1 1
4.5 Commands
These commands can be used to create and verify virtual LAN, NIC, and COUPLE definitions for the HiperSockets emulation called a z/VM Guest LAN.
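As a hedged illustration of how these commands fit together (the LAN name HSLAN and virtual device number 7000 are hypothetical, and operands are abbreviated; see the CP Command and Utility Reference for the full syntax and required privilege classes):
define lan hslan type hipers
define nic 7000 type hipers
couple 7000 to <ownerid> hslan
query lan
query nic
Here <ownerid> is the user ID that owns the LAN (or SYSTEM for a system-owned LAN).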
Table 4-1 z/VM Guest LAN commands
Command Description
ATTACH <rdev> <vmid> <vdev> Attaches real device address to the VM ID at its virtual
address.
DEFINE LAN Creates a VM LAN segment managed by the CP system.
DEFINE NIC Installs a virtual network adapter (a Network Interface
Card) in the invoker's virtual machine configuration.
DETACH LAN Eliminates a VM LAN segment from the CP system.
DETACH NIC Removes a virtual network adapter (a Network Interface
Card) in the invoker's virtual machine configuration.
QUERY <dev_adrs> Displays information about the device.
QUERY CHPID <xx> Displays information about CHPID xx.
QUERY LAN Displays information about the designated VM LAN (or
every LAN in the System LAN Table).
QUERY NIC Displays information about a virtual Network Interface
Card (NIC) in your virtual machine configuration.
QUERY OSA Displays information about the OSA defined to z/VM.
QUERY VMLAN Determines the current status of VM LAN activity on the
CP host.
SET LAN Modifies the attributes of a VM LAN.
UNCOUPLE Disconnects a virtual CTCA from a coupled CTCA device, or disconnects a virtual network adapter from a VM LAN segment.
4.6 References
z/VM V5R2.0 CP Command and Utility Reference, SC24-6081
z/VM V5R2.0 CP Planning and Administration, SC24-6083
z/VM V5R2.0 TCP/IP Planning and Customization, SC24-6125
z/VM V5R2.0 TCP/IP User's Guide, SC24-6127
z/VM V5R2.0 TCP/IP Messages and Codes, GC24-6124
z/VM V5R2.0 TCP/IP Diagnosis Guide, GC24-6123
These and other z/VM publications are available in PDF format on the Internet at:
http://www.vm.ibm.com/library/index.html
Chapter 5. Linux support
In this chapter, we describe how to set up HiperSockets for Linux as well as the support that is
available with the QETH device driver.
This chapter includes the following:
An overview of Linux support
A configuration example
Required definitions for Linux
An example using VLAN with HiperSockets
An example using the Linux HiperSockets Network Concentrator
References
5.1 Overview
The TCP/IP component of Linux provides support for the System z HiperSockets (see 1.3,
HiperSockets mode of operation on page 5). Although different HiperSockets networks cannot be directly connected together, a TCP/IP stack that has a connection to each of them can connect them through routing. The iQDIO logic required by HiperSockets is provided by the System z LIC at the following levels:
z9 EC and z9 BC
z990 and z890
Linux uses the QETH device driver to support HiperSockets on System z and is delivered as
part of the current Red Hat and SUSE Linux distributions.
The QETH device driver also supports OSA-Express ports operating in QDIO mode. The
HiperSockets (iQDIO) support is an extension to the OSA-Express QDIO support.
Each Linux HiperSockets connection requires three I/O devices. One device is used for read
control, one device is used for write control, and one device is used for data exchange. The
device number for the control write device must be the device number for the read control
device plus 1. The device number for the data exchange device can be any number.
The Linux support for HiperSockets on System z is the same for Linux running in a
stand-alone logical partition and running as a guest under z/VM. With this in mind, we chose
to work with Linux guests in our test configurations. This chapter provides information to
utilize HiperSockets with Linux systems. To enable HiperSockets support, these tasks need to
be done:
Confirm the proper IOCP definitions used for Linux environments.
Define the virtual environment for Linux guest systems to use HiperSockets.
Configure the HiperSockets hardware and network when installing Linux systems.
On existing Linux systems, configure the HiperSockets hardware and network, which includes:
Verifying the I/O environment and the network.
If a VLAN environment is needed, configure and verify a VLAN with the HiperSockets
network.
If a network concentrator is needed, configure a Linux system using the Linux
HiperSockets Network concentrator.
IP address takeover
Linux supports IP address takeover. Only IP addresses from another Linux system on the
same System z can be taken over. HiperSockets checks the type of operating system before
a change in the IP address lookup table is made. IP address takeover must be initiated on the
system that takes over an IP address (add). IP address takeover must be enabled on both
systems when loading the driver (enable_takeover). The devices are disabled by default. To
enable the device for takeover, write to this file:
/sys/devices/qeth/<device_number>/ipa_takeover/enable
For example:
echo 1 > /sys/devices/qeth/0.0.7000/ipa_takeover/enable
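To illustrate the add step mentioned above (our sketch; the exact attribute name and address format should be checked in Linux on System z, Device Drivers, Features, and Commands), the address being taken over is written to the add4 attribute of the device that takes it over, for example:
echo 192.0.1.3/24 > /sys/devices/qeth/0.0.7000/ipa_takeover/add4
The address 192.0.1.3/24 is simply one from our test network, used here as an example.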
5.1.1 Software requirements
The System z QETH Linux network device driver is required. IBM makes this driver available
to the developer community, which integrates it into their Linux distribution. Contact your Linux
distributor for availability details of this device driver.
5.1.2 Linux configuration example
The configuration we used is shown in Figure 5-1.
Figure 5-1 Linux test configuration
For the HiperSockets network, we used CHPID F4 devices connected as I/O addresses
7000-7002 for both Linux environments.
5.2 Setup for Linux
This section describes how to define the IQD CHPIDs to your system configuration and add
the required Linux definitions for stand-alone logical partition and guest environments. Our
Linux guest environment was built on Red Hat Enterprise Linux 4 (RHEL4), 2.6.9 kernel,
LNXRH2, and the Novell SUSE Linux Enterprise Server 10 (SLES 10), 2.6.16 kernel,
LNXSU2.
When Linux is running as a stand-alone logical partition, define the HiperSockets device
addresses in the IOCP. Example 2-1 on page 37 shows the IOCP definitions we created for
this environment. If you use z/OS HCD to generate your IOCP definitions, then refer to
Chapter 2, Hardware definitions on page 21.
For a stand-alone Linux logical partition, the device addresses will be the same as defined in
the IOCP. This is not necessary for Linux guests running under z/VM. As shown in Figure 5-1,
we chose to use common virtual addresses for the Linux guests.
(Figure 5-1 shows the same base HiperSockets scenario as Figure 4-1: HiperSockets CHPID F4, subnet 192.0.1.0/24, connecting the z/VM VMLinux7 TCPIP stack in LP-A12 (devices E800-E802, 192.0.1.1), the Linux guests LNXRH2 (virtual 7000-7002, real E804-E806, 192.0.1.2) and LNXSU2 (virtual 7000-7002, real E808-E80A, 192.0.1.3), and the z/OS systems SC30, SC31, and SC32 in LPs A23, A24, and A25, each using E800-E802 at 192.0.1.4, 192.0.1.5, and 192.0.1.6.)
5.2.1 z/VM definitions when running Linux guest systems
To permanently allocate HiperSockets devices to the guest systems, we used DEDICATE statements in the z/VM user directory USER DIRECT.
The syntax for the DEDICATE statement is:
DEDICATE <virtual_address> <real_address>
The following z/VM user directory statements were used to attach the HiperSockets I/O
devices permanently to the Red Hat guest LNXRH2. We recommend that DEDICATE
statements are placed just before any SPOOL, LINK, SPECIAL, and MDISK directory entries:
DEDICATE 7000 E804
DEDICATE 7001 E805
DEDICATE 7002 E806
The following were used for the SUSE guest LNXSU2:
DEDICATE 7000 E808
DEDICATE 7001 E809
DEDICATE 7002 E80A
To dynamically attach the device addresses to a running guest system, use the CP ATTACH
command from a privileged z/VM user ID. This is a temporary definition that will be lost when
the guest is logged off or the z/VM system is IPLed.
The syntax for the CP ATTACH command is:
CP ATTACH <real_address> <guest_id> <virtual_address>
We used the following z/VM commands to allocate I/O devices to our running Linux guest systems:
attach E804 lnxrh2 7000
attach E805 lnxrh2 7001
attach E806 lnxrh2 7002
attach E808 lnxsu2 7000
attach E809 lnxsu2 7001
attach E80A lnxsu2 7002
5.2.2 Linux I/O definitions - initial install of Linux system
If your server environment will only be using the HiperSockets network, you will define the
HiperSockets devices as your primary network when installing your Linux system. The
installation process for both the SUSE and Red Hat distributions is very much the same as
you would follow using OSA or other network devices. The primary difference would be
identifying the type of network device as HiperSockets.
For the SUSE installation, if you are using a minimal parm configuration file (parmfile) when
installing the Linux system, at the network device prompt, select HiperSockets. Example 5-1
shows the prompt from a SUSE install, selecting option 2.
Example 5-1 SUSE installation prompt
Please select the type of your network device.
1) OSA-2 or OSA-Express
2) HiperSockets
--------------------
3) Channel To Channel (CTC)
4) ESCON
5) Inter-User Communication Vehicle (IUCV)
> 2
For the Red Hat installation, using a minimal parm configuration file (parmfile), select the qeth
driver and at the bus ID and Device Number prompt, enter the HiperSockets devices, such as:
0.0.7000,0.0.7001,0.0.7002
Both SUSE SLES 10 and Red Hat EL4 allow the use of installation parameter files (parmfile) to identify installation resources rather than having to input these parameters during the
installation. The format of the parmfile would be the same as in an installation using a
different type of network device, identifying HiperSockets as the installation device.
For a SUSE parmfile, include the line shown in Example 5-2.
Example 5-2 SUSE PARMFILE
....
InstNetDev=hsi OsaInterface=eth OsaMedium=qdio
....
For a Red Hat parmfile, point to the configuration file to be used and include the lines in
Example 5-3 and Example 5-4.
Example 5-3 Red Hat PARMFILE
...
CMSCONFFILE=rhel4.conf
...
Example 5-4 RHEL4 CONF
...
DEVICE="hsi0"
NETTYPE="qeth"
...
5.2.3 Linux I/O definitions - adding to an existing Linux system
The HiperSockets definitions for Linux in our environment were built on the 2.6 kernel. We
used a Red Hat 2.6.9-42.EL and a SUSE 2.6.16.21-0.8-default installation with identical
virtual addresses for the HiperSockets device addresses. This section shows the Linux setup
from the perspective of the sysfs file system. As such, the definitions are temporary and will be lost on reboot. 5.2.4, Permanent Linux definitions on page 99, explains how to make the definitions permanent.
Note: In order to use the HiperSockets network during the initial installation, the installation
code must be accessible from the HiperSockets network.
Linux I/O verification
Ensure that your Linux system has the HiperSockets devices available. From the Linux guest,
issue the lscss command to display network devices, as shown in Example 5-5.
Example 5-5 lscss command
lscss | grep 1732
0.0.7000 0.0.0000 1732/05 1731/05 80 80 FF F4000000 00000000
0.0.7001 0.0.0001 1732/05 1731/05 80 80 FF F4000000 00000000
0.0.7002 0.0.0002 1732/05 1731/05 80 80 FF F4000000 00000000
This shows that the Linux system knows of the devices, but the absence of the word Yes
between the 1731/05 and the 80 indicates that the devices are not configured or online. The
first two characters of the second-to-last column show the CHPID, F4 in the example.
Hardware definition
The device driver configures itself and the devices it controls when values are written to its
files in the sysfs file system. It creates these files when it is loaded. For example, to activate our HiperSockets device, we used the following command:
echo 0.0.7000,0.0.7001,0.0.7002 > /sys/bus/ccwgroup/drivers/qeth/group
This defines the device as part of the qeth group. The three devices are specified in device
bus-ID form with 0.0 preceding each device address, that is, the device number. Address
7000 is the read control I/O device, address 7001 is the write control I/O device, and address
7002 is the data exchange device. The driver handles the distribution for read and write buffer
space dynamically. The qeth device driver uses the device bus-ID of the read subchannel to
create a directory for the device; for example, this directory should now exist:
/sys/devices/qeth/0.0.7000
This directory contains one file for each of the attributes that control the device. The driver
automatically senses the device as a HiperSockets device and sets all device attributes to
their default values. Writing the new value to the appropriate file changes the attribute. A
HiperSockets device usually does not need any changes. Also, the driver creates additional
directories that are symbolic links to the device directory, for example:
/sys/bus/ccwgroup/drivers/qeth/0.0.7000
/sys/bus/ccwgroup/devices/0.0.7000
Linux on System z, Device Drivers, Features, and Commands, December, 2006, SC33-8281
describes the HiperSockets device attributes and the qeth device driver.
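To see which attribute files exist for a device, list its sysfs directory; the attributes used elsewhere in this chapter, such as if_name, online, and route4, appear there as individual files (the full set varies with the driver level):
ls /sys/devices/qeth/0.0.7000
cat /sys/devices/qeth/0.0.7000/online
Reading the online file returns 0 or 1, indicating whether the device is currently online.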
For the interface, Linux assigns an MTU size according to the IQD CHPID definition. (The
values are shown in Table 2-1 on page 22.) According to the CHPID F4 definition, the MTU
size is 8 KB. HiperSockets TCP/IP devices will have an interface name starting with hsi
followed by the interface number.
To bring the HiperSockets device online, write a 1 to the online file, as in this example
command:
echo 1 > /sys/devices/qeth/0.0.7000/online
The device driver associates the device with an interface name when it brings the device
online. The interface name is found in the if_name file for the device, for example:
cat /sys/devices/qeth/0.0.7000/if_name
hsi0
At device start, the LNXRH2 and LNXSU2 servers are registered in the HiperSockets IP
address lookup table with their IP addresses 192.0.1.2 and 192.0.1.3 and become part of
HiperSockets CHPID F4. Linux creates a network route for HiperSockets on CHPID F4.
Network definition
Use ifconfig to define the network interface, as shown in this example:
ifconfig hsi0 192.0.1.2 netmask 255.255.255.0 up
Verification of the setup
This section shows how the configuration can be verified regardless of the way it has been
defined.
Example 5-6 shows the ifconfig command we issued to verify our setup on the SUSE guest.
Example 5-6 Verifying Linux HiperSockets setup
ifconfig
hsi0 Link encap:Ethernet HWaddr 00:00:00:00:00:00
inet addr:192.0.1.3 Bcast:192.0.1.255 Mask:255.255.255.0
inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
UP BROADCAST RUNNING NOARP MULTICAST MTU:8192 Metric:1
RX packets:17 errors:0 dropped:0 overruns:0 frame:0
TX packets:42 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1628 (1.5 Kb) TX bytes:3204 (3.1 Kb)
The dmesg command (shown in Example 5-7) is a useful method for checking the initialization
of any interface.
Example 5-7 HiperSockets initialization
dmesg
qeth: Device 0.0.7000/0.0.7001/0.0.7002 is a HiperSockets card (level:()
with link type HiperSockets.
hsi0: no IPv6 routers present
5.2.4 Permanent Linux definitions
This section provides the Linux information needed to permanently define the configuration shown in Figure 5-1 on page 95. Although the definition can be done using the Linux distribution setup
software, for example, SUSE YAST, we show in this section how it is done manually.
Hardware definition for SUSE
The /etc/sysconfig/hardware directory contains the hardware configuration files. These files
are named using the driver name and device address. For example, the file name for our
configuration would be hwcfg-qeth-bus-ccw-0.0.7000 The contents of this file is shown in
Figure 5-2 on page 100. This file may be built from a copy of an existing OSA definition file or
the sample file /etc/sysconfig/hardware/skel/hwcfg-qeth. In either case, the file name must
named as we have shown with your device bus-ID value.
Remember: Linux is case sensitive. When using hardware addresses with hex alphabetic
characters in commands and files, keep the case the same.
To activate the new file after it has been properly customized, use the hwup command. If the
file is for an existing adapter, then just the command is needed. Otherwise, use the command
with a shortened configuration file name as an argument, for example:
hwup qeth-bus-ccw-0.0.7000
hwup: module 'qeth' already present in kernel
Configuring group 0.0.7000
This will permanently create the files used and described previously in Hardware definition
on page 98.
STARTMODE="auto"
MODULE="qeth"
MODULE_OPTIONS=""
MODULE_UNLOAD="yes"
SCRIPTUP="hwup-ccw"
SCRIPTUP_ccw="hwup-ccw"
SCRIPTUP_ccwgroup="hwup-qeth"
SCRIPTDOWN="hwdown-ccw"
CCW_CHAN_IDS="0.0.7000 0.0.7001 0.0.7002"
CCW_CHAN_NUM="3"
CCW_CHAN_MODE="any"
QETH_LAYER2_SUPPORT="0"
Figure 5-2 Example hwcfg-qeth-bus-ccw-0.0.7000 file
Network definition for SUSE
In a similar manner, the /etc/sysconfig/network directory contains the network configuration
files. Also, the files are named with the driver name and device address. For example, the file
name for our configuration would be ifcfg-qeth-bus-ccw-0.0.7000. The contents of this file are shown in Figure 5-3. This file may be built from a copy of an existing OSA definition file or from the sample file /etc/sysconfig/network/ifcfg.template.
BOOTPROTO="static"
UNIQUE=""
STARTMODE="onboot"
IPADDR="192.0.1.3"
NETMASK="255.255.255.0"
NETWORK="192.0.1.0"
BROADCAST="192.0.1.255"
Figure 5-3 Example ifcfg-qeth-bus-ccw-0.0.7000
To activate the new file after it has been properly customized, use the ifup command with a
shortened configuration file name as an argument, for example:
ifup qeth-bus-ccw-0.0.7000
hsi0
hsi0 configuration: qeth-bus-ccw-0.0.7000
This will permanently define the network interface configuration set up by the command in
Network definition on page 99.
Hardware/network definition for Red Hat
The Red Hat hardware definition is the same as described previously in Hardware definition
on page 98. In order to retain the files created, additional steps are necessary.
Add an alias for the HiperSockets device in /etc/modprobe.conf. For our configuration, this line would be added:
alias hsi0 qeth
The /etc/sysconfig/network-scripts directory contains the network configuration files. These
files contain hardware information as well. These files are named with the interface type and
number. For example, our file name is ifcfg-hsi0. The file may be built from a copy of an existing configuration file.
Figure 5-4 shows the example ifcfg-hsi0 file.
# IBM QETH
DEVICE=hsi0
BOOTPROTO=static
# BROADCAST=192.0.1.255 This is being deprecated.
IPADDR=192.0.1.2
# HWADDR=00:00:00:00:00:00 Causes long boot wait followed by errors.
NETMASK=255.255.255.0
NETTYPE=qeth
# NETWORK=192.0.1.0 This is being deprecated.
ONBOOT=yes
SUBCHANNELS=0.0.7000,0.0.7001,0.0.7002
TYPE=Ethernet
ARP=no
Figure 5-4 Example ifcfg-hsi0 file
To activate the interface, issue the ifup command, for example:
ifup hsi0
5.3 VLAN
This section demonstrates that Linux on System z can use VLAN within a HiperSockets
network to logically subdivide a network. The HiperSockets network is set up as described in
5.2, Setup for Linux on page 95. Recall that the device addresses being assigned to the
various systems are on the same CHPID and therefore on the same HiperSockets network.
We illustrate the commands needed to set up a VLAN for the environment shown in
Figure 5-5 for the Red Hat and SUSE guest servers.
Figure 5-5 VLAN test configuration
The only difference between the two Linux guest systems is specifying the VLAN number on
the vconfig and ifconfig commands. The vconfig command defines VLAN 1 on the Red
Hat server and VLAN 3 on the SUSE server to the HiperSockets network hsi0.
The important thing to note in the examples is that the HiperSockets network is brought up before the VLAN is defined and started, that is, before the network is subdivided. The other commands show the process of defining and starting the VLAN. We will start with the LNXRH2 system.
VLAN 1 on LNXRH2
This ifconfig command defines and starts the HiperSockets network. It could have been
defined as shown in the preceding sections of this chapter:
ifconfig hsi0 192.0.50.1 netmask 255.255.255.0 up
(Figure 5-5 shows the VLAN scenario on HiperSockets CHPID F4: VLAN 1 carries subnet 192.0.1.0/24 with LNXRH2 (virtual 7000-7002, real E804-E806, 192.0.1.2) and SC31 in LP-A24 (E800-E802, 192.0.1.5); VLAN 3 carries subnet 192.0.3.0/24 with LNXSU2 (virtual 7000-7002, real E808-E80A, 192.0.3.3), SC30 in LP-A23 (E800-E802, 192.0.3.4), and SC32 in LP-A25 (E800-E802, 192.0.3.6); the z/VM TCPIP stack in LP-A12 remains at 192.0.1.1 on E800-E802.)
However, the IP address does not relate to any real device, but to the HiperSockets network.
Its purpose is to anchor the VLAN subnet to the HiperSockets network. The ifconfig
command shows that the HiperSockets network definition is up:
ifconfig hsi0
hsi0 Link encap:Ethernet HWaddr 00:00:00:00:00:00
inet addr:192.0.50.1 Bcast:192.0.50.255 Mask:255.255.255.0
inet6 addr: fe80::ff:fe00:0/64 Scope:Link
UP BROADCAST RUNNING NOARP MULTICAST MTU:8192 Metric:1
RX packets:3 errors:0 dropped:0 overruns:0 frame:0
TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:840 (840.0 b) TX bytes:224 (224.0 b)
Nothing new has been introduced up to this point. The following example uses the cat
command to display what the Linux VLAN configuration file looks like when no VLAN is in
use:
cat /proc/net/vlan/config
VLAN Dev name | VLAN ID
Name-Type: VLAN_NAME_TYPE_RAW_PLUS_VID_NO_PAD
Now, we use the vconfig command to define the VLAN and subdivide the HiperSockets
network using VLAN:
vconfig add hsi0 1
Added VLAN with VID == 1 to IF -:hsi0:-
WARNING: VLAN 1 does not work with many switches,
consider another number if you have problems.
The VLAN number will be common to all systems participating in the subnetwork. Once
again, we look at the Linux VLAN configuration file and see that VLAN 1 is defined:
cat /proc/net/vlan/config
VLAN Dev name | VLAN ID
Name-Type: VLAN_NAME_TYPE_RAW_PLUS_VID_NO_PAD
hsi0.1 | 1 | hsi0
Note that we now have VLAN 1, hsi0.1, associated with the HiperSockets hsi0. When we look
at hsi0, we will see no change from when we saw it earlier:
ifconfig hsi0
hsi0 Link encap:Ethernet HWaddr 00:00:00:00:00:00
inet addr:192.0.50.1 Bcast:192.0.50.255 Mask:255.255.255.0
inet6 addr: fe80::ff:fe00:0/64 Scope:Link
UP BROADCAST RUNNING NOARP MULTICAST MTU:8192 Metric:1
RX packets:3 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:840 (840.0 b) TX bytes:336 (336.0 b)
However, since the Linux configuration file showed hsi0.1, the ifconfig command will now
show it:
ifconfig hsi0.1
hsi0.1 Link encap:Ethernet HWaddr 00:00:00:00:00:00
BROADCAST NOARP MULTICAST MTU:8192 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
Note the missing second and third lines, and the fields missing from the fourth line, compared with the hsi0 information. This is because hsi0.1 is only defined and still needs to be started:
ifconfig hsi0.1 192.0.1.2 netmask 255.255.255.0 up
Now we see that the VLAN 1 subnet is up and running:
ifconfig hsi0.1
hsi0.1 Link encap:Ethernet HWaddr 00:00:00:00:00:00
inet addr:192.0.1.2 Bcast:192.0.1.255 Mask:255.255.255.0
inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
UP BROADCAST RUNNING NOARP MULTICAST MTU:8192 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:2 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:112 (112.0 b)
VLAN 3 on LNXSU2
We will further subdivide the HiperSockets network on the LNXSU2 system by setting up
another VLAN.
First, we bring up the HiperSockets network:
ifconfig hsi0 192.0.9.1 netmask 255.255.255.0 up
We verify that the HiperSockets network is up:
ifconfig hsi0
hsi0 Link encap:Ethernet HWaddr 00:00:00:00:00:00
inet addr:192.0.9.1 Bcast:192.0.9.255 Mask:255.255.255.0
inet6 addr: fe80:200:ff:fe00:0/64 Scope:Link
UP BROADCAST RUNNING NOARP MULTICAST MTU:8192 Metric:1
RX packets:15 errors:0 dropped:0 overruns:0 frame:0
TX packets:33 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1268 (1.2 Kb) TX bytes:2396 (2.3 Kb)
The Linux VLAN configuration file shows no VLAN defined:
cat /proc/net/vlan/config
VLAN Dev name | VLAN ID
Name-Type: VLAN_NAME_TYPE_RAW_PLUS_VID_NO_PAD
Up to this point, we see no difference between the LNXRH2 and LNXSU2 Linux systems
except for their unique IP addresses. Now we will subdivide the network with another VLAN:
vconfig add hsi0 3
Added VLAN with VID == 3 to IF -:hsi0:-
WARNING: VLAN 3 does not work with many switches,
consider another number if you have problems.
The Linux VLAN configuration file shows that VLAN 3 has been defined to hsi0 as hsi0.3:
cat /proc/net/vlan/config
VLAN Dev name | VLAN ID
Name-Type: VLAN_NAME_TYPE_RAW_PLUS_VID_NO_PAD
hsi0.3 | 3 | hsi0
Looking at the HiperSockets hsi0 and its VLAN hsi0.3, note that the VLAN has been defined
and needs to be started:
ifconfig hsi0
hsi0 Link encap:Ethernet HWaddr 00:00:00:00:00:00
inet addr:192.0.9.1 Bcast:192.0.9.255 Mask:255.255.255.0
inet6 addr: fe80:200:ff:fe00:0/64 Scope:Link
UP BROADCAST RUNNING NOARP MULTICAST MTU:8192 Metric:1
RX packets:15 errors:0 dropped:0 overruns:0 frame:0
TX packets:44 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1268 (1.2 Kb) TX bytes:3100 (3.0 Kb)
ifconfig hsi0.3
hsi0.3 Link encap:Ethernet HWaddr 00:00:00:00:00:00
BROADCAST NOARP MULTICAST MTU:8192 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
We start the VLAN 3 on LNXSU2:
ifconfig hsi0.3 192.0.3.3 netmask 255.255.255.0 up
Finally, we can see that VLAN 3 is up and running on LNXSU2:
ifconfig hsi0.3
hsi0.3 Link encap:Ethernet HWaddr 00:00:00:00:00:00
inet addr:192.0.3.3 Bcast:192.0.3.255 Mask:255.255.255.0
inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
UP BROADCAST RUNNING NOARP MULTICAST MTU:8192 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:2 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:112 (112.0 b)
5.4 HiperSockets Network Concentrator
HiperSockets Network Concentrator support allows a Linux system to be configured as a
connector server to transparently bridge network traffic between HiperSockets and
OSA-Express networks. The network traffic flows between the HiperSockets LAN and the
physical LAN on the same subnet without requiring intervening network routing overhead.
The Linux connector server provides inbound and outbound connectivity as though all
systems connected to the LANs are on the same physical network. The HiperSockets
Network Concentrator works on behalf of the TCP/IP stack using the next-hop-IP-address in
the QDIO header. VLANs in a switched Ethernet network are not supported. TCP/IP stacks
see no difference compared with using only HiperSockets, without any external network connection.
Although the HiperSockets Network Concentrator could be a stand-alone Linux logical
partition, there is no functional difference between Linux in a logical partition and running as a
z/VM guest Linux server. Our configuration used a Linux guest server.
Figure 5-6 shows the configuration scenario that we used. The Linux connector server is attached by an OSA port to the external network and implements the Network Concentrator functions, permitting all systems inside the HiperSockets internal LAN to communicate with the external network.
Figure 5-6 HiperSockets Network Concentrator test configuration
For the servers inside the HiperSockets internal LAN, the routing parameters are the same as if they were connected directly to the external network. The default gateway setting is the same. See Table 5-1 on page 107.
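As an illustration of that point (a sketch only; we assume 192.0.1.99, which appears as the second hop in the traceroute output later in this section, is the router on the external LAN), an internal Linux server would simply use the usual default route:
route add default gw 192.0.1.99
No HiperSockets-specific routes are needed on the internal servers; the connector makes the external router reachable as though it were on the same LAN.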
(Figure 5-6 shows the Network Concentrator scenario: HiperSockets CHPID F4, subnet 192.0.1.0/24, connects the z/VM TCPIP stack (E800-E802, 192.0.1.1), LNXRH2 (virtual 7000-7002, real E804-E806, 192.0.1.2), LNXSU2 (virtual 7000-7002, real E808-E80A, 192.0.1.3), and the z/OS systems SC30, SC31, and SC32 (each E800-E802, at 192.0.1.4, 192.0.1.5, and 192.0.1.6); the connector Linux server, LNXSU3, attaches to the external Ethernet LAN through OSA CHPID 06 devices 2200-2203 (virtual 7200-7203) at 192.0.1.8.)
Table 5-1 Details of the configuration scenario
LP name Environment System name CHPID Device address IP address
A12 z/VM VMLINUX7 F4 E900-E902 192.0.1.1
A12 Linux under z/VM LNXRH2 F4 7000-7202 192.0.1.2
A12 Linux under z/VM LNXSU2 F4 7000-7002 192.0.1.3
A12 Linux under z/VM LNXSU3 F4 7000-7002 192.0.1.7
A12 Linux under z/VM LNXSU3 06 7200-7202 192.0.1.8
A23 z/OS sysplex SC30 F4 E900-E902 192.0.1.4
A24 z/OS sysplex SC31 F4 7000-7002 192.0.1.5
A25 z/OS sysplex SC32 F4 7000-7002 192.0.1.6
On the connector Linux server, LNXSU3:
Define an OSA-Express LAN. This is similar to the information given previously in the
chapter for HiperSockets because the same qeth device driver is used.
Define a HiperSockets LAN per information given previously in this chapter.
Make the HiperSockets interface a primary connector, for example, issue the command:
echo primary_connector > /sys/bus/ccwgroup/drivers/qeth/0.0.7000/route4
When using multicasting, make the OSA-Express a multicast router, for example, issue:
echo multicast_router > /sys/bus/ccwgroup/drivers/qeth/0.0.7200/route4
Enable IP forwarding:
sysctl -w net.ipv4.ip_forward=1
Update /etc/sysctl.conf with this line to retain the setup across a reboot.
Remove the network route for the HiperSockets interface, for example:
route del -net 192.0.1.0 netmask 255.255.255.0 dev hsi0
Start the HiperSockets Network Concentrator:
start_hsnc.sh &
Update /etc/init.d/boot.local with start_hsnc.sh to retain the setup across a reboot.
Warnings and errors are written to the /var/log/messages file. No messages are written if the start is successful unless multicasting is used. Then the following messages are written to /var/log/messages:
xcec-bridge: *** started ***
xcec-bridge: rechecking interfaces
xcec-bridge: added interface eth1
xcec-bridge: added interface hsi0
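To confirm that the connector role is in effect (our addition; the exact string displayed can vary by driver level), the route4 attribute that was set earlier can be read back:
cat /sys/bus/ccwgroup/drivers/qeth/0.0.7000/route4
The output should reflect the primary connector setting written above.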
Note: When issuing the ifconfig command for each HiperSockets LAN include the mtu
parameter with the same value, as in this example:
ifconfig hsi0 192.0.1.7 netmask 255.255.255.0 mtu 1492 up
We used mtu 1492, which is the OSA-adapter ifconfig default. Also, to preserve this
setting across Linux boots, add an mtu=1492 statement to the appropriate network
definition file. See 5.2.4, Permanent Linux definitions on page 99.
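As a sketch of what that statement might look like (the MTU variable follows common sysconfig conventions; verify the exact name against your distribution's ifcfg documentation), add to the SUSE ifcfg-qeth-bus-ccw-0.0.7000 file:
MTU="1492"
and to the Red Hat ifcfg-hsi0 file:
MTU=1492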
Example 5-8 shows the results of the ifconfig command for the Network Concentrator. Note
the identical mtu values. Otherwise, nothing else is unique in this output.
Example 5-8 Network Concentrator ifconfig
eth1 Link encap:Ethernet HWaddr 00:09:6B:1A:73:A3
inet addr:192.0.1.8 Bcast:192.0.1.255 Mask:255.255.255.0
inet6 addr: fe80::9:6b00:41a:73a3/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1492 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 b) TX bytes:468 (468.0 b)
hsi0 Link encap:Ethernet HWaddr 00:00:00:00:00:00
inet addr:192.0.1.7 Bcast:192.0.1.255 Mask:255.255.255.0
inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
UP BROADCAST RUNNING NOARP MULTICAST MTU:1492 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 b) TX bytes:392 (392.0 b)
From the Red Hat system, LNXRH2, we demonstrated an FTP transfer of some large files to a system (192.0.10.88) in the external network.
The following traceroute command shows that the path was through the Network
Concentrator at IP address 192.0.1.7:
traceroute 192.0.10.88
traceroute to 192.0.10.88 (192.0.10.88), 30 hops max, 46 byte packets
1 192.0.1.7 (192.0.1.7) 0.197 ms 0.262 ms 0.065 ms
2 192.0.1.99 (192.0.1.99) 0.496 ms 0.700 ms 0.516 ms
3 192.0.10.88 (192.0.10.88) 0.615 ms 0.703 ms 0.624 ms
The ftp example is shown in Example 5-9.
Example 5-9 FTP using Network Concentrator
ftp 192.0.10.88
Connected to 192.0.10.88.
220 (vsFTPd 2.0.4)
ftp> put install.log install.log.test2
local: install.log remote: install.log.test2
227 Entering Passive Mode (192,0,10,88,166,250)
150 Ok to send data.
226 File receive OK.
55654 bytes sent in 0.004 seconds (1.4e+04 Kbytes/s)
ftp> ftp> bin
200 Switching to Binary mode.
ftp> get .fonts.cache-2 .fonts.cache-2.test2
local: .fonts.cache-2.test2 remote: .fonts.cache-2
227 Entering Passive Mode (192,0,10,88,67,86)
150 Opening BINARY mode data connection for .fonts.cache-2 (929793 bytes).
226 File send OK.
929793 bytes received in 0.045 seconds (2e+04 Kbytes/s)
5.5 Commands
Table 5-2 shows some useful commands for HiperSockets support in Linux.
Table 5-2 Useful commands
Command Description
cat /sys/bus/ccwgroup/drivers/qeth/<device_bus_id>/if_name Displays the network interface name, for example hsi0, for the device_bus_id.
echo <text> > <file> Writes text into a file.
ifconfig Displays all network interfaces defined.
ifconfig <nw_if_name> Displays the given network interface.
ifconfig <nw_if_name> down Brings down a network interface.
ifconfig <nw_if_name> up Starts a network interface.
ifconfig <nw_if_name> <ip_adrs> netmask <subnet_mask> Defines the network interface with the given IP address and subnet mask with default values.
lscss Displays the channel subsystem.
lsqeth Displays qeth information for all network interfaces.
lsqeth <nw_if_name> Displays the specific qeth information for the given network interface name.
readlink /sys/class/net/<nw_if_name>/device Displays the device_bus_id, for example 0.0.7000, for the network interface name nw_if_name.
route Displays the current network routes.
5.6 References
Device Drivers, Features, and Commands, SC33-8281. The most recent version can be
found in the What's new link at
http://www.ibm.com/developerworks/linux/linux390/index.html
Additional information is available at these sites.
http://www-03.ibm.com/systems/z/os/linux/
http://www.vm.ibm.com/linux
For details on Linux distributions and support, go to the following Web site:
http://www.ibm.com/servers/eserver/zseries/os/linux/
Related publications
The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this IBM Redbooks publication.
IBM Redbooks publications
Communications Server for z/OS V1R8 TCP/IP Implementation Volume 1: Base
Functions, Connectivity, and Routing, SG24-7339
IBM System z Connectivity Handbook, SG24-5444
OSA-Express Implementation Guide, SG24-5948
Other resources
These publications are also relevant as further information sources:
Linux on System z, Device Drivers, Features, and Commands, SC33-8281
System z9 Stand-Alone Input/Output Configuration Program User's Guide, SB10-7152
z/OS Communications Server, IP Configuration Guide, SC31-8775
z/OS Communications Server, IP Configuration Reference, SC31-8776
z/OS Communications Server: SNA Network Implementation, SC31-8777
z/OS Communications Server, SNA Resource Definition Reference, SC31-8778
z/OS Hardware Configuration Definition (HCD) Planning, GA22-7525
z/OS Hardware Configuration Definition (HCD) User's Guide, SC33-7988
z/OS Resource Measurement Facility (RMF) Performance Management Guide,
SC33-7992
z/OS Resource Measurement Facility (RMF) Report Analysis, SC33-7991
z/OS Resource Measurement Facility (RMF) User's Guide, SC33-7990
z/VM V5R2.0 CP Command and Utility Reference, SC24-6081
z/VM V5R2.0 CP Planning and Administration, SC24-6083
z/VM V5R2.0 TCP/IP Diagnosis Guide, GC24-6123
z/VM V5R2.0 TCP/IP Messages and Codes, GC24-6124
z/VM V5R2.0 TCP/IP Planning and Customization, SC24-6125
z/VM V5R2.0 TCP/IP User's Guide, SC24-6127
Referenced Web sites
These Web sites are also relevant as further information sources:
z/VM information and documentation
http://www.vm.ibm.com
z/OS Internet Library
http://www-03.ibm.com/servers/eserver/zseries/zos/bkserv/
Linux on IBM System z
http://www-03.ibm.com/systems/z/os/linux/
http://www.vm.ibm.com/linux
ibm developerWorks
http://www-128.ibm.com/developerworks
http://www-128.ibm.com/developerworks/opensource/
How to get IBM Redbooks publications
You can order hardcopy IBM Redbooks publications, as well as view, download, or search for
IBM Redbooks publications at the following Web site:
ibm.com/redbooks
You can also download additional materials (code samples or diskette/CD-ROM images) from
that site.
IBM Redbooks publications collections
IBM Redbooks publications are also available on CD-ROMs. Click the CD-ROMs button on
the IBM Redbooks publications Web site for information about all the CD-ROMs offered, as
well as updates and formats.
Copyright IBM Corp. 2002, 2006 113
Index
B
bridge network traffic 106
BSDROUTINGPARMS statement 49
C
channel path 2223, 25
Channel path ID 67, 26
Channel Subsystem ID 2527
Channel Subsystem list 2425
Channel To Channel (CTC) 96
CHPID F4 1819, 28, 31, 4142, 4546, 8385, 95, 98
HiperSockets connection 76
lookup table 18
CHPID F5 42
CHPID F7 42
CHPID type 6, 9, 2526, 2829, 42, 44
CNTLUNIT CUNUMBR 37, 41, 76
collisions 99, 103
configuration rules 22
control unit 22
address 32
definitions 30
type 30
control unit (CU) 22, 24, 30, 4041, 44, 52, 66, 75
control unit number 30
CP host 90
CS V1R8 47, 49, 57, 61
D
DEDICATE statements 96
DEVICE IUTIQDF4
MPCIPA 45, 59, 77
device name 4445, 47
Dynamic XCF 17, 49, 54, 56, 87
connection 17, 53, 57
definition 49
device 44, 49, 51, 69
DYNAMICXCF parameter 53
E
Example 84
EZZ4313I Initialization 46, 54, 68
F
FTP test 80
G
GLOBALCONFIG statement 64
IQDVLANID parameter 64
XCFGRPID parameter 65
H
Hardware Configuration Definition 21-23
Hardware definition 17, 40, 99, 101
HCD 40
HiperSockets 1-3, 21-22, 39-41, 63, 82, 94
Direct routing 19
implementation 3
limitations 3
Linux support 94-95
overview 2
z/OS support 40
z/VM support 82-83
HiperSockets Accelerator 3, 14-15, 39, 73, 75-76
HiperSockets channel 22, 44, 47
HiperSockets device 5, 7, 12, 44-46, 58, 82, 84-85, 97-98
address 95, 97
attribute 98
LINK statement 58
specific path 84
VLANID 60
HiperSockets network 82, 86-87, 89, 94-96
HiperSockets Network Concentrator 10-12, 93-94, 106
HiperSockets support 3, 9, 11, 17, 81-82
Host Page-Management Assist (HPMA) 10, 17
I
I/O definition file (IODF) 23-24, 26
I/O device 3, 18, 22, 24, 82, 84-85, 94, 96
address 4, 86
data exchange 4
definition 21, 33
limitation 22
list 33, 36
number 33
type 33
IFCONFIG command 88, 99, 102, 104
inet6 addr 99, 103
Integrated Facility for Linux (IFL) 82
internal Queued Direct Input/Output (iQDIO) 2
IOCP definition 41, 83, 95
IODEVICE address 37, 41, 76
IP address 3, 5, 7, 12, 43, 46, 50, 86, 94, 99, 107
192.0.1.4 19, 46
192.0.1.7 108
192.0.2.4 53
lookup table 3, 5, 7, 18-19, 46, 53, 86-87, 94
network interface 109
takeover 93-95
IP address takeover 94
IP forwarding 107
IP packet
forwarding processing 73
IP Subplex 64-65, 68
IPCONFIG DYNAMICXCF 49
IPCONFIG statement 15, 53, 67
DATAGRAMFWD option 76
IQDIORouting option 73
IQD CHPID 22-23, 26, 44-46, 87
definition 98
maximum frame size 22
IQDCHPID 52
IST075I Name 48, 56, 70
IST087I Type 48, 56, 70
IST097I Display 47, 56, 68
IST1715I MPCLEVEL 48, 56, 70
IST1716I PORTNAME 48, 56, 70
IST1717I ULPID 48, 56, 70
IST1724I I/O Trace 48, 56
IUTIQDIO 55
IUTSAMEH 63
J
jumbo frame option 7
L
Linux
adding HiperSockets to existing system 97
definitions 96
Initial install with HiperSockets 96
IP address takeover 94
QETH device driver 94
running as a standalone LPAR 95
VLAN 102
Linux guest
environment 95
server 106
system 82-83, 89, 94, 96, 102
Linux references 109
Linux support 93-94
Linux system 10, 58, 94, 96-97, 105
Only IP addresses 94
Local Area Network (LAN) 2-3, 5, 44, 58, 74, 90, 106
logical partition 4, 8, 11, 22, 26-27, 41-42, 44, 82, 94-95, 106
A23 43, 46, 78
A24 43, 46, 53
A25 43, 46, 53
HiperSockets connection 49, 57
IP connections 13
required Linux definitions 95
separate security zones 13
VTAM nodes 43
M
maximum frame size 2, 7, 22-23, 26, 41, 46, 87-88, 90
CHPARM value 37
Maximum Transmission Unit (MTU) 22, 26
Media Access Control (MAC) 11
MTU size 7, 16, 22, 41, 46-47, 55, 69, 83, 87, 98
Multicast 47, 55, 69, 88, 90
Multicast support 9, 12
Multiple Image facility (MIF) 22
multiple security zones 62
Multiple stack 51
multi-tiered server 4
N
Network Interface Card (NIC) 90
O
operating system 2, 5, 7, 24, 34-35, 40, 82
Configuration 35
operation mode 26
OSA-Express port 11, 73-75, 94
OSA-Express verification 77
P
Program Controlled Interrupt (PCI) 19
Q
QDIO Enhanced Buffer-State Management (QEBSM) 9
Queued Direct Input/Output (QDIO) 2, 5
R
read control 4, 18, 33, 82, 94, 98
Recommended Service Upgrade 83
Red Hat Enterprise Linux 4 95
Red Hat parmfile 97
Red Hat server 102
Redbooks Web site 112
Contact us x
routing table 50
S
Scope
Link 99, 103
Server integration 4
setup software, e.g. (SUSE) 94, 96-97
SLES 10 95
START statement 49
SUBNETMASK 47, 55, 69
subnetwork 12, 50
subplex group
IDs 13, 63
name 66
SUSE Linux Enterprise Server (SLES) 83, 95
SUSE parmfile 97
sysplex Subplex 40-41, 62, 66-67
TCP/IP configuration setup 67
VTAM configuration setup 66
System Assist Processor (SAP) 5
System z 25, 94, 102
T
TCP/IP 2-3, 5, 22, 26, 33, 40, 42, 44, 82-83, 85, 94
DEVICE statement 86
LINK statement 86
NETSTAT command 88
PROFILE 86
SYSTEM DTCPARMS 86
TCP/IP profile 49
z/OS 45
z/VM 86
TCP/IP stack 3, 5-6, 15, 22, 33, 40, 44, 88
flexible backup 7
internal and external networks 13
maximum number 3
TCPIP 47, 55, 60, 78, 106
Test Configuration 94
Transport Resource List Element (TRLE) 45
TRLE definition 54, 56, 70, 76
U
unit address 34
V
vconfig command 102
Virtual Local Area Network (VLAN) 3, 10, 58, 81-82, 88, 93-94, 102
virtual machine 82, 85
VLAN 1 58, 61, 89, 102, 104
VLAN 3 58-59, 61, 102, 104-105
VM LAN
activity 90
segment 90
VTAM TRLE
definition 54
major node 79
X
XCF group 50, 65-67, 71
EZBTCP11 71
EZBTCP22 71
XCF link 63
Z
z/OS sysplex 18, 87, 107
z/VM
CP ATTACH command 85, 96
DEDICATE statement 84, 96
Reference 91
VLAN 89
z/VM I/O verification 84
z/VM support 81
z/VM system 18, 84-86, 96
HiperSockets environment 86
SG24-6816-01 ISBN 0738486094
INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION
BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE
IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.
For more information:
ibm.com/redbooks
HiperSockets Implementation Guide
Discussing architecture, functions, and operating systems support
Planning and implementation
Setting up examples for z/OS, z/VM and Linux on System z
This IBM Redbooks publication discusses the System z HiperSockets function. It offers a broad description of the architecture, functions, and operating systems support.
This IBM Redbooks publication will help you plan and implement System z HiperSockets. It provides information about the definitions needed to configure HiperSockets for the supported operating systems.
This IBM Redbooks publication is intended for system programmers, network planners, and system engineers who will plan and install HiperSockets. A solid background in network and TCP/IP is assumed.
Back cover