University Information Technology Services
System Assurance Group
DATA CENTER STANDARDS
This document outlines Indiana University Data Center guidelines and
standards, including equipment installations, data center access, and
operational procedures.
Table of Contents
1. Requesting installation
   1.1 Space request
2. Acquisition guidelines
   2.1 Rack mounted devices
   2.2 Equipment specifications
   2.3 Power
   2.4
   2.5
3. Equipment installation
   3.1
   3.2 Cabinet design
   3.3 Cabinet/Rack
   3.4 UPS
   3.5 KVM solutions
   3.6 Hardware identification
   3.7 Disposal of refuse
   3.8
   3.9 Installation review
   3.10 Negotiations
   3.11 Replacement parts
4. Equipment removal
5. Operations procedures
   5.1 Data Center access
   5.2 Equipment registration
   5.3 Essential information
   5.4 Change Management
   5.5 Monitoring tools
   5.6 Security
   5.7 System backups
   5.8
6.
   6.2 Projects / RFPs
   6.3
   6.4 Firewall Security
   6.5
   6.6 Internal Rack Wiring
   6.7 Sub-Floor Copper/Fiber Wiring Requests
   6.8 Server Network Adapter Bridging
   6.9 Server Network Adapter Teaming/Trunk/Aggregate Links
   6.10 SAN Fiber Channel (FC) Switches
   6.11 Multicast
   6.12 Jumbo Frames
   6.13 Tagged VLANs
   6.14 IPv6
7. Exception requests
   7.2 A general guideline
   7.3 Host-based firewall rules
   7.4 Outbound traffic
   7.5 Service port "Any"
   7.6
   7.7
   7.8
   7.9
   7.10
   7.11
   7.12 Firewall security zones
   7.13
   7.14 Global address groups
All UITS staff and external departmental staff who have equipment
responsibilities in the Data Centers should accept the terms and
responsibilities outlined in this document.
1. Requesting installation
1.1 Space request: Prior to submitting a proposal for new
equipment, you'll need to begin initial discussions regarding machine
room space. Once you submit a machine room space request form, the
System Assurance Group (SAG) will schedule a meeting. The SAG
includes the following participants: the UITS Operations Manager, a
representative from UITS Facilities, a representative from Networks,
and the system administrator and/or system owner. The purpose of the
SAG meeting is to address environmental issues (e.g., equipment BTU
and power specifications), space, and floor location. The floor location
may be determined based on environmental data.
2. Acquisition Guidelines
2.1 Rack mounted devices: Ensure that you're purchasing rack-mounted
equipment -- everything needs to be either mounted in a rack or in its
own proprietary cabinet (as opposed to free-standing). The Operations
Manager must approve any exceptions.
2.2 Equipment specifications: Upon making your equipment
selections, send the vendor specification sheet to sag-l@indiana.edu.
2.3 Power: Request power for each device from the electrical
engineer. Power requests will be handled as follows:
IUPUI: Two rack-mounted cabinet distribution units (CDU) will be
installed in each standard rack. These CDUs utilize 208V IEC C13 and
C19 outlets. Your hardware will need to operate at 208V and have the
proper power cords. Installation of CDUs can take up to a month, so
please request power as early as possible. For non-standard or
proprietary racks, twist-lock receptacles shall be provided under the
floor for connection of user-supplied CDUs.
IUB Data Center: In order to maintain Uptime Institute Tier III
requirements, two rack-mounted CDUs fed from different power
sources will be installed in each standard rack.
5. Operations procedures
5.1 Data Center access: Due to the sensitive nature of the data and
computing systems maintained within its facilities, security and access
are important aspects of the OVPIT/UITS environment. In most cases,
the university is contractually and legally obligated to limit access to
only those who have IT responsibilities requiring frequent access.
Security cameras are located throughout OVPIT/UITS buildings. These
cameras record footage for follow-up in the case of a security incident,
and they also serve as an effective deterrent in support of the safe
operation of the building.
UITS staff with responsibilities in the data center may gain access
through an arrangement between the department manager and
Operations. Requests should be made via the Special Access Request
Form.
Persons other than full-time UITS staff are permitted in the data center
only under one of the following conditions:
A. They are full-time staff of vendors providing services to UITS:
Contract consultants or service representatives may be
authorized by prior arrangement with Operations.
B. They are full-time staff of Indiana University working on a system
owned by an IU department and housed in the data center,
under terms specified in a Co-location Agreement -- access will
be granted in situations requiring hands-on system
administration, not simply because a system is present on a
machine in the data center.
C. They are full-time or contracted staff of a non-IU entity that
owns a system housed in the data center, under terms specified
in a co-location agreement -- again, access will be granted when
hands-on system administration is necessary, not simply
because a system is present on a machine in the data center.
D. They are escorted by a full-time UITS staff member as part of a
tour of the facilities.
ID badges and access cards will be provided for those individuals who
meet criterion A, B, or C. The ID badges must be worn and visible
during visits to the data center. All staff who meet criteria A, B, or C
are expected to sign in to the data center through Operations prior to
entering the room, and to sign out upon exiting.
Biometric hand geometry scanners are installed at both Data
Centers. A registration process will be scheduled and performed by the
UITS Facilities or Operations staff.
For additional information and to learn about biometric hand geometry
scanners, review the internal KB document at
https://kb.iu.edu/data/azzk.html
(Note: the internal document requires authentication).
The Vice President for Information Technology has developed a policy
related to the handling, management and disposition of "biometric"
data used in the hand geometry scanner. It is stored in an internal KB
document at https://kb.iu.edu/data/bapr.html
(note: the internal document requires authentication).
not manage their own switches. This applies to any Ethernet switch
located in a rack in the enterprise environment. This policy also
includes private switches that are not designed to connect to the
campus network switches.
Blade chassis switches are allowed in the enterprise environment in
certain cases. If you intend to install a chassis server environment,
please contact noc@indiana.edu to schedule a meeting with a campus
networks Data Center engineer to discuss the chassis networking.
6.6 Internal Rack Wiring: Internal rack wiring should follow rack
cabinet management standards. Cables should be neatly dressed. All
cables should be properly labeled so they can be easily identified.
Refer to TIA/EIA-942 Infrastructure Standard for Data Centers, section
5.11; a copy is available in the Operations Center. This applies to all users.
Users in the Enterprise environment are not allowed to run cables
outside of the racks.
6.7 Sub-Floor Copper/Fiber Wiring Requests: All data cabling
under the floor, including SAN and Ethernet, must be installed by CNI
in a cable tray. Any requests for sub-floor copper or fiber can be made
via the telecom.iu.edu site. CNI can supply copper, single-mode fiber,
and multi-mode fiber connectivity between locations. This applies to
anyone with systems in the data center. The requestor is responsible
for paying for these special requests. CNI can provide an estimate for
any work requested.
6.8 Server Network Adapter Bridging: Server administrators are
not permitted to use any form of software-based network adapter
bridging. Attempts to bridge traffic between two server interfaces are
subject to automated detection and shutdown. This policy applies to
Enterprise and any Research System using IU campus networks.
6.9 Server Network Adapter Teaming/Trunk/Aggregate Links:
Server administrators may team network adapters to increase
bandwidth and redundancy. CNI can also set up static LACP
trunks on the switch when the aggregation of links is required.
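On a Linux server, this kind of adapter teaming can be sketched with iproute2 bonding in 802.3ad (LACP) mode. The following is an illustrative sketch only: the interface names (eth0, eth1) and the address are placeholders, and the matching LACP trunk must be configured on the switch side (per this section, by CNI) before the bond will pass traffic.

```shell
# Sketch: LACP (802.3ad) bonding with iproute2. Interface names and
# the address below are placeholders, not values from this document.
ip link add bond0 type bond mode 802.3ad miimon 100 lacp_rate fast

# Member interfaces must be down before they can be enslaved.
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0

# Bring up the bond and assign an example address (RFC 5737 range).
ip link set bond0 up
ip addr add 192.0.2.10/24 dev bond0
```

Until the corresponding LACP trunk exists on the switch, the bond will negotiate no aggregator and carry no traffic, which is why the switch-side configuration request to CNI should come first.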
6.10 SAN Fiber Channel (FC) Switches: CNI does not provide the
management or hardware for SAN switches. Administrators are
allowed to install and manage their own SAN switches. CNI can
provide fiber trunks outside of the racks as needed (policy 6.7).
6.11 Multicast: Multicast is available by request only. Multicast
functionality should not be assumed to work until requested and tested
with a network engineer.
7.3 Host-based firewall rules should be used on every host where
applicable. Host-based rules should be more restrictive than the data
center firewall rules when possible.
7.4 Outbound traffic (traffic leaving a firewall zone) is allowed
by default. You only need to create exceptions for traffic entering the
data center from outside of a host's security zone.
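As an illustration of 7.3 and 7.4, a host-based packet filter can default-deny inbound traffic while leaving outbound traffic unrestricted, mirroring the data center firewall's default-allow for traffic leaving the zone. The sketch below uses nftables; the management subnet and service port are placeholders, not values taken from this document.

```shell
# Sketch: host-based rules (nftables) tighter than the zone firewall.
# The 10.0.0.0/24 subnet and port 22 are illustrative placeholders.
nft add table inet host
nft add chain inet host input '{ type filter hook input priority 0; policy drop; }'

# Permit return traffic for connections the host itself initiated,
# and all loopback traffic.
nft add rule inet host input ct state established,related accept
nft add rule inet host input iif lo accept

# Allow SSH only from a management subnet (placeholder range) --
# more restrictive than a zone-wide allow would be.
nft add rule inet host input ip saddr 10.0.0.0/24 tcp dport 22 accept
```

No output chain is defined, so outbound traffic remains allowed by default, consistent with the policy stated in 7.4.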
7.5 Using the service port "Any" (1-65535) as a destination port in a
firewall policy is allowed when all of the following criteria are met:
7.12 Firewall security zones are used to split up networks within the
data center.

Zone: UITS Core Services (servers and systems managed by UITS)
  Operating system root, administrator, or equivalent access: UITS Staff Only
  Operating system level interactive logins or virtual desktop sessions1: Any IU Staff
  User-provided code2: No
  Examples: DNS, DHCP, NTP, ADS, CAS, Exchange, Lync, Oracle
  Databases, HRMS, FMS, Onestart, etc.

Zone: UITS Hosted Services
  Operating system root, administrator, or equivalent access: UITS Staff Only
  Operating system level interactive logins or virtual desktop sessions1: Any IU Staff
  User-provided code2: Yes
  Examples: WebServe, CHE, CVE

Zone: IU Community
  Operating system root, administrator, or equivalent access: Any IU Staff
  Operating system level interactive logins or virtual desktop sessions1: Any IU Staff
  User-provided code2: Yes
  Examples: Non-UITS physical servers, Intelligent Infrastructure
  virtual servers provisioned for departments
Each Firewall Security Zone exists at both the IUPUI and IUB Data
Centers. The following list describes which zones are related as well as
what the relationship means:

IN-30-CORE / BL-30-CORE
IN-32-UITS / BL-32-UITS
IN-33-COLO / BL-33-COLO

1 Interactive logins include technologies such as SSH, RDP, Citrix, VNC, Remote
PowerShell, etc.
2 User-provided code includes all executable code installed on the system with user-level
access instead of by a designated system administrator, system developer, or enterprise
deployment process. This may include binaries, shell scripts, interpreted languages such
as Perl or Python, as well as web-based code such as PHP/ASP.
7.14 Global address groups are built-in groups that any department
can use. These are commonly used source groups that are maintained