Issue 01
Date 2015-08-19
Huawei and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective
holders.
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and
the customer. All or part of the products, services and features described in this document may not be
within the purchase scope or the usage scope. Unless otherwise specified in the contract, all statements,
information, and recommendations in this document are provided "AS IS" without warranties, guarantees or
representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute a warranty of any kind, express or implied.
Website: http://enterprise.huawei.com
1. As a technical proposal template for the National Distributed Cloud Data Center platform, this
document provides comprehensive information. Modify the content based on project
requirements.
2. Content in this document is in blue, green, or black.
Content in blue indicates prompt information, which must be deleted when this
document is presented to customers.
Content in green indicates examples, which must be modified based on projects.
Content in black indicates essential information, which can be used directly or modified
based on project requirements.
3. A technical proposal must be prepared based on the project; otherwise, it will read like a product
overview. Therefore, you need to add project-specific information when using this template.
4. This template is for HUAWEI National Distributed Cloud Data Center.
Contents
9 Infrastructure Solution
9.1 Computing Resource Planning
9.1.1 Server Requirements
9.1.2 Server Selection
9.1.3 Server Quantity Planning
9.2 Network Resource Planning
9.2.1 Switch Requirements
9.2.2 Switch Selection
9.2.3 Switch Quantity Planning
9.3 Storage Resource Planning
9.3.1 Storage Requirements
9.3.2 Storage Selection
9.3.3 Storage Capacity Planning
1.1 Background
Jordan was one of the first Arab countries to introduce communication technology (CT) into its
economy, and also one of the first to introduce information technology (IT) into industry. ICT has
had a huge and far-reaching impact on the Jordanian people's lifestyle, social patterns, economic
development, and many other aspects of life, and the people of Jordan enjoy the convenience it
brings. As Jordanian living standards improve and the economy develops, the requirements on ICT
grow ever higher. How to make ICT serve the Jordanian people better and promote Jordan's
economic development is a major challenge.
Today, most enterprises in Jordan have fewer than five employees and cannot afford to build their
own ICT platforms. At the same time, many families have no network access and cannot enjoy ICT
services. How to make ICT serve them better is a pressing issue. Meanwhile, the connection between
ICT and the medical and tourism industries is still not tight enough to provide proactive, targeted
services, and ensuring information security while providing those services is also a major challenge
for ICT construction.
The Jordanian government initiated a technology transformation program to rebuild the ICT
infrastructure to better serve its citizens and enterprises. This program will also bring live services,
through technology and innovation, to the people of Jordan. These services will help advance Jordan
into the future, and service levels will be raised to provide more proactive services to citizens.
Despite ICT becoming increasingly universal, the question of access and usage remains
important—especially for developing countries, given their need to narrow the digital divide.
Even within developed nations, the need to provide high speed broadband to all segments of
the population has acquired importance in recent years. For example, in Brazil, broadband
has added up to 1.4% to the employment growth rate. In Africa, ICTs directly contribute
around 7% of Africa’s GDP, which is higher than the global average.
As shown in the following figure, in low- and middle-income economies, when broadband
penetration rises by 10 percentage points, GDP rises by 1.38%. The relationship between ICT
drivers and their impacts is very strong. All countries have realized that an integrated ICT industry
will enhance the competitiveness and creativity of their economies and fuel sustainable economic
growth.
Figure 1-1 The connection between ICT development and social development
            High-income economies    Low- and middle-income economies
Fixed       0.43                     0.73
Mobile      0.60                     0.81
Internet    0.77                     1.12
Broadband   1.21                     1.38
Note: The vertical axis is the percentage-point rise in GDP per 10-percentage-point rise in penetration.
There is a new understanding of the future national cloud data center: the national development
strategy drives the ICT strategy. The ICT strategy, grounded in ICT intent, ICT architecture, and
ICT governance, supports the national development strategy and realizes its targets. The national
broadband network and the future national cloud data center are the key foundational facilities of
the ICT strategy.
This view is a global consensus. Some examples of the consensus:
Information and communication technologies (ICT) play a decisive role. They are the key to
productivity in all industries.
Designed with high performance and large capacity, the system is scalable and supports a
large number of concurrent users.
Easy to use
The system provides intuitive graphical user interfaces (GUIs) on which users can easily
find desired operations and information. Operation steps are properly arranged, and
detailed help information is provided.
Different GUIs are displayed for different roles. Advanced features that are seldom used
are displayed by options.
Green & Energy Saving
Energy-saving measures, green materials, and improved anti-electromagnetic-interference
design meet the requirements of today's centralized, hyper-scale data centers, which are
strained by huge power consumption, and can even reduce CAPEX.
2 Requirements Analysis
2.1.1 e-Government
Public information portal and service center
One stop online service for citizens
ICT strategy based e-government service planning
Distributed cloud data center resource pool
End to end security and DR solution
Unified data center management
Efficient internal automation office for government
2.1.2 e-Education
Massive open online courses: set up rich teaching resource libraries and enable sharing
among universities
Digital library: digitize books, journals, and newspapers to expand the scope of and access
to knowledge
Education cloud disk: provide web-based storage to teachers and students for storing,
backing up, and sharing data; realize sharing of high-quality education resources to
maximize their value
2.1.3 e-Health
EHR: build complete medical records for citizens and share them among healthcare
organizations
Disease control & prevention: support healthcare organizations at all levels in reporting
certain diseases
Drug management: monitor the whole drug distribution process
Cloud HIS service: provide HIS services to small hospitals and clinics via the network
2.1.5 e-Police
Crime management, including an alarm receiving and dispatching command center, law
enforcement and crime investigation, etc.
Public service: gun management, population management, ID management, traffic
management, control of the exit and entry of citizens.
Administrative management: financial management, human resource management, etc.
Therefore, segment the data center network into several functional areas and ensure the
service traffic and efficiency of functional areas while strictly controlling mutual access
between the functional areas. Additionally, isolate the data center network from external
networks and also isolate different business service areas to ensure security of service
systems.
The data center network must provide a variety of distinct features such as quick
convergence, easy maintenance, and easy management.
The data center network must feature high reliability and high availability to prevent
single point of failures.
The data center network must be scalable and meet service demands of today and
tomorrow.
The data center network must support network virtualization.
Therefore, virtualize core switches and access switches into a logical device by using
switch cluster virtualization or stacking technology, thereby reducing the number of
nodes and simplifying configuration.
Networks of multiple data centers connect to each other.
For an enterprise that has multiple data centers, consider interconnection of these data
centers.
Virtualization security
Provide the security protection capability for the virtualization infrastructure in the cloud
computing platform to ensure VM isolation, monitor the communication between
specific VMs, and ensure the security of VMs.
Data security
− Ensure the confidentiality, integrity, and availability of sensitive data defined by
enterprises in the life cycle of the sensitive data.
− Identify sensitive data. Establish and maintain sensitive data directories. Formulate
protection policies and mechanisms to prevent unauthorized data distribution.
− Provide a security communication mechanism to ensure the confidentiality and
integrity of sensitive data transferred on the Internet.
− Provide a data destruction mechanism to ensure that data cannot be accessed after the
life cycle expires.
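The integrity half of the communication mechanism above can be sketched with a standard HMAC. The envelope format below is purely illustrative (confidentiality would additionally require encryption, for example TLS in transit):

```python
import hashlib
import hmac
import json

def protect(record: dict, key: bytes) -> dict:
    """Wrap a sensitive record with an integrity tag (HMAC-SHA256)."""
    payload = json.dumps(record, sort_keys=True)
    tag = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify(envelope: dict, key: bytes) -> bool:
    """Recompute the tag and compare in constant time before trusting data."""
    expected = hmac.new(key, envelope["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["tag"])
```

Any modification of the payload in transit changes the recomputed tag, so tampered data is rejected before it enters the sensitive-data life cycle.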
− Three or more management tools are adopted in 70% of data centers, which places very
high requirements on O&M personnel competence.
− Data centers are developed based on cloud computing technology. Lacking O&M
experience, traditional enterprises can build this capability only through extensive
practice.
As the service user of a VDC, an end user applies for resources in the VDC offline or on the
self-service platform.
4 Application Solution
[Figure: Education VDC architecture. Education network services (teaching resource sharing,
digital library, e-learning video, education web disk, and teaching interaction) run in an
Education VDC spanning DC1 and DC2. Each data center hosts web, application, and database
tiers plus backup and media servers on common and high-performance storage pools, with
database replication and VM replication between the two data centers over a Layer 2/3 network.]
Teaching Resources
The teaching resources include the teaching app, media materials, courseware, and teaching
plans.
Learning Resources
The learning resources include the learning app, digital textbooks, exercises & practice, and
digital reading.
Using the OAI-PMH 2.0 protocol and the harvesting functions of the OAI Metadata Harvesting
System V2.0, the center can serve as a unified platform for browsing and retrieving region-wide
and nation-wide metadata.
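The harvesting flow can be sketched with the standard OAI-PMH 2.0 request format. The repository URL in the usage example is a placeholder; the `ListRecords` verb and `oai_dc` metadata prefix are defined by the protocol itself:

```python
import xml.etree.ElementTree as ET
from urllib.parse import urlencode

# Dublin Core element namespace, used for oai_dc metadata records
DC_NS = "{http://purl.org/dc/elements/1.1/}"

def list_records_url(base_url: str, metadata_prefix: str = "oai_dc") -> str:
    """Build an OAI-PMH 2.0 ListRecords request URL."""
    query = urlencode({"verb": "ListRecords",
                       "metadataPrefix": metadata_prefix})
    return f"{base_url}?{query}"

def harvest_titles(xml_text: str) -> list:
    """Extract dc:title values from a ListRecords response document."""
    root = ET.fromstring(xml_text)
    return [el.text for el in root.iter(DC_NS + "title")]
```

A harvester would fetch `list_records_url("http://repo.example/oai")` (hypothetical endpoint), then pass the response body to `harvest_titles` to index records on the central platform.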
Web pages
Figure 4-14 Accessing the online storage (web disk) service from web pages
The following functions are supported when users access the online storage service from web
pages:
2. Friendly information management and group management
3. Multiple data sharing modes, including data sharing among accounts and data sharing
among groups (the read/write authority attribute can be set for data sharing)
4. Sending a document link to the specified email recipient so that the recipient can access
file resources according to the link
5. File search
PC client
Figure 4-15 Accessing the online storage (web disk) service from a PC client
The following functions are supported when users access the online storage service from a PC
client:
Mobile client
Figure 4-16 Accessing the online storage (web disk) service from a mobile client
The following functions are supported when users access the online storage service from a
mobile client:
10. Directly uploading photos taken with the embedded camera to the cloud storage space
11. Uploading image files to micro blogging websites
12. Sending image and audio files as multimedia messages or email attachments
13. Backing up and recovering the local contact list
14. Discontinuous transmission (DTX)
15. Traffic control
16. File compression and decompression
Unified data center O&M involves the following aspects:
Overall architecture
User role system
Data center routine O&M
Troubleshooting
Proactive intelligent O&M
Report management
ManageOne
Hospital community
The national healthcare service center is based on a cloud computing data center and
provides services such as public health surveillance, collaborative healthcare, and
education in SaaS mode.
Upper-level and lower-level hospitals perform online or offline remote collaborative diagnosis
or medical education based on the collaborative healthcare and education services provided by
the national healthcare service center.
Figure 4-20 shows the service process of the epidemic reporting system.
The following sections describe functional modules of the epidemic reporting system.
The following sections describe functional modules of the drug monitoring system.
The following sections describe functional modules of the healthcare collaboration platform.
can be videotaped or saved as files in common formats in the telemedicine center. Streaming
media courseware can be prepared or sorted on this system.
This system also provides COD services and enables courseware adding, deleting, uploading,
and querying.
[Figure: e-Police VDC architecture. Application systems (national criminal records management,
lost & found asset management, firearm control, car tracking, crime intelligence records and
profiling, human resource management, national asset management, crime information, fleet and
vehicle tracking management, detention management, and access control) run in an e-Police VDC
spanning DC1 and DC2. Each data center hosts web, application, and database tiers plus backup
and media servers on common and high-performance storage pools, with database replication and
VM replication between the two data centers over a Layer 2/3 network.]
a) Scenes of Crime
This section is responsible for:
* Lifting fingerprints from scenes of crime.
* Classifying and searching of fingerprints.
* Keeping records of all fingerprints from crime scene.
* Assisting Investigating officers in identifying criminals through fingerprint search
process.
The functions of all the above sections are interwoven; they depend upon each other.
However, the indexing system is still manual and labour intensive.
[Figure: Manual crime-docket workflow. A complainant reports at the charge office of a station;
the report is entered in the office instructions book and a docket is opened with a CR number.
Particulars of the complainant are recorded in a register in alphabetical order and indexed
against the CR number, and exhibits are recorded in an exhibits register. Dockets are tracked
in complete-docket and incomplete-docket registers. The docket, with its CR reference, passes
to the Investigating Officer (IO) or relevant station for inquiries and investigations; results
of trial are recorded against the CR reference, and completed dockets are tallied on the IC
crime count form.]
The informatization of human resource management will move HRM away from its traditional
transactional role. Traditional work that contributes little to organizational strategy, such as
resume processing, police officer information management, and attendance management, will be
handled by the HR information system, strengthening and improving the service capability of the
entire organization and its HR systems and processes. Human resource management can then focus
on work of strategic significance: HR planning, police officer career design, and strategic
decision-making consultation, effectively supporting organizational transformation and redesign.
Meet the personalized needs of police officers and provide value-added services. Because police
officers are knowledge workers, they pay close attention to participation in management and to
transparent, personalized services. The HR information system allows police officers to quickly
and easily understand the career plans and incentives tailored for them. At the same time,
officers can shape plans favorable to their own development through self-designed training,
dynamic work arrangements, and personal development planning. The HR department can then more
conveniently provide value-added services to other managers and the vast majority of police
officers (advancing skills and increasing staff motivation to achieve the highest possible level
of performance over time).
Advanced reporting tools support the generation and distribution of all kinds of reports, such
as attendance reports, performance reports, and personnel statements, with easy and secure
capture of employee data and retention of a historical record of HRMS data used to generate a
variety of specialized reports.
Improve management efficiency and reduce management costs. HR management supported by
information technology can keep a complete record of all police personnel information and
provide quick, convenient access to a variety of statistical analysis results, offering decision
support on human resource matters for Police Service strategic goals. An embedded Decision
Support System (DSS) will assist management at different levels in making informed decisions
that are consistent with HR planning and that relate costs to results. High-level managers and
department heads can conveniently learn about personnel status and talent-needs standards,
making HR management more scientific and talent allocation more reasonable. Operating costs are
reduced by lowering the operating costs of HR work, reducing administrative HR staff, and
cutting communication costs.
Strengthen internal communication and enhance core competencies. The police comprises a wide
range of organizational units with complex mechanisms, but the HR information system provides
centralized data management and distributed application in a fully networked operating mode,
which can greatly enhance internal communication. It promotes the sharing of talent, technology,
and knowledge within the police, strengthens mutual ties, and improves human capacity. The
employee portal is maintained so that employees can log on to the HR system and make authorized
HR-related queries.
The problems that can be solved by human resource management system with information
technology are generally as follows:
Effective human resources management solves the problems of brain drain, idleness, and waste;
Systematic vocational training management solves the problem of an inadequate talent reserve;
Good talent retention addresses unevenness in the structure and distribution of human
resources;
Improved systematic planning addresses the problem of self-containment and the lack of a
virtuous circle of recruitment, training, and assessment.
Applying modern information technology to quickly store, send, and query this kind of
information in a network environment has become imperative.
The stolen vehicle tracking system uses advanced information storage, plate number
identification, and mass-database dynamic retrieval technologies, making dynamic tracking of
stolen vehicles possible. On the one hand, standardized registration of stolen vehicle
information makes real-time query management more convenient and lets police recognize stolen
vehicles on daily patrol. On the other hand, street-level plate number recognition systems
locate lost cars in the flow of vehicles, video from road monitoring systems tracks vehicles,
and queries of border checkpoint pass records raise the stolen vehicle recovery ratio and
protect personal property.
organizations are a few examples of the investment, and they constitute the system
intelligence foundation.
For the current multi-source police intelligence of Country X, the sub-system offers technical
means of intelligence analysis and management methods, designs an intelligence analysis engine,
supports intelligence situation and trend analysis, and establishes a unified intelligence
information service system, forming a complete application system of intelligence analysis and
judgment that covers intelligence collection, information processing, intelligence analysis, and
intelligence services for the different police departments of Country X.
The cloud service operation manages all cloud and non-cloud resources of data centers based
on resource pools and provides highly customizable resource services, including unified
resource orchestration, customized resource scheduling policies, automatic resource allocation
and deployment, and customized enterprise service integration. The cloud service operation
provides a platform for enterprises to manage and provision resources of multiple data centers
in a unified manner. The overall architecture of the cloud service operation is as follows.
Service definition
− User management
− Service catalog management
− Metering management
5 Management Solution
ManageOne is an all-in-one solution for the operation and maintenance of NDC2. It can
integrate dispersed resources into a logical resource pool, provide computing, storage, and
network resources as cloud services to users, support user self-service, schedule, control, and
deploy data center physical and virtual resources in a unified manner, and monitor and
maintain cloud services using processes in a standard manner.
Management software used in the ManageOne solution is classified into two layers:
Resource layer: Software at this layer is used to manage resource information (for
example, collecting device information) and send resource information to the service
layer for service assembling and provisioning and O&M analysis.
Service layer: Two kinds of software are used at this layer:
− Operation software: provides operational services for tenants after resource
orchestration, and provides a unified operation platform for administrators.
− Maintenance software: implements comprehensive analysis on collected maintenance
information (such as alarm information and performance information), displays the
analysis results, and provides a unified maintenance platform for administrators.
The following describes the function modules in the ManageOne solution.
Multiple data centers are managed as one data center: Data centers are physically distributed
and logically centralized. Unified management of multiple data centers, cloud and non-cloud
resources, heterogeneous virtual platforms, and operation and maintenance is supported.
One data center is used as multiple data centers: Based on the virtual data center (VDC)
mode, one data center can be used to provide different resource services for different
departments and services, implementing the separation of resource construction and usage and
matching the enterprise and carrier management modes better.
Easy-to-use application templates can define SDN networks, VMs, and physical machines,
including the software and databases that are installed. Templates are associated with services.
An actual application can be generated by instantiating a template based on the environment,
such as the Oracle test environment and the ERP system+OA system small branch
environment.
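Template instantiation can be sketched as follows; the template structure and field names are hypothetical illustrations, not a ManageOne schema:

```python
import copy

# Hypothetical template; field names are illustrative, not a ManageOne schema.
ORACLE_TEST_TEMPLATE = {
    "name": "oracle-test",
    "vms": [
        {"role": "db", "vcpus": 8, "memory_gb": 32, "software": ["oracle-db"]},
        {"role": "app", "vcpus": 4, "memory_gb": 8, "software": ["tomcat"]},
    ],
    "network": {"sdn_segment": None},  # filled in at instantiation time
}

def instantiate(template: dict, environment: str, sdn_segment: str) -> dict:
    """Generate a concrete application from a template plus environment
    parameters, leaving the shared template itself unmodified."""
    app = copy.deepcopy(template)  # never mutate the shared template
    app["name"] = f"{template['name']}-{environment}"
    app["network"]["sdn_segment"] = sdn_segment
    return app
```

The deep copy is the important design point: many applications can be stamped out of one template without the instances interfering with each other or with the template.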
Report
− Provides real-time and historical monitoring reports of hosts and VMs.
− Allows users to query reports generated at specified periods of time, for example,
daily, weekly, or monthly reports.
Server monitoring information includes:
− Alarm statistics
− CPU usage
− Memory usage
− Inbound and outbound network traffic rates
− Disk I/O and disk usage
Storage device monitoring information includes:
− Alarm statistics
− Mounting status
− Total size
− Allocated size and available size
Network monitoring information includes:
− Inbound and outbound network traffic rates
− Port status
− Port traffic
VM monitoring information includes:
− VM status
− CPU usage
− Memory usage
− Inbound and outbound network traffic rates
− Disk I/O and disk usage
Open APIs
FusionSphere provides open APIs for external systems to obtain alarm data.
− Alarm query interfaces (HTTP REST):
Querying the alarm list and alarm status
Querying alarm resources
− Alarm subscription interfaces (HTTP REST)
− Alarm reporting interfaces (SNMP)
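A sketch of how an external system might call the alarm query interface over HTTP REST. The `/alarms` path, the query parameter names, and the `X-Auth-Token` header are illustrative placeholders, not the documented FusionSphere endpoint:

```python
import json
from urllib import parse, request

class AlarmClient:
    """Illustrative REST client for alarm queries; paths, parameters,
    and the auth header are placeholders, not the documented
    FusionSphere interface."""

    def __init__(self, base_url: str, token: str):
        self.base_url = base_url.rstrip("/")
        self.token = token

    def build_query(self, severity=None, limit=100) -> str:
        """Assemble the query URL for the alarm list."""
        params = {"limit": limit}
        if severity:
            params["severity"] = severity
        return f"{self.base_url}/alarms?{parse.urlencode(params)}"

    def list_alarms(self, severity=None):
        """Fetch and decode the alarm list (performs a network call)."""
        req = request.Request(self.build_query(severity),
                              headers={"X-Auth-Token": self.token})
        with request.urlopen(req) as resp:
            return json.load(resp)
```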
The FusionSphere system supports VM affinity, which places multiple VMs on the same or
different servers based on configured rules, enabling mutually assisting VMs or active/standby
VMs cost-effectively.
Location Affinity
− Keep VMs together: VMs that are added to this rule must run on the same host. One
VM can be added to only one Keep VMs together rule.
− Mutually exclusive: VMs that are added to this rule must run on different hosts. One
VM can be added to only one Mutually exclusive rule.
− VMs to hosts: This rule associates a VM group with a host group so that VMs in the
VM group can be only deployed on and migrated to hosts in the host group.
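The three location affinity rules above can be sketched as a placement validator; this is a simplified model of the rule semantics, not FusionSphere's implementation:

```python
def check_affinity(placement: dict, rules: list) -> list:
    """Validate a VM-to-host placement against location affinity rules.

    placement: {"vm1": "hostA", ...}
    rules: dicts with "type" in {"keep_together", "mutually_exclusive",
           "vms_to_hosts"}. Returns the indexes of violated rules.
    """
    violations = []
    for i, rule in enumerate(rules):
        hosts = [placement[v] for v in rule["vms"] if v in placement]
        if rule["type"] == "keep_together" and len(set(hosts)) > 1:
            violations.append(i)          # must share one host
        elif rule["type"] == "mutually_exclusive" and len(set(hosts)) < len(hosts):
            violations.append(i)          # must all be on different hosts
        elif rule["type"] == "vms_to_hosts" and any(
                h not in rule["hosts"] for h in hosts):
            violations.append(i)          # must stay inside the host group
    return violations
```

A scheduler would run such a check before starting or migrating a VM and reject any placement that returns a non-empty violation list.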
Capability Affinity: Non-uniform memory access (NUMA) nodes are introduced in
physical servers to improve the memory access efficiency of CPUs. The CPUs and
memory resources used by VMs (guests) are grouped into NUMA nodes based on the
memory access efficiencies of the CPUs. A CPU can achieve its maximum memory
access efficiency when accessing memory within its own NUMA node. When a VM is
created, FusionSphere preferably allocates CPU and memory resources required by this
VM on one NUMA node, thereby reducing memory access latency and improving
memory performance.
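The NUMA-preferred allocation described above can be sketched as a best-fit node selection. This is a simplified model of the scheduling idea, not the actual FusionSphere algorithm:

```python
def place_vm(numa_free: dict, vcpus: int, mem_gb: int):
    """Pick a NUMA node with enough free CPU and memory for the whole VM.

    numa_free: {node_id: {"cpus": free_cpus, "mem_gb": free_mem}}
    Returns the chosen node id, or None if no single node fits
    (a real scheduler would then split the VM across nodes at a
    memory-latency cost).
    """
    candidates = [n for n, free in numa_free.items()
                  if free["cpus"] >= vcpus and free["mem_gb"] >= mem_gb]
    if not candidates:
        return None
    # Best-fit: leave the least slack so larger VMs can still fit elsewhere.
    return min(candidates, key=lambda n: (numa_free[n]["cpus"] - vcpus,
                                          numa_free[n]["mem_gb"] - mem_gb))
```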
VM Resource Management
Users can create VMs using a VM template or in a custom way, and manage clustered
resources, including automatic resource scheduling, VM management (such as creating,
deleting, starting, stopping, restarting, hibernating, and waking up a VM), storage resource
management (such as common disk and shared disk management), and VM security
management.
The FusionSphere system also supports VM live migration and VM HA.
FusionSphere allows users to adjust the number of virtual CPUs (vCPUs), memory size,
NICs, and volume attaching and detaching status.
Network Virtualization
The FusionSphere system supports the following features for network virtualization:
Network bandwidth control, ensuring network QoS
Distributed virtual switch (DVS)
Single-root I/O virtualization (SR-IOV), improving network processing performance
Storage Virtualization
The FusionSphere system supports Huawei distributed storage software FusionStorage as well
as disk arrays, such as fibre channel storage area network (FC SAN) and IP SAN storage.
Users can apply for a security group based on VM security requirements and configure access
rules for the security group. After a VM is added to the security group, the VM is subject to
these rules. Security groups implement secure isolation and access control for VMs, thereby
improving VM security.
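Security-group semantics are default-deny with explicit allow rules; the sketch below models that behavior (the rule fields are illustrative, not a product API):

```python
import ipaddress

def packet_allowed(packet: dict, rules: list) -> bool:
    """Return True if any rule matches the packet; otherwise deny.

    Each rule allows one protocol, a destination port range, and a
    source CIDR; a VM in the group accepts only matching traffic.
    """
    src = ipaddress.ip_address(packet["src_ip"])
    for rule in rules:
        if (rule["protocol"] in ("any", packet["protocol"])
                and rule["port_min"] <= packet["dst_port"] <= rule["port_max"]
                and src in ipaddress.ip_network(rule["src_cidr"])):
            return True
    return False  # default deny: no rule matched
```

For example, a rule allowing TCP port 22 from 10.0.0.0/8 admits SSH from the intranet while silently dropping the same traffic from any other source.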
An elastic IP address allows users to use a fixed public IP address to access the VM to which
the public IP address is mapped.
Multi-Data-Center Management
If an enterprise or carrier has multiple data centers scattered in different regions, the
OpenStack cascading technology helps implement centralized management and maintenance
of multiple data centers.
[Figure: Data center network architecture. The campus network, enterprise branches, partners,
and external users connect over the enterprise intranet, dedicated partner networks, and the
Internet to the core network, which links the server areas (production area, office area, test
area, DMZ, and other areas), the storage and backup areas, and the DR center networks on the
intranet, partner, and Internet sides.]
The core network connects server areas, enterprise's intranet, partner's network, DR
center network, and access network for external users.
Server area
Servers and application systems are deployed in this area. Based on different functions,
the network architecture can be divided into extranet area (including Internet access area
and enterprise remote access area), enterprise office network access area, and intranet
core area. The intranet core area includes network service area, service production area
(including high-security service production area and common service production area),
office automation (OA) area, operation management area, and development and test area.
Storage area
This area houses Fibre Channel storage area network (FC SAN), IP storage area network
(IP SAN), and Fibre Channel over Ethernet (FCoE) devices.
Network area
This area connects enterprise users and external users to the data center. Considering
security and scalability, the network is classified into the intranet, partner network, and
Internet based on user types.
The intranet connects to networks of the headquarters and branches through the campus
network and wide area network (WAN).
The partner network connects to networks of partners through metropolitan area
dedicated lines and wide area dedicated lines.
The Internet allows external users to access the data center and staff on business trips to
access offices where the WAN covers.
Egress routers are connected to different carrier networks to improve Internet egress
reliability. For example, enterprises in mainland China will choose China Telecom or
China Unicom as Internet egress.
DR center network area
This area connects the production center to DR centers. The production center connects
to the DR center in the same city through transmission devices, and connects to the DR
center in a different city through the dedicated WAN.
O&M management area
This area is responsible for network, security, server, application system, and storage
management. In this area, fault management, configuration management, performance
management, security management, alarm management, and log management are
implemented.
[Figure: data center network architecture. Aggregation switches in iStack and CSS clusters connect the external area, the core area, and the background management area (ManageOne+iSoC), with the external area linked to the Internet. UVP-based servers provide traditional and cloud computing resources, and a storage aggregation network connects the IP SAN and FC SAN devices.]
Storage traffic between the computing subsystem and the storage subsystem is transmitted over the storage plane. The storage network is independent and isolated from other networks, which ensures QoS and storage security.
The FusionStorage components and their functions are as follows:
FusionStorage Manager: A management process of the FusionStorage system. It supports O&M functions including alarm management, service monitoring, operation logging, and data configuration. Two FusionStorage Managers are deployed in the FusionStorage system in active/standby mode.
FusionStorage Agent: A management agent process of the FusionStorage system. It is deployed on each node or server and communicates with the FusionStorage Manager.
MDC: A service control process that controls the status of distributed clusters and the rules for data distribution and reconstruction. The MDC is deployed on three nodes to form an MDC cluster.
VBS: A service input/output (I/O) process of the FusionStorage system. It manages metadata and provides an access service that enables computing resources to connect to distributed storage resources. A VBS process is deployed on each server, and the VBS processes form a VBS cluster.
OSD: A service I/O process that performs I/O operations. Multiple OSD processes can be deployed on each server, and each disk requires one OSD process.
Resource Consumption
Table 4-1 lists the resources consumed by FusionStorage on a computing-storage converged server running the Xen or KVM hypervisor.
Total memory size required by FusionStorage = MDC process memory + VBS process memory + (OSD process memory x Number of OSD processes)
The number of OSD processes can be calculated using the following formulas:
Number of OSD processes = Actual number of hard disks (if HDDs or SSDs are used)
Number of OSD processes = Capacity of an SSD card/Size of the SSD fragmentation unit (if SSD cards are used)
For example, if the capacity of an SSD card is 2.4 TB and the default size of the SSD fragmentation unit in the configuration file is 400 GB, the number of OSD processes is 6 (2.4 TB/400 GB). If a server is equipped with two 2.4 TB SSD cards, 12 OSD processes in total run on the server.
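The calculation above can be sketched as follows (the per-process memory figures passed to the memory function are placeholders, not values from this document):

```python
def osd_process_count(ssd_card_capacity_gb, fragment_unit_gb=400, cards=1):
    """SSD-card case: one OSD process per fragmentation unit on each card."""
    return (ssd_card_capacity_gb // fragment_unit_gb) * cards

def fusionstorage_memory_gb(mdc_gb, vbs_gb, osd_gb, osd_processes):
    """Total memory = MDC memory + VBS memory + OSD memory x OSD processes."""
    return mdc_gb + vbs_gb + osd_gb * osd_processes

# The worked example: a 2.4 TB SSD card with 400 GB fragmentation units.
print(osd_process_count(2400))           # 6
print(osd_process_count(2400, cards=2))  # 12
```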
Data in the memory cache will be lost if a data center is powered off; in that case, you need to configure FusionStorage again. Although disabling the cache ensures zero data loss when a data center is faulty, I/Os will write through to disks, decreasing performance by 70% to 90%.
FusionStorage supports storage server interconnection over a variety of networks, such as InfiniBand (IB) and GE/10GE.
In industries such as energy, transportation, and manufacturing, the OceanStor 18500/18800 V3 is the best choice for mission-critical applications.
The following figure shows the storage network diagram.
[Figure: storage network. Server 1 and Server 2 connect through two core switches and two FC SAN switches to the controller enclosure (controller A and controller B). VLANs 20, 30, 40, and 50 are configured on the switch ports.]
Each server is equipped with two storage NICs that are not bound. Each IP SAN storage
controller is equipped with eight NICs. Two NICs are in one network segment, so there are
four storage network segments. Each physical NIC on a server is assigned two IP addresses on
different network segments. A server has IP addresses from four network segments, which
correspond to four storage network segments on IP SAN storage devices. The storage plane
provides eight logical links (with multipathing configured) and four physical links.
The IP SAN device in a cabinet employs the eight-path load balancing mode to ensure the reliability and stability of storage services. Storage services are not interrupted even if any one of the eight paths loses its connection.
Controller A and controller B of the IP SAN device are connected to the two S57XX
switches in the cabinet through four GE optical interfaces in layer 2 networking mode.
Each S57XX switch has two VLANs configured. Controller A and controller B use four
IP network segments to communicate with the four VLANs of the switches. The ports
connected to the IP SAN device allow traffic from two VLANs, that is, from two IP
network segments.
Multipathing software is running on the server to ensure load balancing efficiency and
reliability. Each server provides two network ports, and each network port is assigned
two VLAN IP addresses. These VLAN IP addresses each map a network segment of an
IP SAN controller.
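The path fan-out described above can be sketched as follows (the NIC and VLAN identifiers are illustrative, not taken from this document):

```python
# Two unbonded server NICs, each carrying IP addresses on two VLANs, give
# four storage network segments; multipathing to both IP SAN controllers
# yields eight logical links over four physical ones.
VLANS_PER_NIC = {"nic0": [20, 30], "nic1": [40, 50]}

def storage_segments():
    """One (NIC, VLAN) pair per IP address -> four segments per server."""
    return [(nic, vlan) for nic, vlans in VLANS_PER_NIC.items() for vlan in vlans]

def logical_links(controllers=("A", "B")):
    """Multipathing fans each server segment out to both controllers."""
    return [(nic, vlan, c) for nic, vlan in storage_segments() for c in controllers]

print(len(storage_segments()), len(logical_links()))  # 4 8
```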
Front-end ports (per controller): 1 Gbit/s Ethernet, 10 Gbit/s FCoE, 10 Gbit/s TOE, 16 Gbit/s FC, and 56 Gbit/s InfiniBand
Max. number of front-end host ports (per controller): 12, 12, 28, 28, or 20, depending on the model
SmartMulti-Tenant (multi-tenancy)
SmartVirtualization (intelligent heterogeneous virtualization)
HyperClone (clone)
HyperReplication (remote replication)
HyperLock (WORM)
HyperMirror (volume mirroring)
Physical Features
Power supply: AC: 100 V to 127 V or 200 V to 240 V
Environment humidity (relative humidity): 5% to 95%
8.6.2 Intelligent
Multiple tenancy and service levels
V3 converged storage systems allow storage resources to be intelligently allocated in
cloud computing environments based on customer requirements. Data isolation and a
variety of data security policies such as data encryption and data destruction are
employed to meet the data security requirements of different users. The systems provide four service levels and allocate resources based on service priorities. High-priority services use resources first to ensure performance and responsiveness.
SmartX series software
Advanced technologies such as SmartTier, SmartMotion, and SmartVirtualization are employed to achieve vertical, horizontal, and cross-system data flow. Resource utilization can be improved by up to three times.
HyperX series software
HyperX series software includes comprehensive data protection software such as remote
replication, snapshot, and LUN copy. HyperX series software satisfies the local, remote,
and multi-site data protection requirements of customers to ensure service continuity and
data availability.
One software suite can manage multiple product models and provides powerful functions
such as global topology view, capacity analysis, performance analysis, fault diagnosis,
and end-to-end service visualization.
Mobile management
Systems can be left unattended because users can manage them at any time from a tablet or smartphone, with status information delivered automatically.
Easy management
A V3 series storage system can be initially configured in five steps within 40 seconds
and expanded in two steps within 15 seconds. See Figure 8-5.
9 Infrastructure Solution
The following example describes an existing server model:
No.: 1
Server model: RH2288H V2
CPU model: Intel E5620
Memory (GB): 48
Number and rate of network ports: four GE ports
Hard disk quantity, capacity, and type: two 600 GB SAS hard disks
Server quantity: 20
Reusable: Yes
Used as: computing nodes
(If a server is not reusable, set Reusable to No, set Used As to the physical servers for deploying the XXX service system, and describe in Remarks the reason why the server cannot be reused.)
Allocatable server computing capability = SPEC value x CPU usage x (1 – Number of hyperthreads consumed by UVP/Total number of hyperthreads) = 775 x 70% x [1 – 2/(4 x 8 x 2)] = 525
The underlying hypervisor consumes 2 hyperthreads. The CPU usage ranges from 50% to 70%.
When calculating the actual number of servers, take redundancy into consideration. You must reserve at
least one redundant server for each cluster to support the VM HA feature.
If 8 GB memory modules are used, the number of memory modules of each server can be calculated as follows:
Number of memory modules of a server = (Total memory size/Number of servers + 8 GB for virtualization consumption)/8 GB = (987 GB/7 servers + 8 GB)/8 GB = 19 memory modules
It is recommended that you configure an even number of memory modules. Make sure that the memory usage is no more than 80%.
Therefore, the computing capability of a single server can be calculated according to the
following formulas:
Computing capability per vCPU of a single server = SPEC CINT2006 rate value x CPU usage/(Number of CPUs x Number of cores x 2 – Number of logical cores consumed by virtualization) = 775 x 70%/(4 x 8 x 2 – 2) = 8.7
Number of required vCPUs = Roundup (118 x 20%/8.7) = 3
Required memory size: 8 GB
VM resources:
Total number of VMs: 107
Total number of vCPUs: 322
Total VM memory size: 856 GB
Server quantity calculation:
To ensure VM reliability on the cloud platform and enable smooth VM migration in the event
of server failures, reserve 20% (configurable based on the specific project) CPU and memory
resources on the computing servers during system deployment.
Based on the preceding principles, the number of computing resources required by the system
can be calculated as follows:
Number of vCPUs: 322 x 120% = 387
Memory size: 856 GB x 120% = 1028 GB
Based on the server model (four 8-core CPUs) and the 30% redundancy requirement, the number of required servers can be calculated as follows:
Number of servers = Number of vCPUs/(Number of CPUs x Number of CPU cores x 2 – 2) =
387/(4 x 8 x 2 – 2) = 7 (Roundup)
If 8 GB memory modules are used, the number of memory modules of each server can be
calculated as follows:
Number of memory modules of a server = (Total memory size/Number of servers + 8 GB for virtualization consumption)/8 GB = (1028 GB/7 servers + 8 GB)/8 GB = 20 memory modules
Table 9-2 lists the number of required servers.
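The sizing formulas in this section can be collected into one small sketch (all input figures are taken from the text above; this is an illustration, not a sizing tool):

```python
import math

def vcpu_capability(spec=775, usage=0.70, cpus=4, cores=8, virt_cores=2):
    """SPEC CINT2006 rate x CPU usage / (logical cores minus virtualization overhead)."""
    return spec * usage / (cpus * cores * 2 - virt_cores)

def servers_needed(vcpus, cpus=4, cores=8, virt_cores=2):
    """Number of servers = vCPUs / usable logical cores per server, rounded up."""
    return math.ceil(vcpus / (cpus * cores * 2 - virt_cores))

def memory_modules(total_mem_gb, servers, module_gb=8, virt_gb=8):
    """Modules per server, including 8 GB for virtualization consumption."""
    return math.ceil((total_mem_gb / servers + virt_gb) / module_gb)

vcpus = math.ceil(322 * 1.2)   # 387 vCPUs with the 20% reserve
mem = math.ceil(856 * 1.2)     # 1028 GB
n = servers_needed(vcpus)      # 387 / 62 -> 7 servers
print(vcpus, mem, n, memory_modules(mem, n))  # 387 1028 7 20
```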
Storage interface layer: provides volumes for operating systems (OSs) and databases over the
Small Computer System Interface (SCSI).
Storage service layer: provides various advanced storage features, such as snapshots, linked
cloning, thin provisioning, distributed cache, and backup and DR.
Storage engine layer: provides basic storage functions, including management status control,
distributed data routing, strong-consistency replication, cluster self-recovery, and parallel data
rebuilding.
Storage management layer: provides the O&M functions, including software installation,
automatic configuration, online upgrade, alarm reporting, monitoring, and logging, and also
provides a portal for user operations.
Huawei distributed cloud data center solution uses the FusionStorage system. FusionStorage
employs the new-generation distributed storage architecture and parallel, distributed grid
storage technologies. The horizontally scalable architecture and distributed multiple-node grid
implement storage load balancing. Fine-grained data distribution algorithms are used to
ensure constantly even data distribution. FusionStorage improves system reliability,
availability, and data storage and retrieval efficiency. In addition, the capacity of
FusionStorage can be easily expanded. Simply speaking, FusionStorage can be deployed on
common servers to consolidate local disks on all servers into a virtual storage resource pool.
Volumes are fragmented and distributed to all hard disks of the resource pool, thereby
achieving fine-grained, high-concurrency data storage and retrieval.
Figure 9-2 shows the principles of the FusionStorage distributed storage resource pool.
Plug-and-play capacity expansion: After resources are added, the system automatically
balances loads among all servers, achieving smooth capacity expansion.
Easy management: The simple FusionStorage structure simplifies management.
No configuration and management at low layers: FusionStorage is integrated in Huawei
virtualization solutions, and therefore only the application-layer management is required.
Zero performance management cost: FusionStorage implements automatic load balancing and
fault recovery. Manual performance optimization is not required.
Rapid data rebuilding: FusionStorage implements rapid parallel data rebuilding.
Data is distributed to different servers or different cabinets so that data can be obtained even if
a server or cabinet is faulty.
Data is fragmented in the resource pool. If a hard disk is faulty, FusionStorage automatically
rebuilds these data fragments by simultaneously restoring data copies in the resource pool,
without requiring hot spare disks.
Deep integration of computing and storage resources
FusionStorage is deployed on servers that have local hard disks attached to virtualize all the
local disks on the servers into a virtual resource pool. This resource pool integrates computing
and storage resources of the servers and can function like an external storage device of the
servers.
Storage Arrays
//(Delete this sentence before delivering this document to the customer.) If FusionStorage is
used, delete this section.
Storage arrays consist of IP SAN and FC SAN arrays. FC SAN is a closed network based on
traffic control, and therefore it has higher traffic transmission efficiency than IP SAN. This
project uses FC SAN storage to ensure high storage performance and reliability.
SAS, SATA, and NL-SAS are the three mainstream disk types in the industry. SAS disks are typically recommended for carrying services.
RAID 5, RAID 6, and RAID 10 are the commonly used RAID levels. Among them, RAID 5 is typically used for service systems, whereas RAID 10 is typically used for databases.
Table 9-3 describes the example storage planning for this project.
Subrack: RH2288H V2 subrack (with 14 hard disks configured)
Memory: 18 x 32 GB
NIC: four 10GE optical interfaces
SSD card: 400 GB
CPU: two Xeon® E5-2690 V2 CPUs
Hard disk: twelve 3.5-inch 2 TB SATA hard disks and two 2.5-inch 600 GB SAS hard disks
Each storage node is equipped with 14 hard disks. Two 2.5-inch 600 GB SAS disks form a RAID 1 group for installing the virtualization software, and the remaining 12 hard disks are virtualized by FusionStorage to provide virtual disks for service VMs.
Storage Arrays
Table 9-5 describes the example configuration of storage arrays.
S5300 V3 4 XXX
XXX XXX XXX
10 Security Solution
Based on the preceding development trend and best practice of the industry and Huawei, the
data center security architecture, as shown in Figure 10-2, is defined. This architecture is
considered in the process of designing the data center solution.
This architecture consists of nine security sub-modules: security service, physical facility
security, network security, application security, host security, virtualization security, data
protection, user management, and security management. Each security sub-module integrates
systems, devices, and tools, and provides security control from the technical perspective.
Huawei provides security consulting, security integration, and professional security services
to support the implementation and running of the data center security architecture.
The security consulting service helps design and construct security management systems.
The security integration service helps build various types of security infrastructure.
The professional security service provides security risk assessment and conformity
auditing that are required in security management activities.
Based on optimal planning principles for enterprise information security and the overall data center architecture, this document describes security sub-modules complying with the design of most data centers. The following sections describe security design from the perspectives of physical facility security, network security, host security, virtualization security, and data security.
Technical Requirement
A.9 Physical and Environment Security
Physical Security
A.9.1 Secure Area
Purpose: To prevent unauthorized physical access to, damage to, and interference with the area.
A.9.1.1 Physical Peripheral Security: Security perimeters, such as walls, card-controlled entrances, or attended reception desks, must be used to protect areas containing information and information processing devices. (Measure: 2. Physical Access Control)
A.9.1.2 Physical Access Control: The secure area must be protected by entrance control so that only authorized personnel can access the area. (Measure: 2. Physical Access Control)
A.9.1.3 Security Protection for Offices, Rooms, and Facilities: Physical security measures must be designed and taken for offices, rooms, and facilities.
A.9.1.4 Security Protection against External and Environmental Threats: Physical security measures must be designed and taken to protect against fire, flooding, earthquake, explosion, social turbulence, and other natural or artificial disasters. (Measures: 1. Physical Location; 4. Lightning Protection; 5. Fire Protection; 6. Water and Moisture Protection)
A.9.1.5 Work in the Secure Area: Physical protection and manuals applicable to work in the secure area must be available.
A.9.1.6 Security of the Common Access Area and Cross-Connection Area: Special control must be performed for the point of presence (such as the cross-connection area) and other points that unauthorized personnel can visit. If possible, establish isolation from the information processing facilities to prevent unauthorized access. (Measure: 2. Physical Access Control)
A.9.2 Device Security
Purpose: To prevent loss, damage, or stealing of assets, and interruption of activities.
For the physical security infrastructure design in the data center, the physical security
requirements for the highest grade of the information system security in the enterprise must be
incorporated with the control requirements specified in ISO27001:2005 to present complete
requirements for the physical security.
The network of the data center can be classified into four security zones: public zone,
transitional zone, restricted zone, and core zone.
Public zone: The public zone is where the data center connects to the external public network. Its security entities include the enterprise's Internet access devices. The public zone connects to entities and zones that are out of control, for example, user resources and circuit resources from the Internet. Therefore, the public zone is defined as a non-secure zone with a high risk level, and the data stream from this zone must be strictly controlled.
Transitional zone: The transitional zone is located between the public zone and the restricted/core zones. It isolates the public zone from the restricted and core zones and hides the resources of the public and core zones. The network data stream does not reach the transitional zone directly.
The security entities in the transitional zone include all systems and devices that may be accessed by, or may provide services to, unauthorized parties. These are the systems and devices that provide services externally, including web servers, DNS servers, application front-end servers, application gateways, and communication front-end processors.
The transitional zone is a semi-trusted zone and is vulnerable to attacks. You are advised not to store secret data in this zone.
Restricted zone: The restricted zone is a high-security-level zone. Its security entities include internal terminals, such as service and office terminals. Non-core OA areas and development and test server areas can also be defined as restricted zones.
The restricted zone is a trusted zone. In principle, a server in the transitional zone works as the gateway or proxy to transmit the data stream between the public zone and the restricted zone; the data stream cannot access the public and restricted zones directly. If, because of application restrictions, the data stream must access these zones directly, it must be under strict security control.
Core zone: The core zone provides the highest security level. The key application servers, core database servers, management consoles, and management servers are deployed in the core zone. The key application servers provide critical service applications, the database servers store secret data, and the management consoles and management servers are configured with the permissions and functions to manage all systems. Therefore, the core zone must be protected with the most comprehensive security technology, and access to and operation of its systems and devices must be strictly controlled based on the security management procedure.
The core zone is a trusted zone. In principle, a server in the transitional zone works as the gateway or proxy to transmit the data stream between the public zone and the core zone; the data stream cannot access the public and core zones directly. If, because of application restrictions, the data stream must access these zones directly, it must be under strict security control. In addition, access between the restricted zone and the core zone must also be strictly controlled to ensure strong security.
[Keep the preceding security zone model and description as they are. The security zone of the
data center can be designed based on the model and actual situations.]
Security sub-domains are defined in each zone. Figure 10-4 shows the data center security
zone.
The public zone is the Internet security zone. Access devices in the Internet access area on the
data center network connected to the Internet belong to the public zone.
The transitional zone is the Internet demilitarized zone (DMZ). The DMZ in the Internet
access area where external servers are deployed belongs to the transitional zone.
The restricted zone includes three security sub-domains: remote access, office network access,
and development and testing areas.
The remote access area contains network devices used to connect the production data
center to partners, branches, and DR data centers.
The office network access area contains network devices used to connect the production
data center to the enterprise office network.
The development and testing area contains all types of devices used for development and
testing. In this zone, multiple security zone cases can be defined to isolate development
and tests, or support multiple concurrent development and test tasks.
The core zone includes four security sub-domains: the OA area, common service production
area, operation management area, and high-security service production area. The security
protection level of the high-security service production area and operation management area
is higher than that of the common service production area and OA area.
The OA area includes the servers and devices that support OA applications. The OA
applications with higher security requirements can be deployed in the high-security
service production area.
The common service production area includes non-critical service applications. Multiple
security zone cases can be defined to isolate applications from each other.
The operation management area includes the devices related to operation management
systems, such as the network management, system management, and security
management systems. Multiple instances can be defined to isolate these system
applications from each other.
The high-security service production area includes core service applications and data that
have the highest security level. Multiple security zone cases can be defined to isolate
applications from each other.
The data stream between security zones must be controlled based on the following principles:
The cross-security-zone data stream must be controlled by the pre-defined border control
component.
By default, the border control component blocks all data streams, except the data stream
permitted to transmit.
The fault of the border control components will not cause the unauthorized access among
security zones.
All data streams from the Internet or business partners are strictly controlled and
monitored. Each link must be authorized and audited.
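These principles can be illustrated with a minimal default-deny check (the zone pairs in the rule set are examples, not rules from this document):

```python
# Pre-defined flows a border control component permits; everything else is
# blocked by default, and a faulty component fails closed.
ALLOWED = {
    ("public", "transitional"),     # external users reach DMZ front ends
    ("transitional", "restricted"), # proxies forward into the restricted zone
    ("transitional", "core"),       # proxies forward into the core zone
}

def permit(src_zone, dst_zone, component_ok=True):
    """Default deny; a faulty border component blocks all cross-zone streams."""
    if not component_ok:
        return False
    return (src_zone, dst_zone) in ALLOWED

print(permit("public", "transitional"))                      # True
print(permit("public", "core"))                              # False: must traverse the DMZ
print(permit("public", "transitional", component_ok=False))  # False: fail closed
```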
The data center network security infrastructure contains the following components:
Firewall
High-performance firewalls can be deployed in the external connection area, and the
firewall NAT function can be enabled to hide the intranet topology to ensure the security
of the data center network.
High-performance firewalls can be deployed in the network service area, and each
firewall can be virtualized into multiple logically isolated virtual firewalls. Each virtual
firewall provides independent security policies based on which security prevention
measures are specified for service areas or security zones in the data center.
Communication validity can be protected based on strict ACL policies and connection
status detection, and the security prevention function of firewalls can be enabled to
defend against increasingly rampant attacks on the application layer to ensure the
security of the data center network.
Firewalls in the data center work in active/standby mode to avoid the single point of
failure and meet high availability requirements.
Intrusion prevention system
As network attack techniques improve and security vulnerabilities increase, firewalls cannot detect attack traffic hidden in traffic that is permitted to pass. The intrusion detection system (IDS) detects malicious code, attacks, and DDoS attacks contained in application data flows, and responds to these threats in real time.
Based on the preset security strategy, the IPS engine can detect data traffic that passes
through it and perform in-depth detection on each packet, including protocol analysis
tracing, feature matching, traffic statistics analysis, and event association analysis. If the
IPS engine detects a network attack, it adopts prevention measures based on the security
level. The IPS engine may adopt the following prevention measures: reporting an alarm to the management center, discarding the packet, terminating the session, disconnecting the TCP connection, and rate-limiting abused packets to protect bandwidth resources.
This solution deploys firewalls with the IPS function in data center scenarios to protect
the application layer.
The following functions are supported:
− Ensuring the security of the network infrastructure
Automatically detects and blocks attacks and abnormal traffic to ensure the security
of the network infrastructure, including routers, switches, and DNS servers.
− Intrusion prevention
Implements multi-protocol analysis, ISO layer 7 in-depth protocol analysis, content
control, and URL filtering to effectively verify or block security threats, including
buffer overflow, Trojan horses, worms, spyware, DDoS attacks, IP fragment attacks,
and browser attacks; provides the packet analysis function and the virus scanning and cleaning function. When an attack is detected, the IPS records the source IP address of the attack, the attack type, the attack target, and the attack time, and reports an alarm if a critical intrusion event occurs.
− Loophole attack prevention
Provides loophole attack protection and prevents loophole attacks in real time, backed by millions of attack signatures.
− Congestion-free transmission of key data
Provides the bandwidth management function; differentiates data services of different levels and prepares corresponding bandwidth policies to ensure normal communication between key services in the case of network congestion.
Transmission security
Data center user data may be interrupted, copied, tampered with, intercepted, or monitored during transmission. Therefore, data integrity, confidentiality, and validity must be ensured during transmission.
Data transmission security in the data center must be ensured from the following
perspectives:
− SSL encryption between the trusted zone and the non-trusted zone on the
management plane
− HTTPS access for user management and SSL VPN for higher secure access.
− SSL VPN for the access of O&M personnel
− SSH for user access to VMs
− IPSec VPN for data transmission in enterprise branches or the headquarters
If hackers spread viruses in the data center network, the whole data center network cannot operate properly. Spreading viruses occupy large amounts of bandwidth and launch DDoS attacks against key service hosts, causing a sharp decline in system performance.
The data center virus protection must be designed from a comprehensive perspective, taking
into consideration any links that are vulnerable to virus. The data center devices must be
centrally managed to prevent missing any virus intrusion point.
An agent must be installed on each host to be protected, as shown in the yellow areas in Figure 10-6. These agents are centrally managed by the AV server deployed in the operation management area. They provide comprehensive antivirus protection for Windows-, Linux-, or UNIX-based servers based on the antivirus requirements of the data center, ensuring information security of key service servers and LANs and preventing virus attacks.
The following functions are supported:
2. Remote management
Remote management includes remote installation, remote update, and remote uninstallation,
update of virus pattern files, download of the scan engine and correction procedure, virus
scanning and removal, installation and setting, real-time virus alarming, virus event record
and report, and real-time scanning.
3. Virus pattern update
The virus scanner can function only after the latest antivirus components are updated. The latest virus pattern file and scan engine, which can be updated automatically, are delivered to each server. The intelligent incremental update mode is used when a new virus pattern is released; that is, the server downloads only the newly added virus patterns. This efficient update mode reduces the download time and network bandwidth consumption.
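The incremental mode described above can be sketched as a set difference (the pattern IDs are hypothetical):

```python
def incremental_update(local_patterns, latest_patterns):
    """Return only the newly added patterns, mimicking a delta download
    instead of fetching the full pattern file."""
    return sorted(set(latest_patterns) - set(local_patterns))

# Server already holds p1 and p2; only p3 and p4 are downloaded.
print(incremental_update({"p1", "p2"}, {"p1", "p2", "p3", "p4"}))  # ['p3', 'p4']
```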
Security group rules take effect automatically upon the start of the VM and remain
unchanged when the VM migrates to another host. Users only need to set the rules
without considering on which host the VM runs.
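A minimal sketch of this behavior, with hypothetical class and rule names (the platform's actual mechanism is not specified here):

```python
# Security group rules attach to the VM, not the host, so they are unchanged
# when the VM migrates to another host.
class SecurityGroup:
    def __init__(self, rules):
        self.rules = set(rules)  # permitted ports

class VM:
    def __init__(self, name, group, host):
        self.name, self.group, self.host = name, group, host

    def migrate(self, new_host):
        self.host = new_host  # rules are untouched by migration

    def allows(self, port):
        return port in self.group.rules

web = SecurityGroup(rules={80, 443})
vm = VM("vm-01", web, host="host-a")
vm.migrate("host-b")
print(vm.allows(443), vm.allows(22))  # True False
```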
VM protection
The client OSs running on the VMs have the same security risks as physical systems.
Virtualization cannot eliminate these risks. However, the attacks on a single VM only
endanger the security of the VM itself and do not harm the virtualization server that runs
the VM.
The VM antivirus system consists of endpoint protection servers and endpoint protection
clients on virtual servers. The endpoint protection servers control endpoint protection
clients on the network and perform host antivirus, host IPS, the setting and configuration
of host firewall strategies, log collection, and update of virus patterns and scanning
engines.
An antivirus client can be deployed on each running VM to protect the VMs.
VM template security hardening
The template is configured with the security enhanced basic OS image, which is not
equipped with any application programs. The image enables all the newly created VMs
to share the same security level. The template can be used to deploy the VMs. The patch
programs and security tools of the template must be updated in time.
VM management
The virtualization platform can accurately allocate host resources.
The resource management functions, such as share and restriction, can control the server
resources consumed by VMs. Therefore, the attacked VM does not affect the other VMs
running on the same physical host. This mechanism helps prevent DDoS attacks.
Communication management from VMs to the physical host
VMs can write troubleshooting information to log files stored on the cloud platform.
Intentional or unintentional behavior of VM users and processes may abuse this logging
function: a large volume of data written to log files can occupy excessive file system
space on the physical host and exhaust its disk, causing a denial of service that prevents
the host system from running properly. To guard against this, the system can be
configured so that when a log file reaches a certain size, logging rotates to another file
and the oversized log file is deleted.
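The rotation behavior described above can be sketched as follows; the size cap, file count, and naming scheme are illustrative assumptions, not actual platform parameters.

```python
import os

MAX_LOG_SIZE = 10 * 1024 * 1024  # illustrative 10 MB cap per log file
MAX_LOG_FILES = 5                # keep at most five rotated files

def write_log(path, message):
    """Append a message, rotating the file when it exceeds the size cap.

    Rotation bounds total log space, so a VM process flooding the log
    cannot exhaust the host's disk (the DoS scenario described above).
    """
    if os.path.exists(path) and os.path.getsize(path) >= MAX_LOG_SIZE:
        # Shift old logs: vm.log.4 is deleted, vm.log.3 -> vm.log.4, ...
        oldest = f"{path}.{MAX_LOG_FILES - 1}"
        if os.path.exists(oldest):
            os.remove(oldest)
        for i in range(MAX_LOG_FILES - 2, 0, -1):
            src = f"{path}.{i}"
            if os.path.exists(src):
                os.replace(src, f"{path}.{i + 1}")
        os.replace(path, f"{path}.1")
    with open(path, "a") as f:
        f.write(message + "\n")
```

Production systems would typically use an existing facility such as logrotate or a rotating log handler rather than hand-rolled code; the sketch only illustrates the bounding principle.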
Table 10-3 Main security features of the basic and low-cost security solution for the data center
Type | Feature | Remarks | Low-Cost and Basic Security Solution
11 Backup Solution
The eBackup VM backup plan uses Huawei eBackup backup servers, the FusionCompute
snapshot function, and the Changed Block Tracking (CBT) function to back up VM data. By
collaborating with FusionCompute, the eBackup software backs up data of a specified VM or
a VM volume based on the configured backup policies. If a VM becomes faulty or its data is
lost, the VM can be restored using the backup data. The data can be backed up to an external
SAN or NAS storage device.
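The CBT-based approach above can be sketched as follows; the block model and function names are hypothetical illustrations, not the eBackup or FusionCompute API.

```python
def full_backup(disk):
    """Copy every block; this also establishes the CBT baseline."""
    return {i: block for i, block in enumerate(disk)}

def incremental_backup(disk, changed_blocks):
    """Copy only the blocks that change tracking marked since last backup."""
    return {i: disk[i] for i in changed_blocks}

def restore(base, increments):
    """Rebuild the disk: apply incrementals on top of the full backup."""
    image = dict(base)
    for inc in increments:  # oldest first, newest last wins
        image.update(inc)
    return [image[i] for i in sorted(image)]
```

Because only changed blocks are transferred after the first full backup, backup windows and network traffic shrink in proportion to the change rate rather than the disk size.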
The eBackup VM backup plan delivers the following characteristics:
No backup agent needs to be installed on the VM to be backed up.
eBackup supports concurrent backup and restoration. One backup agent supports up to
40 concurrent tasks.
VM disks can be backed up and restored across FusionCompute sites.
The eBackup backup plan employs a distributed architecture that combines backup
servers and backup agents. One backup server manages up to 64 backup agents, and
backup servers can also function as backup agents, so no additional backup agent
servers are required. Both backup servers and backup agents can be centrally managed
using a browser. It is recommended that each backup agent back up data for 200 VMs;
you can add backup agents based on the VM scale. A maximum of 10,000 backup
agents are supported.
The eBackup backup plan delivers high reliability.
− If a backup agent fails, its services are distributed to other backup agents.
− The eBackup backup system supports self-recovery in disaster scenarios, for
example, when the OS, host, or storage is damaged.
The eBackup backup plan supports easy management and maintenance.
− The backup system can be deployed on VMs using templates or on physical servers.
− The eBackup backup system supports centralized backup, restoration, and system
management using the GUI or command-line interface (CLI), which is easy and
straightforward for users to perform operations.
The VM backup plan applies to the following scenarios:
Server consolidation, data center virtualization, FusionCube, and desktop cloud.
Storage resources at the production site are provided by FusionStorage or virtualized
SAN devices, NAS devices, or local disks.
To meet service continuity requirements, the DR modes shown in the following figure are
recommended for different classes of service systems.
The following table lists the detailed classification of major service systems in the public
security industry to meet DR construction needs.
12.2 DR Solution
Based on the overall system design principle, success cases of DR system deployment in
industry xx, and years of accumulated experience, Huawei recommends an overall DR
architecture for the customer, as shown in the following figure:
[Figure: Overall DR architecture. The production center, intra-city DR center, and remote DR center are interconnected over LANs, an IP WAN, and the Internet. Class A, B, and C services run on VMs and physical machines (web, APP/OS, and DB tiers). Between the centers, application-level active-active, application-level active/standby, and data-level active/standby DR modes are used. Storage mirroring between VIS/SAN devices runs over DWDM and an SDH loop.]
12.2.1 Architecture
Huawei proposes an application-level active/standby architecture to meet DR system needs,
achieve the DR goals of the various application systems in XXXX, and ensure service
continuity in the case of large-scale disasters. The overall architecture is shown in the
following figure:
[Figure: Application active/standby DR architecture. The production center and DR center are connected over IP. ① Database-layer DR replicates data between the SAN storage pools at the two centers; heterogeneous servers and storage devices are supported, reducing the RTO and RPO. ② A DR decision-making platform provides unified visual management and control, reducing switchover decision-making time.]
Architecture description
1. Database replication software based on log-based replication technology is used to
implement data synchronization between the production center and the DR center.
2. The DR management platform visually monitors, in real time, the status of the DR system,
the recovery time objective (RTO) and recovery point objective (RPO) indicators, and the
data replication status.
Solution highlights
1. An asymmetrical architecture is supported: heterogeneous storage and servers are
compatible between the production center and the DR center.
2. Second-level RPO and minute-level RTO are achieved.
3. The DR center is standby but also provides query services, achieving a typical Active-Query
DR mode that improves resource utilization. The unified DR monitoring and decision-making
platform greatly reduces decision-making time and O&M costs.
The replication ratio can be as high as 32:1 (counting synchronous and asynchronous
remote replication together).
Primary and secondary storage can mirror each other.
The solution is applicable to local and intra-city data disaster recovery.
3. Networking Architecture
Data consistency during synchronous replication between storage arrays is ensured by
logging. The process is illustrated below.
4. Technical Highlights
The highlights and realization of synchronous replication are as follows:
a) After a synchronous replication relationship is set up between a primary LUN at the
primary site and a secondary LUN at the remote replication site, an initial synchronization is
initiated to replicate all the data from the primary LUN to the secondary LUN.
b) If the primary LUN receives a write request from the production host during the initial
synchronization, the storage system checks the synchronization progress. If the original data
block to be replaced is not synchronized to the secondary LUN, the new data block is written
to the primary LUN and the storage system returns a write success response to the host. Then,
the synchronization task will synchronize the new data block to the secondary LUN. If the
original data block to be replaced has already been synchronized, the new data block must be
written to the primary and secondary LUNs. If the original data block to be replaced is being
synchronized, the storage system waits until the data block is copied. Then, the storage system
writes the new data block to the primary and secondary LUNs.
c) After the initial synchronization is complete, data on the primary LUN and on the
secondary LUN are the same. If the primary LUN receives a write request from the
production host later, the I/O will be processed based on the following steps.
d) The primary LUN receives a write request from a production host and sets the
differential log value to differential for the data block corresponding to the I/O.
e) The data of the write request is written to both the primary and secondary LUNs. When
writing data to the secondary LUN, the primary site sends the data to the secondary site over a
preset link.
f) If data is successfully written to both the primary and secondary LUNs, the
corresponding differential log value is changed to non-differential. Otherwise, the value
remains differential, and the data block will be copied again in the next synchronization.
g) The primary LUN returns a write completion acknowledgement to the production host.
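Steps d) through g) above can be sketched as follows; the LUN and differential-log representation is a hypothetical simplification, not the array's actual data structures.

```python
def sync_write(primary, secondary, diff_log, block_id, data, link_ok=True):
    """Synchronous replication write path (steps d-g above, simplified).

    The differential log marks the block before writing and is cleared only
    after BOTH copies succeed, so a failed remote write leaves the block
    flagged for the next synchronization pass.
    """
    diff_log[block_id] = "differential"        # d) mark the block
    primary[block_id] = data                   # e) write the local copy
    if link_ok:
        secondary[block_id] = data             # e) write the remote copy
        diff_log[block_id] = "non-differential"  # f) both writes succeeded
    # f) otherwise the value stays "differential" for later resync
    return "write complete"                    # g) acknowledge the host
```

The key invariant is that the log entry is only downgraded after both writes succeed, so a link failure can never silently lose a block.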
4. Technical Highlights
The highlights and workflow of asynchronous replication are described below:
a) After an asynchronous remote replication relationship is set up between a primary LUN
at the primary site and a secondary LUN at the secondary site, an initial synchronization is
initiated to replicate all the data from the primary LUN to the secondary LUN.
b) If the primary LUN receives a write request from the production host during the initial
synchronization, data is written only to the primary LUN.
c) After the initial synchronization, the status of the secondary LUN is synchronized or
consistent. (If the host sends no write request during the initial synchronization, the status of
the secondary LUN is synchronized; otherwise, the status is consistent). Then, I/Os are
processed according to the following steps.
d) The primary LUN receives a write request from a production host.
e) After data is written to the primary LUN, a write completion response is immediately
returned to the host.
f) Incremental data is automatically synchronized from the primary LUN to the secondary
LUN based on the user-defined synchronization period that ranges from 1 to 1440 minutes. (If
the synchronization type is Manual, users need to trigger the synchronization manually.)
Before synchronization starts, a snapshot is generated for each of the primary LUN and the
secondary LUN. The snapshot of the primary LUN ensures that the data read from the
primary LUN during the synchronization remains unchanged. The snapshot of the secondary
LUN backs up the secondary LUN's data in case that the data becomes unavailable when an
exception occurs during the synchronization.
g) During the synchronization, data is read from the snapshot of the primary LUN and
copied to the secondary LUN.
h) After the synchronization is complete, the snapshots of the primary LUN and the
secondary LUN are canceled, and the next synchronization period starts.
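One asynchronous synchronization period (steps f) through h) above) can be sketched as follows; the dict-based LUN model and deep-copy "snapshots" are hypothetical simplifications of the array's copy-on-write snapshots.

```python
import copy

def async_sync_cycle(primary, secondary, dirty_blocks):
    """One asynchronous replication period (steps f-h above, simplified).

    Snapshots freeze both LUNs: the primary snapshot gives a stable read
    source during the copy, and the secondary snapshot is a fallback if
    the copy is interrupted mid-way.
    """
    primary_snap = copy.deepcopy(primary)      # stable source for the copy
    secondary_snap = copy.deepcopy(secondary)  # rollback point on failure
    try:
        for block_id in dirty_blocks:
            # g) data is read from the primary snapshot, not the live LUN
            secondary[block_id] = primary_snap[block_id]
    except Exception:
        secondary.clear()
        secondary.update(secondary_snap)       # restore a consistent state
        raise
    dirty_blocks.clear()                       # h) cycle done, snapshots dropped
```

Reading from the snapshot rather than the live primary LUN is what lets the host keep writing during the copy without producing a torn, inconsistent secondary image.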
Data Guard, shown in the figure, is one of the multiple integrated high availability (HA)
features of the Oracle database. It ensures business continuity by minimizing the impact of
planned and unplanned downtime.
• In addition to data protection and availability, Data Guard standby databases deliver a
high return on investment by supporting ad-hoc queries, reporting, backups, or test activities
while in the standby role. Specifically:
• The Active Data Guard option (Oracle Database 11g) enables a physical standby database
to be used for read-only applications while simultaneously receiving updates from the primary
database. Queries executed on an active standby database return up-to-date results.
• Snapshot Standby enables a physical standby database to be opened read-write for testing
or any activity that requires a read-write replica of production data. A Snapshot Standby
continues to receive, but not apply, updates generated by the primary. These updates are
applied to the standby database automatically when the Snapshot Standby is converted back
to a physical standby database. Primary data is protected at all times.
• A logical standby database has the additional flexibility of being open read-write. While
data that is being maintained by SQL Apply cannot be modified, additional local tables can be
added to the database, and local index structures can be created to optimize reporting, or to
utilize the standby database as a data warehouse, or to transform information used to load data
marts.
• A physical standby database, because it is an exact replica of the primary database, can
also be used to offload the overhead of performing backups from the primary database.
A Data Guard configuration includes a production database, referred to as the primary
database, and up to 30 standby databases. Primary and standby databases connect over
TCP/IP using Oracle Net Services. There are no restrictions on where the databases are
located provided that they can communicate with each other. A standby database is initially
created from a backup copy of the primary database. Data Guard automatically synchronizes
the primary database and all of its standby databases by transmitting primary database redo
(the information used by Oracle to recover transactions) and applying it to the standby
database.
Protection mode: Maximum Availability
Risk of data loss: Zero data loss; protects against a single failure
Redo transport: SYNC
Behavior when no acknowledgement is received: Stall the primary database until acknowledgement is received or the NET_TIMEOUT threshold period expires, and then resume processing.
Avoids data loss and downtime when the production site is unavailable.
Supports a maximum of 30 standby databases for one primary database.
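The Maximum Availability stall-then-resume behavior above can be sketched as follows; the timeout value and function shape are illustrative assumptions, not Oracle's implementation (NET_TIMEOUT is a configurable redo transport attribute).

```python
import time

NET_TIMEOUT = 1.0  # illustrative seconds; the real attribute is configurable

def commit_with_max_availability(standby_ack):
    """Maximum Availability commit path, simplified.

    The primary stalls the commit until the standby acknowledges the redo
    or the NET_TIMEOUT period expires; on timeout it resumes processing
    locally rather than halting, trading zero data loss for availability.
    """
    deadline = time.monotonic() + NET_TIMEOUT
    while time.monotonic() < deadline:
        if standby_ack():              # True once the redo is acknowledged
            return "committed, standby synchronized"
        time.sleep(0.01)
    # NET_TIMEOUT expired: commit locally, standby falls out of sync
    return "committed locally, standby unsynchronized"
```

This is the contrast with Maximum Protection (which would halt the primary rather than commit unacknowledged) and Maximum Performance (which never waits for the acknowledgement at all).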
Huawei headquarters provides 24/7 O&M management support for service support
teams around the world, ensuring timely responses to user requests, rapid problem
resolution, and stable and reliable service provisioning.
After the Huawei data center solution is delivered, various value-added services, such
as health check tools, are provided to ensure stable and efficient running of user data
centers.
Based on the NDC2 solution, the data center resource plan is listed in the following table.
To meet the requirements of small, medium, and large application scenarios, three public
cloud data center resource plans are available.
In the small application scenario:
Number of servers: 10
Deployment of 30,000 HD cameras, 150 base stations, 40 modular data centers, and 7,000
LTE portable terminals;
Incident-taking and dispatching systems, a comprehensive dispatch system, and an integrated
intelligent analysis system;
Establishment of level-5 national security and intelligence networks, greatly improving
national intelligence information sharing;
Greatly improved citizen satisfaction with the public security environment;
The intelligent video surveillance system gradually replaces manual operation, greatly
reducing labor costs.
Networking platforms are established in 1 council, 7 county branches, and 42 police
stations;
Deployment of 16,000 cameras and reuse of 2,000 cameras, with video-aided investigation
enabling intelligence analysis;
A visual integrated emergency command and scheduling system;
Three-level monitoring networking, achieving resource sharing;
Improved efficiency of public security work in prevention, control, and crime fighting;
Support for existing surveillance equipment from multiple vendors, reducing investment by
roughly 20%.
Huawei Solution
Offer hospital information systems, iPACS, and other digital hospital systems.
Offer customized office automation (OA) and email systems for the government,
universities, and hospitals.
Offer a government data center for the Angolan government to provide hosting capabilities
for e-government applications.
Offer information security systems for the government and hospitals.
Offer VoIP and video conference systems.
Set up the government-specific network.
Customer Benefits
Improve the informatization level of hospitals in Angola. With the digital hospital
systems, hospitals operate more effectively and the people of Angola receive better
medical services.
Integrate health care resources to promote resource exchange and cooperation
between hospitals.
With the OA and email systems, the government, universities, and hospitals work more
efficiently, and office tasks are handled more quickly.
Customer Benefits
The system establishes a provincial telemedicine platform centered on the First
Affiliated Hospital of Zhengzhou University, maximizing the sharing of expert resources
and improving the uneven distribution of medical resources in Henan Province. It also
greatly enhances the status and influence of the First Affiliated Hospital of Zhengzhou
University in the Chinese medical profession.
• Extra cost: A poor power grid environment and the need for UPS protection for each PC
incur high extra costs.
Solution
[Figure: Solution architecture. e-Education applications: school information management systems, a web-based digital library, and an education cloud for schools. Infrastructure: modular data center, IP networking, servers, storage, and security. Terminals: thin clients, cameras, plasma displays, PCs, smart phones, tablets, and IP phones. All layers are supported by Huawei E2E products and services.]
16 Appendix
B
BIOS basic input/output system
BMC baseboard management controller
BPS bit per second
C
CA Certificate Authority
CAS central authentication service
CIM common information model
CMDB configuration management database
CPU central processing unit
D
DDoS distributed denial of service
DMZ demilitarized zone
DNET destination network address translation
DNS domain name system
E
EJB enterprise JavaBean
H
HA high availability
HMC hardware management console
HTML Hypertext Markup Language
HTTP Hypertext Transfer Protocol
HTTPS Hypertext Transfer Protocol Secure
I
IDS intrusion detection system
Internet internetwork
IP Internet Protocol
IPMI Intelligent Platform Management Interface
IPS intrusion prevention system
IPsec Internet Protocol Security
ISO International Organization for Standardization
IT information technology
ITIL information technology infrastructure library
ITSM IT service management
J
JDBC Java database connectivity
JMS Java message service
JMX Java management extensions
JSP Java server pages
JTA Java Transaction API
JVM Java virtual machine
L
LAN local area network
LDAP Lightweight Directory Access Protocol
N
NAS network attached storage
NAT Network Address Translation
NetBIOS network basic input/output system
NPS network policy server
NTP Network Time Protocol
O
OA office automation
Orchestrator orchestrator
OS operating system
R
RADIUS Remote Authentication Dial In User Service
RAM random access memory
REST Representational State Transfer
S
SAML Security Assertion Markup Language
SAN storage area network
SLA service level agreement
SLO service level objectives
SMI-S storage management initiative specification
SNET source network address translation
T
TCO total cost of ownership
TCP Transmission Control Protocol
TLS Transport Layer Security
Topo topology
U
UDP User Datagram Protocol
UI user interface
UMA unified maintenance and audit
URL uniform resource locator
V
VDC Virtual Data Center
VEM VM encryption management
VES VM encryption system
VLAN virtual local area network
VM virtual machine
VPC Virtual Private Cloud
VPN virtual private network
W
WBEM Web-based enterprise management
WMI Windows management instrumentation
X
XML Extensible Markup Language