
Editorial Guide

Evolving Metro/
Access Networks
The demands placed on metro and access
networks have created new requirements
that must be accommodated. These
articles will help network planners create
a winning strategy that will keep their
networks ahead of the game.

sponsored by:

Contents:
:: Monitoring fiber access links
:: Choosing an optimal FTTH architecture
:: Optimizing and monetizing data-center metro networks

Reprinted with revisions to format from Lightwave. Copyright 2014 by PennWell Corporation
Originally published July 1, 2014

Monitoring fiber access links

Access link monitoring can bring new reliability to optical access networks, unlocking a variety of new services.

By ULRICH KOHN

Access has long loomed as optical fiber's last, great, unconquered frontier. But it's now clear that fiber's role in access networks in multiple parts of the world is growing more pervasive every day. Long undersubscribed, optical access infrastructure is today in prime demand.

As reliance on fiber for access to radio towers and customer locations has
intensified, service interruptions associated with optical access have become
more consequential and costly. In turn, the pressure to protect access fiber and
the highly valued services traversing it has begun
to climb. Access link monitoring (ALM) is a timely
new concept for the task, and it figures to pave
the way for even wider-scale adoption of optics in
the access in ways that meet stringent cost and
scalability requirements.

Shift toward fiber for access


There can be no more denying that copper fails
to meet the complex and growing bandwidth and
service demands of business and residential users
globally. Similarly, mobile operators have come to
understand that the bandwidth requirements of
their base stations can’t be dependably and cost-
efficiently met by microwave alone, which demands
frequent upgrades for any higher capacity needs.

FIGURE 1. Access link monitoring application scenarios: fiber break detection (ALM at the central office terminal monitoring the link to CPE at an enterprise site) and fiber integrity monitoring (ALM at the central office terminal monitoring the link to a radio base station).

That leaves massively scalable and reliable fiber; as a result, network operators
have rolled out optical fiber to an ever-increasing share of network endpoints.
Residential users enjoy the ability to download high definition movies in
less than a minute and full TV shows or dozens of songs in mere seconds.
Businesses value the ability to back up their mission-critical data to the cloud
and across multiple locations and use video conferencing more extensively.
Many business managers now also wonder what new business models and
innovations might be unlocked by wider-scale access at significantly higher
speeds. Furthermore, fiber installation frequently has become part of the infrastructure projects of utilities and municipalities.

This trend toward optics in access is evident globally.

Google, for example, appears to have designs on evolving into a full-blown telecommunications operator in the United States – or at least on bringing fiber-based services for broadband access to enough U.S. markets that other network operators feel competitive pressure to introduce cost-effective, higher-speed access to more Internet users, too. AT&T, meanwhile, has announced a major initiative to expand its ultra-fast fiber network to up to 100 candidate cities and municipalities across the United States.

In the United Kingdom, BT has said that its open fiber network now passes about two-thirds of the homes and businesses in the country, lending credence to the government's goal of ensuring high-speed (or "superfast") access for 95% of the U.K. by 2017.


Among the 34 developed countries in the Organization for Economic Co-operation and Development (OECD), fiber was reported to account for 15.75% of fixed broadband subscriptions as of June 2013. "Two-digit annual growth in fiber was sustained thanks to increases in large OECD economies with low penetration levels such as France (32% in 6 months), Spain (34%), Turkey (33%), and the United Kingdom (47%)," read a January 9, 2014 report from the OECD. "Japan and Korea remain the OECD leaders, with fiber making up 68.45% and 62.76% of fixed broadband connections."

On a variety of fronts, signs indicate an intensifying trend toward more and more
sites worldwide becoming connected via optical fiber.

Dark fiber services challenges


In the business services context of fiber’s expansion, there are different flavors of
connectivity services:
:: Managed bandwidth such as Ethernet private lines or Layer 2/Layer 3 virtual private networks (VPNs).
:: Leased lines, including leased wavelengths or optical channel data units (ODUs).
:: Simple dark or dim fiber.
While network operators are typically more interested in offering higher-value
services such as VPNs, there are a number of scenarios more favorably addressed
with dark fiber.

One example is the mobile operator that seeks the cost advantages of a centralized radio access network (C-RAN) architecture. In a C-RAN deployment, centrally pooled baseband units are connected to lightweight, simplified remote radio heads at the antenna site, so complicated processing is concentrated more cost-effectively within the network. The Common Public Radio Interface (CPRI) is used to connect the pooling site and the antenna site. CPRI and the digitized radio signal it carries are both delay- and jitter-sensitive – and therefore best transported over transparent wavelengths or direct fiber connections. Hence, the mobile operator is interested in leasing dark fiber from a fiber provider.
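To see why transparent wavelengths or dark fiber suit this fronthaul application, it helps to put rough numbers on the delay budget. The short sketch below is purely illustrative: the 100 µs one-way budget and the equipment latencies are invented example values rather than CPRI specification figures; only the roughly 5 µs/km fiber propagation delay is a standard rule of thumb.

```python
# Illustrative fronthaul reach estimate. The delay budget and equipment
# latencies are assumed example values; only the ~5 us/km fiber propagation
# delay is a standard rule of thumb.
FIBER_DELAY_US_PER_KM = 5.0

def max_fronthaul_km(one_way_budget_us: float, added_latency_us: float) -> float:
    """Distance at which fiber propagation delay exhausts the remaining budget."""
    remaining_us = max(one_way_budget_us - added_latency_us, 0.0)
    return remaining_us / FIBER_DELAY_US_PER_KM

# Dark fiber or a transparent wavelength adds almost no equipment latency.
print(f"Dark fiber / wavelength: ~{max_fronthaul_km(100.0, 2.0):.0f} km")

# A hypothetical packet-switched path adding queuing/processing delay.
print(f"Switched transport:      ~{max_fronthaul_km(100.0, 40.0):.0f} km")
```

The point of the comparison is simply that any added latency or jitter eats directly into reach, which is why transparent transport is preferred for this traffic.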


Another scenario that tends to favor dark fiber connectivity involves infrastructure companies like utilities or municipalities that might own fiber but frequently do not want to invest in a communications network. Offering dark fiber services is an obvious business opportunity that enables the utilities or municipalities to create business value from their fiber assets. Within customer organizations, there is frequently a split between the unit responsible for operating the fiber infrastructure and the unit responsible for putting traffic onto the fiber. However, each unit is keen on managing and monitoring the service that it provides to the other.

Unfortunately, the fiber infrastructure operator often has no means to assure the availability of the purely passive service. On the other hand, the fiber operator could create more value by leasing a monitored fiber service if real-time information on the integrity of its fiber could be provided, enabling faster localization of failures and resolving questions about responsibility for failures.

New concept for today’s requirements


ALM is a novel approach to optical connectivity assurance for fiber-based access and dark fiber services that operates completely separately from the transmission system using the fiber. While fiber monitoring is frequently done reactively by field services staff using optical time domain reflectometers (OTDRs), ALM extends this methodology and optimizes it for low-cost, high-volume applications in access networks. Service providers benefit from continuous out-of-band and in-service monitoring of their increasingly important access and dark fiber links.

No matter how many customer services are multiplexed across the network, service providers can use ALM to maintain a steady assessment of the health of their optical access infrastructures. The impact for service providers is meaningful: faster repair cycles with no interference to other customers' traffic, which translates into improved service quality and, hence, higher revenue from their dark fiber offering.

Independent of services, equipment, data rates, data formats, and protocols, ALM requires a processor module for multiple fibers in the central office (CO) or headend, one passive coupler per access fiber in the CO/headend, and one wavelength-selective optical reflector per access fiber in the customer premises equipment (CPE) or far end. A low-cost OTDR with passive, transparent optical demarcation enables ALM to monitor multiple in-service fibers.

This monitoring of the fiber enables early recognition of the fiber stress that affects fiber attenuation and frequently leads to fiber breaks. Service providers can thus initiate preemptive counteraction before services are disrupted (see Figure 1).

The integrity of the fiber is monitored by measuring the reflection of an optical signal that's coupled into the access fiber (see Figure 2). The optical power of the reflection from the point of demarcation is monitored, and the intensity of the reflected demarcation peak provides an estimate of the insertion loss of the link. A reduction in the demarcation reflection peak can indicate a fiber cut or increased insertion loss; measuring the Rayleigh scattering can detect the location of a potential fiber cut. In this way, ALM delivers immediate failure identification and localization through optical layer demarcation and simplified root cause analysis, thereby fueling faster repair cycles and improved fiber availability.
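As a rough, hypothetical illustration of the measurement principle just described, the sketch below shows how a monitoring processor might evaluate a periodic OTDR-style trace: it tracks the demarcation reflection peak to estimate link insertion loss and, when that peak drops, searches the Rayleigh backscatter for the abrupt step that locates a probable break. The `Trace` structure, the thresholds, and all numeric values are assumptions for illustration; they are not taken from any vendor's ALM implementation.

```python
# Illustrative sketch only: evaluating a simplified OTDR-style trace for
# ALM-like monitoring. The data model and thresholds are assumptions.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Trace:
    distances_km: List[float]   # sample positions along the fiber
    power_db: List[float]       # reflected/backscattered power per sample
    demarcation_km: float       # known position of the far-end reflector

def demarcation_peak_db(trace: Trace, window_km: float = 0.05) -> float:
    """Strongest reflection measured near the demarcation point."""
    samples = [p for d, p in zip(trace.distances_km, trace.power_db)
               if abs(d - trace.demarcation_km) <= window_km]
    return max(samples) if samples else float("-inf")

def locate_break_km(trace: Trace, drop_db: float = 6.0) -> Optional[float]:
    """First abrupt drop in backscatter, taken as a likely fiber cut."""
    for i in range(1, len(trace.power_db)):
        if trace.power_db[i - 1] - trace.power_db[i] >= drop_db:
            return trace.distances_km[i]
    return None

def assess(baseline: Trace, current: Trace, loss_alarm_db: float = 3.0) -> str:
    """Compare the demarcation peak against a recorded baseline trace."""
    delta = demarcation_peak_db(baseline) - demarcation_peak_db(current)
    if delta < loss_alarm_db:
        return "OK: demarcation peak within the expected range"
    cut_at = locate_break_km(current)
    if cut_at is not None:
        return f"ALARM: probable fiber cut near {cut_at:.2f} km"
    return f"WARNING: insertion loss increased by roughly {delta:.1f} dB"
```

In a real deployment, the thresholds would be calibrated per link against a baseline trace recorded at turn-up.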

Just in time
ALM provides an efficient, unobtrusive approach that allows network operators to create more value from their fiber assets. Completely independent of the applications running over the fiber – which might be 100G Ethernet traffic, special interfaces for interconnecting data centers, or even analog optical cable TV systems – ALM delivers out-of-band, in-service, real-time monitoring, fault detection, and fault localization optimized for fiber-based access and dark fiber services.

FIGURE 2. ALM architecture, point-to-point configuration: a probe reflector at the CPE device, the access fiber link, and the probe in/out at the CO device; the measurement principle plots reflectance versus distance.

Optical offers the clear, futureproofing answer to the ever-escalating bandwidth demands now placed upon access infrastructures. With fiber access rising rapidly around the world, ALM is a timely addition to network operators' arsenal of capabilities.

ULRICH KOHN is director of technical marketing at ADVA Optical Networking.

Optical Time Domain Reflectometers MT9090A
Since 1895

FTTx/PON Testing Simplified


Portable. Powerful. Proven.
Anritsu's handheld Network Master™ MT9090A µOTDR features a portable design that tests through a 1 x 64 split completely from end to end, with results shown on a high-resolution, color 4.3-inch widescreen display.

Fiber Visualizer, our one-button fault-locate function, now comes standard and will make your life easier by automatically selecting test parameters and providing a summary within seconds.

FREE Fiber Optic Testing Simplified Application Note. www.goanritsu.com/LWG9090

1-800-ANRITSU
www.anritsu.com
© 2014 Anritsu Company



Originally published May 1, 2014

Choosing an optimal
FTTH architecture

Making the right choice requires weighing the benefits of different architectures against the business requirements of your FTTH network.

By ERIK GRONVALL

Fiber-to-the-home (FTTH) deployment is increasing globally, led by the Asia Pacific region, with rapid acceptance in Europe and continuing rollouts in North America. Providers are feeling the pressure like never before to take fiber ever closer to residential and small-business locations. FTTH has emerged as the best option for offering both higher speeds and longer reach – not to mention peace of mind about future network requirements.

But network architects must make dozens of decisions before they break ground
on new deployments and network upgrades to FTTH. These decisions involve
splitter locations, connectorization methods, future upgradability, long-term
maintenance, and cost (first cost, total cost, operating cost, etc.). Making the right
choices is critical to ensure the new infrastructure design aligns as closely as
possible with business expectations now and throughout the life of the network.

Although some parameters may overlap during the network planning process, important areas that ultimately drive architecture decisions include geographical location, business case, pre-deployment considerations, and futureproofing. Understanding any unique challenges posed within each of these areas – such as population densities, required take rates, advantages and disadvantages of connectorization options, or ease of migration to next-generation technologies – will help service providers choose an optimal FTTH architecture.


FIGURE 1. The use of hardened connectors in the outside plant is an important consideration when temperatures vary and harsh weather conditions may exist.

Once the service provider is clear on where the network is going and what it needs to do, an informed architecture decision is possible. There are many architectures to choose from – centralized using closures or a fiber distribution hub, cascaded with or without closures, a fiber indexing model, a fiber reuse model, and any number of hybrid approaches. The correct option might be any one of these architectures or a combination of several. Weighing the benefits, drawbacks, and tradeoffs associated with each FTTH architecture will put service providers on the path to a proper balance of capital and operational cost, time, flexibility, reconfigurability, and overall performance.

Geographical, customer landscape


One obvious consideration is the geographical area the network will serve, particularly for a new deployment. Will the network primarily serve business customers, multidwelling units (MDUs), or single-family homes? Taking fiber to a rural area versus a densely populated environment will present different architectural challenges. Expected take rates must also be factored into the density equation, including any expected future development to the area.

The bandwidth requirements will also vary. In a situation that has both residential and business customers, for example, requirements may vary greatly in terms of availability and peak usage periods. Existing service-level agreements must also be honored for large businesses and institutions like hospitals, schools, government entities, or other large-scale users. Dark fiber applications such as fixed wireless, wireless LAN or WiMAX, mobile networks, and key security monitoring devices may be required to serve all the customer needs in one area. The FTTH network may need to connect to some or all of these types of applications.

The physical environment should also be considered for the outside plant portion of the network. For example, in an area where outside temperatures vary considerably, the use of hardened connectors or other hardened products may be required to provide protection from harsh weather (see Figure 1). Is the area prone to flooding, high wind, or other climatic events that may require everything to be in the ground or high above the ground?

In the case of a brownfield or overbuild scenario, much of this customer landscape information will already be available. But in these cases, a re-evaluation with an eye to the future will still help determine the type of architecture upgrade that'll meet both current and future FTTH network requirements.

Meeting business-case expectations


The business case is where the balance between capital expenditures (capex) and operational expenditures (opex) is determined to reach the desired return on investment (ROI). In general, the infrastructure layer of the network has a useful life in the 10–20-year range. Spending more on capex typically reduces opex over time. But there are important issues that determine the overall cost, including speed of installation, ROI expectations, ease of maintenance and, in the case of overbuilds, any reuse of existing infrastructure.

A decision to spend less on capex to achieve a faster ROI, for instance, may be a good decision if the network's life expectancy is relatively short or if it will likely require major changes in the near future. On the other hand, when longevity of the network is a primary concern, spending more on capex may yield a longer ROI but save significant opex over time. In either case, this decision will have a major impact on the architecture choice.
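To make the capex/opex tradeoff concrete, a planner can compare the cumulative cost of two candidate designs over the expected life of the infrastructure layer. The sketch below only illustrates that arithmetic; the cost figures, the 15-year horizon, and the break-even logic are invented assumptions, not data from the article.

```python
# Illustrative capex-versus-opex comparison over an FTTH build's life.
# All cost figures and the planning horizon are placeholder assumptions.

def cumulative_cost(capex: float, annual_opex: float, years: int) -> float:
    """Simple total cost of ownership, ignoring discounting."""
    return capex + annual_opex * years

# Option A: lower first cost, higher recurring maintenance cost.
option_a = {"capex": 1_000_000, "annual_opex": 120_000}
# Option B: higher first cost (e.g., connectorized, hardened plant), lower opex.
option_b = {"capex": 1_400_000, "annual_opex": 70_000}

for year in range(1, 16):  # assumed 15-year planning horizon
    a = cumulative_cost(option_a["capex"], option_a["annual_opex"], year)
    b = cumulative_cost(option_b["capex"], option_b["annual_opex"], year)
    if b <= a:
        print(f"Option B reaches cost parity with Option A in year {year}")
        break
```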

There are also many factors that will influence how quickly the network can be deployed or how long an overbuild or upgrade will take to complete. For example, a completely connectorized network will be in place much faster than an all-spliced network due to the latter's specialized labor requirements and the time required for each splice. This decision also should factor in the availability of skilled labor – not just in the area of splicing but in general. The speed and scale of broadband growth is putting a strain on the availability of skilled and qualified workers to install networks and turn up services.


Another factor to consider is securing rights-of-way for placing cables or enclosures. As previously mentioned, some areas may need everything underground, including terminals and other enclosures, while some situations may call for aerial cables. Aesthetics are a huge concern in new developments and many MDUs and should not be ignored.

Finally, it's important to ascertain what, if any, current infrastructure can be reused in an upgrade or overbuild. If some fiber already exists, will it meet all the current requirements (e.g., low water peak, bend insensitivity)? Will any current active equipment still be usable or will it need replacing?

Deployment considerations
As service providers get closer to making an architectural choice, it's time to consider issues likely to be encountered during the actual deployment. For instance, if the network deployment is on a short timeline, then it may be beneficial to put emphasis on time-saving methods such as a connectorized architecture that requires less skilled labor. But if time isn't a pressing concern, then acquiring the skilled technicians to splice the network together or using a combination of splices and connectors may provide more benefits. The level of capex must also be considered in the splicing-versus-connectors choice.

Where is the labor coming from? Some service providers may have their own labor while others rely solely on contractors. Again, that makes a difference in cost, and skilled labor is not always readily available. However, trained labor remains an important requirement, particularly for ensuring proper installation and handling of optical fiber. Even with more resilient and robust reduced-bend-radius varieties, optical fiber is still glass and requires proper care (see Figure 2).

FIGURE 2. The need and availability of skilled labor must be factored into the decision to splice or connectorize the FTTH distribution network.

Once the decisions regarding labor are made, picking a vendor partner for the project is another important step. Since FTTH is a relatively new venture for some service providers and municipalities, researching the experience and technology of different vendors of active and passive equipment could make the difference in meeting business-case goals.

Every FTTH deployment plan should consider the ease of migrating to the next generation of technologies. Passive-optical-network (PON) technology must be upgradable to whatever is expected to emerge as the next generation, such as NGPON or WDM-PON (see Figure 3). The network should also be easily accessible for implementing these upgrades as well as future maintenance, changes, and reconfigurations.

FIGURE 3. New FTTH technologies are always on the horizon, such as DWDM-PON. So it's always important to consider how easily and cost-efficiently the network components chosen today will migrate to tomorrow's networks.

In the case of brownfield or overbuild projects on existing infrastructures where there are already active customers, what effect will the project have on current services? Taking customers out of service is undesirable for any service provider and could result in losing customers to competitors. Therefore, an approach that minimizes or eliminates possible outages is highly advantageous.

Ready to choose
Once the "what, where, why, and who" questions have been addressed, it's time to get down to the "how" regarding which type of architecture will deliver on the expectations for the FTTH deployment. Since each deployment has its own set of challenges, expectations, and unique circumstances, a "one size fits all" architecture is impractical.

Thus, there is an array of architectural options that provide benefits under the right set of conditions. The optimal architecture for the service provider will always be the one that's most cost-effective, flexible, and scalable while adapting to the unique challenges of a particular deployment scenario.

ERIK GRONVALL is a business development manager at TE Connectivity.



Wireline and Optical MT1000A
Since 1895

OTN Simplified
All-In-One Field Tester with Full OTN Mapping
The NEW MT1000A Network Master Pro – an all-in-one field tester that will redefine the future of your test platform. The NEW MT1000A includes dual-port testing and multi-stage OTN mapping of client signals at all data rates – bringing your current and legacy network testing requirements into a single, easy-to-use, lightweight, portable device.

FREE OTN Mapping White Papers. www.goanritsu.com/LWG1000A

1-800-ANRITSU
www.anritsu.com
© 2014 Anritsu Company



Originally published March 1, 2014

Optimizing and monetizing data-center metro networks

Software-defined networking and network functions virtualization promise significant progress. The trick is getting there from here.

By JULIUS FRANCIS

Carriers and content service providers are experiencing a tsunami of traffic fueled by ever-increasing consumer and business demands, driven by over-the-top (OTT) applications and the movement of "all things" into the cloud. To support these levels of traffic – along with the new and expanded data centers being deployed globally to accommodate them – network infrastructures have to evolve. That means operators must embrace change – particularly change that improves network optimization, maximizes agility, and increases monetization.

There are multiple financial, operational, and competitive reasons operators must start alleviating the bottlenecks that exist in network infrastructures today. Operational costs are skyrocketing. Complexity is increasing. Revenues are decreasing. In most cases, this "trifecta of trouble" is unfolding because it takes too long to capitalize on new innovations and incorporate them into services and applications that customers want.

Legacy network functions typically are built with fixed, vertically integrated, and proprietary operating systems, software, and hardware. These layers are "locked in" together and restrict innovation. Lifecycles for hardware are short, but product development cycles are longer than for the other layers. Software, operating systems, and hardware need to be decoupled to spur innovation and enable new revenue-generating services to be tested and launched quickly.


FIGURE 1. Forces behind network functions virtualization: innovation (service velocity), monetization (improved opex/capex), and resource optimization across the organization drive the shift from a hardware-centric model – fixed-function proprietary hardware, tight software/hardware coupling, innovation held back by vendor hardware development cycles, and operational complexity – to a software-centric one, with COTS hardware riding the cost/performance curve, software innovation decoupled from hardware development, virtual applications delivered as elastic services, and operational efficiency.

Fortunately, with new open standards for software-defined networking (SDN) and
network functions virtualization (NFV), “untethering” software from hardware
is for the first time a reality. This decoupling enables more elastic network layers
that will be key to deploying new applications and virtualized network functions
more flexibly and rapidly.

That said, existing investments in network infrastructure have to be protected. Performance must be maintained. Network operators can't conduct a network overhaul overnight. There must be a transformational approach to existing operational and organizational models. Moreover, there's some risk involved.

But there's clearly much greater risk in doing nothing. Traditional operators are conservative in adopting new technologies, and many wonder what benefits SDN- and NFV-enabled platforms bring. Currently, operating expenses (opex) contribute a major part of an operator's cost of delivering services; as such, optimization is in focus. Many are evaluating SDN and NFV, however, with an eye toward minimizing disruptions to their networks and the customers who depend on them.

Transformation to virtual-network infrastructures


The transformation of network infrastructures has to include network virtualization, centralized policy-centric application integration, and the simplification of underlying networks. It also must be extended beyond the four walls of the data center – and down the protocol stack to the optical layer. SDN brings virtualization to the networking infrastructure to reduce operational complexity and escalating costs. While initially developed for use cases within the data center, the true value of SDN will be even more fully realized beyond the data center, as it adds automation and programmability to a multivendor network infrastructure layer in an end-to-end manner.

NFV expedites service innovation and velocity and improves economics through open computing, operating systems, and a new network functions environment (see Figure 1). As virtual network functions (VNFs) are deployed in various distributed and centralized NFV environments, SDN will play a key role in enabling service paths between VNFs to chain them together.

Traditional legacy networks have no idea where or when applications are running because the applications and networks are not integrated. Valuable information necessary to deliver more customized and profitable services simply can't be captured. Especially at the data-center interconnect layer, application-aware intelligent networking enables operators to determine optimal traffic flows, optimize performance, improve network utilization, guarantee service and application assurance, secure transport, and support programmable and automated transport to enhance the application experience with superior economics.

Looking at the increasing number of recent infrastructure, control, and orchestration announcements, it's clear that computing, storage, and networks have to be centered on applications. The underlying infrastructure layer – which must be built on an intelligent platform as the foundational building block – is where operators can differentiate. By doing so, they can more meaningfully enhance the experience of the application – and the subscribers – through better performance, lower latency, increased security, higher availability, increased utilization, simplified operations, and even improved economics.

NFV offers operators the opportunity to move away from the forced-fit legacy approach to add application awareness. Combined with a move to commercial off-the-shelf (COTS) hardware and open operating systems, the innovation cycle of software also is separated from hardware. That fuels much faster rates of innovation and new services.

FIGURE 2. A balanced hybrid model – centralize what you can and distribute what you must – is imperative for effective NFV. Distributed VNFs (e.g., network analytics probes, network SLA probes, active network security) are characterized by contextual affinity, low-latency feedback control, and compute-versus-backhaul cost tradeoffs; centralized VNFs (e.g., application optimization, security, long-tail or specialized functions) are passive, with slow feedback control.

NFV promises to enable operators to drive higher margins through better network utilization and innovative service monetization with greater agility. Importantly, this is accomplished while defining and deploying new revenue-generating services in "cloud computing" time.

All or nothing?
So, is the next-generation network pendulum going to swing all the way from distributed to centralized, with all VNFs hosted centrally? Absolutely not. A hybrid network model will prevail – centralizing what you can and distributing what you must (see Figure 2). There are many VNFs that should move to a cloud model, with others remaining distributed to preserve the application centricity of the underlying network layer without compromising the NFV value propositions.

A balanced hybrid model might have VNFs such as forensic analytics, application optimization, application security, and long-tail functions centralized. These are passive VNFs with slow feedback control. Distributed VNFs would include network analytics probes, network service-level-agreement (SLA) probes, active network security, contextual service assurance, and low-latency edge services. Distributed VNFs are characterized by contextual affinity, low-latency feedback controls, and consideration of compute versus backhaul costs.
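As a purely illustrative way to capture this "centralize what you can, distribute what you must" reasoning, the sketch below scores a VNF's placement from the attributes just described. The `Vnf` fields, the latency threshold, and the example functions are assumptions made for illustration, not an actual NFV orchestration interface.

```python
# Illustrative sketch: classifying a VNF as centralized or distributed from
# the attributes described above. Fields and example values are assumptions.
from dataclasses import dataclass

@dataclass
class Vnf:
    name: str
    max_feedback_latency_ms: float  # how quickly the function must react
    contextual_affinity: bool       # needs local subscriber/network context
    backhaul_heavy: bool            # hauling its input centrally is costly

def placement(vnf: Vnf, latency_threshold_ms: float = 50.0) -> str:
    """Distribute only when the function demands it; otherwise centralize."""
    if (vnf.max_feedback_latency_ms < latency_threshold_ms
            or vnf.contextual_affinity
            or vnf.backhaul_heavy):
        return "distributed"
    return "centralized"

examples = [
    Vnf("network SLA probe", 10, True, True),
    Vnf("active network security", 20, True, False),
    Vnf("application optimization", 500, False, False),
    Vnf("long-tail analytics", 5_000, False, False),
]

for v in examples:
    print(f"{v.name}: {placement(v)}")
```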

The optimized NFV platform is a key building block in delivering the network transformation (see Figure 3). It needs to have the following dimensions built in:
:: Contextual affinity, which is critical in some VNFs that require simplicity, relevance, low latency, and domain control. Examples such as network analytics probes, network SLA probes, application-aware steering, and subscriber/tenant contextual service chaining can be hosted as VNFs within intelligent networking platforms.
:: A location-agnostic service plane that hosts VNFs should be decoupled from the control and data planes to put the flexibility and mobility of hosting VNFs directly into the hands of the operator. VNF hosting should be application-centric and not be dictated by the tightly coupled legacy approach of fixed networks.
:: A distributed service plane with service chaining is possible through the location-agnostic dimension, which gives the operator control to seamlessly transition VNFs among the distributed platform, a local point of presence, or cloud data centers based on changing application-centric demands such as scale, experimentation, and control. The ability to chain these service locations should be built into the service plane, and SDN plays a key role in making that possible (see the sketch after this list). The centralized control of the service plane, regardless of where it resides, gives the operator the dynamism and control that applications demand.
:: The COTS and open operating-system value must be preserved to ride the cost and innovation curve of COTS compute and the open-source community, bringing operational simplicity and reduced costs. Bringing the COTS model to server-based hosting is well understood, but preserving the COTS and open operating-system mindset should be architected from the ground up within this new breed of intelligent networking platforms.
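Following on from the placement example above, the snippet below sketches, in the same hypothetical style, how a chain of VNFs hosted at different locations could be represented as an ordered service path whose inter-location hops an SDN controller would program. The data structures and names are invented for illustration and do not correspond to any specific controller API.

```python
# Illustrative sketch: an ordered service chain of VNFs hosted at different
# locations. The structures are assumptions, not a real SDN controller API.
from dataclasses import dataclass
from typing import List

@dataclass
class HostedVnf:
    name: str
    location: str   # e.g., an edge point of presence or a cloud data center

@dataclass
class ServiceChain:
    tenant: str
    vnfs: List[HostedVnf]

    def hops(self) -> List[str]:
        """Inter-location links an SDN controller would need to program."""
        return [f"{a.location} -> {b.location}"
                for a, b in zip(self.vnfs, self.vnfs[1:])
                if a.location != b.location]

chain = ServiceChain(
    tenant="enterprise-42",
    vnfs=[
        HostedVnf("network SLA probe", "edge-pop-1"),
        HostedVnf("active network security", "edge-pop-1"),
        HostedVnf("application optimization", "central-dc-1"),
    ],
)

print(chain.hops())  # ['edge-pop-1 -> central-dc-1']
```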

When multilayer SDN control comes together with NFV and application-aware intelligent networking platforms, the true value of application centricity will be integrated with the network layer. This application centricity needs to be extended outside the walls of the data center to deliver an end-to-end application experience.

FIGURE 3. Features of an optimized NFV platform. Compared with fixed hardware applications – long delivery cycles, high-latency compute interconnect, high compute cost, high-latency feedback control, proprietary compute, low port density, proprietary static operating systems, and barriers to service velocity – an NFV model optimized for distributed functions offers telcos and content providers faster delivery cycles, low-cost COTS compute, low-latency feedback control and service chaining, open virtualized operating systems, greater port density, and high software service velocity.

The pace of transformation


Network operators will introduce changes gradually. The hybrid model, leveraging SDN and NFV, provides a seamless migration path without jeopardizing performance or the integrity of the network. Operators who realize they must respond to the unprecedented levels of traffic traveling across their data centers – likely to increase with demands fueled by bandwidth-intensive video applications, for example – will bring new competitive services to market much faster. That, in turn, will allow them to drive additional revenues.

It can no longer take a year or more to deliver new services for application-hungry consumers and businesses. Large content providers, such as Facebook and Amazon, are investing heavily to build their own networks for optimal control and to keep pace with rapidly changing business demands and requirements. New SDN- and NFV-enabled networking platforms will have the massive density, space, and power savings required for cloud-based services and applications. Importantly, they will also provide network analytics, programmability, and increasingly granular application awareness.

Resistance is futile. Continuing with the status quo of a proprietary, fixed legacy network is akin to the path taken by the telecom companies that refused to react when Internet-based communications changed the field forever. Those companies are no longer a factor in a very vibrant and active marketplace.

Innovation and open environments make the next generation of networks not only feasible but also a reality for operators seeking to more fully capitalize on their networks while optimizing opex with innovative monetization. Vendors can start by building new intelligent networking platforms as building blocks to enable a network fabric that can be monetized.

Network operators can start by enabling "VNF-as-a-service" as the first step toward monetization. If we just focus on driving out the margins of vendors and carriers, we will also drive out innovation and turn cloud networks into a pure utility. Nobody in the communications innovation ecosystem would benefit from that environment.

JULIUS FRANCIS is director of product management at BTI Systems.

Company Description:
Anritsu Company (www.anritsu.com) is the United States subsidiary of
Anritsu Corporation, a global provider of innovative communications test and
measurement solutions for more than 110 years. Anritsu provides solutions for
existing and next-generation wired and wireless communication systems and
operators. Anritsu products include wireless, optical, microwave/RF, and digital
instruments as well as operations support systems for R&D, manufacturing,
installation, and maintenance. Anritsu also provides precision microwave/RF
components, optical devices, and high-speed electrical devices for communication
products and systems.

links:

NEW PRODUCT: MT1000A Network Master Pro

Portable PIM Test Analyzer

MT9083x2 Series with Fiber Visualizer

Network Master MT9090A

LMR Master Land Mobile Radio Modulation Analyzer S412E

