
Tethered Linux CPE for IP Service Delivery

Motivations, Model and Implementation Considerations

Fernando Sánchez
Principal Systems Engineer
PLUMgrid, Inc.
Sunnyvale, CA, USA
fernando@plumgrid.com

David Brazewell
Head of Access and Service Development
Sky Plc.
London, United Kingdom
david.brazewell@bskyb.com

Abstract: This paper presents a new delivery model for IP communication services based on using lightweight Customer Premises Equipment (CPE) that is remotely controlled by virtualized software images running on a provider cloud.

By removing the complex pieces of the service logic from the equipment in the customer premises, and by placing them in a virtualized software instance running inside the Service Provider's datacenter, relevant Operational and Capital Expenditure (OPEX and CAPEX) savings can be obtained.

CAPEX savings are achieved by reducing the complexity of the piece of hardware to be deployed at the customer premises, or by extending the life cycle of devices already deployed. Potential OPEX savings are even larger: the complex logic of the service implementation (the software complexity) is under the more manageable control of the service provider, enabling simpler troubleshooting and software upgrades, and reducing technical support calls and truck rolls.

Hardware development is simplified, as the service complexity is implemented in software under a continuous development model, reducing the time to market for new services whilst maintaining consistency of service regardless of the hardware in the home.

This model is enabled by the latest advances in the networking stack of the Linux kernel (which allow network functions to be implemented as programs running in kernel space), coupled with the use of Software-Defined Networking (SDN) and Network Functions Virtualization (NFV). The combination of these technologies allows controlling the broadband service in the cloud, while maintaining local, on-premises service enforcement, security and address management.

Keywords: CPE, tethered, SDN, Linux, BPF

Sponsors: Alexei Starovoitov, Distinguished Engineer at PLUMgrid Inc., Linux Plumber, and key contributor to the BPF subsystem in the Linux kernel; Pere Monclús, CTO at PLUMgrid Inc.

I. THE IP CPE PROBLEM: LOW PERCEIVED VALUE, HIGH COMPLEXITY

Most fixed line services offered by Communication Service Providers (CSPs) to both the residential and business market segments are transported on top of the IP protocol. These are generally implemented through the installation of Customer Premises Equipment (CPE) devices such as home gateways, cable modems, Digital Subscriber Line (xDSL) routers or Passive Optical Network (xPON) Network Termination Units (NTUs). (Fig. 1)

These devices are standalone, relatively complex, intelligent and somewhat inexpensive black boxes that perform networking functions such as routing, switching, firewalling and local IP Address Management (IPAM, generally in the form of a DHCP server), amongst several others.

As opposed to mobile terminals, these fixed service terminals are generally not perceived by the end customer as highly valued pieces of the service, but rather as service enablers. The customer usually values the quality of the network service itself (the VPN connection, the IPTV service, the internet connectivity), while the CPE is "the box in the corner with blinking lights".

Fig. 1. IP service delivery model with an integrated stack CPE.

A. CPEs are a key enabler for IP services

However, the CPE is a key piece of the service, and its importance is often overlooked. It is the only real bit of the network the customer sees, and it is the presence of the service in the home; yet it is often derided ("turn it off and turn it on again!") when it should be front and center (or functionally discrete if preferred), fully operational and working whenever a customer needs it.

It is also the ideal place to run certain services, as it is the gateway to the many things the customer needs the broadband service for. It should be used for their protection and security, and it is the portal into the home for other Over-the-
top (OTT) delivered services, and many operators demand that it be service aware to ensure the best delivery of those things they provide to the customer.

Additionally, as more complex IP services are introduced (e.g. Corporate Security, Parental Controls or IPTV), more demands are made of this legacy hardware, and in many cases this demand may be beyond the device's capability, requiring a new device or such a change to the existing software that it usually takes many months to develop and distribute an upgrade, likely at great expense to the Operator and with the risk of being restrictive to the launch of a new service.

As an example, up to 2014 the UK market has seen 5 different models of the home hub that British Telecom installs for their customers' broadband service. Sky Broadband has reportedly used 8 different CPE models between 2006 and 2013.

This high-importance/low-perceived-value conundrum has been limiting the profitability of operator-managed networking services ever since broadband services were first introduced.

B. IP CPEs are complex but need to be cost-effective

IP-based communication services are by design decentralized. Any node in an IP network (including the CPE) must intelligently peer with the adjacent network nodes, and process, multiplex, encapsulate and de-encapsulate information across a number of layers in a stack of networking protocols.

Additionally, the need for tight security (derived from the any-to-any connectivity principle of the IP protocol) and private addressing (derived from its nature and addressing limitations) adds complexity to the CPE in the form of features such as IPAM/DHCP, Network Address Translation, stateful firewalling and many others.

On top of that, IP now transports dozens of different services with diverging requirements. Voice, Video, Internet, and even Cellular Communications (through femtocells) are now transported on top of these CPEs. This means that the device must be able to classify each and every one of the packet flows travelling across it, and intelligently provide differentiated treatment to each one of them.

Finally, the CPE must also provide visibility into the services' traffic flows, and must perform metering and sometimes even complex authentication and accounting features on them, so that the Service Provider can properly run a profitable business out of the network.

All these features must be performed or enforced locally at the customer premises, and build up into a complex list of features that translates into an elaborate and large set of logical functions that have to be implemented in large amounts of complex software.

But, on the other hand, the device itself in which these functions are implemented has to be built on consumer-grade, cost-effective and commoditized hardware, so as not to eat up the small margins Operators receive through subscriptions. This leads to failure-prone and loosely maintained devices, unavoidably leading to numerous failures and service disruptions.

Service Providers have been looking for ways to effectively centralize the logic of these services in a location under their control ("move the CPE to the cloud"), and simplify the device located in the customer premises.

This would lead to a model where the service intelligence is hosted and controlled in the network, while the CPE equipment becomes passive and simple. It could be an enabler for these devices to be looked at in the future in the same way we view old-style terminal devices today.

C. Moving the CPE to the cloud

This paper suggests a way to solve this issue by taking advantage of modern efficiencies in software-defined networking and cloud-based computing. The idea is to shrink the device, making it even smaller and cheaper, and also to remove the software complexity from the consumer-grade piece of hardware sitting in the customer premises (far from the Service Provider's control), putting it in a virtualized container that can be hosted in carrier-class, highly-available Service Provider premises.

This model allows for remote control of a service implementation point that sits in the customer premises. The CPE becomes a passive device that is "tethered" to a virtualized software image running in the Service Provider's datacenter. (Fig. 2)

Fig. 2. High-level view of the Tethered CPE Service Delivery architecture.

This new model provides a win for cost and service performance, now and into the future. It enables rapid change via agile development on services hosted in a centralized fashion, allowing for upgrades and adoption of new standards as they come out.

Also, the advantages of reducing the complexity of customer premises equipment are numerous:

- Dramatically reduced cost/economies of scale

- Lower failure rate

- Lower dependency on end-customer interaction (power, location, maintenance)

- Software is easier to customize and can be provided by a different vendor than hardware
(software innovators with hardware manufacturers)

- Software bugs can be easily fixed if the software runs in the telco premises (Central Office or Datacenter)

- Reduced code base in the device, meaning minimal changes required over time, reducing bugs and extending the lifetime of the code base in the router.

By keeping the service implementation complexity in their centralized location, operators can have tight control of the service and perform any maintenance required without the need of making any changes in the customer premises, which would otherwise mean logistic problems in the form of technician visits or device swap-outs.

Crucially, this model also shows how Operators can extend the reach of their IP services to those customers with legacy CPE hardware who may otherwise have required an upgrade or been exempt.

II. CHALLENGES IN THE CURRENT CPE MODEL

A. Challenges along the entire CPE lifecycle

Large numbers of CPE (sometimes in the order of magnitude of millions) require features to be locked down 12 months in advance of being required. Large-scale or iterative changes are currently not really possible in real time. CPE firmware rollouts can take weeks or months.

As Table I shows, hardware development is usually started in advance of the software development. However, software development has a longer lead time, so it extends the overall time to market. Removing the complexity of the software could mean overall CPE development timescales could be reduced by as much as 12 months.

TABLE I. CPE LIFECYCLE CHART (a)

CPE Component Lifecycle       Min      Max
Hardware Lifecycle (day 0)    6 mo.    12 mo.
Software Lifecycle (+6 mo.)   12 mo.   18 mo.
Total timeline                18 mo.   24 mo.

a. From procurement and development to deployment; estimated data from a UK broadband provider (2014).

The current integrated stack CPE model (where the same low-end device performs the control plane and the data plane functions) presents challenges along the entire supply chain:

- The Procurement phase is long and complex, with service providers needing to contact vendors with long lists of features, issuing RFx processes, and performing in-depth testing and certification processes due to the high failure rate that these consumer-grade devices could suffer otherwise.

- The Staging phase needs to be very carefully performed, as the configuration finally pushed to the customer premises will later be difficult to change. Additionally, providers have to deal with customized firmware loads from all these vendors, due to the high complexity of the software functions to be implemented in the device, and the difficulty of later upgrading a device that is running in the customer premises.

- The Deployment of these devices is often quite complex, as existing standards such as TR-069 are not always fully supported on these devices, and sometimes are not able to perform every configuration task required for these complex services. This could result in an installation process that requires either a technician visit or a long technical support call that could end up in a replacement.

- In the Operation phase, Service Providers are forced to support and maintain a device that sits in the customer premises under control of a potentially unskilled end user. It will commonly be misconfigured, unplugged, hacked, or even swapped for other equipment that the user buys directly, which will probably be untested. Remote reconfiguration capabilities are limited or non-existent. Efficient remote management, telemetry or metering is difficult and unreliable. And upgrades or bug fixes are often not possible or are limited.

All this has made it cheaper to just toss a problematic device in the trash bin and send a new one to a customer in trouble, thus restarting the entire process. This is far from being an efficient solution to the problem and shows a clear need for a better model.

B. Current CPEs feature network stacks that can't be effectively upgraded

The speed at which communications technology changes seems to be exponentially increasing. New protocols and technologies have become enablers of innovation and value creation, so new standard versions keep constantly appearing in the market.

In the residential services market, the introduction of protocols such as IPv6, SIP, CGN or PCP has demonstrated the need for updateable network protocol stacks and periodic driver changes. You don't want to lock an IPv6 stack down, then find out 12 months later, when it is ready, that the network it's connecting to has changed, as have the standards.

In the business services segment, the increased control plane complexity of protocols such as BGP, IS-IS, OSPF, SIP and many others drives the price of these CPEs to higher orders of magnitude.

The same needs are also applicable to the devices used in public wireless access points. Newer protocols and the need for features such as real multitenancy (enabling the virtual wireless provider business model on public Access Points) show a clear need for devices that can be remotely controlled from the Service Provider cloud, and easily upgraded.
Nevertheless, all of the control plane functions can be handled with general-purpose CPUs, without the need for specialized hardware. These functions can be virtualized and moved as workloads into the Service Provider cloud, while the data plane can run in a much simpler (and cost-effective) device in the customer premises.
III. TETHERED CPE MODEL IMPROVEMENTS

The tethered CPE proposed in this paper leverages an SDN model by splitting the CPE device into two entities: a control plane representing the software service logic and complexity (the "brain" of the device), and a data plane entity performing the actual traffic manipulation and the local enforcement of the service logic. (Fig. 3)

Fig. 3. Split-plane model for network elements.

The control plane logic (the complex software) is removed from the device sitting in the customer premises and implemented in a virtual machine in the Service Provider datacenter, where it can be easily managed, upgraded or troubleshot by the Service Provider. The data plane is then remotely controlled by the virtualized control plane through a messaging protocol.

The device sitting in the customer premises acts as a passive enforcement point of the logic controlled in the cloud. Essentially it is turned into an NTU-like device, somewhat like an evolved version of an old phone jack. But it is able to provide locally in the customer premises all data plane functions required for the service, such as firewalling, NAT, QoS, DHCP, DNS, etc. It is local enforcement of service logic that sits in the cloud.

Additionally, this model must allow for "headless" operation of the Tethered CPE: if the connection between the control plane running in the datacenter and the data plane running in the customer premises is temporarily lost, the tethered CPE will keep working without service interruption, according to the last set of instructions received, until the connectivity is restored.

Implementing the data plane with standard tools available in the Linux kernel (leveraging the recent additions to its networking stack that are discussed later in this paper) significantly reduces the set of features required in the device. As long as it can run a Linux kernel, it will be able to behave as a Tethered CPE.

This presents numerous advantages across all phases of the CPE lifecycle:

- The Procurement phase is greatly simplified, as the list of requirements on the device is reduced and standardized, thus making devices more cost-effective and stable. This also means that these features are less dependent on the performance of the hardware, and don't require large amounts of flash or running memory.

- As customers migrate between services, the hardware that needs to be changed to support new features is minimal. Current market trends of Home Gateways vs. simple CPE mean that home network costs are increasing, and this model reduces that need.

- It also allows for customer features to be managed independently of Wide Area Network (WAN) interfaces (Fibre, xDSL or Cable Modems), and allows customers to maintain settings across WAN upgrades or migrations, as everything is held in the cloud.

- And even more importantly, investment protection is guaranteed, as the short list of features required remains unchanged over time.

- The cloud image that implements the service complexity runs in the service provider's datacenter, so it can easily be managed, patched and upgraded. This allows switching from long development cycles to an agile development model able to change every month rather than every year for new features and bug resolutions, while significantly reducing the time to market of new features.

- The Staging effort is reduced, as the set of features to be tested is standardized. The tethered device now has a lower risk of failure and a reduced number of bugs. This helps reduce avoidable replacements and associated ticket escalations or technician site visits. Moreover, the configuration initially pushed to the device in the customer premises can be easily updated remotely, so the risk associated with closing down a configuration in the Staging phase is removed.

- Deployment of these tethered devices turns into a simple and error-free task, as the device is configured as a virtualized software image running in the Service Provider cloud.

- Additionally, and as discussed later in this paper, a large group of tethered CPEs can be handled from a single control plane software instance, so configurations and changes are simplified. The service can be configured once in the cloud, and implemented in any number of tethered CPEs automatically.

- The Operations phase improves by limiting access to the device by the end users. They can be provided with an online service portal that enables changes to a limited set of features (for example, NAPT and port forwarding). From a management perspective, the service provider now has full control and easy access to the software running the service, so all monitoring, metering, telemetry and support functions are simplified.
IV. ENABLING THE TETHERED CPE MODEL THROUGH SDN, NFV AND THE LATEST CHANGES TO THE LINUX KERNEL'S BPF SUBSYSTEM

The concept of separating the control plane and data plane of networking appliances has been periodically popular in the networking industry. Early attempts to implement this separation in firewalls and network address translators were articulated under the Middlebox Communication (midcom) IETF working group, which published an architecture and framework RFC in 2002 [1]. Some authors, though, consider that the arrival of the OpenFlow protocol in 2008 was the starting point of the Software-Defined Networking (SDN) model [2]. SDN was adopted first in academic environments around 2009, and has been making a big impact in the networking industry since 2011. Early applications of this model appeared in datacenter and transport networks, but its applicability to the CPE model presents plenty of advantages, as discussed earlier in this paper.

This split-plane model requires an implementation mechanism able to efficiently run a tethered, or remotely controlled, networking data plane inside consumer-grade networking equipment, under a well-known and standard technology framework, with standard Application Programming Interfaces (APIs), interoperable software and, ultimately, enabling commoditization of the hardware used to implement it. The data plane should be flexible enough to implement a variety of networking functions (not only switching and routing), and be programmable to allow for new feature implementation. Additionally, this model also requires the capability to re-package the control plane logic as an independent software instance running inside a virtual machine (following the Network Functions Virtualization, or NFV, model).

The protocol used for communication between control plane and data plane should also be flexible enough to allow for configuration of a variety of features, and extensible so that these features can be augmented over time.

A. The Linux kernel as the right framework to implement a networking data plane

When looking for a technology framework that would guarantee flexibility and extensibility in the feature set that a data plane can implement, interoperability between vendors (hardware manufacturers and software developers), wide support across different hardware architectures, and widespread knowledge of the technology (thus allowing for abundant potential development resources), the Linux kernel presents a fantastic option. Implementing the data plane using standard tools available in Linux enables interoperability between manufacturers, portability to different hardware platforms, and widely available talent for software development and operations.

Moreover, using the Linux kernel directly, without adding any vendor-specific library or API framework on top of it, guarantees that this architecture is not limited to working with any specific vendor's hardware, or bound by the capabilities that a vendor's API may support.

The Linux kernel has traditionally supported several frameworks providing pieces of what could be turned into fully capable networking functions. There are low-level filtering tools available (perf, ftrace, stap, ktap), traffic analyzers and monitors (tcpdump, iptraf), tools for monitoring the networking stack at different layers (ss, iptraf, netstat, nicstat, ip), tight control of the network interfaces and the link layer (ethtool, lldptool) and powerful hierarchical queuing and prioritization mechanisms (qdisc).

These tools offer access to different pieces of information about the running networking subsystem. However, in order to build a fully capable networking data plane, the Linux kernel should have the ability to execute atomic networking functions and serialize them to form a complex data plane composed of several of those networking functions. This would enable building these atomic networking functions as simple programs, and serializing them to build a full networking topology (the full data plane of the service) in kernel space.

Recently, interesting improvements have been included in the kernel's networking subsystem, adding the ability to implement all the functions required to build all the pieces that compose the data plane of virtually any networking function. These abilities have been introduced starting with kernel version 3.15, and are included under the extended Berkeley Packet Filter (BPF) subsystem.

B. Evolution of the Berkeley Packet Filter (BPF) subsystem

The Berkeley Packet Filter (BPF) subsystem traditionally allowed an application to quickly filter packets out of a stream. An early BPF user was the tcpdump application, which used BPF to implement the filtering behind its complex command-line syntax. Support for BPF in Linux was added in version 2.1.75 of the kernel in 1997. The BPF interpreter stayed
without changing much for some time (except for some performance tweaks and minor changes), but it has later seen a number of extensions through the years. Around the 3.0 kernel release, other features such as a JIT (just-in-time) compiler (allowing for real-time updates of the filters and associated actions over a running system) were added [3].

Recently, in versions 3.15-3.19, the Linux kernel's BPF subsystem has received a major overhaul which drastically expands its applicability, adding a number of capabilities and performance improvements. BPF is now available for direct use from user space. Additionally, BPF programs can now be attached to sockets. Once the BPF program is loaded, it will be run on every packet that shows up on the given socket. This allows launching programs that process traffic showing up on a socket, thus opening up the capability to implement traffic processing applications as the traffic is received.

Consequently, the BPF subsystem is able to filter packets out of a stream, and launch user-supplied BPF programs that are able to process that traffic and perform actions on it.

These recent additions to the kernel add a new system call named, simply, bpf(); this new call acts as a sort of connector for operations that can be executed as external programs inside the BPF in-kernel virtual machine. So, for example, user space can load a BPF program into the kernel with a call to [4]:

    int bpf(BPF_PROG_LOAD, union bpf_attr *attr,
            unsigned int size);

The relevant sub-structure of union bpf_attr in this case is:

    struct { /* used by BPF_PROG_LOAD command */
        __u32 prog_type;        /* program type */
        __u32 insn_cnt;
        __aligned_u64 insns;    /* 'struct bpf_insn *' */
        __aligned_u64 license;  /* 'const char *' */
        __u32 log_level;        /* verbosity level */
        __u32 log_size;
        __aligned_u64 log_buf;
    };

Here, prog_type describes the context in which a program is expected to be used; it controls which data and helper functions will be available to the program when it runs. BPF_PROG_TYPE_SOCKET is used for programs that will be attached to sockets, while BPF_PROG_TYPE_TRACING is for tracing filters. The size of the program (in instructions) is provided in insn_cnt, while insns points to the program itself. The license field points to a description of the license for the program; it may be used in the future to restrict some functionality to GPL-compatible programs.

Some security questions could arise, given that this provides the ability to run programs within the kernel. So a key part of the code ensures that BPF programs cannot harm the running system: should something suspect turn up, the program will not be loaded [5].

Another significant addition in the latest kernel patch set is the addition of "maps". A map is a simple key/value data store that can be shared between user space and BPF scripts, and is persistent within the kernel. As an example of their use, we could consider a program that creates a map with two entries, indexed by IP protocol type, and adds a BPF script that inspects passing packets and increments the appropriate entry for each. The program in user space can query those entries and take action depending on the values stored in them.

Maps can only be created or deleted from user space; BPF programs do not have that capability. Maps are created and deleted with:

    int bpf_create_map(enum bpf_map_type map_type,
                       int key_size, int value_size,
                       int max_entries);
    close(map_fd);

To store values into and retrieve values from maps, a user space program can call:

    int bpf_update_elem(int map_fd,
                        void *key, void *value);
    int bpf_lookup_elem(int map_fd,
                        void *key, void *value);
    int bpf_delete_elem(int map_fd, void *key);
    int bpf_get_next_key(int map_fd,
                         void *key, void *next_key);

But how are these BPF programs actually run? There is little point in running an eBPF program on demand from user space; there is not much that it could do that couldn't be more easily accomplished directly. Instead, eBPF programs are meant to respond to events within the kernel.

One common event, of course, is the reception of a packet from the network. BPF now includes a new form of access to the socket filtering mechanism, allowing a program to directly attach a BPF program to an open socket:

    setsockopt(sock, SOL_SOCKET, SO_ATTACH_FILTER_EBPF,
               &prog_id, sizeof(prog_id));

Here, prog_id must be the ID number of a program previously loaded into the kernel with the bpf() system call.

Summarizing, the Linux kernel now has the ability to match traffic upon arrival on a socket, filter it and, on a match condition, launch a small in-kernel program, loaded from user space, that can process the traffic, compare it against a key/value table, modify it and send it to the next stage.

These elements allow implementation of virtually any atomic networking function that is available today on a network appliance, as the scheme of parsing packets, looking them up inside a table, updating the system according to the information contained within them, and modifying and queuing the packets in the output comprises nearly all actions that the data plane of a network appliance performs. Moreover, the ability to add these filters in a Just-In-Time (JIT) fashion allows dynamically updating these network functions inside a running kernel.

The model presented in this paper is based on writing atomic networking functions (like switching, routing, firewalling, etc.) as BPF programs launched inside the Linux kernel that process the traffic. The mapping feature of BPF adds the ability to implement tables against which the packets can be matched. (Fig. 4)

Fig. 4. Traffic being processed through several BPF programs.

These in-kernel data plane programs performing each networking stage can be complemented with user space helper programs that perform the communication between the in-kernel, on-premises tethered CPE data plane and the centralized, virtualized control plane instance running in the Service Provider's premises.

Moreover, an adequate overall system architecture design would allow defining topologies across an entire domain (with a domain being formed by an arbitrarily large number of data planes), and dynamically attaching endpoints connected to any of those data plane instances to the topologies defined centrally for that domain. Topologies can be defined in the control plane, including information about the identifier of the endpoints that will belong to each topology. For example, parameters related to the endpoint's MAC address, the physical port on which it appears (or the data plane it appears on), or even a generic pre-configured user-defined tag, can be used by the control plane to place each endpoint in the right place of the right topology, dynamically enabling it in the data plane where and when the endpoint is detected (enabling any changes just in time). This opens the possibility of defining a service as a simple topology just once inside the control plane entity, and dynamically attaching an arbitrary number of tethered CPEs to it, just when the endpoints to be serviced are detected in the CPE in question. To simplify the concept: a single service control plane can control an arbitrary number of data plane tethered CPEs, so a change in the service can be implemented as "change once, apply to all". (Fig. 5)

Fig. 5. Several data planes in residential gateways tethered to a single control plane instance.

Finally, on simplicity, portability and interoperability, a key advantage of this proposal is the fact that support for the Linux kernel itself is the only requirement for the tethered CPE. The Linux kernel has been ported to pretty much every hardware architecture available, including Intel and ARM amongst dozens of others, so portability, interoperability and API simplicity are guaranteed.

V. CONSIDERATIONS FOR A TETHERED CPE IMPLEMENTATION

A. Data plane of a Broadband Internet service

The IP service provided by the CPE in question will be formed by a limited set of atomic networking functions running inside the BPF subsystem. As an example, consider the network topology used to provide most of today's broadband services: it can be broken down into a limited number of networking functions. Many of today's residential Internet access services are formed by a combination of:

- SWITCH function, interconnecting all terminals inside the customer premises amongst themselves and into the exit towards the Internet.

- ROUTER function, separating the customer premises network from the external network.

- NAT function, masquerading the numerous private addresses used at the customer premises for all terminals into a single (or limited set of) publicly routable address(es). Additionally, this function will likely be performing some sort of Network Address Port Translation (NAPT), so that certain layer-4 ports are mapped across the NAT connection to/from terminals with private addresses.

- FIREWALL function, limiting the traffic into and out of the private side of the connection, and preventing malicious traffic from reaching the inside of the customer premises.

- ENCAPSULATION/DECAPSULATION function, allowing for isolation of each customer or service domain by including the traffic inside tunnels such as GRE/IPSEC/MPLS/VxLAN/etc. This also enables connectivity with existing VPN services.
in this model towards a virtualized software image running in
As packets arrive at the entry socket, they will be filtered the providers cloud. This software image will have to
and then processed by the SWITCH function (or program) implement all software functions that were previously hosted
which will parse the packets, and look them up against the inside the control plane entity of the CPE, including:
local switch table (map) to find their destination. The packet
may then be forwarded to the ROUTER function, where once
again they will be matched against the local router table/map Dynamic routing protocol processing (BGP, OSPF,
to decide whether they should be passed on to the next etc.)
function. Along the way, packets will be modified by the Dynamic link-layer protocol processing
program in order to perform functions like decreasing their Address resolution processing (ARP, etc.)
Time-to-live (TTL), calculating checksums, or modifying their Management protocol processing (SNMP, etc.)
destination MAC address, for example. This scheme will be Configuration management
repeated as required until the entire set of functions forming the
And, in general, all functions that are typically hosted
data plane is completed.
in the control/supervisor module of a distributed
B. Communications channel between data plane and control switch or router
plane
Given the aforementioned capacity of the Linux kernel to In addition to these functions, a complete system will have
behave as a complex data plane that comprises a number of features typically belonging to higher layers such as topology
atomic network functions, it is now possible to design a full management, system state maintenance, event correlation, and
distributed system that links that data plane instance with a all interactions with other OSS/BSS systems performing
control plane software image able to inject the desired network management, authentication, accounting or service
topology into the data plane running in the Linux kernel. orchestration.
Each of these atomic networking functions will require Whether these features are implemented in the same
communication with the centralized control plane to receive software image or a in a separate one, and the way the system
the topology configuration, update the local and centralized would interact with the rest of the Service Provider ecosystem,
tables (in the case that an incoming packet does not match a is completely implementation-dependent. There are several
previously existing entry in the table), or may decide that a models possible and discussing the advantages and
particular packet needs to be passed on to a control plane disadvantages of each one of them is considered out of the
function (if the packet belongs to a control plane protocol). scope of this paper.
This means that an efficient data interchange method needs to
VI. : SUMMARY, CONCLUSIONS AND FUTURE WORK
exist for each networking function towards the control plane,
so that the control plane can configure entries into each atomic This paper presented the reasons why the current IP CPE
data plane program, and each program can inform the deployment model (based on deploying intelligent
control plane of the events happening in the local network. independent equipment in the customer premises) has
important challenges that have been limiting the profitability of
This channel can be established by helper programs IP based services for telecommunication service providers.
residing in user space, through the use of a dedicated
communications socket towards the control plane. The It also presented an alternative service delivery architecture
mechanism used for this socket is once again implementation based on a tethered CPE. This model leverages SDN and
dependent. Given that there could be a potentially high number NFV concepts by splitting the control plane and data plane
of endpoints (data plane entities) in a domain that would be entities traditionally built in networking appliances. It is
sending information to, and receiving information from the formed by a remotelly controlled data plane instance which
control plane, the use of a communications mechanism that is sits at the customer premises and provides local enforcement of
able to handle a potentially high number of endpoints is an service policies (firewall, IPAM, routing, QoS, multicast, etc.),
important design decision. Potentially, there could be hundreds while moving most of the software complexity to a virtualized
or even thousands of data plane endpoints (CPEs) in a single software instance running in a carrier-class location that is
control plane domain, so the choice of an adequate messaging under more manageable control of the service provider.
mechanism between them will be a relevant factor determining
Finally, this paper presented an implementation of the data
the overall system performance and scalability.
plane for this architecture exclusively based on using the Linux
Finally, it is relevant to mention that these communications kernel. In particular, the latest advances in the BPF subsystem
channels established between control plane and data plane and its capability to implement in-kernel virtual machines
entities need to be protected by a security model based on some allow for a very complete, efficient and portable
sort of trusted computing framework. Discussing this implementation of this model on a varied range of hardware
framework is considered out of the scope of this paper. platforms.
The authors believe that this architecture solves some of the issues currently outstanding in most of the "virtual CPE" models in the market. Therefore, we view this paper as an initial step towards a real implementation in the environment of a telecommunications service provider.

Efforts are currently ongoing towards completing a real implementation showing the feasibility of this model, combining the technologies described in this paper with additional pieces provided by hardware and software vendors. These pieces include cost-effective hardware platforms on which to implement this model, an efficient communications protocol and a complete object model between data plane and control plane, and orchestration and management tools for controlling and provisioning the potentially large number of tethered CPEs and services that a real implementation would require.

This model uses software and hardware pieces that are well known and already being deployed at large scale (even if they were conceived for other areas of the network or other services and applications), so we expect that reusing them for a tethered CPE model will be significantly faster than rebuilding the technology from scratch. We consider it should be possible to have a full working model, built with the pieces currently available (either as open source tools or as tools provided by the vendors involved in this paper), in a very short timeframe.

ABBREVIATIONS AND ACRONYMS

AAA - Authentication, Authorization and Accounting
API - Application Programming Interface
BGP - Border Gateway Protocol
BPF - Berkeley Packet Filter
CAPEX - Capital Expense
CLI - Command-Line Interface
CO - Central Office
CPE - Customer Premises Equipment
CSP - Communication Service Provider
CGN - Carrier-Grade NAT
DC - Datacenter
DHCP - Dynamic Host Configuration Protocol
DNS - Domain Name Service
DSL - Digital Subscriber Line
IS-IS - Intermediate System to Intermediate System
IP - Internet Protocol
IPAM - IP Address Management
IPTV - IP Television
NAT - Network Address Translation
NAPT - Network Address Port Translation
NFV - Network Functions Virtualization
NTU - Network Termination Unit
OPEX - Operational Expense
OSPF - Open Shortest Path First
OTT - Over The Top
PCP - Port Control Protocol
PON - Passive Optical Networking
QoS - Quality of Service
SIP - Session Initiation Protocol
SDN - Software-Defined Networking
VPN - Virtual Private Network
WAN - Wide Area Network

ACKNOWLEDGMENT

The authors would like to acknowledge Alexei Starovoitov (Distinguished Engineer at PLUMgrid Inc. and key contributor to the BPF subsystem used to implement the data plane in the model presented in this paper) and Pere Monclus (CTO at PLUMgrid Inc.) for sharing a piece of their vast knowledge, and also for their time and useful suggestions while reviewing this paper.

REFERENCES

[1] P. Srisuresh et al., "RFC 3303: Middlebox communication architecture and framework," IETF Midcom WG, 2002 (http://www.rfc-editor.org/rfc/rfc3303.txt)
[2] P. Goransson and C. Black, Software Defined Networks: A Comprehensive Approach. Morgan Kaufmann, 2014, p. 50
[3] J. Corbet, "BPF: the universal in-kernel virtual machine," LWN.net, 2014 (https://lwn.net/Articles/599755/)
[4] J. Schulist, D. Borkmann and A. Starovoitov, "Linux Socket Filtering aka Berkeley Packet Filter (BPF)" (https://www.kernel.org/doc/Documentation/networking/filter.txt)
[5] J. Corbet, "Extending extended BPF," LWN.net, 2014 (https://lwn.net/Articles/603983/)