
Copyright Quocirca 2014

Clive Longbottom
Quocirca Ltd
Tel: +44 118 948 3360
Email: Clive.Longbottom@Quocirca.com

Bernt Ostergaard
Quocirca Ltd
Tel: +45 45 50 51 00
Email: Bernt.Ostergaard@Quocirca.com



WAN Speak Musings Volume VI
Over the last months, Quocirca has been blogging for Silver Peak Systems' independent blog site, http://www.WANSpeak.com. Here, the blog pieces are brought together as a single report.

May 2014


This is the latest in the continuing series of WAN Speak aggregated blog articles from the Quocirca team, covering a range of topics.


Networking in the Year of the Horse
The Chinese New Year ushered in the Year of the Horse - maybe time to have a quick look and see what sort of a horse your network equates to. Just as a bit of fun.

Storing up a load of nonsense
New, small, agile storage vendors are popping up all over the place. Many have exciting technology that is proving to be disruptive in the markets - but they do seem to be pushing the limits of truth in many of their marketing messages.

Don't Sell SLAs To Me - I Want Outcome Guarantees
Most service level agreements (SLAs) are not worth the paper they are printed on. Failure to meet their terms often just results in a lot of talk to no-one's benefit. However, using technology to attempt to manage toward guaranteed outcomes is far more intelligent. Can it be done, though?

4 reasons why IPv6 has not taken off - and 3 why it should
IPv6 was first described in full detail at the end of 1998. In the 16 years since, continuous warnings of the end of the Internet as we know it have been bandied about due to how we have completely run out of IPv4 addresses. Somehow, the world has continued - how?

Content Delivery In The Virtual World: If The VICAP Fits, Wear It
Virtual capacity providers (ViCaPs) are emerging that can pool resources across multiple data centres to monetise whatever spare capacity each data centre owner may have available. As a replacement for a fixed content distribution network (CDN), ViCaPs may be a good approach.

Welcome the Data Scientist, your next CEO
Unless you have been on a different planet, it will have been hard to avoid the hyperbole over the need for data scientists. Indeed, one commentator has stated that your next CEO will come from these ranks of data overlords. However, danger lurks within this space.

Itsamoneymakingscam.tld
It seems not that long ago that your choice for an Internet address was a .com, a .co.uk or similar. Now, it seems that you can stick pretty much anything at the end of your address. Is this a matter of great flexibility of choice, or just a way for those companies selling and managing internet addresses to make shed loads of money?

Protecting the Digital Economy's Soft Underbelly
Internet purchases continue to accelerate, and alongside this the attacks by blackhats trying to obtain users' credit card details ramp up. PCI-DSS has been developed to try and make the blackhats' job harder - but it is not being universally adopted.

Performance Is The First Victim In Application Warfare
Managing Quality of Experience (QoE) for users is increasingly complex, as more and more traffic traverses the same data lines. Identifying the right traffic types and managing them to ensure a suitable QoE is becoming a major market.

It's the IP, not the IT, that matters
It is tempting to look at all that shiny technology in your data centre and wonder at all the money that has been spent on it. Unfortunately, though, it has little actual worth to the business. What is really valuable is the intellectual property held in the data that is created and stored by that technology.

The internet and a Magna Carta
There have been calls, led by Tim Berners-Lee, for a charter for Internet usage. Although this sounds fine and dandy on the surface, it is highly unlikely that what is being asked for could ever be provided in real life.

Can the Global InterCloud Mesh With The Global Internet?
Cisco's launch of its InterCloud platform for use by itself and its partners marks a significant change for Cisco itself and for the partners. Just what will this mean for all concerned?


Networking in the Year of the Horse
Happy New Year to everyone. The Chinese New Year on 31st January brings in the Year of the Horse - is this analogous to what we can expect to happen in the networking world through the rest of the year?

Luckily, the horse world is varied, so we can probably draw enough comparisons to come up with a picture of the
year.
The Carthorse - the trusty, strong engine of the pre-industrial revolution, the carthorse was the mainstay for agricultural Europe. Although the move to IP-everywhere continues apace, older systems, including TDM, will continue to be present and facilitate information transport in many instances. Vendors in this space will not set the world on fire, but will continue to have decent revenues. The carthorse will be with us for a while yet.
The Pony - the small but endearing pet that is there for first-timers and the young. The market will remain full of ponies - the new kids on the block with interesting ideas that younger people in the networking space will swear will be their focus for ever - many of which will be around extensions to SDN. However, as the people grow up, they will realise that the pony doesn't meet their needs any longer and will look to other, larger horses instead. A lot of vendors in this space will find themselves becoming a little less loved - less investment; more requirement to prove themselves; more being passed from one owner to another via acquisition.
The Nag - at the other end of the scale, these are the horses that are on their last legs. Sway-backed, they have carried the weight of the networking world for a long time, but they are now struggling to keep up with the younger horses around them. Software defined everything (SDx) is taxing their abilities: it could be that it is the knacker's yard for some of these old favourites. Others will buy in younger horses to try and replace the nags, but they will still need to find a way of retiring the nags in one way or another.
The Hack - useful in general terms for everyday work, the hack provides the general backbone for the average user. SDx will be embraced by vendors in this space and will allow them to race those that are generally considered more likely favourites. With no real stress being placed on the requirements for the hack, we could see a proliferation of vendors entering the market in this way - particularly from the East.
The Thoroughbred - flighty, nervous and sometimes unpredictable, the thoroughbred is what people think they want, but then find that it is expensive to keep, with a need for in-depth skills to stop the horse from damaging itself, and to keep it in top order. Vendors trying to differentiate themselves from the rest of the stable could find that they often fail to finish in the Network Purchasing Stakes.
The Three-Day Eventer - a bit of a jack of all trades, the three-day eventer needs to be capable of getting over obstacles, moving at speed and being elegant. Many show expertise in one of these areas and are poor in the other two; the gold-medal winner needs to excel in all three. This could be the focus for the larger network vendor - instead of just concentrating on differentiating itself through network capabilities, it could go for the SDx play, working to integrate alongside software defined server and storage needs.
The Lipizzaner - highly specialised, trained to do one thing to jaw-dropping standards. Some vendors in the optical space will continue to make inroads with longer-distance, multi-lambda optical capabilities to take metro and wider data transport speeds to a new level.
What is clear is that the networking world is varied in itself - it is not just a case of transporting the greatest volumes of data in a faster manner. Indeed, many users will find themselves needing a mix of capabilities and so will find themselves running a stable of different vendors to meet their needs. This could then mean that an external management capability needs to be brought in, with the skills available to run the multiple different systems together, whether solely through SDx or through a hybrid mix of software abstraction running alongside silicon-based capabilities still held within the networking equipment box itself.

Anyway - let the race begin!

Storing up a load of nonsense
We've had the server wars, with crazy speeds-and-feeds data being spouted by vendors. We've had the network wars, with stories of how just throwing more bandwidth at a problem solves everything. And now, we seem to be in the middle of the storage wars, where numbers and 'facts' are being thrown around by vendors, muddying the waters and causing confusion in the market.

The advent of flash-based storage seems to be at the bottom of this. Don't get me wrong - I am a firm believer in flash-based storage and the impact it could have on the storage world - but I am a little bit fed up with the approach some vendors are taking.

My main gripes? Read on.

1) IOPS. Input/Output Operations Per Second. This should be a relatively good way of comparing one storage system against another. However, some vendors are using internal IOPS - the speed that data can be moved within a storage array - yet as soon as you move off the array, the IOPS drop alarmingly due to poor controller architectures or other issues. Also, the IOPS figure can be massaged by using different block sizes, and vendors will generally choose the block size that suits their equipment - which is unlikely to be the same as your real-world workload needs (the short sketch after this list shows how much the block size skews the headline number). When talking to a storage vendor, make sure that they provide you with meaningful IOPS figures that allow you to compare like with like.
2) Capacity. A terabyte (TB) is a terabyte, yes? Well, actually it never has been, but at least all vendors have played the same approximate game in comparing what they see as a TB against each other. Now, however, too many of the flash-based vendors are using 'effective capacity', where they use intelligent data compression and data deduplication to get the best capacity by lowering the amount needing to be stored by up to 80%. This enables them to say that they can compete on price against equivalent spinning disk - but this is only the case if the spinning disk hasn't applied the same compression and deduplication. If the same approach is taken on both platforms, then you will still need the same capacity of flash-based storage - and this can have an alarming impact on price. Again, make sure that the vendor is comparing like with like (see the sketch after this list).
3) Lifecycle management. I have had astonishing discussions with flash-based storage vendors who believe that their product will do away with the need for any tiering of storage. As their flash storage is so fast, it takes over everything from tier 3 upwards in one resource pool - well, they might make an allowance for something long-term like tape for deep archival, but that's all. Pointing out that this means that their current portfolio has to be the ultimate product (in the correct sense of the word) leaves them nonplussed. As flash is so fast, then tiering isn't needed, surely? But if your next generation flash is faster than this generation, then tiering will automatically be needed. Make sure the vendor understands that tiering is a necessity - and make sure they have plans to support it intelligently.
4) All flash, hybrid flash and server-side flash. There are storage workloads, and then there are storage workloads. Some are better suited to a storage system which uses flash as an advanced cache; some will be better suited to an all-flash array. Some workloads may need server-side flash, such as PCIe cards from Fusion-io. However, bear in mind that whereas SAN- or NAS-based flash can be virtualised, doing the same for server-based flash is a little more difficult. Either architect to avoid the need for server-side flash virtualisation, or look to a vendor such as PernixData to provide a server-side flash hypervisor that adds the required intelligence to mitigate any issues.
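To make the like-with-like point concrete, here is a minimal, illustrative Python sketch (the numbers are assumptions, not any vendor's published figures) showing how a quoted IOPS figure shrinks when rescaled to a more realistic block size, and how an 'effective capacity' claim only compares fairly if the same data reduction ratio is applied to both platforms.

```python
# Illustrative sketch only - not a benchmark and not any vendor's methodology.

def iops_at_block_size(quoted_iops, quoted_block_kb, real_block_kb):
    """Rescale a quoted IOPS figure to a different block size, assuming the
    array is limited by raw throughput (throughput = IOPS x block size)."""
    throughput_kb_per_s = quoted_iops * quoted_block_kb
    return throughput_kb_per_s / real_block_kb

def effective_capacity_tb(raw_tb, reduction_ratio):
    """Effective capacity after compression/deduplication (e.g. 5.0 means 5:1)."""
    return raw_tb * reduction_ratio

# A vendor quotes 1,000,000 IOPS at 4 KB blocks; your workload uses 32 KB blocks.
print(iops_at_block_size(1_000_000, 4, 32))   # 125000.0 - an 8x drop

# 20 TB of raw flash marketed as "100 TB effective" via a claimed 5:1 reduction...
print(effective_capacity_tb(20, 5.0))         # 100.0
# ...is only comparable to 100 TB of spinning disk if the disk gets the same 5:1.
print(effective_capacity_tb(100, 5.0))        # 500.0
```

The rescaling assumes the array is purely throughput-bound, which is the simplest case; real controllers have further limits, so treat the output as a sanity check rather than a measurement.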

These are my main bugbears in the current storage markets. I could go on (and probably will in a later post). However, what it really shows is that the Romans were right: it is a case of caveat emptor (buyer beware).


Don't Sell SLAs To Me - I Want Outcome Guarantees
Do I want to replace the lock in my front door with a radio-controlled device linked to a telco-hosted app on my mobile? Will company fleet managers want a cloud service to automate and remotely manage the driving and maintenance functions of company cars - or why not top up with one of Google's self-driving vehicles? Well, with the imminent arrival of the Internet of Things (IoT), it's all in the pipeline, offering convenience and resource optimisation. But as these functions proliferate, will core network performance stand up to expectations, and is real-time error correction good enough?
Real-time network management was one of the topics at this week's 2014 C-Scape event in London, where Cisco reiterated its strong belief in the imminent arrival of the IoT, delivered over ever-more efficient, all-IP networks. Together these drive exponential traffic growth, and with this comes increasing reliance on these processes in many aspects of our everyday lives. So the ability to spot and rectify error conditions before they seriously degrade or alter intended traffic flows will become a serious competitive differentiator - ultimately a question of life and death if, for example, alarms from a network-connected pacemaker don't reach the defibrillator or the right medical personnel in time!
The analogy in the material world is in the aviation field. Here, manufacturers and public authorities painstakingly develop huge maintenance programs to ensure that airplane parts get replaced before they break, pilots go through a long checklist process before taking off, and air traffic controllers make sure that when a plane is cleared for take-off it also has a slot to land in at its intended destination. When errors do occur and accidents happen, a well-defined and thoroughly standardised forensics process identifies root accident causes, and advises the whole aviation industry and operators of similar aircraft to undertake error-correcting measures.
Network error conditions or anomalies are context-specific and may be caused by software glitches, failing hardware or malware/network attacks (DDoS, etc.). Whatever the cause, fixing the problem before the end-user or the application is affected, and advising other networks with similar configurations of issues that may affect them, will require more network integration and a deeper cross-industry approach to network and service delivery. Four major requirements present themselves:
Sensors and switching fabrics able to monitor traffic flows end-to-end and contextually identify
abnormal network flows
Big data tools to analyse anomalies in the traffic and identify root cause and resolution
An upstream and downstream propagation mechanism for error correction, malware eradication and
stopping network based attacks
A global information dissemination process that informs all relevant parties of the anomaly
characteristics and how to handle them.
A crucial step involves standardisation of information and analysis exchanges between different network hardware and software components. That requires open standards and heralds the end of proprietary protocols as we know them today! This will be painful for a wide range of communication vendors - Microsoft's Skype protocol being a prime example.

Telcos are rising to the challenge, acknowledging that having ten different best-of-class routing fabrics in their fixed and mobile core networks is just too complex to manage in real time, let alone proactively. Next-generation components must all adhere to the standards being hammered out in the new world of software defined networks (SDN). Most likely, upcoming telco infrastructure investments will go to vendors that address data flows from data centre to edge router.
All this puts Cisco in pole position, with its breadth of hardware and software products and application-centric architectures that get it closer to proactive, end-to-end application performance management. It also forces competitors in infrastructure (Ericsson, Huawei, Juniper, Nokia and Alcatel-Lucent), servers (IBM, HP, Dell, Fujitsu) and network management (CA and BMC) to get their acts together. Purchase decisions will no longer be based on price or best-of-breed, but on total performance outcomes.



4 reasons why IPv6 has not taken off - and 3 why it should
Some time back, I had a chat with the UK's Southampton University School of Electronics and Computer Science (ECS) group around IPv6. The warnings were of disaster caused by the lack of IPv4 addresses and the chaos that would then ensue. When I made a few points to the ECS team, they were not impressed with my belief that IPv6 was not a shoo-in, and we parted on not the most amicable terms.

This was over 15 years ago.

In the far more recent past, Quocirca got a request from the RIPE NCC for a discussion on IPv6. Expecting a re-run of the ECS discussions, we were amazed at how the RIPE NCC representative pretty much accepted that IPv6 was not progressing as hoped, and that users really didn't perceive much additional value apart from a huge increase in actual numbers. His challenge was much like that of Sisyphus, trying to roll his stone up the mountain.

It has to be accepted that IPv6 is needed - the Internet of Things (IoT) is well on its way, and the forecasts of billions of new internet-connected items mean that IPv4 just cannot be man enough for this.

Or can it?

Let's consider why IPv6 has not made the strides it should have done.

1. Every IPv6 address needs an IPv4 one. OK - not quite true, but if an internet-connected 'thing' wants to talk across the internet as a full peer, it needs to be able to talk to non-IPv6-enabled sites. As IPv6 was not designed to be IPv4-compatible, the only way that this can be done is to give every IPv6 address a corresponding IPv4 one. See the problem here? If we are running out of IPv4 addresses yet each IPv6 address needs an IPv4 address, then what do we do? If IPv6 had been designed so that there was a good means of failover from an IPv6 address to an IPv4 one, then this problem could have been mitigated.
2. If there are so few IPv4 addresses, why are so many being misused? According to the RIPE database, there are less than 16 million IPv4 addresses left for use. However, these are only the reserved addresses that have not been previously given out. In the early days, IPv4 addresses were given to anyone who wanted them - and often in very large blocks. There are a very large number of IPv4 addresses that have never been used. There are also a lot of IPv4 addresses in the hands of blackhat groups who are using them for a few minutes and then dropping them so as to be less traceable. It is too late now to put the genie back in the bottle, but the loss of all those precious addresses was something only a bunch of techies could have done so effectively.
3. We don't actually need IPv6. Hang on - if there are billions of new items connecting to the internet, then we really do need IPv6, surely? Actually, no - and this is why IPv6 has really struggled. The vast majority of systems sit behind a network address translation (NAT) wall. As I sit here in my little cocooned environment, I have a full IPv4 address table available to me - my items all have 192.168.x.x addresses. These are by no means unique on the internet, but it doesn't matter, as only my WAN address is seen by the rest of the world, and I can use port forwarding to get any of my internal items to talk to the world. Even if I had a very large number of items, I could cascade-NAT with multiple NAT tables in operation.

The perception pushed by the IPv6 crowd is that everything needs to be interconnected as pure peers. However, alongside the lack-of-unique-IPv4-addresses problem, NAT is used for security purposes: a NAT firewall is a point of aggregation where IP traffic can be inspected and security applied. If everything is connected to everything else directly, then this is actually seen as a security issue.
4. IPv6 is just so &*%$! hard. I know the IPv4 addresses of the main equipment on my network. Remembering 192.168.1.1 is easy. I don't need to give it a friendly address within a DNS table. I can't do that with IPv6 - remembering something of the format 2001:0db8:85a3:0042:1000:8a2e:0370:7334 is just that little bit harder (the short sketch after this list illustrates both this and the NAT point above).
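A minimal sketch using Python's standard ipaddress module illustrates points 3 and 4 above - RFC 1918 addresses sitting behind a NAT boundary, and the unwieldy (if compressible) IPv6 notation. The port-forwarding table is a hypothetical illustration, not a real configuration format.

```python
import ipaddress

# Point 3: my LAN uses private 192.168.x.x space - valid inside the NAT boundary,
# invisible to the wider internet, with the single WAN address doing the talking.
lan = ipaddress.ip_network("192.168.1.0/24")
print(lan.is_private, lan.num_addresses)        # True 256

# A hypothetical port-forwarding table: one public address fronting many internal services.
port_forwards = {443: ("192.168.1.10", 443), 2222: ("192.168.1.20", 22)}

# Point 4: the full IPv6 form quoted above is unwieldy, though it does compress a little.
v6 = ipaddress.ip_address("2001:0db8:85a3:0042:1000:8a2e:0370:7334")
print(v6.compressed)                            # 2001:db8:85a3:42:1000:8a2e:370:7334

# And the sheer scale of the IPv6 space - one /32 allocation alone:
print(ipaddress.ip_network("2001:db8::/32").num_addresses)
```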

However, in that complexity lies the actual promise of IPv6, and why organisations should be looking more seriously at it. Time and time again, Quocirca finds through its research the standard top three issues that business and IT feel need dealing with when it comes to IT. And these three are: security, security and security. There is little trust in the IT world - yet IPv4 was designed for a simpler world where that trust was taken as a given.

IPv6 was designed for a more grown-up, corporate world. The three reasons why it should be taken up are:

1. It can have security built in, through its design for supporting IPsec. It does, however, need setting up correctly.
2. It is more efficient in how it deals with data packets, making the whole internet more efficient.
3. It uses multicast services, rather than broadcast, and can therefore preserve bandwidth and enable streamed services to operate in a more optimised manner.

These can all have a direct, positive impact on businesses that are dependent on using the internet - and they are things businesses would be interested in investing in, rather than paying for an insurance policy against a numbers game that is so patently confused.

Maybe if the IPv6 brigade concentrated more on these areas rather than the numbers game, then IPv6 would be taken more seriously. If not, I'm pretty sure that in another 15 years, I may still be having the same discussions with people as I have been having up until now.

Content Delivery In The Virtual World: If
The VICAP Fits, Wear It
We're set for one heck of a ride on the Internet over the next half-decade. For the period 2013-17, Cisco's Visual Networking Index predicts the compound annual growth rate of machine-to-machine (M2M) modules as a component of overall Internet traffic will be 82%, along with 79% for smartphones, 104% for tablets, and 24% for TVs. So how is the infrastructure going to cope? More fibre, higher-speed network protocols, and faster 4G mobile connections plus smarter network utilisation will rush to sate our ever-growing demand for more content, more apps and the whole Internet of Things.

These developments are driving fast changes for operators on the global Internet. Content providers can no longer
stay competitive with market leaders like Akamai, Limelight, and Edgecast by just streaming content efficiently, or
providing static content over dedicated content delivery networks (CDNs) to a global subscriber base. Similarly, in
order to stay in business, infrastructure and data centre operators going up against global carriers and cloud providers
need to reach higher utilisation levels than their traditional business models allow for. They need to monetise their
spare server and network capacity.

Crucially, virtualisation allows storage and networks to retrieve data much more efficiently, and cloud computing can provide content and apps anywhere, with users only paying for the data resources actually consumed. We're getting a lot more for a lot less, because we will be paying by the drink. It also frees application provisioning from data centre operations.

Leading-edge content providers are combining the two at still higher integration levels. They are melding their virtualisation strategies with hybrid and public clouds, and running combined operations off a single pane of glass with no on-premise servers at all. Content providers in a pay-as-you-go model can spin up all the virtual machine capacity they need online, and have it distributed around the globe to whatever customer base they want to reach.


A new breed of virtual capacity providers is emerging to provide that single pane of glass to virtual application
providers. Apart from providing the software management layer, they are also federating the access to and purchase
of cloud capacity. Content and application providers can thus shop around for the cheapest or most suitable cloud resources without changing management systems.

One such provider is VPS.net in the US; another is OnApp in Europe. Looking at the companies' service models, they are definitely not classic CDN providers! OnApp's 'capo-de-capos' federated marketplace approach illustrates well how virtual marketplaces are mushrooming fast these days, leaving simple CDN resource aggregators like CDN Planet far behind.

The success of these virtual capacity providers (VICAPs) will be determined by their ability to attract quality-of-service-defined bandwidth, storage and processing supplier capacity to their marketplaces, coupled with their ability to provide the overarching management software. The OnApp GUI looks really neat, and now includes the Wowza Media Server. But crucially, the service provider (SP) customers need to understand the new service model and the accompanying pricing scheme.

Clearly, most of the SPs OnApp wants as customers already have virtualisation and cloud capabilities, but may not be utilising their resources and infrastructure fully. For greenfield sites, OnApp has configured joint hardware/software packages with Dell for fast implementation of pre-integrated solutions. That is a smart move, as OnApp can piggyback on Dell's marketing machine. It also moves Dell in the direction of being a solutions provider and service integrator.

Certainly, competition for customer attention in this market is coming from many different directions including colos,
system integrators, hardware vendors and carriers. Ease of use, fast deployment, flexibility and positioning vis-a-vis
specific customer types are crucial factors for the new kid in class: the VICAP.

Welcome the Data Scientist, your next CEO
I attended a round table with Actian recently, one of a series it has been running around the world. Nominally on the subject of business analysis, at one of the US events a comment that data scientists would become the next generation of CEOs seemed to get a warm reception.

Inside my own head, I could hear the screams of thousands of voices all saying 'NO!'

The theory behind the comment seemed to be that all decisions should be based on better data and analysis, and
therefore, the data scientist would be the optimal person to be in the post.

A couple of things are wrong with this. First, I doubt that Carl Benz came up with the first petrol-driven car based on in-depth analysis of spreadsheets; that Larry Ellison used someone else's database and some massive analytics engine before deciding to found Oracle; or that Mark Zuckerberg sat down with a massive heap of data before coming up with the idea for Facebook. No - each of these was an entrepreneur, dealing with gut instinct and a nose for the next big thing - something that business analytics and business intelligence are not that good for. Sure, many entrepreneurs are serial failures; many are actually followers; and many make very poor CEOs anyway. But let's look at the second issue.
We've had rock star employees before - particularly in IT. We've had the web app rock star; the web site rock star; the social networking rock star. Like all rock stars, eventually their star will wane: sometimes pathetically so. Having in-depth knowledge of a single environment is dangerous - particularly for a CEO. And data scientists of the wrong sort will be in-depth nerds of the first order.

Don't get me wrong. I am a fully trained and practised scientist: I spent my first 10 years working in research on car catalysts, anti-cancer drugs and fuel cells, amongst other things. And I therefore feel that I have more of a reason to fear true scientists in the business. The act of science is far more attractive to us than the outcome - particularly the business outcome. Does your business want to be run by someone whose discussions with prospects and the markets are likely to run along the lines of 'When I looked at the Bayesian probability of a favourable outcome within the Jackknife variation against the variation over time, with some pivotal z-score values and categorical variables, it became apparent that I should have no more than one lump of sugar in my coffee'?

A good scientist will be very good at their job. That job is to posit a position and test against that position to see whether it stands up or fails. Very few true scientists will give you a 100% answer to anything. How certain is it that the sun will rise tomorrow? Not 100%. How many legs does a person have? Fewer than two, on average. Should we take this decision in response to what's happening in a complex and dynamic market? Wait for a few months while I hit my data.

The real key is to try and encode the capabilities of the data scientist into ways that mortals can make easy use of. Let the impetuous but clever entrepreneur test their idea extremely rapidly against as much available data as possible. Let groups of business people work together as a community, using different types of data analysis to come to a more consensual agreement based on multiple readings of the same data - but all based on valid statistical approaches. Let's embrace big data in its truest form - volume, velocity and variety - to maximise the value and veracity of the findings.

But don't let a data scientist take over the company unless they can also be seen to be business- and finance-savvy. If they are, they probably wouldn't want to work for you, anyway.

Itsamoneymakingscam.tld
Wow! There is a whole new raft of top-level domains (TLDs) becoming available, such as .guru and .trend. This opens up the capability for people and organisations to better position themselves in the internet world.

Doesn't it?

Not really. When was the last time that you actually typed in a URL with anything but a .com or a country-specific (e.g. .co.uk) suffix? Indeed, when was the last time you actually typed in a URL directly at all? Don't most of us use Google or Bing and then click on the company's name anyway?

Do we really want to be able to go to acompanyname.guru? What will we expect to see there? Would it be so much different from going to acompanyname.com and clicking on a tab with something like 'Guru' marked on it? I'm sure that for a few egotists, registering their own name as a .guru is seen as worth it. That the vast majority of people will be too busy laughing at such vanity to read anything on their web page is likely to pass the .guru owner by.
Just why are we seeing such an explosion of TLDs coming through?

TLDs are broken down into two main groups - the 295 ccTLDs (country code TLDs), such as .uk, .cn, .ch and so on, and the gTLDs (generic TLDs), such as .guru and .trend. I argued strongly against the introduction of the .eu ccTLD in 2005 - it seemed pointless to me and was just another cost for an organisation. As well as these two TLD types, there are a couple of special cases, but these have no bearing on this piece, so we'll brush them under the carpet.

The Internet Corporation for Assigned Names and Numbers (ICANN) was set up in 1998 to look after what should happen within the realm of TLDs. It started to introduce small numbers of new TLDs from 2001, but this was reasonable growth, overall.

An organisation would generally have bought itself its country-specific and .com TLDs. ICANN then suggested that, for brand reasons, it really should buy its .biz, .info and possibly a few other gTLDs as well. At a few dollars a throw, no big deal.

Wind forward to 2008, and ICANN started up a debate on opening up the world to many more gTLDs. By 2012, applications had been made for extra potential gTLDs. How many more? Actually, over 2,000. Some of these have distinct use and value - Mandarin Chinese and Arabic gTLDs will allow countries to have locally meaningful URLs. However, many have no discernible value at all.

The world is facing a move from 22 available gTLDs to over 2,000. Many of these are paid-for gTLDs, where a business can register its own name, or a trademark name, as a gTLD (for example, .amazon, .google, .marmite). These will be sacrosanct - once they have been registered, no-one apart from the registered company can use them. This has led to a land-grab rush. Google has applied to register 101 gTLDs, Amazon 76, and that world-wide superbrand Donuts (a domain registry founded in 2011 specifically to deal with gTLD registrations) has submitted 307. Such grabbing has initiated much gnashing of teeth and arguing as to who should have which gTLD strings.

However, the open gTLDs are the real problem: over 700 new gTLDs that anyone can use. If you are bothered about squatters taking your brand, then what do you do? Do you buy all 700 gTLDs so that you are completely covered? With an average price per gTLD of around £20, that's £14,000 per year for a load of URLs that will never be used, or will just redirect to the .com URL anyway.

And there is nothing to stop more gTLDs from being released. Each one raises yet another problem for an organisation - do we buy it, or run the risk of our brand being hijacked by someone else, who will then try and hold us to ransom for a much higher cost of gTLD recovery? By buying a gTLD and then paying enough to get it high into search engine results, a cybersquatter can soon make a case for payment by the aggrieved party being the best option. OK, there is meant to be a process in place to resolve cases of cybersquatting, but the costs can still be high - certainly higher than the £20 or so of buying the TLD - and the process interminably slow.

To me, it all seems a little like a scam. As long as each gTLD cost is kept low, it is cheaper for a mid- to large-sized organisation to just pay up, rather than employ someone to look at each gTLD and figure out whether the company will ever need it. For the small guys, it becomes a case of finding the top 5, 10, 20 or whatever number of gTLDs that they believe are the most important to them - and then trying to forget that acompanyname.construction could be taken by someone else and used in inappropriate ways. And what happens if someone registers acompanyname.support or acompanyname.complaints and spoofs your site, opening up major phishing issues?

The domain registrars soak up the money; ICANN states that, as a not-for-profit organisation, it will not benefit directly from the new gTLD processes. However, the question has to be asked as to how such unfettered open gTLD expansion serves ICANN's remit. It is nominally there to make the use of the internet easier for all, against its tag-line of 'One world, one Internet'. This opening up of gTLDs seems to work against that ease of use - it is confusing to users, confusing and expensive for organisations, and only seems to work for the registrars and the cybersquatters.

However, it is too late to lock this stable door - the horse has bolted. That it has been allowed to happen should, however, raise questions over how the internet is policed. Meantime, all that can be done is for organisations - particularly smaller ones - to choose carefully and monitor any abuses of their brand on the internet that could be seen by their target audience.



Protecting the Digital Economy's Soft Underbelly
That most basic human activity, selling and buying, is undergoing hefty changes these days - and I'm not referring to the buzz around the quixotic Bitcoin currency and its issues of legality and flux, or the fact that most of the world's ATMs are still running on Windows XP. I'm talking about the trust relationship between buyers and remote, unknown sellers. Some pundits have coined the term 'The Experience Economy' to describe our increased risk-willingness in order to get a pleasurable experience out of any transaction.

We know that fraud levels in our digital economy continue to rise. The Nilson 2013 Report puts the 2012 figure at $11bn - up from $3bn in 2000. We accept the risk because of the sheer convenience of the process, and the willingness of banks to soak up most of the losses at the front end, while charging consumers higher fees at the back end. With 74% of cyber-attacks on retail, accommodation and food services companies targeting payment card information, it's the soft underbelly of electronic shopping.

On the merchant side, many companies rely on PCI-DSS 2.0 (the Payment Card Industry Data Security Standard) to protect our payments and personal information. This is a 12-requirement industry standard defining information security measures for organisations that handle cardholder information for the major debit, credit, prepaid, e-purse, ATM and POS cards.

Verizon's 2014 PCI Compliance Report gives an inside look at the sector's ability to protect this information, based on detailed quantitative results from hundreds of compliance assessments carried out by its PCI Security practice across hundreds of sites between 2011 and 2013, supplemented with data from Verizon's 2013 Data Breach Investigations Report.

Of the companies claiming to be PCI-DSS compliant in 2013, Verizon found that only 11% were compliant with all 12
requirements. However, there are significant overall improvements over 2012 in the percentage of organisations that
meet at least 80% of the controls and sub-controls specified. This increased from just 32% in 2012, to 82% in 2013!

PCI-DSS compliance is not mandatory, and Europe lags behind the US and Asia in PCI-DSS adoption. This may be because of complacency due to better chip-and-PIN card security, or because the mandatory SEPA (Single Euro Payments Area) regulations have higher priority.

The most serious issue facing companies that opt for PCI-DSS compliance is clearly focused on Requirement 11: regular testing of systems and processes. This requires organisations to have a sustainable network and application vulnerability management program. Most organisations that suffered a data breach in 2013 weren't compliant with Requirement 11.

Going forward, there are also important issues for PCI-DSS 3.0 to address. The 2.0 standard gives little guidance on securing mobile payment systems, which are emerging fast. Some retail organisations have started to pilot mobile payment applications in their environments, but the PCI SSC (the PCI Security Standards Council) stopped all certification reviews for mobile payment applications in 2011, due to a lack of clear requirements. This then becomes a significant threat to the mobile transfer of cardholder data.

Another area relates to securing virtualised environments and multi-tenant clouds where mixed environments (in-
scope and out-of-scope systems) are hosted in the same physical server.

PCI-DSS is still evolving and, like most standards, it is behind the curve of leading-edge attackers. But Verizon found that less than 1% of the breaches used tactics rated as 'high' on its difficulty scale, whereas 78% of the techniques used were in the 'low' or 'very low' categories.

PCI-DSS is much more than a tick in the box for the company board. PCI-DSS compliance does not guarantee protection against the theft of payment card details - that requires continued vigilance. So company boards in retail, e-commerce and other industries handling card payments need to include PCI-DSS compliance programs as part of a broader compliance regime, addressing outstanding virtualisation and mobility issues in their GRC (governance, risk and compliance) strategy.

Performance Is The First Victim In
Application Warfare
'The more, the merrier!' goes the old cry. Well, not when a multitude of applications jostle for priority across a best-effort Internet. Oldies among us still remember the days of crystal-clear analogue telephone conversations across connections that were completely reserved for our conversing pleasure (of course, telecom costs were outrageous, and there wasn't any data to crowd voice calls out). Now, thousands of apps spew out across any available bandwidth. This makes traffic load prediction and management an increasingly daunting task, if you still have intentions of delivering end-to-end quality of service.

If your business relies on ecommerce or providing consumer network services, then there are several internal
company constituencies who are seriously concerned about service quality degradation. These include (at least):
customer care, product owners, sales, marketing, engineering, and, at the end of the day, corporate management
who have to answer to their boards and shareholders.

The first requirement is to understand the network's performance in real time as the user experiences it, and then to manage (and meet) those customers' expectations. This requires a combination of methods: detection, using DPI (Deep Packet Inspection) to identify which apps are running; monitoring of application response time and device performance; and resourcing, allocating network resources (bandwidth and processing capacity) to ensure customer QoE (Quality of Experience).

Cisco has, over the past two decades, invested heavily in addressing these issues on its wide range of Internet routing and switching platforms. Today, many of its proprietary solutions have become quasi-industry standards, partly because they are very efficient and backed up by a huge global support and marketing organisation, but partly also because of Cisco's ecosystem business strategy, which allows certified, third-party software developers easy access to its hardware platforms on kit such as the ASR 5000. Three Cisco solutions currently dominate the quality-of-service application management space: AVC (Application Visibility and Control), NBAR2 (Next Generation Network-Based Application Recognition), and NetFlow for class-of-service and network congestion management.

CA, with its recently launched NFA 9.2 (Network Flow Analysis), is one such partner in the Cisco ecosystem. Using the NBAR2 ability to fingerprint more than 1,000 commonly used applications, the CA solution complements Cisco's capabilities with heuristic anomaly detection that learns about the network over time, and automatically detects and creates alarms for anomalies that can impact performance and create security risks. The aim is to provide proactive safeguarding of critical service levels while reducing the costs associated with network troubleshooting. Recent Zeus data-stealing malware attacks on Salesforce customers highlight this issue for SaaS providers.

Better application optimisation capabilities are also needed to identify and remediate traffic congestion issues before they degrade service quality. CA NFA 9.2 monitors response times for the NBAR2-fingerprinted applications, and also allows network managers to create additional profiles for their organisation's custom applications. This extended application-monitoring capability can improve internal enterprise IT efficiency, but also opens up new revenue streams for telcos and other cloud service providers that can provide application-centric monitoring as a cloud service.

Traffic anomaly identification provides the analytics that can identify security and application performance threats, such as misconfigured application servers, the onset of a denial-of-service attack, or internal data leakage. This is done by relating ports and protocols to specific applications, understanding response times for those applications, and applying analytics to reveal potential issues on a proactive basis.
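As a purely illustrative sketch of the baseline-and-deviation idea described above (not CA's or Cisco's actual algorithm), the following Python snippet flags an application whose latest response time drifts well outside its learned baseline; the threshold and sample values are assumptions.

```python
from statistics import mean, stdev

def is_anomalous(history_ms, latest_ms, threshold=3.0):
    """Flag latest_ms if it sits more than `threshold` standard deviations
    above the mean of previously observed response times for this application."""
    if len(history_ms) < 10:          # not enough history to form a baseline yet
        return False
    mu, sigma = mean(history_ms), stdev(history_ms)
    return sigma > 0 and (latest_ms - mu) / sigma > threshold

# Example: a fingerprinted app normally responds in ~40 ms; a 95 ms sample is flagged.
baseline = [38.0, 41.5, 39.2, 40.1, 42.3, 37.8, 40.9, 39.5, 41.0, 38.7]
print(is_anomalous(baseline, 95.0))   # True - raise an alarm, investigate proactively
print(is_anomalous(baseline, 43.0))   # False - within normal variation
```

Real products apply this kind of logic per application, per site and per time of day, and add heuristics that adapt the baseline over time; the principle of learning normal behaviour and alarming on deviation is the same.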

Certainly, the rush of new cloud-based apps, greater user mobility and higher bandwidth demands (just look at the news gushing out from the Mobile World Congress event in Barcelona) maintain the pressure on internal IT and service providers to meet growing user and customer QoE expectations.
The symbiotic dev-ops relationship between a hardware-centric Cisco and a wide range of software developers like CA is hard to beat, because performance improvements can quickly become available to a very large global customer base, offering improved performance with minimal investment in new hardware.

Smaller competitors like Extreme Networks, with its ASIC-based Purview solution, have to provide it all themselves - notably combining the ability to fingerprint 13,000 apps in the context of the user's present role in the business process, location, time of day, type of device, and type of network they are connected on. Their advantage is the ability to craft more customised solutions.

The business-criticality of application performance and security in many companies requires the IT department to adopt an application QoE strategy that meets real-time performance demands, but also ensures buy-in from a wide range of internal stakeholders - from the call centre to the board room.


It's the IP, not the IT, that matters
Many organisations seem to believe that their IT equipment is valuable. They spend inordinate amounts of money applying security to their servers, storage, network switches and appliances - and then assume that, as they have done this, they are suddenly a secure company. Even where there is an understanding that hardware security is not enough, the focus simply shifts to application and database security.

Then something like bring your own device (BYOD) comes along and throws all of this into disarray. The device isn't the organisation's; it isn't connected over an end-to-end corporate network; the apps can be (and are) downloaded from an app store that the organisation has no control over.

It is increasingly difficult for an organisation to draw a line around itself and say 'this is us': the need to share information up and down a value chain of customers and suppliers now means that such borders between steps of a process are becoming less clear.

And herein lies the eternal problem for an IT group - IT-based security is never going to work. Instead, what is needed is IT-facilitated security.

If you look at what makes a successful organisation, it is not great IT, it is not even great products, nor is it great
employees; it is the successful utilisation of its intellectual property (IP). But where does that intellectual property
come from?

In a world of big data, it should come from the effective aggregation, filtering and analysis of a large base of mixed
data sources, which can include everything from data held in formal databases through in-house Office documents to
web searches and subscription-based services. It also needs to allow for information coming in from the value chain
and from the humans along this chain.

Without the right means for filtering and analysis, the data remains just that - a massive great store of ones and zeroes that take up a lot of space and maintenance in the data centre for no visible business benefit. Once filtering and basic analysis are applied, the data becomes information - the platform for intellectual property.


With advanced analysis, the information becomes knowledge - and when fed to the right people, this knowledge can lead to the right decisions being made at the right time, which then creates business value and so leads to a more successful company.

Any loss of information or knowledge through poor security could lead to a competitor being able to steal your IP, or even to a valuable patent being lost due to prior disclosure. IP can be exceedingly valuable: look at the acquisitions of failing companies by the likes of Google, Microsoft and IBM, where it was the patent library that was seen as having the value.

So, what does this mean for IT security? Standard approaches to IT security, as mentioned above, only work when you have control over the hardware and applications. New approaches are needed - ones that focus on the IP, not the IT.

By focusing on the information, a different view can be taken. What happens if that piece of information escapes
outside of the value chain? Well, if it is the company canteen menu, not much. If it is the latest details on possible
acquisitions, it could be very harmful.

Information needs to be classified. Once it is classified (even something as simple as Public/Commercial/Secret), actions can be taken against it. For example, a Secret document that is attached to an email can be quarantined by data leak prevention (DLP) tools so that it doesn't go where it shouldn't. A Commercial document can be time-limited so that, unless a new certificate is provided by a central digital rights management (DRM) system, it will encrypt or securely erase itself after 4 hours. All of this is predicated on understanding the context of any access and the identity of the person who is attempting to access the information. To this extent, identity becomes the new perimeter and IP is what has to be kept safe within that perimeter - an approach described in Quocirca's report.
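As a minimal, hypothetical sketch of how classification can drive handling decisions of the kind described above (the Public/Commercial/Secret labels come from the text; the policy actions and function names are illustrative, not a product API):

```python
from datetime import datetime, timedelta

# Illustrative policy table: classification label -> outbound handling rule.
POLICY = {
    "Public":     {"action": "allow"},
    "Commercial": {"action": "time_limit", "ttl": timedelta(hours=4)},
    "Secret":     {"action": "quarantine"},
}

def handle_outbound(document):
    """Decide what to do with a document that is about to leave the organisation."""
    rule = POLICY.get(document["classification"], {"action": "quarantine"})  # default-deny
    if rule["action"] == "allow":
        return "send"
    if rule["action"] == "time_limit":
        expiry = datetime.utcnow() + rule["ttl"]
        return "send with DRM certificate expiring " + expiry.isoformat() + "Z"
    return "quarantine for review"        # Secret or unclassified content

print(handle_outbound({"name": "canteen-menu.docx", "classification": "Public"}))
print(handle_outbound({"name": "bid-price.xlsx",    "classification": "Commercial"}))
print(handle_outbound({"name": "acquisition.pdf",   "classification": "Secret"}))
```

The point of the sketch is the shape of the approach: the decision hangs off the information's classification and the access context, not off which server or network the document happens to sit on.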

By taking an information-centric approach to security, the IT becomes just a platform: the security embraces new
approaches and can operate across boundaries, enabling the organisation to work in a more effective manner.
It is time to ditch IT security and move to IP security. Your organisation will thank you for doing it.


The internet and a Magna Carta
Sir Tim Berners-Lee has recently stated that he believes that now is the time for internet citizens to have their own
bill of rights, or a Magna Carta, covering what they should expect as freedoms when using the internet.

It's a neat idea, but one that is full of issues.

Let's start with the main thrust of Sir Tim's thoughts. He is against any government body, such as the NSA, snooping on a person's internet usage. As an individual not doing anything that bad on the internet, the first reaction is to agree. The second reaction should be to look further at why the NSA, GCHQ and others are looking at mass activity on the internet. Is it for fun, or so that they can catch a person downloading a pirated copy of The Hunger Games? Hardly - they are more interested in pattern matching and hacking to gain access to those involved in more nefarious activities, mainly associated with terrorism. Yes, at times these government security groups seem to have over-extended their reach, but I'm not quite sure how their snooping could have impacted me.

Then there are the various divisions of police forces: these may be after the pirate, the drug dealer or the fraudster. Many at the lower end of these issues (the occasional pirate, the bit-of-weed smoker) may feel that this is a waste of time, or that the activities going on here are more intrusive to their daily lives. But look at Operation Ore, led by UK police forces. This broke a world-wide paedophile ring, identifying 7,250 suspects in the UK alone and leading to 1,451 convictions. It could not have worked without snooping.


But let's say that, in a burst of good feeling, we could get the government bodies to sign up to a charter. Would this give the average user the freedom on the internet they seek, and the peace of mind while they are doing it?

Of course not. Without any government interference, the black hats would be free to do whatever they wanted. There would be an increased flood of spear phishing - far greater than we have to deal with now; more attacks against firewalls; more ransomware hitting more and more people. Without the capability to at least carry out forensic pattern matching against these ever more mature attacks, the internet moves closer to the Wild West: everything is up for grabs, and the Sheriff has been run out of town. As a side question - as the anti-malware and anti-spam vendors are dependent on snooping on patterns on the internet, should they be shut down as well?

I believe that we have to accept that the internet genie is well out of the bottle. The original academic view of it being a definitive force for good has been watered down. A lot of good (depending on your point of view) has come through the internet - for example, the use of social networking around the various uprisings around the world has ensured that citizens can generally get their point of view of what is happening past the government censors. Crowdsourcing has led to new financial models - for example, around how micro-businesses can get funding. People are generally more aware of the rest of the world and what is going on around them.

We can counter this with the way that the global village has led to growth in the number of village idiots. There is far more garbage available to people out there, and it is possible to find at least one nominally scholarly paper to support pretty much any view that a person wants to hold. It has also allowed anyone who wants to be bad to be bad - whether in a really bad, bad way (for example, the bad-English 'you need to reset your bank password to make sure nasty people do not access your account' phishing messages) or in the nasty bad way (for example, 'We have just encrypted all your files and all files attached to storage on your computer. Pay us with Bitcoins to get them unencrypted').

We need to be able to have targeted law enforcement that can identify bad activities on the internet and deal with them. Yes, it does come down to 'one man's terrorist is another man's freedom fighter'. Yes, there are problems in drawing a line between 'this is bad' and 'this is not really bad' - but again, what different people see as being really bad is a movable feast. Policing the internet will never be easy, and the forces doing so will continue to get it wrong as well as getting some of it right.
The biggest problem is that to ensure that a full forensic investigation of a situation can be carried out, where the various vectors underlying the data are not fully understood, you need to drink the ocean. This means that a whole host of innocent data has to be pulled in to find the little grains of bad stuff that are in there as well - it's like mining for gold.

Many years ago, there was a man in the UK who wrote little ditties about various areas, including history. One was
on the Magna Carta (a copy of it can be read here). It finishes off with the eternal words that I think, with slight
modification, should be chiselled onto all access points to the internet:

And it's through that there Magna Charter,
As were made by the Barons of old,
That in England today we can do what we like,
So long as we do what we're told.


Can the Global Intercloud Mesh With The
Global Internet?
As netizens and consumers of the world, we are all daily users of the Internet and of shared computing resources. Equal access to and general availability of the Internet are emerging as a human right, with a raging global debate about net neutrality. However, no similar debate has yet emerged on equal access to standardised, application-centric cloud computing resources - in part because we do not yet recognise these cloud computing resources to be a global resource. Cloud service providers have strong national affiliations, despite their global user base, and they still retain significant proprietary elements of code and protocols that complicate global delivery.

A leading vendor in the Internet space now wants to challenge this state of affairs. At its recent annual partner conference, Cisco unveiled its Global Intercloud initiative, a two-year, billion-dollar investment aimed at extending its VCE vBlock stake in the Internet of Everything - well, at least everything business. The press release states:

'The Cisco global Intercloud is being architected for the Internet of Everything, with a distributed network and security architecture designed for high-value application workloads, real-time analytics, near infinite scalability and full compliance with local data sovereignty laws. The first-of-its-kind open Intercloud, which will feature APIs for rapid application development, will deliver a new enterprise-class portfolio of cloud IT services for businesses, service providers and resellers. . . . Its partner-centric business model, which enables partner capabilities and investments, is expected to generate a rapid acceleration of additional investment to drive the global scale and breadth of services that Cisco plans to deliver to its customers.'

What distinguishes this initiative from other global cloud operations, such as Amazon EC2, Rackspace, Apple iCloud,
Microsoft Azure or the NetApp-based Orange Business Services Global Cloud, is the multi-provider, decentralised and
global approach adopted by Cisco. The Intercloud will have a unified architecture based on OpenStack and open APIs.
It will be provided by any number of Cisco partners (resellers such as Ingram Micro; systems integrators such as Atos
Canopy and Wipro; managed service providers such as Logicalis; carriers such as Telstra; and so on), so Cisco expects
its initial $1bn investment to be multiplied several times over by its partners' investments. That represents a
massive scalability factor.

Cisco is uniquely positioned at the junction of Internet routing and switching infrastructure, cloud hardware,
collaboration services and network security. Its business model of maintaining a large and vibrant partner ecosystem
is also well suited to bringing such an Intercloud initiative to fruition.

So what hurdles does the Cisco Intercloud have to clear to become a successful latecomer in a crowded market? First
of all, the technical hurdles facing any ultra-scalable global hybrid cloud solution are daunting: consistent
security, Layer 2 scale, VM migration, compatibility, network billing, network provisioning and a lack of bandwidth
guarantees, to name just a few. Cisco, with its line-up of tier-1 ecosystem partners, will have to convince customers
that these bases are covered.

The legal and compliance hurdles concern not only where data is stored, but also how data is encrypted and routed
through different regulatory regimes, something that today's routing algorithms do not take into account. Then there
is pricing. Cisco products and services do not come cheap, and in a price-sensitive, post-financial-crisis market
there are plenty of ways competitors can undercut this Cisco initiative on price. A mature Intercloud offering will
clearly need to adapt to different audiences and value propositions.
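
To make the point about data sovereignty concrete, the sketch below shows one way a placement decision could take
regulatory regimes into account, by filtering candidate regions against the jurisdictions that a workload's data
classification permits. It is purely illustrative: the policy table, region names and Workload class are assumptions
for this example and are not part of any Cisco or OpenStack API.

    # Hypothetical sketch only: the policy table, regions and Workload class are
    # invented for illustration and do not come from any real product or API.
    from dataclasses import dataclass

    # Which jurisdictions each data classification may be stored in (illustrative).
    ALLOWED_JURISDICTIONS = {
        "eu-personal-data": {"EU"},
        "us-healthcare": {"US"},
        "public": {"EU", "US", "APAC"},
    }

    # Candidate regions and the jurisdiction each one sits in (illustrative).
    REGIONS = {
        "frankfurt-1": "EU",
        "dublin-1": "EU",
        "virginia-1": "US",
        "singapore-1": "APAC",
    }

    @dataclass
    class Workload:
        name: str
        data_class: str  # one of the keys in ALLOWED_JURISDICTIONS

    def permitted_regions(workload: Workload) -> list:
        """Return the regions whose jurisdiction is allowed for this workload's data."""
        allowed = ALLOWED_JURISDICTIONS.get(workload.data_class, set())
        return [name for name, jurisdiction in REGIONS.items() if jurisdiction in allowed]

    if __name__ == "__main__":
        payroll = Workload(name="payroll-db", data_class="eu-personal-data")
        print(permitted_regions(payroll))  # ['frankfurt-1', 'dublin-1']

A routing or placement engine that consulted this kind of policy before selecting a provider region would close at
least part of the compliance gap described above.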

The Cisco Intercloud proposition also presupposes significant advances in the underlying network infrastructure,
notably network virtualisation, Network Functions Virtualisation (NFV), Virtual Overlay Networks (VONs),
Software-Defined Networking (SDN) and the alignment of OpenFlow and Ethernet-based cloud architectures. Using
OpenStack as the cloud orchestration framework provides rich APIs, so that resources such as virtual machines,
storage, virtual NFV appliances, overlay tunnels and underlay networks can be woven together programmatically to
ensure dynamic and rapid service delivery.
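
As an illustration of that programmability, the following minimal sketch uses the openstacksdk Python library to
weave a network, a subnet and a virtual machine together. It assumes a cloud entry named "intercloud-region-1" in a
local clouds.yaml file, and the image and flavor IDs are placeholders; a real deployment would take all of these from
the provider's catalogue.

    # Minimal sketch using the openstacksdk library; the cloud name, image ID and
    # flavor ID below are placeholder assumptions for illustration.
    import openstack

    # Connect using credentials defined for "intercloud-region-1" in clouds.yaml.
    conn = openstack.connect(cloud="intercloud-region-1")

    # Create an overlay network and subnet for the workload.
    network = conn.network.create_network(name="app-overlay-net")
    subnet = conn.network.create_subnet(
        network_id=network.id,
        ip_version=4,
        cidr="10.10.0.0/24",
        name="app-overlay-subnet",
    )

    # Boot a virtual machine attached to that network.
    server = conn.compute.create_server(
        name="app-node-1",
        image_id="IMAGE_ID",    # placeholder: an image UUID from the provider catalogue
        flavor_id="FLAVOR_ID",  # placeholder: a flavor UUID from the provider catalogue
        networks=[{"uuid": network.id}],
    )
    conn.compute.wait_for_server(server)
    print(server.name, "is active on", network.name)

An Intercloud-style federation would then need to add provider selection, inter-provider networking and policy on top
of this kind of per-cloud orchestration, which is where the partner ecosystem comes in.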

The Intercloud presents multinational customers with a new hybrid cloud value proposition. It will take time to
emerge, but the Cisco vision goes further than that of any of its present-day competitors.



About Silver Peak Systems

Silver Peak software accelerates data between data centres, branch offices and the cloud. The company's software-
defined acceleration solves network quality, capacity and distance challenges to provide fast and reliable access to
data anywhere in the world. Leveraging its leadership in data centre-class wide area network (WAN) optimisation,
Silver Peak is a key enabler for strategic IT projects such as virtualisation, disaster recovery and cloud computing.

Download Silver Peak software today at http://marketplace.silver-peak.com.













About Quocirca

Quocirca is a primary research and analysis company specialising in the
business impact of information technology and communications (ITC).
With world-wide, native language reach, Quocirca provides in-depth
insights into the views of buyers and influencers in large, mid-sized and
small organisations. Its analyst team is made up of real-world practitioners
with first-hand experience of ITC delivery who continuously research and
track the industry and its real usage in the markets.

Through researching perceptions, Quocirca uncovers the real hurdles to
technology adoption: the personal and political aspects of an
organisation's environment, and the pressure for demonstrable business
value in any implementation. This capability to uncover and report back
on end-user perceptions in the market enables Quocirca to provide advice
on the realities of technology adoption, not the promises.

Quocirca research is always pragmatic, business-orientated and conducted in the context of the bigger picture. ITC
has the ability to transform businesses and the processes that drive them, but often fails to do so. Quocirca's
mission is to help organisations improve their success rate in process enablement through better levels of
understanding and the adoption of the correct technologies at the correct time.

Quocirca has a pro-active primary research programme, regularly surveying users, purchasers and resellers of ITC
products and services on emerging, evolving and maturing technologies. Over time, Quocirca has built a picture of
long term investment trends, providing invaluable information for the whole of the ITC community.

Quocirca works with global and local providers of ITC products and services to help them deliver on the promise that
ITC holds for business. Quocirca's clients include Oracle, IBM, CA, O2, T-Mobile, HP, Xerox, Ricoh and Symantec, along
with other large and medium-sized vendors, service providers and more specialist firms.

Details of Quocirca's work and the services it offers can be found at http://www.quocirca.com

Disclaimer:
This report has been written independently by Quocirca Ltd. During the preparation of this report, Quocirca may have
used a number of sources for the information and views provided. Although Quocirca has attempted wherever
possible to validate the information received from each vendor, Quocirca cannot be held responsible for any errors
in information received in this manner.

Although Quocirca has taken what steps it can to ensure that the information provided in this report is true and
reflects real market conditions, Quocirca cannot take any responsibility for the ultimate reliability of the details
presented. Therefore, Quocirca expressly disclaims all warranties and claims as to the validity of the data presented
here, including any and all consequential losses incurred by any organisation or individual taking any action based on
such data and advice.

All brand and product names are recognised and acknowledged as trademarks or service marks of their respective
holders.


REPORT NOTE:
This report has been written independently by Quocirca Ltd to provide an overview of the issues facing organisations
seeking to maximise the effectiveness of today's dynamic workforce.

The report draws on Quocirca's extensive knowledge of the technology and business arenas, and provides advice on the
approach that organisations should take to create a more effective and efficient environment for future growth.
