
THE DINOSAUR MYTH

Why the mainframe is the cheapest solution for most organizations
CONTENTS

Why a new edition?
Measures of cost-effectiveness
The true costs of computing
Hardware and basic software costs
Application software
Personnel costs
Hidden costs and other factors
Partial downsizing and the incremental trap
The best of both worlds?
Future cost trends
Conclusion

This Report is based on ongoing research into the cost of ownership
of large systems, storage, and software, carried out by Xephon's
Enterprise Market Service since the mid-1980s.

Copyright on the picture of the Tyrannosaurus Rex on the cover
belongs to the Chinese Web site dinosaur.net.cn; our thanks to them
for permission to reproduce it.

Copyright Xephon 2002-2003. Please respect our copyright, and
don't give copies of this Report to others. A free version, in HTML, is
available from the Mainframe Week Web site.
THE DINOSAUR MYTH
The Dinosaur Myth was first published in 1992. At that time, it was
very rare to find a reference to mainframe computers in the business
press that did not state, or at least imply, that they were obsolete,
expensive, and doomed to extinction in the near future. Indeed they
were, quite often, likened to dinosaurs.
Because of this negative image, there was much talk of downsizing
from mainframes to smaller systems: AS/400s, minicomputers
running Unix, or PC servers. The notion of distributed systems, with
end-users taking more control, also fitted well with the then-current
vogue among management consultants for decentralization and
employee empowerment.
In marked contrast, The Dinosaur Myth explained in non-technical
terms why, far from being obsolete, mainframes at that time offered
the most cost-effective computing facilities for all but the smallest
organizations, and would, we predicted, continue to do so for the
foreseeable future.
Perhaps partly because of the widespread distribution of The
Dinosaur Myth, and the success of the associated Xephon seminar,
The Downside of Downsizing, which attracted capacity audiences in
dozens of cities in every continent apart from Antarctica, the
downsizing fad more or less fizzled out in the mid-1990s.
For example, the 15 December 1992 issue of the Financial Times
stated that "the jury appears still to be out on the cost savings from
downsizing . . . What is not in question is that downsizing involves
costs that are neither obvious nor negligible and are frequently
ignored in making the case for downsizing."
The 19 May 1993 edition of the Wall Street Journal reported that
"Computer downsizing can often be an uphill effort. The much-touted
process . . . is the hottest thing in computing today. But despite the
hype about its benefits . . . the transition for many companies is
proving painful." Indeed, the article continues, "boardroom
disillusionment about the pace of downsizing has prompted some
analysts to think what last year would have been unthinkable: that
demand for mainframe computers could surge as companies realize
that the downhill shuffle isn't all it was cracked up to be."
By 1994 even the consultants had seen the problems of downsizing.
Price Waterhouse actually provided the following quote at a Compass
meeting in the UK: "you have to decentralize to see what a mistake it
is...the moment data becomes the personal property of the users,
fragmentation starts, and infrastructures end".
Why a new edition?
We decided to publish a new edition of The Dinosaur Myth for two
reasons:
1 There are a great many people who have entered the industry in
the past decade with no experience of mainframes, who simply
assume that they are obsolete and must be more
expensive than their newer and vastly more widespread
alternatives.
2 The relative cost-effectiveness of the mainframe against its
competitors has changed quite markedly in the past few years.
This new edition will put the relative costs of the various platforms into
context today, and also explain the recent changes in the economics
of computer systems.
Measures of cost-effectiveness
Before we can compare different types and sizes of systems, we need
some common yardstick of computer performance or effectiveness by
which systems of all sizes can be measured and compared.
Processing speed (the rate at which the processing unit of a
computer can execute instructions) is one plausible measure, and
the commonest yardstick of processing speed is MIPS, or millions of
instructions per second. Dividing the cost of a computer by its MIPS
rating would, it seems, provide a convenient measure of its cost-
effectiveness. It's certainly a comparison that's often made by
journalists, invariably to the detriment of the mainframe. Today's
mainframes have a hardware cost of a few thousand dollars per
MIPS. A PC, on the other hand, may have a cost per MIPS of just a
few dollars, and Unix systems a cost per MIPS measured in hundreds
of dollars. Clearly, by this measure, the mainframe appears to be
at a major disadvantage!
That MIPS really stands for Meaningless Indicator of Performance is
an old joke among computer technologists. Computers of different
designs have different sets of instructions to which they respond.
Some instructions invoke complex and time-consuming operations (eg
moving a large block of data around in memory), while others call on
the computer to do very little (eg adding two numbers together). So MIPS
isn't even a sensible measure of processing speed, except in
comparing systems of similar design (and even then it has to be used
with great caution).
In any case, most commercial work is data-intensive rather than
processor-intensive: relatively simple operations are applied to very
large amounts of data. The calculations involved in creating an
invoice, or validating a cash withdrawal, or making a seat reservation,
are relatively trivial, but a lot of data has to be located, retrieved,
updated, and stored again for each transaction. For commercial work,
a computer's MIPS rating is about as meaningful as the 0-60 mph
acceleration time of a forklift truck.
Mainframes are designed specifically for data-intensive work, with
very sophisticated data handling facilities. Minicomputers and
workstations, on the other hand, are designed to be very fast at
computation but are rather feeble at data handling. Graphics is also a
compute-intensive rather than a data intensive task, so PCs with a
graphical user interface such as Windows do need a lot of processing
muscle. On the other hand, they have only a single data path between
processor and storage, capable of transferring typically only a few
million characters per second. A mainframe, by contrast, can have
effectively thousands of channels, each capable of transferring
For commercial work, a computers MIPS rating
is about as meaningful as the 0-60 mph
acceleration time of a forklift truck.
6
hundreds of megabytes per second many thousand times more
than a PC.
In practice, mainframe systems tend to be data rich and MIPS poor;
that is, they control very large amounts of data relative to their
processing power. PCs and Unix systems, on the other hand,
generally have the opposite profile: they are MIPS rich and data
poor. For example, a mainframe today typically has 5 to 10 gigabytes
of data per MIPS, compared to less than one gigabyte for Unix minis
or PCs (a gigabyte is a thousand million bytes or characters).
Now, the amount of data that a computer system can manage is of
rather greater practical interest to most commercial organizations than
the speed with which it can perform a million subtractions or additions:
many organizations have massive files that need to be accessible to
their computer users. And, by that measure, mainframes clearly have a
great advantage over Unix systems and PCs.
However, even data handling capability is not an entirely satisfactory
measure of a computer's value to an organization. What really
matters is the number of users, performing whatever functions are
necessary to the organization, that a computer can support with a
reasonable level of service. Therefore, the key yardstick of a
computer's cost-effectiveness is the total cost per user, measured
over a reasonable time-span, say five years, so that high up-front
costs are weighed against long-term costs.
To achieve this multiple-application capability, mainframes have
evolved mechanisms for the efficient sharing of resources among
large numbers of concurrent users. In particular, they have
multiple interrupt levels, permitting them to switch from task to task
without losing track. This means that a task waiting for an external
event (a transaction from a terminal, or a data transfer from a disk
drive, for example) can be suspended and returned to later, while
other tasks are attended to in the meantime. They also have very
sophisticated resource management capabilities, which allow users'
work to be completed on a priority basis, so that even when the
system is fully loaded the key applications get the capacity needed
to perform the task in hand. These resources can be reallocated
literally second by second to achieve this goal. Unix systems and
PCs do not have such sophisticated mechanisms.
This ability for all work to be completed on one system is crucial. For
example, let's say ten applications are each used by all of the staff; if
each required up to 10 MIPS of capacity, but in total no more than,
say, 20 MIPS were needed at peak load, then on a mainframe 20
MIPS would suffice, whereas in the Unix or PC case ten systems of 10
MIPS each would be needed, five times more capacity in total than
on the mainframe. Indeed, in many cases things are far worse than
this, with users having three systems for each application: one for
production, one for back-up, and one for testing.
Neither PC servers nor Unix systems can run effectively at 100%
utilization. At anything above 50% utilization, response times suffer
and system failures occur, so we must once again double the
required capacity. This also increases the storage and support
requirements, and lowers availability by default: the more
complex the environment, the more likely the system is to fail.
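To make the arithmetic concrete, here is a back-of-the-envelope
sketch in Python; the application counts, MIPS figures, and the 50%
utilization ceiling are the illustrative numbers from the text above,
not measurements:

    # Illustrative capacity comparison: one shared system versus one
    # dedicated server per application, using the example figures
    # from the text rather than benchmark data.

    num_apps = 10            # applications, each used by all staff
    peak_per_app = 10        # MIPS each application demands at its peak
    combined_peak = 20       # MIPS needed when all apps peak together
    max_utilization = 0.50   # ceiling above which PC/Unix response degrades

    # A shared mainframe needs only the combined peak (it can run close
    # to 100% utilization); dedicated servers are each sized for their
    # own peak and must then be doubled to stay below 50% utilization.
    mainframe_mips = combined_peak                              # 20
    dedicated_mips = num_apps * peak_per_app / max_utilization  # 200

    print(f"Mainframe capacity needed: {mainframe_mips} MIPS")
    print(f"Dedicated servers needed:  {dedicated_mips:.0f} MIPS")
    print(f"Ratio: {dedicated_mips / mainframe_mips:.0f}x")     # 10x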
The true costs of computing
The true costs of computers fall into these main categories:
1 The cost of the hardware (including terminals, printers, and other
peripheral devices) and the basic operating software, over a
reasonable period. This figure should include the cost of
maintaining the hardware over that period, and incidental costs
like office space, electrical power, special cooling requirements,
etc.
2 The cost of the application software the off-the-shelf packages
or customized programs that allow the computer to perform
useful work.
3 The personnel costs associated with operating the hardware and
software and sorting out any problems that may occur. To this
should be added the cost of any time wasted waiting for the
computer system.
There are, in addition, other costs that can be directly attributed to
computer systems, which may not be so readily quantified but should
also be considered. These will be touched on later.
Hardware and basic software costs
The following comparisons are based on a representative selection of
systems from amongst our clients performing the same or similar
tasks:
Various mainframe configurations supporting large numbers of
users.
Several Unix servers from different vendors supporting similar
numbers of users.
A selection of PC servers from different vendors supporting
similar numbers of users.
We calculated the basic hardware, software, and maintenance costs
over five years for these systems (excluding the cost of finance, and
ignoring inflation). Our estimates per end-user were:
Basic hardware, software, and maintenance costs per end-user over
five years:

    Mainframes    $4,750
    Unix minis    $5,750
    PC servers    $7,500
Already it can be seen that the alternative platforms do not have any
advantage over mainframes. This is largely due to the additional
capacity required as outlined above.
These figures may surprise some readers, who may have seen the
mainframe as expensive, with its software in particular perceived as
exceptionally expensive. But if you need only a tenth of the
capacity to perform the same work, the perceived hardware price
disadvantage soon evaporates.
The Unix and PC server cost figures used here are higher than those
for very small numbers of users, as it has become apparent that
neither Unix nor PC server systems are truly scalable at the same
level of cost. By that we mean that, as the number of users increases,
the cost per user increases. Our own estimate, based on extensive
research, is that for a doubling of the number of users the costs
increase by close to 125% on a non-mainframe platform but by only
90% on a mainframe.
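As a sketch of what those growth rates imply for the cost per user,
the following hypothetical Python model assumes only the 125%/90%
doubling figures quoted above; the equal starting point at 100 users
is an assumption made purely for illustration:

    import math

    # Hypothetical scaling model: total cost multiplies by a fixed
    # factor each time the user count doubles (2.25x non-mainframe,
    # 1.90x mainframe, per the estimates in the text).

    def relative_cost_per_user(users, factor, base_users=100):
        doublings = math.log2(users / base_users)
        total = factor ** doublings        # total cost, normalized to 1
        return total / (users / base_users)

    for users in (100, 200, 400, 800, 1600):
        print(f"{users:5d} users: mainframe "
              f"{relative_cost_per_user(users, 1.90):.2f}, "
              f"non-mainframe {relative_cost_per_user(users, 2.25):.2f}")
    # Cost per user drifts down on the mainframe (0.95x per doubling)
    # and up on the other platforms (1.125x per doubling).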
These figures make no real allowance for batch work (work that
requires no on-line interaction with end-users: for example, overnight
updates of customer accounts from data generated during the day by
cash dispensers or off-line data entry clerks, printing invoices, creating
management reports, etc). Most mainframe sites also use the
overnight shift to reorganize files and defragment data, to improve
on-line performance during the day. Mainframes are the undisputed
masters of batch processing: most run 24 hours a day, whereas their
on-line networks are active for far less time, even today.
However, it is difficult to quantify batch processing in terms of on-line
users, which is the measure we've chosen to adopt, and so we have
opted to discount the benefits of batch processing altogether, even
though we've retained the costs within the mainframe system costs.
In the past we had to add something for the cost of floor-space and
special environmental requirements. This would typically be higher for
the mainframe than for Unix systems, with no costs under this
heading for PCs, since they occupied much the same space as the
terminals for mainframe and minicomputer systems. For mainframes,
we also needed to add the cost of the network hardware and software
whilst the equivalent costs for interconnecting minicomputer systems
and PCs were very variable, and any figure we proposed would have
been open to dispute.
But today all of these costs are at worst similar on each platform,
with the mainframe if anything proving the cheaper in most
instances.
Application software
The application software required will obviously vary widely for
different organizations. However, with most packages available across
all platforms and most platforms requiring similar levels of tailoring of
applications or custom-built applications, the costs today are similar
regardless of the platform for the equivalent number of users. A figure
of around $150 per user per year ($750 over five years) is the
average of the clients studied to date. Adding these costs, we get the
following approximate figures:
Five-year costs per end-user (basic hardware, software, and
maintenance, plus application software):

    Mainframes    $5,500
    Unix minis    $6,500
    PC servers    $8,250
Personnel costs
All computer systems require some human supervision, ranging in
complexity from loading the printer with paper to diagnosing and fixing
hardware or software faults. End-users may be able to handle some
of this work themselves, but even the most independent will
occasionally require the assistance of specialist staff. At the other
extreme, end-users supported by mainframes are largely shielded
from both the complexities and the chores involved in tending to the
computer's needs; instead, full-time specialists are employed. Unix
systems fall somewhere between these two extremes.
The staff costs of running mainframes are very visible: operators and
technical support staff do nothing but minister to the mainframe, and
their salaries and employment costs are easily identified. Current
mainframes on average require one technician (systems programmer
or operator) for every 250 mainframe users, which, at an average
employment cost of $75,000, amounts to $1,500 per end-user over a
five-year period.
Two points are worth making here. First, the number of operators and
systems programmers required per mainframe MIPS has fallen
tenfold in the past seven years, and is expected to at least halve
again in the next five years. Second, the estimate we've adopted
assumes multi-shift 24-hour operation, which means that the batch
work typically carried out overnight is included in the cost, even
though we have made no allowance for it in our cost-per-end-user
calculations.
For Unix systems, fewer technical staff are required to tend the
system, because they do not normally operate 24 hours a day; our
research puts the level at close to one person per 500 users, a cost of
around $750 per user over five years at the typical cost of $75,000 per
annum per person. In addition, it is generally reckoned that, on
average, one full-time support specialist is required for every 100 end-
users in a typical Unix environment, which is close to the three-to-one
ratio relative to the mainframe reported by our clients. If that
specialist costs (say) $75,000 a year to employ, the five-year cost per
end-user will amount to $3,750. Putting the operational and support
needs together, we get a figure of $4,500 per user over five years.
In the PC environment, in many cases the end-user is the operator.
It's his or her responsibility to take back-ups, copy files, put the
appropriate paper in the printer, look up error messages in the
manual, and so on. It is estimated by the users studied that the
average PC user spends one hour a week, or 12 minutes a day, either
tending to the system or waiting for a response from it. This seems
like a conservative figure to us, but it nonetheless equates to 2.5% of
the end-user's time. How much that might cost will of course vary
depending on the end-users' jobs, but if we assume a minimum
annual end-user employment cost of $36,000, 2.5% of that amounts
to $900 a year, or $4,500 over five years.
To this must be added the cost of specialist support staff, or local
experts within the user group, who help out when users are unable to
solve a problem themselves. The users surveyed reported that PC-based
systems require the equivalent of one support person for every 50 PC
users today, which may seem a high figure but is in fact only the
equivalent of 2% of each end-user's time. At $45,000 a year (an
average between the cost of the end-user and the technical people),
that costs another $4,500 per end-user over a five-year period.
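Pulling the staffing arithmetic together, here is a quick check in
Python; all figures are the ones quoted above, and the helper
function is purely illustrative:

    # Five-year support cost per end-user: each staffing component is
    # (annual employment cost x 5 years) / (users covered per person).

    YEARS = 5

    def per_user(annual_cost, users_per_person):
        return annual_cost * YEARS / users_per_person

    mainframe = per_user(75_000, 250)   # one technician per 250 users

    # Unix: one operator per 500 users plus one support person per 100.
    unix = per_user(75_000, 500) + per_user(75_000, 100)

    # PC: users spend 2.5% of their own time ($36,000/year) tending
    # the system, plus one $45,000 support person per 50 users.
    pc = 0.025 * 36_000 * YEARS + per_user(45_000, 50)

    print(f"Mainframe: ${mainframe:,.0f}")   # $1,500
    print(f"Unix:      ${unix:,.0f}")        # $4,500
    print(f"PC:        ${pc:,.0f}")          # $9,000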
Adding these estimates to the running totals gives the following
results:
Five-year costs per end-user (basic hardware, software, and
maintenance; application software; and support):

    Mainframes    $7,000
    Unix minis    $11,000
    PC servers    $17,250
Returning to our cost-comparison, the Unix systems on which these
figures are based are less effective than the mainframe,
and will only provide response times in the 2-4 second range. An
average extra delay of two to three seconds for every interaction
(which we've assumed will take place on average every 45 seconds)
equates to a 5% overhead. It's unlikely that end-users will be able to
do any useful work during that time, so in effect the minicomputer
solution levies a hidden cost equivalent to 5% of all end-users' time
(their salary plus other employment costs). At $36,000 per person per
year, this adds a minimum figure of $1,800 a year to the Unix system
costs, or $9,000 over the five years.
PC server-based systems are typically no better in this respect, and
often far worse, but for the sake of this comparison we will assume a
hidden cost of $9,000 over five years for these systems as well.
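The waiting-time overhead is simple to verify, using the figures
quoted above:

    # Hidden cost of slower response: an extra 2-3 second wait on an
    # interaction occurring every 45 seconds is roughly a 5% tax on
    # each end-user's time.

    extra_delay = 2.5        # seconds of extra wait (midpoint of 2-3)
    interaction_gap = 45     # seconds between interactions
    annual_cost = 36_000     # minimum end-user employment cost, $/year
    years = 5

    overhead = extra_delay / interaction_gap     # ~5.6%, call it 5%
    hidden = 0.05 * annual_cost                  # $1,800 per year
    print(f"Overhead: {overhead:.1%}")
    print(f"Hidden cost: ${hidden:,.0f}/year, "
          f"${hidden * years:,.0f} over {years} years")   # $9,000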
If you accept the argument that a fair cost-comparison should take
account of time wasted because a system is slower to respond, we
now have the following estimated five-year costs per end-user:
Five-year costs per end-user (basic hardware, software, and
maintenance; application software; support; and personnel costs):

    Mainframes    $7,000
    Unix minis    $20,000
    PC servers    $26,250
These figures are compared below with the equivalent or comparable
figures published in 2001 by ITG, in a management brief entitled The
Cost Implications of Platform Choice:
Five-year costs per end-user, Xephon estimates versus ITG:

                  Xephon     ITG
    Mainframes    $7,000     $14,000
    Unix          $20,000    $39,440
    PC            $26,250    $45,000
Our figures are on the low side, at around half of the ITG figures.
However, in terms of the relative costs they are very similar. The
reason our figures are lower is that they are based upon
very large organizations, where the cost per user is lower through
simple economies of scale. These large users are also more efficient
in their use of any of the platforms than the average user represented
in the ITG case.
The breakdown of our estimates is shown in the chart above. While
mainframes show only slightly lower costs for the
hardware and applications areas, their advantage in support and
employee efficiency costs is enormous. In particular, for the
mainframe system the cost of personnel accounts for just 21% of the
total, while for the PC and Unix solutions this figure is around 68%.
This ratio of over three to one in favour of the mainframe is even
better than the level we found in the early 1990s. As personnel is the
one area whose costs have been increasing over time, it is not
surprising that the mainframe's current cost advantage is greater than
we last reported, and it is also more or less certain to continue to
improve in the future.
In effect, the PC solution, and to a lesser extent the Unix solution,
move much of the personnel costs out from the Data Center to the
end-user. That has two consequences: first, it tends to hide the costs,
which are absorbed into other budgets, and, second, it increases the
total costs, because more people are involved in identical
housekeeping activities. For example, whereas a mainframe system
will back-up the data of thousands of end-users either automatically or
with minimal operator intervention, with the PC solution each
individual, or perhaps each workgroup, has to initiate the process.
We should emphasize that all our cost-estimates ignore inflation and
the cost of money, and assume a green-field site. They cannot be
compared with the budgeted costs of existing installations. For
mainframes in particular, given that prices are falling all the time,
systems installed some time in the past will be correspondingly more
expensive, and our figures are also based on complete systems
rather than upgrades, which tend to cost considerably more for
equivalent performance. Many organizations write off capital costs
more quickly than the five years we've allowed. And the figures don't
take account of batch applications. If these and other salient factors
are taken into account, the budget costs over a five-year period for
all solutions will be higher than the
figures we've quoted here. However, the relative costs would certainly
not change, as the figures we have calculated are conservative for the
non-mainframe solutions.
Hidden costs and other factors
Mainframes have been around for quite a while, and their direct and
indirect costs are now well known. Unix systems and PC servers are
more recent. Problems that have been recognized and solved (at a
cost) in the mainframe environment are often not even acknowledged,
let alone tackled, in these less mature technologies. Some examples
follow.
Mainframes provide very high levels of data integrity, by taking regular
back-up copies of important data and keeping a log of transactions,
so that, in the event of a system crash (caused, for example, by
a software or operator error, or a power failure, though most large
mainframes are protected by uninterruptible power supplies), data can
be restored to its pre-crash state by reinstating the last back-up copy
and re-applying all intervening transactions from the log automatically.
Indeed, for most large users today everything is mirrored on a
disaster recovery system, and production can continue uninterrupted
by virtually any type of hardware, software, or operational failure.
Taking these precautions against data loss and corruption costs time
and capacity. Though much of this takes place at night during batch
processing, there is a continuing overhead while the on-line network is
live, so the system has to be powerful enough to carry the overhead
and still deliver sub-second response times to end-users. Our
mainframe cost estimates allow for this overhead.
In over 99.99% of mainframe system crashes, no significant data is
lost today. Contrast this with the typical PC or Unix environment,
where it's often left to the user to remember to take back-up copies
before going home. If a crash occurs, all work since the last back-up
copy was taken has to be redone. And PC and Unix systems are far
more prone to system crashes than mainframes: today most
mainframe installations experience on average fewer than one system
crash, or outage for change, in a year. The other platforms experience
frequent (by comparison) failures, and also frequent periods of
downtime to install new software or hardware.
Downtime costs money: for some organizations, literally hundreds of
thousands of dollars per hour and millions per day. For others it
means the end of the business! With e-commerce growing rapidly,
these costs will escalate, and the 99.99% availability of the mainframe
will prove invaluable. Remember that even 99.9%
availability, something no large and complex PC or Unix based system
can deliver, means over 8 hours of downtime per year, compared to the
mere minutes endured by most mainframe users.
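The availability arithmetic behind that comparison is easily checked:

    # Annual downtime implied by an availability figure:
    # downtime = (1 - availability) x hours in a year.

    HOURS_PER_YEAR = 365 * 24   # 8,760

    for availability in (0.999, 0.9999):
        downtime = (1 - availability) * HOURS_PER_YEAR
        print(f"{availability:.2%} availability: "
              f"{downtime:.1f} hours down per year "
              f"({downtime * 60:.0f} minutes)")
    # 99.90% availability: 8.8 hours per year (526 minutes)
    # 99.99% availability: 0.9 hours per year (53 minutes)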
Similarly, mainframes can offer very high levels of security, through
password protection and data encryption. Again, there is a penalty to
be paid in terms of the security software's performance overhead:
the system has to be that much more powerful in order to deliver
acceptable response times. This overhead, too, is allowed for in our
mainframe cost estimates, as the impact on performance is
negligible. Security packages are available for PCs and Unix systems,
but in general security is the responsibility of the end-user, who may
or may not be conscientious. The level of security to be expected from
end-users themselves is fairly dubious, particularly since computer
crime is mostly committed by end-users anyway.
Other security-related problems are virtually unknown on the
mainframe but are very prevalent in the PC and (to a lesser extent)
Unix environments. One is computer viruses: fragments of code
written by malicious individuals, which attach themselves to programs
and infect other systems via shared diskettes or across LANs. These
can be very destructive, and guarding against them is becoming an
increasingly time-consuming chore for end-users and their support
staff.
The second problem is less obvious: software theft, the
unlicensed copying of software. An audit of the hard disks of end-
users' PCs often reveals one or more stolen programs. Corporate
management may be quite unaware of these illicit copies, but it is
nonetheless legally liable, and the Software
& Information Industry Association and Business Software Alliance
are waging an increasingly high-profile campaign against offending
companies. Policing end-users to prevent illegal copying is also
becoming a time-consuming and unpopular chore for
management and support staff.
One final aside on the topic of security: there is a thriving black
market in stolen PCs, but no record of a mainframe being hi-jacked.
The financial impact of a stolen PC could well exceed the actual
hardware costs by a huge margin if the PC in question holds data
relating to individuals which is covered by privacy laws. To avoid this
problem, future PCs, when connected to a mainframe, will have the
capability of losing all data and even software when switched off,
with the mainframe reloading the PC when it is powered on again.
This ability to download the software will in itself create large savings
and avoid problems caused by different levels of software on different
PCs.
There are other non-obvious costs associated with Unix and PC
server based systems. For example, there is an active second-hand
market for mainframe systems, and used mainframes are typically
worth many times more than other systems as a percentage of their
original purchase price (and it's difficult to find a buyer for any but the
most common Unix servers). After three or four years, PC servers are
virtually worthless. Possibly most significant of all, the other solutions,
unlike mainframe ones, do not provide scalability with linear cost
increases, as we pointed out earlier in this report. For example, the
Sun 10K systems in our survey typically cost over 125% more per
user than the smaller Sun servers. We believe from our research that
all non-mainframe servers will exhibit this same tendency, with the
actual cost per user increasing as the number of users increases. This
means that in practice all of the mini and PC costs in this report
should be increased substantially if it is your intention to support more
than a few thousand users.
Partial downsizing and the incremental trap
Though there are a few instances where organizations have fallen on
hard times and replaced mainframes with smaller systems to run a
much reduced workload, we know of no case where a sizable modern
mainframe has been wholly replaced by another platform running the
same workload. Many press reports of downsizing turn out on closer
inspection to be nothing of the kind. Downsizing in the commonly
understood sense just didn't happen, and doesn't now. But what has
happened, and still does happen, is a sort of incremental downsizing,
which can have equally disastrous consequences.
The problem with mainframes was that a relatively small number of
models covered a very wide power spectrum, from perhaps 50 end-
users up to 25,000. Unless its workload was growing very fast, an
organization could be faced with a much larger, and more expensive,
upgrade than it really needed in order to add another application with
a small number of end-users. In that situation, the cost of the upgrade
could seem exorbitant compared with the cost of a Unix or PC server
system capable of handling the new application. The temptation was,
of course, to implement the new application separately on a smaller
free-standing system. When the need for another new application
arose, the same logic applied. And so it went on: in time, the
organization had both a mainframe and a number of separate smaller
systems running individual applications, all of which could far more
economically have been accommodated on a larger mainframe.
And often, removing small applications from the mainframe does not
reduce the required mainframe capacity anyway! The reason is
simple: in most organizations 20% of the applications take 80% of the
capacity, and 100% when peaking. The smaller applications use up
idle time between peaks. Removal of such applications therefore has
little or no impact on the overall capacity needed.
Now, however, with capacity on demand and workload pricing, users
are able to add quite small increments of processing power to their
mainframe relatively cheaply. This is one important development that
has occurred recently, which helps to reduce the single significant
drawback of mainframes that we identified in the early 1990s.
The best of both worlds?
Another major new opportunity for mainframe users is the availability
of Linux. This solution is a halfway house, as it brings many of the
mainframe's advantages to the Open world. It allows users to run
multiple Unix applications on a single system, with literally
hundreds or thousands of simultaneous servers accommodated
on one machine. In this mode it eliminates many of the support and
management issues of the massive Unix and NT server farms that
have materialized in many organizations today.
The main advantage of Linux is that it allows the traditional mainframe
user to add new applications to the current systems at very low
incremental cost. This eliminates the sole problem that we found with
the mainframe in the past (the incremental cost of adding a single
small application to an existing system).
Future cost trends
Even if the mainframe is the most cost-effective alternative at present,
what about the future? Everyone believes that the price/performance
of PCs is falling far faster than that of Unix systems, which in turn is
falling faster than the price/performance of mainframes. It may
therefore be argued that, sooner or later (and probably sooner rather
than later), any advantage the mainframe may have will disappear.
In fact, the opposite seems likely in the future, just as it has proved in
the past, because of another, less widely publicized trend. PC servers
and Unix servers still lack much of the functionality that mainframe
users take for granted. These missing functions, along with the
performance overhead that they impose, are being added in
successive software releases.
Meanwhile, the cost per MIPS of the mainframe is falling steadily at
25% to 40% a year. And the emphasis in system software
development is less on adding functionality (much of which it already
has) and more on improving performance, in particular taking
advantage of new hardware features such as 64-bit storage. As a
result, the same system running a later release of the operating
system can show significant performance gains over a system running
an earlier release.
One more or less certain trend is that staff costs will continue to rise in
real terms. Bearing in mind that staff-related expenditure currently
accounts for around 68% of the total costs for both PC and Unix
server systems, compared with around 21% for mainframes, the
relative effect on PC and Unix server costs will be over twice as great
as on mainframe costs. And, as we remarked above, the established
trend is for mainframes to require fewer technical staff each year.
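As a hypothetical sensitivity check (the 68% and 21% staff shares
are from the text; the rate of wage growth is an assumed figure, for
illustration only):

    # If staff-related spending rises in real terms while other costs
    # stay flat, total costs rise in proportion to the staff share.

    wage_growth = 0.05   # assumed 5%/year real growth in staff costs

    for platform, staff_share in (("Mainframe", 0.21), ("Unix/PC", 0.68)):
        total_growth = staff_share * wage_growth
        print(f"{platform}: total costs rise ~{total_growth:.1%}/year")
    # Mainframe: ~1.1%/year; Unix/PC: ~3.4%/year, over twice the effect.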
Remember, too, that Linux on a mainframe can offer a six-to-one
price advantage over Unix or PC server based systems.
Taking all these factors into account, our estimated average five-year
costs per end-user in 2010 are as follows:

    Mainframes    $6,250
    Unix minis    $19,000
    PC servers    $24,000
Incidentally, our estimates in 1994, in the second edition of The
Dinosaur Myth, for the costs in 2000 compared to the actual costs
today are shown below:
Our 1994 predictions for 2000, against actual costs today:

                  Prediction    Actual
    Mainframes    $8,458        $7,000
    Unix          $17,871       $20,000
    PC            $17,935       $26,250
Our predictions were somewhat higher than the actual costs today for
the mainframe, as a result of IBM's efforts to lower mainframe prices,
but much lower than the current PC costs, which have improved little
over time, and close to the figure for Unix.
According to our projections, the PC and Unix server solutions will
become slightly worse relative to the mainframe: 204% more expensive
(compared with 186% now) for Unix, and 284% more expensive
(compared with 275% now) for PC servers.
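Those percentages follow directly from the per-user figures in the
charts above:

    # "X% more expensive" = (cost / mainframe_cost - 1) x 100.

    def pct_more(cost, mainframe):
        return (cost / mainframe - 1) * 100

    # Today: mainframe $7,000, Unix $20,000, PC $26,250
    print(f"Unix now:  {pct_more(20_000, 7_000):.0f}% more")   # 186%
    print(f"PC now:    {pct_more(26_250, 7_000):.0f}% more")   # 275%

    # 2010: mainframe $6,250, Unix $19,000, PC $24,000
    print(f"Unix 2010: {pct_more(19_000, 6_250):.0f}% more")   # 204%
    print(f"PC 2010:   {pct_more(24_000, 6_250):.0f}% more")   # 284%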
Conclusion
We believe that our cost estimates are realistic, and if anything
understate the financial advantages of the mainframe. For example,
our choice of a five-year period is very flattering to the other platforms,
which rarely last so long. If any kind of intercommunication or data
sharing between systems is required, then more powerful Unix or PC
servers, or more of them, would be required. However, we have taken
no account of this in our costings. Nonetheless, we would not claim
that our figures are universally applicable. They should instead be
viewed as a checklist of the costs to take into account in making a
meaningful comparison between different systems. If such a
comparison is carried out without bias, we believe that the mainframe
will prove to be the cheapest option for all but the smallest multi-user
systems.
With all of these changes, the mainframe has begun a new life. And
mainframe skills now command a salary premium, with new staff
being trained in mainframe technology, according to a front page
article in Computerworld (March 4 2002).
Not only did the mainframe not die, but it has re-invented itself and is
now set to dominate the market for the next decade.
Those who dismiss them as dinosaurs should remember that
mainframes have existed in their present form for fifty years at most,
while dinosaurs ruled the earth for 150 million years!