
In the course of operating system evolution, DOS was a very small operating system. Today's Android, by contrast, has been refined, optimized, and consolidated, with hundreds of millions of lines of code supporting the entire ecosystem. With the integration of artificial intelligence (voice, vision, AR/VR, and gestures), access to cloud platforms, and support for middleware and mass-market applications such as UI engines, operating system technology will become ever more complex. Whether a new operating system can win acceptance in the industrial ecosystem is a problem every software supplier faces. As the bridge between hardware and software and the platform on which applications run, an operating system is as difficult to build as a chip. With the emergence of new computing platforms, application scenarios, and market demands, software developers need to seize this opening, innovate in operating systems, and seek market breakthroughs.

With the gradual maturation of big data, artificial intelligence, and 5G technology, the era of the Internet of Everything is quietly arriving. By then, the hundreds of millions of smart devices connected to the Internet will put enormous pressure on communication technology. It is now generally accepted that data processing in the Internet of Things will require the "end, edge, and cloud" to coordinate with one another, eventually aggregating useful data in the cloud. Faced with a massive Internet of Things, what will the next-generation operating system look like? Zhao Hongfei offered his own view: "The future of the new operating system will be the cloud. In the Internet of Things era, the cloud will carry device management, the file system, security mechanisms, AI computing, and other OS functions, while the device side will behave like an app, and the communication protocol ..."

With the advance of autonomous driving technology, the intelligent connected car has become the next computing platform after the smartphone, attracting heavy investment from China's BAT giants (Baidu, Alibaba, and Tencent). Intelligent connected cars are known in the industry as smartphones on wheels, and it is widely believed that operating system software will become the next battleground in the automotive market.

Batch operating system


The users of a batch operating system do not interact with the computer
directly. Each user prepares his job on an off-line device like punch cards
and submits it to the computer operator. To speed up processing, jobs
with similar needs are batched together and run as a group. The
programmers leave their programs with the operator and the operator
then sorts the programs with similar requirements into batches.
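To make the batching idea concrete, here is a minimal, purely illustrative Python sketch (the job names and requirement tags are invented): the operator's sorting step becomes a sort-and-group over the submitted jobs, and each batch then runs as a group.

```python
# Illustrative sketch only: grouping submitted jobs by their stated
# requirement (e.g. the language environment they need) and running each
# batch sequentially, the way an operator of a batch system would.
# Job names and requirement tags below are hypothetical.
from itertools import groupby

jobs = [
    ("payroll", "cobol"), ("matrix", "fortran"),
    ("ledger", "cobol"), ("orbit", "fortran"),
]

# Sort so that jobs with similar needs end up adjacent, then batch them.
for requirement, batch in groupby(sorted(jobs, key=lambda j: j[1]),
                                  key=lambda j: j[1]):
    print(f"Loading environment for {requirement} jobs")
    for name, _ in batch:
        print(f"  running job: {name}")   # jobs in a batch run as a group
```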

The problems with Batch Systems are as follows −

 Lack of interaction between the user and the job.

 CPU is often idle, because the speed of the mechanical I/O devices is slower
than the CPU.

 Difficult to provide the desired priority.

Time-sharing operating systems


Time-sharing is a technique which enables many people, located at various terminals, to use a particular computer system at the same time. Time-sharing or multitasking is a logical extension of multiprogramming. The processor's time, shared among multiple users simultaneously, is termed time-sharing.

The main difference between Multiprogrammed Batch Systems and Time-Sharing Systems is that in Multiprogrammed Batch Systems the objective is to maximize processor use, whereas in Time-Sharing Systems the objective is to minimize response time.

Multiple jobs are executed by the CPU by switching between them, but the switches occur so frequently that the user receives an immediate response. For example, in transaction processing, the processor executes each user program in a short burst or quantum of computation. That is, if n users are present, each user gets a time quantum in turn. When the user submits a command, the response time is a few seconds at most.

The operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of its time. Computer systems that were designed primarily as batch systems have been modified into time-sharing systems.
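As a rough illustration of the quantum idea described above, the following Python sketch simulates round-robin time-sharing; the quantum length, user names, and work amounts are arbitrary assumptions, and a real scheduler lives inside the kernel rather than in application code.

```python
# A minimal round-robin sketch, assuming each "user program" is just a
# number of time units of remaining work. It only illustrates how a fixed
# quantum gives every user a short burst of processor time in turn.
from collections import deque

QUANTUM = 2  # hypothetical time quantum, in arbitrary time units

def round_robin(programs):
    """programs: dict mapping a user program's name to remaining work units."""
    ready = deque(programs.items())
    clock = 0
    while ready:
        name, remaining = ready.popleft()
        ran = min(QUANTUM, remaining)
        clock += ran
        print(f"t={clock:3}: ran {name} for {ran} unit(s)")
        if remaining - ran > 0:
            ready.append((name, remaining - ran))  # back of the queue

round_robin({"user_a": 5, "user_b": 3, "user_c": 4})
```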

Advantages of Timesharing operating systems are as follows −

 Provides the advantage of quick response.


 Avoids duplication of software.

 Reduces CPU idle time.

Disadvantages of Time-sharing operating systems are as follows −

 Problem of reliability.

 Question of security and integrity of user programs and data.

 Problem of data communication.

Distributed operating System


Distributed systems use multiple central processors to serve multiple
real-time applications and multiple users. Data processing jobs are
distributed among the processors accordingly.

The processors communicate with one another through various communication lines (such as high-speed buses or telephone lines). These are referred to as loosely coupled systems or distributed systems. Processors in a distributed system may vary in size and function, and are referred to as sites, nodes, computers, and so on.
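The following Python sketch is only a loose analogy for this arrangement: separate worker processes on one machine stand in for the processors (nodes) of a distributed system, with jobs farmed out among them. Real nodes would communicate over network links rather than local inter-process channels, and the job sizes below are invented.

```python
# Loose illustration, not a real distributed operating system: work is
# spread across separate worker processes, standing in for the separate
# processors ("nodes") among which data processing jobs are distributed.
from multiprocessing import Pool

def process_job(job):
    node_result = sum(range(job))          # stand-in for real data processing
    return job, node_result

if __name__ == "__main__":
    jobs = [10_000, 20_000, 30_000, 40_000]
    with Pool(processes=4) as pool:        # four workers play the role of four nodes
        for job, result in pool.map(process_job, jobs):
            print(f"job {job}: result {result}")
```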

The advantages of distributed systems are as follows −

 With resource sharing facility, a user at one site may be able to use the
resources available at another.

 Faster exchange of data between sites, for example via electronic mail.

 If one site fails in a distributed system, the remaining sites can potentially
continue operating.

 Better service to the customers.

 Reduction of the load on the host computer.

 Reduction of delays in data processing.

Network operating System


A Network Operating System runs on a server and provides the server
the capability to manage data, users, groups, security, applications, and
other networking functions. The primary purpose of the network
operating system is to allow shared file and printer access among
multiple computers in a network, typically a local area network (LAN) or a private network, or with other networks.

Examples of network operating systems include Microsoft Windows Server 2003, Microsoft Windows Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, and BSD.
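As a toy illustration of shared file access on a LAN (not of a network operating system itself), the sketch below uses Python's standard http.server module to expose a directory to other machines on the network; the port number is an arbitrary assumption.

```python
# Not a network operating system, just a toy picture of shared file
# access over a LAN: one machine exposes a directory that other
# computers on the network can read over HTTP.
from http.server import HTTPServer, SimpleHTTPRequestHandler

PORT = 8080  # hypothetical port

if __name__ == "__main__":
    server = HTTPServer(("0.0.0.0", PORT), SimpleHTTPRequestHandler)
    print(f"Sharing the current directory on port {PORT}")
    server.serve_forever()   # clients on the LAN can now fetch files
```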

The advantages of network operating systems are as follows −

 Centralized servers are highly stable.

 Security is server managed.

 Upgrades to new technologies and hardware can be easily integrated into the
system.

 Remote access to servers is possible from different locations and types of systems.

The disadvantages of network operating systems are as follows −

 High cost of buying and running a server.

 Dependency on a central location for most operations.

 Regular maintenance and updates are required.

Real Time operating System


A real-time system is defined as a data processing system in which the time interval required to process and respond to inputs is so small that it controls the environment. The time taken by the system to respond to an input and display the required updated information is termed the response time. In this method, therefore, the response time is much shorter than in online processing.

Real-time systems are used when there are rigid time requirements on the operation of a processor or the flow of data, and they can serve as the control device in a dedicated application. A real-time operating system must have well-defined, fixed time constraints, otherwise the system will fail. Examples include scientific experiments, medical imaging systems, industrial control systems, weapon systems, robots, and air traffic control systems.

There are two types of real-time operating systems.

Hard real-time systems


Hard real-time systems guarantee that critical tasks complete on time. In
hard real-time systems, secondary storage is limited or missing and the
data is stored in ROM. In these systems, virtual memory is almost never
found.

Soft real-time systems


Soft real-time systems are less restrictive. A critical real-time task gets priority over other tasks and retains that priority until it completes. Soft real-time systems have more limited utility than hard real-time systems. Examples include multimedia, virtual reality, and advanced scientific projects such as undersea exploration and planetary rovers.
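The contrast between the two can be sketched as follows in Python, under the simplifying assumption that "handling an input" is just a timed function call and that the 10 ms deadline and work durations are invented for illustration.

```python
# A sketch of the difference in attitude to deadlines: a hard real-time
# system treats a missed deadline as outright failure, while a soft
# real-time system logs it and carries on with degraded quality.
import time

DEADLINE_S = 0.010  # hypothetical 10 ms deadline

def handle_input(work_s):
    time.sleep(work_s)            # stand-in for the real processing work

def run_task(hard, work_s):
    start = time.monotonic()
    handle_input(work_s)
    elapsed = time.monotonic() - start
    if elapsed <= DEADLINE_S:
        print(f"finished in {elapsed*1000:.1f} ms, within the deadline")
    elif hard:
        raise RuntimeError("deadline missed: a hard real-time system treats this as failure")
    else:
        print("deadline missed: a soft real-time system degrades but keeps running")

run_task(hard=True, work_s=0.002)    # meets its deadline
run_task(hard=False, work_s=0.020)   # misses it, but only quality suffers
```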

Batch systems

In the early days of computing, scheduling computing tasks was a problem because time on the computer had to be booked in advance, typically in half-hour blocks or multiples thereof. If the operator encountered problems and the program did not successfully run to completion in the allotted time, a new time slot had to be booked and the program would have to be run again from the beginning. Each program had to be loaded into memory and compiled before it could run, which could involve a significant amount of setup time. Gradually, software tools that automated some of the setup and troubleshooting tasks evolved, including linkers, loaders, compilers, assemblers and debuggers.

In order to improve the utilisation of computing resources, the concept of a batch operating system was devised. Such an operating system was developed in the mid-1950s by General Motors for use on IBM mainframes, and was known as the monitor. The operator now batched individual jobs together sequentially and placed them on an input device. The monitor would load and run one job at a time. On completion, each job returned control to the monitor, which would then load the next job.

Control would also return to the monitor if an error was encountered. The
results of each job were sent to an output device such as a printer.
Because the monitor controlled the sequence of events, part of the
program (the resident monitor) had to reside in memory. Other parts of the
program were loaded as subroutines at the beginning of any job that
required them.

The monitor created some additional system overhead because it occupied a portion of memory and used the resources of the processor. Despite this, it improved utilisation of computing resources, enabling jobs to be executed quickly, with little or no idle time and a greatly reduced setup time. One issue that came to the fore at this time was the need to protect the memory occupied by the monitor program from a running program. In addition, the execution of certain machine-level instructions, such as I/O instructions, was deemed to be the preserve of the monitor. These "privileged" instructions could not be executed directly by a program. To perform an input or output task, for example, the program would have to send a request to the monitor asking it to carry out the required actions.

The monitor program was an early form of operating system.
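A minimal sketch of this request-based arrangement, with invented job names, might look like the following: only the "monitor" object is allowed to perform the (pretend) privileged output operation, and each job asks it to do so rather than touching the device itself.

```python
# Illustrative sketch, not an actual monitor: the monitor below is the
# only code allowed to touch the (pretend) output device, and user jobs
# must request I/O from it rather than performing it directly, mirroring
# the idea of privileged instructions.
class Monitor:
    def request_io(self, job_name, text):
        # Only the monitor executes the "privileged" output operation.
        print(f"[monitor] output on behalf of {job_name}: {text}")

    def run_batch(self, jobs):
        for job in jobs:          # load and run one job at a time
            job(self)             # control returns here when the job ends

def job_payroll(monitor):
    monitor.request_io("payroll", "pay run complete")

def job_report(monitor):
    monitor.request_io("report", "monthly report printed")

Monitor().run_batch([job_payroll, job_report])
```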

Multiprogramming and time sharing

Even with batch operating systems, the processor was often idle because
I/O devices were slow by comparison with the processor. The computer
spent much of its time waiting for I/O operations to finish. At that time, the
memory held the operating system and the currently executing user
program. If it were possible for memory to hold a second program, then the
processor could be used by the second program while the first program
was waiting for I/O operations to finish, making more effective use of the
processor. If enough memory was available, a number of programs could
be accommodated allowing the processor to be utilised continuously.
This multiprogramming or multitasking is a characteristic of modern
operating systems.

In a multiprogramming system, a running application can request an I/O operation by sending a system call to the operating system. The operating system will issue the necessary commands to the I/O device controller, and then initiate or continue the execution of another program.
Multiprogramming systems rely on the generation of interrupts, which are
basically signals sent by a hardware device to inform the processor that it
has completed a task (or has encountered an error). On receipt of an
interrupt, the processor passes control back to the operating system, which
executes the necessary interrupt-handling routine. Multiprogramming
operating systems require a high degree of sophistication, since they are
responsible for managing memory usage for, and scheduling the execution
of, multiple programs.
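A loose single-process analogy of this interplay, not real interrupt handling, is sketched below: a worker thread stands in for the slow I/O device, the main thread runs another "program" in the meantime, and the completion callback plays the role of the interrupt handler.

```python
# Rough analogy only: program A starts a slow "I/O operation" on a worker
# thread, and the processor (the main thread) runs program B while it
# waits. The callback invoked when the I/O finishes stands in for the
# interrupt that hands control back to the operating system.
import threading
import time

def slow_io(on_complete):
    time.sleep(0.5)                      # stand-in for a slow device
    on_complete("data from device")      # the "interrupt" notification

def interrupt_handler(result):
    print(f"interrupt: I/O finished with {result!r}")

# Program A requests I/O, then gives up the processor.
io_thread = threading.Thread(target=slow_io, args=(interrupt_handler,))
io_thread.start()

# Program B gets the processor while A's I/O is in flight.
for i in range(3):
    print(f"program B doing useful work, step {i}")
    time.sleep(0.1)

io_thread.join()
```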

In order for the processor's time to be divided fairly between the programs
competing for its use, the operating system interleaves the execution of
each program in short time slots. This process is known as time-sharing,
and it prevents one program from hogging the processor to the detriment of
other programs. In a multiprogramming system there is inevitably the
possibility that, due to a programming error for example, one program’s
code will attempt to access or overwrite memory that is being used by
another program. In addition to its other duties, therefore, the operating
system must also ensure that programs do not interfere with each other (for
example by modifying each other’s program code or data).

Personal computer systems

In the early days of commercial computing, the cost of both computer hardware and software greatly outweighed the cost of employing the data
entry personnel who operated the computers. Programmers therefore
wrote programs with a view to creating the most efficient code, and little
consideration was given to the convenience of the operator, or to the
suitability of the human-computer interface. Operators had therefore to
learn a number of obscure commands in order to be able to carry out their
data entry duties. Despite a steady decline in the cost of hardware and
software, this situation persisted, even though the economic factors that
had created it no longer applied.
As early as the 1960s and 1970s, researchers were working on the idea of a personal computer. One in particular, Alan Kay, began to consider how computing would change once personal computers became more affordable. Kay formulated the idea of
a graphical user interface (or GUI). The interface he envisaged had all of
the features we can now see in a modern desktop environment, including a
separate window for each application, a menu-driven application
environment, and many other features now in common use.

Kay joined the Xerox Palo Alto Research Center (PARC), where he continued this research for about a decade until the work came to the attention of Steve Jobs of Apple Computer. Apple drew heavily on these concepts and subsequently produced the Lisa, one of the first commercially available personal computers to incorporate a graphical user interface, complete with a mouse.

When Lisa failed to make an impression on the marketplace because it was too expensive, Apple developed the Macintosh, which was far cheaper and
consequently more successful. For the first time, an operating system had
been created that reflected the economic reality that the operator was now
a far greater cost factor than the computer hardware or software. The
computer was adapted to suit the operator rather than the other way round.
Almost all operating systems now offer a sophisticated, event-driven,
graphical desktop environment designed primarily for ease of use.

Parallel, real-time and distributed systems

Parallel operating systems are designed to make efficient use of computers with multiple processors, and many operating systems currently on the
market can take full advantage of multiple processors if present, providing
the application programs they are running have been written using a
programming language that supports multi-threading. The general idea is
that different threads of the program can be executed simultaneously,
speeding up the execution of the program as a whole. Programs written for
single-processor computers can often run on a multi-processor system, but
can only execute on one processor at a time.
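The structure of such a multi-threaded program can be sketched as follows; note that in CPython specifically, CPU-bound threads do not actually run in parallel because of the interpreter lock, so the point here is the division of work into threads that an operating system could schedule onto separate processors, not the measured speedup.

```python
# A minimal multi-threading sketch: different threads of the same program
# can be scheduled by the operating system. The four-way split of the
# data is an arbitrary choice for illustration.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    return sum(chunk)                       # one thread's share of the work

data = list(range(1_000_000))
chunks = [data[i::4] for i in range(4)]     # split the work four ways

with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(partial_sum, chunks))

print(total)
```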

A real-time operating system is one in which processing activity is triggered by one or more random external events, each of which will initiate the
execution of a predetermined set of procedures that must be completed
within rigid time constraints. Real-time systems are often dedicated to
highly specialised applications, such as flying an aircraft or controlling an
industrial process, where an input event must result in an immediate
response. In the early 1980s, real-time systems were used almost
exclusively for applications such as process control. A process control
system typically accepts analogue or digital sensor input, analyses the
data, and takes the appropriate action.
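A toy version of such a control loop, with invented setpoint, cycle time, and sensor values, might look like this; a real process control system would read actual hardware and enforce strict timing guarantees.

```python
# Toy process-control loop under invented numbers: read a sensor value,
# analyse it against a setpoint, and take a corrective action, repeating
# on a fixed cycle.
import random
import time

SETPOINT = 75.0      # hypothetical target temperature
CYCLE_S = 0.1        # hypothetical control cycle

def read_sensor():
    return SETPOINT + random.uniform(-5, 5)   # stand-in for a real sensor

def actuate(error):
    action = "heat" if error > 0 else "cool"
    print(f"error {error:+.2f}: applying '{action}'")

for _ in range(5):                  # a real controller would loop indefinitely
    error = SETPOINT - read_sensor()
    actuate(error)
    time.sleep(CYCLE_S)
```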

A distributed system is one in which various user resources reside in separate computer systems connected by a network. The computer
systems involved may have diverse hardware and software platforms, but
the user is presented with a uniform computing environment thanks to a
distributed system architecture that provides a standard communications
interface. This intermediate layer of software hides the complexity created
by the intercommunication of such diverse technologies.

• Project/Community open source is developed and managed by a distributed community of developers who cooperatively improve and support the source code without remuneration.
These projects may be copyrighted by the contributors directly but larger projects are
typically run by non-profit foundations. Well-known examples of community open source
projects are Linux and Apache Web Server.

• Commercial Open Source Software, or COSS, is open source software whose full copyright, patents and trademarks are controlled by a single entity. The
owner only accepts code contributions if the contributor transfers copyright of the code to
this entity. They may distribute their software for free or a fee. Their business model
typically includes revenue from providing technical support and consulting services. In terms
of revenue from licensing, Red Hat is still the largest COSS company, but Facebook is the
largest COSS code contributor.

Community open source is completely free for anyone to download, including source code, for evaluation. Even COSS vendors usually have a free version of their software packages, which includes source code. In fact, much open source software, especially OSs, is available as "live" media, which means you need not actually install the software but can instead run it directly from a DVD or USB flash drive.

Whether an open source package is being evaluated or integrated commercially, it has the same
global community of users and developers available for asking questions and advice. Support
includes detailed documentation, forums, wikis, newsgroups, email lists and live chat. None of this
costs anything except time.

Open source communities are leery of proprietary standards, preferring instead to adhere to open
standards around communication protocols and data formats. This aspect meaningfully improves
interoperability within and between open source and proprietary software including OSs, which in
turn means a high level of interoperability for business and customer applications as well.

Because large open source software projects can literally have millions of eyes examining the source
code, there is a much higher probability that more bugs are exposed compared to the code from a
proprietary vendor with a far smaller development staff. Furthermore, open source communities are
typically quick to implement a fix or report a workaround. Additionally, since the source code comes
with the software, customers are free to apply their own patches at will.

A side effect of the above point is that open source software is more secure overall. Since the security of proprietary software depends to some extent on its source code remaining opaque, the absence of reported security bugs does not mean that none are present in the software; it is more probable that the security holes simply have not been found yet.

Except in the case of COSS, there is minimal reliance on a single vendor or group for continued
improvements, maintenance and support for open source software. Additionally, since the open
source community is distributed and diverse, there is little risk that you will end up holding orphaned
software, which would be the case if the proprietary vendor were to fold or abandon their project.

If an enterprise is also a software vendor, then building products on open source code affects the
revenue model for the enterprise’s software depending on the open source licensing agreement.
Also, an organization’s core competency could be partially diluted if the value of the proprietary
code built on top of the open source platform is not enough to offset the lowered barrier to entry of
competitors that could build a similar product on top of the same open source code.

Aside from Red Hat, large financially strong open source software vendors are few and far between.
Although great products may come from smaller, more nimble companies, there is a significantly
higher risk that they will not be there when you need them the most.

Large open source projects
have a vast, supportive community that provides documentation, tools and support systems to back
up users of the software. Free support is not always the fastest support, however, especially if the
enterprise is seeking a solution to a thorny problem resulting from seemingly random code bugs,
design flaws or integration difficulties. Larger enterprises with the ability to pay for top-tier support
packages can expect prompt and detailed attention that is rarely available from open source
communities.

Open source projects, even COSS, are complex packages of software that are not as closely aimed at
markets of unskilled end users as is much proprietary software. Unskilled users will never look at the
source code let alone compile it. This aspect explains why open source Apache Web Server is the
leading deployment in data centers, but desktop Linux has barely penetrated the PC market where
alternate, easy-to-use products already exist that do not have to compete based on high
performance metrics.


Commercial, proprietary products are typically designed with a smaller scope of features and
abilities. They are focused on a narrower market of end users than those products developed within
open source communities. Commercial vendors' users may include developers utilizing a firm's APIs and libraries, but they are just as often composed of application users more concerned with ease-of-use and functionality than how those aspects are accomplished behind the screen.

Proprietary software vendors must, if they are to survive, maintain tight control of their product
roadmap. Their products are designed from the start to nurture a long and prosperous future with
many paid upgrades along the way. Putting aside the arguments that proprietary software can
become stale if not re-architected at regular intervals, in general it exhibits a stability that often
exceeds that of open source software.

A company building upon proprietary software may pay a bigger fee for acquisition, but typically that
acquisition includes full rights to the ownership of their own software product and the expectation
that the vendor will promptly supply them with updates, bug fixes and revised documentation as
new product versions are released.

Customer support packages from larger closed source vendors are specifically designed and fine-
tuned for their own products over many years. Since the scope of their software is typically narrower
than that from open source projects, training and after-sale support is more complete, accessible
and succinct. There is a huge difference between posing questions in an online open source forum
compared to receiving support directly from technical reps or consultants from a proprietary
software firm, especially at integration time.

Customers of closed source software companies are more or less at the whim of where their software supplier wants to take them. They have minimal influence over the vendor's priorities, timelines and pricing structure, unless they are the vendor's number one customer. To change
vendors once their software has become embedded within your enterprise is likely to be
prohibitively expensive.

By definition, the internals of closed source software are closed to viewing. Users of this software
are unable to modify the code let alone debug it effectively. They are only able to supply error
codes, messages and dump stacks to the vendor and wait for a fix if there is no existing workaround
or patch. Such fixes may not be anywhere near the top of their priority list. This opacity also means
that it is usually more difficult for customers to make customizations or optimizations in their final
product.

At this point, you understand that the distinction between open source and proprietary software is
not that one is free and the other is not. They are each based on differing philosophies,
methodologies and business models. These fundamental factors are what lead to their separate sets
of pros and cons. These must be weighed within the context of each individual software
development process.

By the way, overall cost is a secondary consideration of companies that choose to adopt open source
platforms. That factor typically comes up as no. 2 in surveys. The number one reason is the belief
that the open source software chosen is technically superior to software from proprietary vendors.
The answers on that point very likely differ, however, depending on the business and market of the respondent.

From a big picture point of view, the basis of a decision to adopt one over the other is an example of
the classic tradeoff between flexibility and usability. Open source software is, almost by definition,
more flexible but requires more effort to use, whereas the opposite is true for proprietary software
in general.

You can build a house by having all the raw materials dumped on the lot and then build whatever
you like as you go along. Or, let a third-party design, architect and build the house and hope that it
suits you. In the latter case, you enlist the architect and builder to correct problems, but in the first
case, you must fix deficiencies yourself, which may be your preference.

Many enterprises are successful with either approach and many utilize both approaches
simultaneously. Neither choice can ever be said to be the absolutely best, but currently open source
is gaining ground over proprietary solutions in some areas. Whereas most companies are quite
familiar with closed source software, new adopters of open source solutions should monitor
progress and fairly compare the advantages and disadvantages for themselves.
