
Why Free Software is better than Open Source

"Open Source Misses the Point of Free Software" is an updated version of this article. While free software by any other name would give you the same freedom, it makes a big difference which name we use: different words convey different ideas.

In 1998, some of the people in the free software community began using the term "open source software" instead of "free software" to describe what they do. The term "open source" quickly became associated with a different approach, a different philosophy, different values, and even a different criterion for which licenses are acceptable. The Free Software movement and the Open Source movement are today separate movements with different views and goals, although we can and do work together on some practical projects.

The fundamental difference between the two movements is in their values, their ways of looking at the world. For the Open Source movement, the issue of whether software should be open source is a practical question, not an ethical one. As one person put it, "Open source is a development methodology; free software is a social movement." For the Open Source movement, non-free software is a suboptimal solution. For the Free Software movement, non-free software is a social problem and free software is the solution.

Relationship between the Free Software movement and Open Source movement
The Free Software movement and the Open Source movement are like two political camps within the free software community. Radical groups in the 1960s developed a reputation for factionalism: organizations split because of disagreements on details of strategy, and then treated each other as enemies. Or at least, such is the image people have of them, whether or not it was true.

The relationship between the Free Software movement and the Open Source movement is just the opposite of that picture. We disagree on the basic principles, but agree more or less on the practical recommendations. So we can and do work together on many specific projects. We don't think of the Open Source movement as an enemy. The enemy is proprietary software.

We are not against the Open Source movement, but we don't want to be lumped in with them. We acknowledge that they have contributed to our community, but we created this community, and we want people to know this. We want people to associate our achievements with our values and our philosophy, not with theirs. We want to be heard, not obscured behind a group with different views. To prevent people from thinking we are part of them, we take pains to avoid using the word "open" to describe free software, or its contrary, "closed", in talking about non-free software.

So please mention the Free Software movement when you talk about the work we have done, and the software we have developed, such as the GNU/Linux operating system.

Comparing the two terms

The rest of this article compares the two terms "free software" and "open source". It shows why the term "open source" does not solve any problems, and in fact creates some.

Ambiguity
The term "free software" has an ambiguity problem: an unintended meaning, "software you can get for zero price," fits the term just as well as the intended meaning, "software which gives the user certain freedoms." We address this problem by publishing a more precise definition of free software, but this is not a perfect solution; it cannot completely eliminate the problem. An unambiguously correct term would be better, if it didn't have other problems.

Unfortunately, all the alternatives in English have problems of their own. We've looked at many alternatives that people have suggested, but none is so clearly right that switching to it would be a good idea. Every proposed replacement for "free software" has a similar kind of semantic problem, or worse, and this includes "open source software."

The official definition of open source software, as published by the Open Source Initiative, is very close to our definition of free software; however, it is a little looser in some respects, and they have accepted a few licenses that we consider unacceptably restrictive of the users. However, the obvious meaning for the expression "open source software" is "You can look at the source code." This is a much weaker criterion than free software; it includes free software, but also some proprietary programs, including Xv, and Qt under its original license (before the QPL).

That obvious meaning for "open source" is not the meaning that its advocates intend. The result is that most people misunderstand what those advocates are advocating. Here is how writer Neal Stephenson defined "open source": "Linux is 'open source' software meaning, simply, that anyone can get copies of its source code files." I don't think he deliberately sought to reject or dispute the official definition. I think he simply applied the conventions of the English language to come up with a meaning for the term. The state of Kansas published a similar definition: "Make use of open-source software (OSS). OSS is software for which the source code is freely and publicly available, though the specific licensing agreements vary as to what one is allowed to do with that code."

Of course, the open source people have tried to deal with this by publishing a precise definition for the term, just as we have done for free software. But the explanation for "free software" is simple: a person who has grasped the idea of "free speech, not free beer" will not get it wrong again. There is no such succinct way to explain the official meaning of "open source" and show clearly why the natural definition is the wrong one.

Fear of Freedom

The main argument for the term "open source software" is that "free software" makes some people uneasy. That's true: talking about freedom, about ethical issues, about responsibilities as well as convenience, is asking people to think about things they might rather ignore. This can trigger discomfort, and some people may reject the idea for that reason. It does not follow that society would be better off if we stop talking about these things.

Years ago, free software developers noticed this discomfort reaction, and some started exploring an approach for avoiding it. They figured that by keeping quiet about ethics and freedom, and talking only about the immediate practical benefits of certain free software, they might be able to "sell" the software more effectively to certain users, especially business. The term "open source" is offered as a way of doing more of this, a way to be "more acceptable to business." The views and values of the Open Source movement stem from this decision.

This approach has proved effective, in its own terms. Today many people are switching to free software for purely practical reasons. That is good, as far as it goes, but that isn't all we need to do! Attracting users to free software is not the whole job, just the first step. Sooner or later these users will be invited to switch back to proprietary software for some practical advantage. Countless companies seek to offer such temptation, and why would users decline? Only if they have learned to value the freedom free software gives them, for its own sake. It is up to us to spread this idea, and in order to do that, we have to talk about freedom. A certain amount of the "keep quiet" approach to business can be useful for the community, but we must have plenty of freedom talk too.

At present, we have plenty of "keep quiet," but not enough freedom talk. Most people involved with free software say little about freedom, usually because they seek to be "more acceptable to business." Software distributors especially show this pattern. Some GNU/Linux operating system distributions add proprietary packages to the basic free system, and they invite users to consider this an advantage, rather than a step backwards from freedom.

We are failing to keep up with the influx of free software users, failing to teach people about freedom and our community as fast as they enter it. This is why non-free software (which Qt was when it first became popular), and partially non-free operating system distributions, find such fertile ground. To stop using the word "free" now would be a mistake; we need more, not less, talk about freedom. If those using the term "open source" draw more users into our community, that is a contribution, but the rest of us will have to work even harder to bring the issue of freedom to those users' attention. We have to say, "It's free software and it gives you freedom!", more and louder than ever before.

Would a Trademark Help?


The advocates of "open source software" tried to make it a trademark, saying this would enable them to prevent misuse. This initiative was later dropped, the term being too descriptive to qualify as a trademark; thus, the legal status of "open source" is the same as that of "free software": there is no legal constraint on using it. I have heard reports of a number of companies calling software packages "open source" even though they did not fit the official definition; I have observed some instances myself.

But would it have made a big difference to use a term that is a trademark? Not necessarily. Companies also made announcements that give the impression that a program is "open source software" without explicitly saying so. For example, one IBM announcement, about a program that did not fit the official definition, said this: "As is common in the open source community, users of the ... technology will also be able to collaborate with IBM ..."

This did not actually say that the program was "open source," but many readers did not notice that detail. (I should note that IBM was sincerely trying to make this program free software, and later adopted a new license which does make it free software and "open source"; but when that announcement was made, the program did not qualify as either one.)

And here is how Cygnus Solutions, which was formed to be a free software company and subsequently branched out (so to speak) into proprietary software, advertised some proprietary software products: "Cygnus Solutions is a leader in the open source market and has just launched two products into the [GNU/]Linux marketplace." Unlike IBM, Cygnus was not trying to make these packages free software, and the packages did not come close to qualifying. But Cygnus didn't actually say that these are "open source software"; they just made use of the term to give careless readers that impression. These observations suggest that a trademark would not have truly prevented the confusion that comes with the term "open source."

Misunderstandings(?) of Open Source


The Open Source Definition is clear enough, and it is quite clear that the typical non-free program does not qualify. So you would think that "open source company" would mean one whose products are free software (or close to it), right? Alas, many companies are trying to give it a different meaning.

At the "Open Source Developers Day" meeting in August 1998, several of the commercial developers invited said they intend to make only a part of their work free software (or "open source"). The focus of their business is on developing proprietary add-ons (software or manuals) to sell to the users of this free software. They ask us to regard this as legitimate, as part of our community, because some of the money is donated to free software development. In effect, these companies seek to gain the favorable cachet of "open source" for their proprietary software products, even though those are not "open source software," because they have some relationship to free software or because the same company also maintains some free software. (One company founder said quite explicitly that they would put, into the free package they support, as little of their work as the community would stand for.)

Over the years, many companies have contributed to free software development. Some of these companies primarily developed non-free software, but the two activities were separate; thus, we could ignore their non-free products, and work with them on free software projects. Then we could honestly thank them afterward for their free software contributions, without talking about the rest of what they did.

We cannot do the same with these new companies, because they won't let us. These companies actively invite the public to lump all their activities together; they want us to regard their non-free software as favorably as we would regard a real contribution, although it is not one. They present themselves as "open source companies," hoping that we will get a warm fuzzy feeling about them, and that we will be fuzzy-minded in applying it. This manipulative practice would be no less harmful if it were done using the term "free software." But companies do not seem to use the term "free software" that way; perhaps its association with idealism makes it seem unsuitable. The term "open source" opened the door for this.

At a trade show in late 1998, dedicated to the operating system often referred to as "Linux," the featured speaker was an executive from a prominent software company. He was probably invited on account of his company's decision to "support" that system. Unfortunately, their form of "support" consists of releasing non-free software that works with the system, in other words, using our community as a market but not contributing to it. He said, "There is no way we will make our product open source, but perhaps we will make it 'internal' open source. If we allow our customer support staff to have access to the source code, they could fix bugs for the customers, and we could provide a better product and better service." (This is not an exact quote, as I did not write his words down, but it gets the gist.)

People in the audience afterward told me, "He just doesn't get the point." But is that so? Which point did he not get? He did not miss the point of the Open Source movement. That movement does not say users should have freedom, only that allowing more people to look at the source code and help improve it makes for faster and better development. The executive grasped that point completely; unwilling to carry out that approach in full, users included, he was considering implementing it partially, within the company. The point that he missed is the point that "open source" was designed not to raise: the point that users deserve freedom.

Spreading the idea of freedom is a big job; it needs your help. That's why we stick to the term "free software" in the GNU Project, so we can help do that job. If you feel that freedom and community are important for their own sake, not just for the convenience they bring, please join us in using the term "free software."

Free and open source software


From Wikipedia, the free encyclopedia

Free and open-source software (F/OSS, FOSS) or free/libre/open-source software (FLOSS) is software that is both free software and open source. It is liberally licensed to grant users the right to use, copy, study, change, and improve its design through the availability of its source code.[1] This approach has gained both momentum and acceptance as the potential benefits have been increasingly recognized by both individuals and corporations.[2][3]

In the context of free and open-source software, "free" refers to the freedom to copy and re-use the software, rather than to the price of the software. The Free Software Foundation, an organization that advocates the free software model, suggests that, to understand the concept, one should "think of free as in free speech, not as in free beer".[4]

FOSS is an inclusive term that covers both free software and open source software, which, despite describing similar development models, have differing cultures and philosophies.[5] Free software focuses on the philosophical freedoms it gives to users, whereas open source software focuses on the perceived strengths of its peer-to-peer development model.[6] FOSS is a term that can be used without particular bias towards either political approach. Free software licenses and open source licenses are used by many software packages. While the licenses themselves are in most cases the same, the two terms grew out of different philosophies and are often used to signify different distribution methodologies.[7]

Contents

    1 History
        1.1 Naming
            1.1.1 Free software
            1.1.2 Open source
            1.1.3 FOSS
            1.1.4 FLOSS
    2 Dualism of FOSS
        2.1 Beyond copyright
        2.2 Future economics of FOSS
    3 Adoption by governments
    4 See also
    5 Notes
    6 References
    7 External links

History
Main article: History of free and open source software

In the 1950s, 1960s, and 1970s, it was normal for computer users to have the freedoms that are provided by free software. Software was commonly shared by individuals who used computers, and most companies were so concerned with selling their hardware devices that they provided the software for free.[8] Organizations of users and suppliers were formed to facilitate the exchange of software; see, for example, SHARE and DECUS.

By the late 1960s change was inevitable: software costs were dramatically increasing, a growing software industry was competing with the hardware manufacturers' bundled software products (free in that the cost was included in the hardware cost), leased machines required software support while providing no revenue for software, and some customers, able to better meet their own needs, did not want the costs of "free" software bundled with hardware product costs. In United States vs. IBM, filed 17 January 1969, the government charged that bundled software was anticompetitive.[9] While some software might always be free, there would be a growing amount of software that was for sale only. In the 1970s and early 1980s, the software industry began using technical measures (such as only distributing binary copies of computer programs) to prevent computer users from being able to study and customize software they had bought using reverse engineering techniques. In 1980, copyright law (Pub. L. No. 96-517, 94 Stat. 3015, 3028) was extended to computer programs in the United States.[10]

In 1983, Richard Stallman, longtime member of the hacker community at the MIT Artificial Intelligence Laboratory, announced the GNU Project, saying that he had become frustrated with the effects of the change in culture of the computer industry and its users.[11] Software development for the GNU operating system began in January 1984, and the Free Software Foundation (FSF) was founded in October 1985. An article outlining the project and its goals, titled the GNU Manifesto, was published in March 1985. The manifesto also focused heavily on the philosophy of free software. Stallman developed the Free Software Definition and the concept of "copyleft", designed to ensure software freedom for all.

The Linux kernel, started by Linus Torvalds, was released as freely modifiable source code in 1991. The licence wasn't exactly a free software licence, but with version 0.12 in February 1992, he relicensed the project under the GNU General Public License.[12] Much like Unix, Torvalds' kernel attracted the attention of volunteer programmers.

In 1997, Eric Raymond published The Cathedral and the Bazaar, a reflective analysis of the hacker community and free software principles. The paper received significant attention in early 1998, and was one factor in motivating Netscape Communications Corporation to release their popular Netscape Communicator Internet suite as free software. This code is today better known as Mozilla Firefox and Thunderbird.

Netscape's act prompted Raymond and others to look into how to bring free software principles and benefits to the commercial software industry. They concluded that FSF's social activism was not appealing to companies like Netscape, and looked for a way to rebrand the free software movement to emphasize the business potential of the sharing of source code. The new name they chose was "open source", and quickly Bruce Perens, publisher Tim O'Reilly, Linus Torvalds, and others signed on to the rebranding. The Open Source Initiative was founded in February 1998 to encourage use of the new term and evangelize open source principles.[13]

While the Open Source Initiative sought to encourage the use of the new term and evangelize the principles it adhered to, corporations found themselves increasingly threatened by the concept of freely distributed software and universal access to an application's source code. A Microsoft executive publicly stated in 2001 that "open source is an intellectual property destroyer. I can't imagine something that could be worse than this for the software business and the intellectual-property business."[14] This view summarizes the initial response to FOSS by much of big business. However, while FOSS has historically played a role outside of the mainstream of private software development, companies as large as Microsoft have begun to develop official open source presences on the Internet. Corporations such as IBM, Oracle, Google, and State Farm are just a few of the big names with a serious public stake in today's competitive open source market, signalling a shift in corporate philosophy concerning the development of free-to-access software.[15]

Naming
Main article: Alternative terms for free software

Free software

The Free Software Definition, written by Richard Stallman and published by the Free Software Foundation (FSF), defines free software as a matter of liberty, not price.[16] The earliest known publication of the definition was in the February 1986 edition[17] of the now-discontinued GNU's Bulletin publication of the FSF. The canonical source for the document is in the philosophy section of the GNU Project website. As of April 2008, it is published there in 39 languages.[18]
Open source

The Open Source Definition is used by the Open Source Initiative to determine whether a software license can be considered open source. The definition was based on the Debian Free Software Guidelines, written and adapted primarily by Bruce Perens.[19][20] Perens did not base his writing on the four freedoms of free software from the Free Software Foundation, which were only later available on the web.[21]
FOSS

The first known use of the phrase "free open source software" on Usenet was in a posting on 18 March 1998, just a month after the term "open source" itself was coined.[22] In February 2002, "F/OSS" appeared on a Usenet newsgroup dedicated to Amiga computer games.[23] In early 2002, MITRE used the term "FOSS" in what would later be their 2003 report Use of Free and Open Source Software (FOSS) in the U.S. Department of Defense.

FLOSS

The acronym FLOSS was coined in 2001 by Rishab Aiyer Ghosh for free/libre/open source software. Later that year, the European Commission (EC) used the phrase when it funded a study on the topic.[24] Unlike "libre software", which aimed to solve the ambiguity problem, FLOSS aimed to avoid taking sides in the debate over whether it was better to say "free software" or to say "open source software". Proponents of the term point out that parts of the FLOSS acronym can be translated into other languages, with, for example, the F representing free (English) or frei (German), and the L representing libre (Spanish or French), livre (Portuguese), libero (Italian), liber (Romanian), and so on. However, this term is not often used in official non-English documents, since the words in these languages for "free as in freedom" do not have the ambiguity problem of "free" in English. By the end of 2004, the FLOSS acronym had been used in official English documents issued by South Africa,[25] Spain,[26] and Brazil.[27]

The terms "FLOSS" and "FOSS" have come under some criticism for being counterproductive and sounding silly. For instance, Eric Raymond, co-founder of the Open Source Initiative, has stated: "Near as I can figure ... people think they'd be making an ideological commitment ... if they pick 'open source' or 'free software'. Well, speaking as the guy who promulgated 'open source' to abolish the colossal marketing blunders that were associated with the term 'free software', I think 'free software' is less bad than 'FLOSS'. Somebody, please, shoot this pitiful acronym through the head and put it out of our misery."[28] Raymond quotes programmer Rick Moen as stating "I continue to find it difficult to take seriously anyone who adopts an excruciatingly bad, haplessly obscure acronym associated with dental hygiene aids" and "neither term can be understood without first understanding both free software and open source, as prerequisite study."

Dualism of FOSS


While the Open Source Initiative includes free software licenses as part of its broader category of approved open source licenses,[29] the Free Software Foundation sees free software as distinct from open source.[30] The key differences between the two are their approach to copyright and attribution in the context of usage.

The primary obligation of users of traditional open source licenses such as BSD is limited to an attribution that clearly identifies the copyright owner of the software. Such a license is focused on providing developers who wish to redistribute the software the greatest level of flexibility. Users who do not wish to redistribute the software in any form are under no obligation. Developers can modify the software and redistribute it either as source or as part of a larger, possibly proprietary, derived work, provided the original attribution is intact. These attributions throughout the distribution chain ensure the owners' copyrights are maintained.

The primary obligation of users of free software licenses such as the GPL is to preserve the rights of other users under the terms of the license. Such a license is focused on ensuring that users' rights to access and modify the software cannot be denied by developers who redistribute the software. The only way to accomplish this is by restricting the rights of developers to include free software in larger, derived works unless those works share the same free software license. Free software uses copyright to enforce compliance with the software license. To strengthen its legal position, the Free Software Foundation asks developers to assign copyright to the Foundation when using the GPL license.[31]

From a user's (non-distributor's) perspective, both free software and open source can be treated as effectively the same thing and referred to with the inclusive term FOSS. From a developer's (distributor's) perspective, free and open source software are distinct concepts with much different legal implications.[32]
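As a rough sketch of that distributor-centred difference (illustrative only: the two "families" below stand in for the BSD-style and GPL-style licenses discussed above, the function name is made up for this example, and real obligations depend on the exact license text, so this is not legal guidance), the following Python snippet encodes who owes what when redistributing:

    # Sketch: typical obligations under the two license families described above.
    PERMISSIVE = "BSD-style (permissive)"
    COPYLEFT = "GPL-style (copyleft)"

    def redistribution_obligations(license_family, redistributes):
        """Return the obligations a user of the licensed code takes on."""
        if not redistributes:
            # Users who only run the software privately owe nothing extra
            # under either family.
            return []
        obligations = ["keep the original copyright/attribution notice intact"]
        if license_family == COPYLEFT:
            # Copyleft additionally requires that the derived work as a whole
            # be offered under the same license, with source code available.
            obligations.append("license the derived work under the same terms")
            obligations.append("make the corresponding source code available")
        # Permissive licenses stop at attribution, even for modified or
        # proprietary derived works.
        return obligations

    if __name__ == "__main__":
        for family in (PERMISSIVE, COPYLEFT):
            print(family, "->", redistribution_obligations(family, True))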

Beyond copyright


While copyright is the primary legal mechanism that FOSS authors use to control usage and distribution of their software, other mechanisms such as legislation, patents, and trademarks have implications as well. In response to legal issues with patents and the DMCA, the Free Software Foundation released version 3 of the GNU General Public License (GPLv3) in 2007, which explicitly addressed the DMCA and patent rights. As the author of the GCC compiler software, the FSF also exercised its copyright and changed the GCC license to GPLv3. Apple, Inc., a user of GCC and a heavy user of both DRM and patents, is speculated to have switched the compiler in its Xcode IDE from GCC to the open source Clang compiler because of this change.[33] The Samba project also switched to the GPLv3 in a recent version of its free Windows-compatible network software. In this case, Apple replaced Samba with closed-source, proprietary software, a net loss for the FOSS movement as a whole.[34]

Some of the most popular FOSS projects are owned by corporations that, unlike the FSF, use both patents and trademarks to enforce their rights. In August 2010, Oracle sued Google, claiming that its use of open source Java infringed on Oracle's patents. Oracle acquired those patents with its acquisition of Sun Microsystems in January 2010. Sun had itself acquired MySQL in 2008. This made Oracle the owner of the most popular proprietary database and the most popular open source database. Oracle's attempts to commercialize the open source MySQL database have raised concerns in the FOSS community.[35] In response to uncertainty about the future of MySQL, the FOSS community used MySQL's GPL license to fork the project into a new database outside of Oracle's control.[36] This new database, however, will never be MySQL because Oracle owns the trademark for that term.[37]

Future economics of FOSS


According to Yochai Benkler, Jack N. and Lillian R. Berkman Professor for Entrepreneurial Legal Studies at Harvard Law School, free software is the most visible part of a new economy of commons-based peer production of information, knowledge, and culture. As examples, he cites a variety of FOSS projects, including both free software and open source.[38]

This new economy is already under development. In order to commercialize FOSS, many companies, Google being the most successful, are moving towards an economic model of advertising-supported software. In such a model, the only way to increase revenue is to make the advertising more valuable. Facebook has recently come under fire for using novel user tracking methods to accomplish this.[39]

This new economy is not without alternatives. Apple's App Stores have proven very popular with both users and developers. The Free Software Foundation considers Apple's App Stores to be incompatible with its GPL and complained that Apple was infringing on the GPL with its iTunes terms of use. Rather than change those terms to comply with the GPL, Apple removed the GPL-licensed products from its App Stores.[40] The authors of VLC, one of the GPL-licensed programs at the center of those complaints, recently began the process of switching from the GPL to the LGPL.[41]

Adoption by governments


See also: Linux adoption

The German city of Munich announced its intention to switch from Microsoft Windows-based operating systems to an open source implementation of SuSE Linux in March 2003,[42][43] having achieved an adoption rate of 20% by 2010.[44] In 2004, a law in Venezuela (Decree 3390) went into effect, mandating a two-year transition to open source in all public agencies. As of June 2009 this ambitious transition was still under way.[45][46] Malaysia launched the "Malaysian Public Sector Open Source Software Program", saving millions on proprietary software licences through 2008.[47][48]

In 2005 the Government of Peru voted to adopt open source across all its bodies.[49] The 2002 response to Microsoft's critique is available online. In the preamble to the bill, the Peruvian government stressed that the choice was made to ensure that key pillars of democracy were safeguarded: "The basic principles which inspire the Bill are linked to the basic guarantees of a state of law."[50] In September of that year, the Commonwealth of Massachusetts announced its formal adoption of the OpenDocument standard for all Commonwealth entities.[42]

In 2006, the Brazilian government encouraged the distribution of cheap computers running Linux throughout its poorer communities by subsidizing their purchase with tax breaks.[42] In April 2008, Ecuador passed a similar law, Decree 1014, designed to migrate the public sector to Software Libre.[51] In February 2009, the United States White House moved its website to Linux servers using Drupal for content management.[52] In March, the French Gendarmerie Nationale announced it would switch entirely to Ubuntu by 2015.[53]

In January 2010, the Government of Jordan announced that it had formed a partnership with Ingres Corporation, a leading open source database management company based in the United States that is now known as Actian Corporation, to promote the use of open source software, starting with university systems in Jordan.[54]

Open-source software
From Wikipedia, the free encyclopedia


[Image: the logo of the Open Source Initiative]

Open-source software (OSS) is computer software that is available in source code form: the source code and certain other rights normally reserved for copyright holders are provided under an open-source license that permits users to study, change, improve and at times also to distribute the software. Open-source software is very often developed in a public, collaborative manner. Open-source software is the most prominent example of open-source development and is often compared to (technically defined) user-generated content or (legally defined) open content movements.[1] A report by the Standish Group states that adoption of open-source software models has resulted in savings of about $60 billion per year to consumers.[2][3]

Contents

    1 History
    2 Definitions
        2.1 The Open Source Definition
        2.2 Perens' principles
    3 Proliferation of the term
    4 Non-software use
        4.1 Business models
    5 Widely used open source products
    6 Development philosophy
    7 Licensing
    8 Funding
    9 Comparison with closed source
    10 Comparison with free software
    11 Open source vs. source-available
    12 Pros and cons for software producers
    13 Development tools
    14 Projects and organizations
    15 Certification
    16 Criticism
    17 See also
    18 References
    19 Further reading
        19.1 Legal and economic aspects
    20 External links

History
Main article: Open source movement

The free software movement was launched in 1983. In 1998, a group of individuals advocated that the term free software should be replaced by open source software (OSS) as an expression which is less ambiguous and more comfortable for the corporate world.[4] Software developers may want to publish their software with an open source license, so that anybody may also develop the same software or understand its internal functioning. Open source software generally allows anyone to create modifications of the software, port it to new operating systems and processor architectures, share it with others or, in some cases, market it. Scholars Casson and Ryan have pointed out several policy-based reasons for adoption of open source, in particular, the heightened value proposition from open source (when compared to most proprietary formats) in the following categories:

    Security
    Affordability
    Transparency
    Perpetuity
    Interoperability
    Localization.[5]

Particularly in the context of local governments (who make software decisions), Casson and Ryan argue that "governments have an inherent responsibility and fiduciary duty to taxpayers", which includes the careful analysis of these factors when deciding to purchase proprietary software or implement an open-source option.[6]

The Open Source Definition, notably, presents an open source philosophy, and further defines the terms of usage, modification and redistribution of open source software. Software licenses grant rights to users which would otherwise be reserved by copyright law to the copyright holder. Several open source software licenses have qualified within the boundaries of the Open Source Definition. The most prominent and popular example is the GNU General Public License (GPL), which allows free distribution under the condition that further developments and applications are put under the same licence, and are thus also free.[7] While open source distribution presents a way to make the source code of a product publicly accessible, the open source licenses allow the authors to fine-tune such access.

The open source label came out of a strategy session held on April 7, 1998 in Palo Alto in reaction to Netscape's January 1998 announcement of a source code release for Navigator (as Mozilla). The group of individuals at the session included Tim O'Reilly, Linus Torvalds, Tom Paquin, Jamie Zawinski, Larry Wall, Brian Behlendorf, Sameer Parekh, Eric Allman, Greg Olson, Paul Vixie, John Ousterhout, Guido van Rossum, Philip Zimmermann, John Gilmore and Eric S. Raymond.[8] They used the opportunity before the release of Navigator's source code to clarify a potential confusion caused by the ambiguity of the word "free" in English. Many people claimed that the birth of the Internet in 1969 started the open source movement, while others do not distinguish between open source and free software movements.[9]

The Free Software Foundation (FSF), started in 1985, intended the word "free" to mean freedom to distribute (or "free as in free speech") and not freedom from cost (or "free as in free beer"). Since a great deal of free software already was (and still is) free of charge, such free software became associated with zero cost, which seemed anti-commercial.

The Open Source Initiative (OSI) was formed in February 1998 by Eric S. Raymond and Bruce Perens. With at least 20 years of evidence from case histories of closed software development versus open development already provided by the Internet developer community, the OSI presented the "open source" case to commercial businesses, like Netscape. The OSI hoped that the use of the label "open source", a term suggested by Peterson of the Foresight Institute at the strategy session, would eliminate ambiguity, particularly for individuals who perceive "free software" as anti-commercial. They sought to bring a higher profile to the practical benefits of freely available source code, and they wanted to bring major software businesses and other high-tech industries into open source. Perens attempted to register "open source" as a service mark for the OSI, but that attempt was impractical by trademark standards. Meanwhile, due to the presentation of Raymond's paper to the upper management at Netscape (Raymond only discovered this when he read the press release, and was called by Netscape CEO Jim Barksdale's PA later in the day), Netscape released its Navigator source code as open source, with favorable results.

Definitions
The Open Source Initiative's definition is widely recognized as the standard or de facto definition. The Open Source Initiative (OSI) was formed in February 1998 by Raymond and Perens. With about 20 years of evidence from case histories of closed and open development already provided by the Internet, the OSI continued to present the "open source" case to commercial businesses. They sought to bring a higher profile to the practical benefits of freely available source code, and wanted to bring major software businesses and other high-tech industries into open source. Perens adapted the Debian Free Software Guidelines to make The Open Source Definition.[10]

The Open Source Definition


The Open Source Initiative wrote a document called The Open Source Definition and uses it to determine whether it considers a software license open source. The definition was based on the Debian Free Software Guidelines, written and adapted primarily by Bruce Perens.[11][12] Perens did not base his writing on the "four freedoms" of Free Software from the FSF, which were only widely available later.[13]

Perens' principles

Under Perens' definition, open source describes a broad general type of software license that makes source code available to the general public with relaxed or non-existent copyright restrictions. The principles, as stated, say absolutely nothing about trademark or patent use and require absolutely no cooperation to ensure that any common audit or release regime applies to any derived works. It is an explicit "feature" of open source that it may put no restrictions on the use or distribution by any organization or user. In principle, it forbids such restrictions in order to guarantee continued access to derived works even by the major original contributors.

Proliferation of the term


Main article: Open source

While the term "open source" applied originally only to the source code of software,[14] it is now being applied to many other areas such as Open source ecology,[15] a movement to decentralize technologies so that any human can use them. However, it is often misapplied to other areas which have different and competing principles, which overlap only partially.

Non-software use


The principles of open source have been adapted for many forms of user generated content and technology, including open source hardware. Supporters of the open content movement advocate some restrictions of use, requirements to share changes, and attribution to other authors of the work. This culture or ideology takes the view that the principles apply more generally to facilitate concurrent input of different agendas, approaches and priorities, in contrast with more centralized models of development such as those typically used in commercial companies.[16] Advocates of the open source principles often point to Wikipedia as an example, but Wikipedia has in fact often restricted certain types of use or user, and the GFDL license it has historically used makes specific requirements of all users, which technically violates the open source principles.

Business models


Main article: Business models for open source software

There are a number of commonly recognized barriers to the adoption of open source software by enterprises. These barriers include the perception that open source licenses are viral, lack of formal support and training, the velocity of change, and a lack of a long-term roadmap. The majority of these barriers are risk-related. On the other side, not all proprietary projects disclose exact future plans, not all open source licenses are equally viral, and many serious OSS projects (especially operating systems) actually make money from paid support and documentation. A commonly employed business strategy of commercial open-source software firms is the dual-license strategy, as demonstrated by Ingres, MySQL, Alfresco, and others.

Widely used open source products


Open source software (OSS) projects are built and maintained by a network of volunteer programmers. Prime examples of open source products are the Apache HTTP Server, the e-commerce platform osCommerce and the web browser Mozilla Firefox. One of the most successful open source products is the GNU/Linux operating system, an open source Unix-like operating system, and its derivative Android, an operating system for mobile devices.[17][18] In some fields, open source software is the norm, as in voice over IP applications with Asterisk (PBX). Open source standards are not, however, limited to open-source software. For example, Microsoft has also joined the open-source discussion with the adoption of the OpenDocument format[5] as well as creating another open standard, the Office Open XML formats.

Development philosophy


In his 1997 essay The Cathedral and the Bazaar,[19] open source evangelist Eric S. Raymond suggests a model for developing OSS known as the bazaar model. Raymond likens the development of software by traditional methodologies to building a cathedral, "carefully crafted by individual wizards or small bands of mages working in splendid isolation".[19] He suggests that all software should be developed using the bazaar style, which he described as "a great babbling bazaar of differing agendas and approaches." In the traditional model of development, which he called the cathedral model, development takes place in a centralized way. Roles are clearly defined. Roles include people dedicated to designing (the architects), people responsible for managing the project, and people responsible for implementation. Traditional software engineering follows the cathedral model. Fred P. Brooks in his book The Mythical Man-Month advocates this model. He goes further to say that in order to preserve the architectural integrity of a system, the system design should be done by as few architects as possible. The bazaar model, however, is different. In this model, roles are not clearly defined. Gregorio Robles[20] suggests that software developed using the bazaar model should exhibit the following patterns:
Users should be treated as co-developers: The users are treated like co-developers and so they should have access to the source code of the software. Furthermore, users are encouraged to submit additions to the software, code fixes for the software, bug reports, documentation, etc. Having more co-developers increases the rate at which the software evolves. Linus's law states, "Given enough eyeballs all bugs are shallow." This means that if many users view the source code, they will eventually find all bugs and suggest how to fix them. Note that some users have advanced programming skills, and furthermore, each user's machine provides an additional testing environment. This new testing environment offers the ability to find and fix a new bug.

Early releases: The first version of the software should be released as early as possible so as to increase one's chances of finding co-developers early.

Frequent integration: Code changes should be integrated (merged into a shared code base) as often as possible so as to avoid the overhead of fixing a large number of bugs at the end of the project life cycle. Some open source projects have nightly builds where integration is done automatically on a daily basis (a minimal automation sketch follows this list).

Several versions: There should be at least two versions of the software. There should be a buggier version with more features and a more stable version with fewer features. The buggy version (also called the development version) is for users who want the immediate use of the latest features, and are willing to accept the risk of using code that is not yet thoroughly tested. The users can then act as co-developers, reporting bugs and providing bug fixes.

High modularization: The general structure of the software should be modular, allowing for parallel development on independent components.

Dynamic decision-making structure: There is a need for a decision-making structure, whether formal or informal, that makes strategic decisions depending on changing user requirements and other factors. Cf. Extreme programming.
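As a minimal sketch of the "frequent integration" pattern above (hypothetical: it assumes a Git working copy and a ./run_tests.sh script, neither of which the bazaar model itself prescribes), a nightly build can be as simple as pulling the shared code base and recording whether the test suite still passes:

    # Minimal nightly-integration sketch: update the shared code base, run the
    # project's tests, and log the outcome. The git checkout and ./run_tests.sh
    # are placeholders for a real project's tooling.
    import datetime
    import subprocess

    def run(cmd):
        """Run a command, returning True if it exits successfully."""
        return subprocess.call(cmd) == 0

    def nightly_build():
        stamp = datetime.datetime.now().isoformat()
        ok = run(["git", "pull", "--ff-only"]) and run(["./run_tests.sh"])
        with open("nightly.log", "a") as log:
            log.write("%s %s\n" % (stamp, "PASS" if ok else "FAIL"))
        return ok

    if __name__ == "__main__":
        nightly_build()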

Data suggests, however, that OSS is not quite as democratic as the bazaar model suggests. An analysis of five billion bytes of free/open source code by 31,999 developers shows that 74% of the code was written by the most active 10% of authors. The average number of authors involved in a project was 5.1, with the median at 2.[21]

Licensing

Main article: Open-source license

A license defines the rights and obligations that a licensor grants to a licensee. Open source licenses grant licensees the right to copy, modify and redistribute source code (or content). These licenses may also impose obligations (e.g., modifications to the code that are distributed must be made available in source code form, an author attribution must be placed in a program/documentation using that open source code, etc.).

Authors initially derive a right to grant a license to their work based on the legal theory that upon creation of a work the author owns the copyright in that work. What the author/licensor is granting when they grant a license to copy, modify and redistribute their work is the right to use the author's copyrights. The author still retains ownership of those copyrights; the licensee is simply allowed to use those rights, as granted in the license, so long as they maintain the obligations of the license. The author does have the option to sell or assign, rather than license, their exclusive right to the copyrights to their work, whereupon the new owner/assignee controls the copyrights. The ownership of the copyright (the rights) is separate and distinct from the ownership of the work (the thing): a person can own a copy of a piece of code (or a copy of a book) without the rights to copy, modify or redistribute copies of it.

When an author contributes code to an open source project (e.g., Apache.org) they do so under an explicit license (e.g., the Apache Contributor License Agreement) or an implicit license (e.g., the open source license under which the project is already licensing code). Some open source projects do not take contributed code under a license, but actually require (joint) assignment of the author's copyright in order to accept code contributions into the project (e.g., OpenOffice.org and its Joint Copyright Assignment agreement).

Placing code (or content) in the public domain is a way of waiving an author's (or owner's) copyrights in that work. No license is granted, and none is needed, to copy, modify or redistribute a work in the public domain. Examples of free software licenses / open source licenses include the Apache License, BSD license, GNU General Public License, GNU Lesser General Public License, MIT License, Eclipse Public License and Mozilla Public License.

The proliferation of open source licenses is one of the few negative aspects of the open source movement, because it is often difficult to understand the legal implications of the differences between licenses. With more than 180,000 open source projects available and more than 1,400 unique licenses among them, the complexity of deciding how to manage open source usage within closed-source commercial enterprises has dramatically increased. Some licenses are home-grown, while others are modeled after mainstream FOSS licenses such as Berkeley Software Distribution (BSD), Apache, MIT-style (Massachusetts Institute of Technology), or the GNU General Public License (GPL). In view of this, open source practitioners are starting to use classification schemes in which FOSS licenses are grouped, typically based on the existence of a copyleft provision and the strength of the obligations it imposes.[22]

An important legal milestone for the open source / free software movement was passed in 2008, when the US federal appeals court ruled that free software licences definitely do set legally binding conditions on the use of copyrighted work, and they are therefore enforceable under existing copyright law. As a result, if end-users violate the licensing conditions, their license disappears, meaning they are infringing copyright.[23]
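To make the classification schemes mentioned above concrete (the groupings below reflect the common practice of sorting licenses by copyleft strength, the component manifest is invented for the example, and a real audit would consult the full license texts), a sketch in Python might group a product's third-party components for legal review like this:

    # Sketch of a copyleft-strength classification, as practitioners use
    # informally to manage open source components inside proprietary products.
    CLASSIFICATION = {
        "MIT": "permissive",
        "BSD-3-Clause": "permissive",
        "Apache-2.0": "permissive",
        "MPL-2.0": "weak copyleft",
        "LGPL-2.1": "weak copyleft",
        "GPL-3.0": "strong copyleft",
    }

    # Invented manifest of third-party components and their declared licenses.
    manifest = {"libfoo": "MIT", "libbar": "LGPL-2.1", "libbaz": "GPL-3.0"}

    def flag_components(components):
        """Group components by copyleft strength so review can be prioritized."""
        groups = {}
        for name, license_id in components.items():
            strength = CLASSIFICATION.get(license_id, "unknown - needs review")
            groups.setdefault(strength, []).append(name)
        return groups

    if __name__ == "__main__":
        for strength, names in sorted(flag_components(manifest).items()):
            print(strength + ":", ", ".join(names))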

Funding
See also: Commercial open source applications

Unlike proprietary off-the-shelf software, which comes with restrictive copyright licenses, open-source software can be given away for no charge. This means that its creators cannot require each user to pay a license fee to fund development. Instead, a number of alternative models for funding its development have emerged.

Software can be developed as a consulting project for one or more customers. The customers pay to direct the developers' efforts: to have bugs prioritized and fixed or features added. Companies or independent consultants can also charge for training, installation, technical support, or customization of the software. Another approach to funding is to provide the software freely, but sell licenses to proprietary add-ons such as data libraries. For instance, an open-source CAD program may require parts libraries which are sold on a subscription or flat-fee basis. Open-source software can also promote the sale of specialized hardware that it interoperates with, as in the case of the Asterisk telephony software, developed by a manufacturer of PC telephony hardware.

Many open-source software projects have begun as research projects within universities, as personal projects of students or professors, or as tools to aid scientific research. The influence of universities and research institutions on open source shows in the number of projects named after their host institutions, such as BSD Unix, CMU Common Lisp, or the NCSA HTTPd, which evolved into Apache.

Companies may employ developers to work on open-source projects that are useful to the company's infrastructure: in this case, it is developed not as a product to be sold but as a sort of shared public utility. A local bug-fix or solution to a software problem, written by a developer either at a company's request or to make his/her own job easier, can be released as an open source contribution without costing the company anything.[7] A larger project such as the Linux kernel may have contributors from dozens of companies which use and depend upon it, as well as hobbyist and research developers.

Comparison with closed source


Main article: Comparison of open source and closed source

The debate over open source vs. closed source (alternatively called proprietary software) is sometimes heated. The top four reasons (as provided by an Open Source Business Conference survey[24]) individuals or organizations choose open-source software are: 1) lower cost, 2) security, 3) no vendor 'lock in', and 4) better quality.

Since innovative companies no longer rely heavily on software sales, proprietary software has become less of a necessity.[25] As such, things like open-source content management system (CMS) deployments are becoming more commonplace. In 2009,[26] the US White House switched its content management system from a proprietary system to the open-source Drupal. Further, companies like Novell (who traditionally sold software the old-fashioned way) continually debate the benefits of switching to open-source availability, having already switched part of the product offering to open-source code.[27] In this way, open-source software provides solutions to unique or specific problems. As such, it is reported[28] that 98% of enterprise-level companies use open-source offerings in some capacity.

With this market shift, more critical systems are beginning to rely on open-source offerings,[29] allowing greater funding (such as US Department of Homeland Security grants[29]) to help "hunt for security bugs."

This is not to argue that open-source software does not have its flaws. One of the greatest barriers facing wide acceptance of open-source software relates to the lack of technical and general support.[24] Open-source companies often combat this by offering support, sometimes under a different product name. Acquia, for instance, provides enterprise-level support for its open-source alternative, Drupal.[30]

Many open-source advocates argue that open-source software is inherently safer because any person can view, edit, and change code.[31] But proponents of closed-source software, and some research,[32] suggest that individuals who aren't paid to scrub code have no incentive to do the boring, monotonous work. Research indicates[33] that open-source software such as Linux has a lower percentage of bugs than some commercial software.

Comparison with free software


Main article: Alternative terms for free software

The main difference is that by choosing one term over the other (i.e. either "open source" or "free software") one lets others know what one's goals are. As Richard Stallman puts it, "Open source is a development methodology; free software is a social movement."[34] Critics have said that the term "open source" fosters an ambiguity of a different kind, such that it confuses the mere availability of the source with the freedom to use, modify, and redistribute it. Developers have consequently used the alternative terms free/open source software (FOSS) or free/libre/open source software (FLOSS) to describe open source software which is also free software.

The term "open source" was originally intended to be trademarkable; however, the term was deemed too descriptive, so no trademark exists.[35] The OSI would prefer that people treat "open source" as if it were a trademark, and use it only to describe software licensed under an OSI-approved license.[36] "OSI Certified" is a trademark licensed only to people who are distributing software licensed under a license listed on the Open Source Initiative's list.[37]

Open source software and free software are different terms for software which comes with certain rights, or freedoms, for the user. They describe two approaches and philosophies towards free software. Open source and free software (or software libre) both describe software which is free from onerous licensing restrictions. It may be used, copied, studied, modified and redistributed without restriction. Free software is not the same as freeware, software available at zero price.

The definition of open source software was written to be almost identical to the free software definition.[38] There are very few cases of software that is free software but is not open source software, and vice versa. The difference in the terms is where they place the emphasis. Free software is defined in terms of giving the user freedom. This reflects the goal of the free software movement. Open source highlights that the source code is viewable to all; proponents of the term usually emphasize the quality of the software and how this is a result of the development models that are possible and popular among free and open source software projects.

Free software licenses are not written exclusively by the FSF. The FSF and the OSI both list licenses which meet their respective definitions of free software or open source software. The FSF believes that knowledge of the concept of freedom is an essential requirement,[38][39] insists on the use of the term "free",[38][39] and separates itself from the open source movement.[38][39]

Open source vs. source-available


Although the OSI definition of "open source software" is widely accepted, a small number of people and organizations use the term to refer to software whose source is available for viewing but which may not legally be modified or redistributed. Such software is more often referred to as source-available, or as shared source, a term coined by Microsoft. Michael Tiemann, president of the OSI, has criticized[40] companies such as SugarCRM for promoting their software as "open source" when in fact it did not have an OSI-approved license. In SugarCRM's case, the software was so-called "badgeware",[41] since its license specified a "badge" that must be displayed in the user interface (SugarCRM has since switched to GPLv3[42]). Another example was Scilab prior to version 5, which called itself "the open source platform for numerical computation"[43] but had a license[44] that forbade commercial redistribution of modified versions. Because the OSI does not have a registered trademark for the term "open source", its legal ability to prevent such usage of the term is limited, but Tiemann advocates using public opinion from the OSI, customers, and community members to pressure such organizations to change their license or to use a different term.

Pros and cons for software producers


Software experts and researchers on open source software have identified several advantages and disadvantages. The main advantage for business is that open source is a good way to achieve greater penetration of the market. Companies that offer open source software are able to establish an industry standard and, thus, gain competitive advantage[citation needed]. It has also helped build developer loyalty, as developers feel empowered and have a sense of ownership of the end product.[45] Moreover, OSS requires lower marketing and logistics costs, helps companies keep abreast of technology developments, and is a good tool to promote a company's image, including its commercial products.[46] The OSS development approach has helped produce reliable, high-quality software quickly and inexpensively.[47]

Open source also offers the potential for a more flexible technology and quicker innovation. It is said to be more reliable since it typically has thousands of independent programmers testing and fixing bugs in the software. It is flexible because modular systems allow programmers to build custom interfaces or add new capabilities, and it is innovative since open source programs are the product of collaboration among a large number of different programmers. The mix of divergent perspectives, corporate objectives, and personal goals speeds up innovation.[48] Moreover, free software can be developed in accord with purely technical requirements; it does not require thinking about the commercial pressure that often degrades the quality of the software. Commercial pressures make traditional software developers pay more attention to customers' requirements than to security requirements, since such features are somewhat invisible to the customer.[49]

It is sometimes said that the open source development process may not be well defined and that stages in the development process, such as system testing and documentation, may be ignored. However, this is only true for small (mostly single-programmer) projects. Larger, successful projects do define and enforce at least some rules, as they need them to make teamwork possible.[50][51] In the most complex projects these rules may be as strict as reviewing even minor changes by two independent developers.[52]

Not all OSS initiatives have been successful; examples include SourceXchange and Eazel.[45] Software experts and researchers who are not convinced by open source's ability to produce quality systems identify the unclear process, late defect discovery, and the lack of empirical evidence (collected data concerning productivity and quality) as the most important problems.[16] It is also difficult to design a commercially sound business model around the open source paradigm; consequently, only technical requirements may be satisfied and not those of the market.[16] In terms of security, open source may allow hackers to learn about the weaknesses or loopholes of the software more easily than with closed-source software. It also depends on control mechanisms in order to ensure the effective performance of autonomous agents who participate in virtual organizations.[53]

Development tools



In OSS development, the participants, who are mostly volunteers, are distributed among different geographic regions, so there is a need for tools that help participants collaborate on source code development. Often, these tools are themselves available as OSS. Revision control systems such as the Concurrent Versions System (CVS) and, later, Subversion (SVN) and Git are examples of tools that help centrally manage the source code files, and the changes to those files, for a software project. Utilities that automate testing, compiling, and bug reporting help preserve the stability and supportability of software projects that have numerous developers but no managers, quality controllers, or technical support. Build systems that report compilation errors across different platforms include Tinderbox. Commonly used bug trackers include Bugzilla and GNATS. Mailing lists, IRC, and instant messaging provide means of Internet communication between developers, and the Web is a core feature of all of these systems. Some sites centralize all the features of these tools as a software development management system; examples include GNU Savannah, SourceForge, and BountySource.
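As a rough illustration of the automation described above, the sketch below (Python, standard library only) pulls the latest revision of a project from a Git repository, builds it, and runs its test suite, which is roughly what a volunteer-run continuous build such as a Tinderbox client does. The repository URL and the make/pytest commands are hypothetical assumptions about the project, not taken from any particular one.

```python
# Minimal sketch of a volunteer-run "build and test" bot for an OSS project.
# Assumes git, make, and pytest are installed; the repository URL and the
# build/test commands are hypothetical placeholders.
import subprocess
import sys
from pathlib import Path

REPO_URL = "https://example.org/project.git"   # hypothetical project
WORKDIR = Path("project")

def run(cmd, cwd=None):
    """Run a command, echoing it first; return its exit code."""
    print("+", " ".join(cmd))
    return subprocess.run(cmd, cwd=cwd).returncode

def main():
    # Fetch the latest revision (clone on the first run, pull afterwards).
    if WORKDIR.exists():
        run(["git", "-C", str(WORKDIR), "pull", "--ff-only"])
    else:
        run(["git", "clone", REPO_URL, str(WORKDIR)])

    # Build and test; a real bot would mail results to a list or file a bug.
    if run(["make"], cwd=WORKDIR) != 0:
        sys.exit("build failed")
    if run(["pytest"], cwd=WORKDIR) != 0:
        sys.exit("tests failed")
    print("build and tests passed")

if __name__ == "__main__":
    main()
```

A real project would hang more machinery off this skeleton (mailing results to a list, filing bugs in Bugzilla), but the core loop of fetch, build, test is the same.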

Projects and organizations

One of the benefits of open source software is that it offers program developers a wide variety of existing code to draw on. Building on this broad and diverse base, developers create a wide range of projects and organizations. Some of the more prominent organizations involved in OSS development include the Apache Software Foundation, creators of the Apache web server; a loose affiliation of developers headed by Linus Torvalds, creators of the Linux operating system kernel; the Eclipse Foundation, home of the Eclipse software development platform; the Debian Project, creators of the influential Debian GNU/Linux distribution; the Mozilla Foundation, home of the Firefox web browser; and OW2, a European-born community developing open-source middleware. Newer organizations tend to have a more sophisticated governance model, and their membership is often formed of legal-entity members.[54] Several open source programs have become defining entries in their space, including the GIMP image editing system; Sun's Java programming language and environment; the MySQL database system; the FreeBSD Unix operating system; Sun's OpenOffice.org office productivity suite; and the Wireshark network packet sniffer and protocol analyser. Open source development is often performed "live and in public", using services provided for free on the Internet, such as the Launchpad and SourceForge web sites, and using tools that are themselves open source, including the CVS and Subversion source control systems and the GNU Compiler Collection. Open Source for America is a group created to raise awareness in the U.S. Federal Government about the benefits of open source software. Its stated goals are to encourage the government's utilization of open source software, participation in open source software projects, and incorporation of open source community dynamics to increase government transparency.[55] Mil-OSS is a group dedicated to the advancement of OSS use and creation in the military.[56]

Certification
Certification can help to build user confidence. Certification can be applied to anything from the simplest component, which developers can use to build larger modules, up to a whole software system. Numerous institutions are involved in this area of open source software, including the International Institute of Software Technology of the United Nations University (UNU/IIST, <http://www.iist.unu.edu>), a non-profit research and education institution of the United Nations. It is currently involved in a project known as "The Global Desktop Project", which aims to build a desktop interface that every end-user can understand and interact with, crossing language and cultural barriers. The project has drawn attention from parties involved in areas ranging from application development to localization, and it will also improve developing nations' access to information systems. UNU/IIST aims to achieve this without any compromise in the quality of the software. It believes a global standard can be maintained by introducing certification, and it is organizing conferences to explore frontiers in the field <http://opencert.iist.unu.edu>. Alternatively, assurance models (such as DO-178B) have already solved the "certification" problem for software. This approach is tailorable and can be applied to OSS, but only if the requisite planning and execution, design, test, and traceability artifacts are generated.

1.3 Open Source and Free Software


Within the Linux community, there are two major ideological movements at work. The Free Software movement (which we'll get into in a moment) is working toward the goal of making all software free of intellectual property restrictions. Followers of this movement believe these restrictions hamper technical improvement and work against the good of the community. The Open Source movement is working toward most of the same goals, but takes a more pragmatic approach to them. Followers of this movement prefer to base their arguments on the economic and technical merits of making source code freely available, rather than the moral and ethical principles that drive the Free Software movement. At the other end of the spectrum are groups that wish to maintain tighter controls over their software.

The Free Software movement is headed by the Free Software Foundation, a fund-raising organization for the GNU project. Free software is more of an ideology. The oft-used expression is "free as in speech, not free as in beer". In essence, free software is an attempt to guarantee certain rights for both users and developers. These freedoms include the freedom to run the program for any reason, to study and modify the source code, to redistribute the source, and to share any modifications you make. In order to guarantee these freedoms, the GNU General Public License (GPL) was created. The GPL, in brief, provides that anyone distributing a compiled program which is licensed under the GPL must also provide source code, and is free to make modifications to the program as long as those modifications are also made available in source code form. This guarantees that once a program is opened to the community, it cannot be closed except by consent of every author of every piece of code (even the modifications) within it. Most Linux programs are licensed under the GPL. (A conventional GPL notice is sketched in the example below.)

It is important to note that the GPL does not say anything about price. As odd as it may sound, you can charge for free software. The "free" part is in the liberties you have with the source code, not in the price you pay for the software. (However, once someone has sold you, or even given you, a compiled program licensed under the GPL, they are obligated to provide its source code as well.)

Another popular license is the BSD license. In contrast to the GPL, the BSD license places no requirement on the release of a program's source code. Software released under the BSD license allows redistribution in source or binary form provided only a few conditions are met: the author's name cannot be used as a sort of advertisement for the program, and the license indemnifies the author from liability for damages that may arise from the use of the software. Much of the software included in Slackware Linux is BSD licensed.

At the forefront of the younger Open Source movement, the Open Source Initiative is an organization that exists solely to gain support for open source software, that is, software that has the source code available as well as the ready-to-run program. They do not offer a specific license, but instead they support the various types of open source licenses available.
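To make the licensing mechanics above concrete, here is a minimal sketch of the kind of copyright and permission notice that GPL-licensed projects conventionally place at the top of each source file. The file name, author, year, and function are hypothetical placeholders, and the notice shown is the standard GPLv3 wording rather than anything specific to Slackware or any particular project.

```python
# frobnicate.py -- part of a hypothetical GPL-licensed project.
#
# Copyright (C) 2012  Jane Hacker  (hypothetical author and year)
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

def frobnicate(items):
    """Placeholder routine; the notice above is the point of the example."""
    return sorted(items)
```

Anyone who redistributes a binary built from such a file must, under the terms quoted above, also make the corresponding source available.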

The idea behind the OSI is to get more companies behind open source by allowing them to write their own open source licenses and have those licenses certified by the Open Source Initiative. Many companies want to release source code, but do not want to use the GPL. Since they cannot radically change the GPL, they are offered the opportunity to provide their own license and have it certified by this organization. While the Free Software Foundation and the Open Source Initiative work to help each other, they are not the same thing. The Free Software Foundation uses a specific license and provides software under that license. The Open Source Initiative seeks support for all open source licenses, including the one from the Free Software Foundation. The grounds on which each argues for making source code freely available sometimes divide the two movements, but the fact that two ideologically diverse groups are working toward the same goal lends credence to the efforts of each.

Proprietary Software Definition

Proprietary software is software that is owned by an individual or a company (usually the one that developed it). There are almost always major restrictions on its use, and its source code is almost always kept secret. Source code is the form in which a program is originally written by a human using a programming language, prior to being converted to machine code which is directly readable by a computer's CPU (central processing unit). It is necessary to have the source code in order to be able to modify or improve a program. Software that is not proprietary includes free software and public domain software. Free software, which is generally the same as open source software, is usually available at no cost to everyone, and it can be used by anyone for any purpose and with only very minimal restrictions. These restrictions vary somewhat according to the license, but a typical requirement is that redistributed copies include a copy of the original license. The most commonly used license, the GNU General Public License (GPL), additionally requires that if a modified version of the software is distributed, the source code for the modified version must be made freely available. The best known example of software licensed under the GPL is Linux.

Public domain software is software that has been donated to the public domain by its copyright holder. Thus it is no longer copyrighted. Consequently, such software is completely free and can be used by anybody for any purpose without restriction.

Freeware, not to be confused with free software, is a type of proprietary software that is offered for use free of monetary charges. However, as is the case with other types of proprietary software, there are generally severe restrictions on its use and the source code is kept secret. Examples of freeware include Adobe's Acrobat Reader and Microsoft's Internet Explorer web browser.

The restrictions on the use of proprietary software are usually enumerated in the end user license agreements (EULAs) that users must consent to. For software provided by large companies, EULAs are generally long and complex contracts. Among the most common of the prohibitions for such programs are making unauthorized copies, using the software on more than a certain number of computers, and reverse engineering it.

Some Unix-like operating systems are also proprietary. Among the most popular are AIX (developed by IBM), HP-UX (developed by Hewlett-Packard), QNX (developed by QNX Software Systems) and Solaris (developed by Sun Microsystems). Others are free software, including Linux and the BSD systems (the most widely used of which is FreeBSD). Virtually all Microsoft software is proprietary, including the Windows family of operating systems and Microsoft Office. This includes software that is given away at no charge, such as Internet Explorer. Other major producers of proprietary software include Adobe, Borland, IBM, Macromedia, Sun Microsystems and Oracle.

In the early days of computing, software was generally free, and it was something that was shared among researchers and developers, who were usually eager to improve it. That situation changed as computers became more common, and the production of proprietary software became an excellent business model for many companies. However, in recent years some companies have begun to realize that free software can also be highly profitable. The most outstanding example of this is IBM, which continues to reap high returns from its approximately one billion dollar investment in Linux.

Some industry observers think that the role of proprietary software will decrease in the future because of growing competition from free software. This view holds that free software will eventually come to dominate operating systems and major application programs, while proprietary software will remain strong in some niche markets, mainly business and technical applications for which demand is relatively small or specialized and for which users are willing to pay relatively high prices. The term proprietary is derived from the Latin word proprietas, meaning property.

The Free Software Definition


The free software definition presents the criteria for whether a particular software program qualifies as free software. From time to time we revise this definition, to clarify it or to resolve
questions about subtle issues. See the History section below for a list of changes that affect the definition of free software. Free software means software that respects users' freedom and community. Roughly, the users have the freedom to run, copy, distribute, study, change and improve the software. With these freedoms, the users (both individually and collectively) control the program and what it does for them. When users don't control the program, the program controls the users. The developer controls the program, and through it controls the users. This nonfree or proprietary program is therefore an instrument of unjust power. Thus, free software is a matter of liberty, not price. To understand the concept, you should think of free as in free speech, not as in free beer. A program is free software if the program's users have the four essential freedoms:

The freedom to run the program, for any purpose (freedom 0).
The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.
The freedom to redistribute copies so you can help your neighbor (freedom 2).
The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.

A program is free software if users have all of these freedoms. Thus, you should be free to redistribute copies, either with or without modifications, either gratis or charging a fee for distribution, to anyone anywhere. Being free to do these things means (among other things) that you do not have to ask or pay for permission to do so. You should also have the freedom to make modifications and use them privately in your own work or play, without even mentioning that they exist. If you do publish your changes, you should not be required to notify anyone in particular, or in any particular way. The freedom to run the program means the freedom for any kind of person or organization to use it on any kind of computer system, for any kind of overall job and purpose, without being required to communicate about it with the developer or any other specific entity. In this freedom, it is the user's purpose that matters, not the developer's purpose; you as a user are free to run the program for your purposes, and if you distribute it to someone else, she is then free to run it for her purposes, but you are not entitled to impose your purposes on her. The freedom to redistribute copies must include binary or executable forms of the program, as well as source code, for both modified and unmodified versions. (Distributing programs in runnable form is necessary for conveniently installable free operating systems.) It is OK if there is no way to produce a binary or executable form for a certain program (since some languages don't support that feature), but you must have the freedom to redistribute such forms should you find or develop a way to make them.

In order for freedoms 1 and 3 (the freedom to make changes and the freedom to publish improved versions) to be meaningful, you must have access to the source code of the program. Therefore, accessibility of source code is a necessary condition for free software. Obfuscated source code is not real source code and does not count as source code.

Freedom 1 includes the freedom to use your changed version in place of the original. If the program is delivered in a product designed to run someone else's modified versions but refuse to run yours (a practice known as tivoization or lockdown, or, in its practitioners' perverse terminology, as "secure boot"), freedom 1 becomes a theoretical fiction rather than a practical freedom. This is not sufficient. In other words, these binaries are not free software even if the source code they are compiled from is free.

One important way to modify a program is by merging in available free subroutines and modules. If the program's license says that you cannot merge in a suitably licensed existing module (for instance, if it requires you to be the copyright holder of any code you add), then the license is too restrictive to qualify as free.

Freedom 3 includes the freedom to release your modified versions as free software. A free license may also permit other ways of releasing them; in other words, it does not have to be a copyleft license. However, a license that requires modified versions to be nonfree does not qualify as a free license.

In order for these freedoms to be real, they must be permanent and irrevocable as long as you do nothing wrong; if the developer of the software has the power to revoke the license, or retroactively add restrictions to its terms, without your doing anything wrong to give cause, the software is not free.

However, certain kinds of rules about the manner of distributing free software are acceptable, when they don't conflict with the central freedoms. For example, copyleft (very simply stated) is the rule that when redistributing the program, you cannot add restrictions to deny other people the central freedoms. This rule does not conflict with the central freedoms; rather it protects them.

Free software does not mean noncommercial. A free program must be available for commercial use, commercial development, and commercial distribution. Commercial development of free software is no longer unusual; such free commercial software is very important. You may have paid money to get copies of free software, or you may have obtained copies at no charge. But regardless of how you got your copies, you always have the freedom to copy and change the software, even to sell copies.

Whether a change constitutes an improvement is a subjective matter. If your modifications are limited, in substance, to changes that someone else considers an improvement, that is not freedom. However, rules about how to package a modified version are acceptable, if they don't substantively limit your freedom to release modified versions, or your freedom to make and use modified versions privately. Thus, it is acceptable for the license to require that you change the name of the modified version, remove a logo, or identify your modifications as yours. As long as these requirements are not so burdensome that they effectively hamper you from releasing your changes, they are acceptable; you're already making other changes to the program, so you won't have trouble making a few more.

A special issue arises when a license requires changing the name by which the program will be invoked from other programs. That effectively hampers you from releasing your changed version so that it can replace the original when invoked by those other programs. This sort of requirement is acceptable only if there's a suitable aliasing facility that allows you to specify the original program's name as an alias for the modified version.

Rules that "if you make your version available in this way, you must make it available in that way also" can be acceptable too, on the same condition. An example of such an acceptable rule is one saying that if you have distributed a modified version and a previous developer asks for a copy of it, you must send one. (Note that such a rule still leaves you the choice of whether to distribute your version at all.) Rules that require release of source code to the users for versions that you put into public use are also acceptable.

In the GNU project, we use copyleft to protect these freedoms legally for everyone. But noncopylefted free software also exists. We believe there are important reasons why it is better to use copyleft, but if your program is noncopylefted free software, it is still basically ethical. (See Categories of Free Software for a description of how free software, copylefted software and other categories of software relate to each other.)

Sometimes government export control regulations and trade sanctions can constrain your freedom to distribute copies of programs internationally. Software developers do not have the power to eliminate or override these restrictions, but what they can and must do is refuse to impose them as conditions of use of the program. In this way, the restrictions will not affect activities and people outside the jurisdictions of these governments. Thus, free software licenses must not require obedience to any export regulations as a condition of any of the essential freedoms.

Most free software licenses are based on copyright, and there are limits on what kinds of requirements can be imposed through copyright. If a copyright-based license respects freedom in the ways described above, it is unlikely to have some other sort of problem that we never anticipated (though this does happen occasionally). However, some free software licenses are based on contracts, and contracts can impose a much larger range of possible restrictions. That means there are many possible ways such a license could be unacceptably restrictive and nonfree. We can't possibly list all the ways that might happen. If a contract-based license restricts the user in an unusual way that copyright-based licenses cannot, and which isn't mentioned here as legitimate, we will have to think about it, and we will probably conclude it is nonfree.

When talking about free software, it is best to avoid using terms like "give away" or "for free", because those terms imply that the issue is about price, not freedom. Some common terms such as "piracy" embody opinions we hope you won't endorse. See Confusing Words and Phrases that are Worth Avoiding for a discussion of these terms. We also have a list of proper translations of free software into various languages.

Finally, note that criteria such as those stated in this free software definition require careful thought for their interpretation. To decide whether a specific software license qualifies as a free software license, we judge it based on these criteria to determine whether it fits their spirit as well as the precise words. If a license includes unconscionable restrictions, we reject it, even if we did not anticipate the issue in these criteria. Sometimes a license requirement raises an issue that calls for extensive thought, including discussions with a lawyer, before we can decide if the requirement is acceptable. When we reach a conclusion about a new issue, we often update these criteria to make it easier to see why certain licenses do or don't qualify. If you are interested in whether a specific license qualifies as a free software license, see our list of licenses. If the license you are concerned with is not listed there, you can ask us about it by sending us email at <licensing@gnu.org>. If you are contemplating writing a new license, please contact the Free Software Foundation first by writing to that address. The proliferation of different free software licenses means increased work for users in understanding the licenses; we may be able to help you find an existing free software license that meets your needs. If that isn't possible, if you really need a new license, with our help you can ensure that the license really is a free software license and avoid various practical problems.

Beyond Software
Software manuals must be free, for the same reasons that software must be free, and because the manuals are in effect part of the software. The same arguments also make sense for other kinds of works of practical use, that is to say, works that embody useful knowledge, such as educational works and reference works. Wikipedia is the best-known example. Any kind of work can be free, and the definition of free software has been extended to a definition of free cultural works applicable to any kind of works.

Open Source?
Another group has started using the term open source to mean something close (but not identical) to free software. We prefer the term free software because, once you have heard that it refers to freedom rather than price, it calls to mind freedom. The word open never refers to freedom.

History
From time to time we revise this Free Software Definition. Here is the list of changes, along with links to show exactly what was changed.

Version 1.111: Clarify 1.77 by saying that only retroactive restrictions are unacceptable. The copyright holders can always grant additional permission for use of the work by releasing the work in another way in parallel.

Version 1.105: Reflect, in the brief statement of freedom 1, the point (already stated in version 1.80) that it includes really using your modified version for your computing.
Version 1.92: Clarify that obfuscated code does not qualify as source code.
Version 1.90: Clarify that freedom 3 means the right to distribute copies of your own modified or improved version, not a right to participate in someone else's development project.
Version 1.89: Freedom 3 includes the right to release modified versions as free software.
Version 1.80: Freedom 1 must be practical, not just theoretical; i.e., no tivoization.
Version 1.77: Clarify that all retroactive changes to the license are unacceptable, even if it's not described as a complete replacement.
Version 1.74: Four clarifications of points not explicit enough, or stated in some places but not reflected everywhere:
o "Improvements" does not mean the license can substantively limit what kinds of modified versions you can release. Freedom 3 includes distributing modified versions, not just changes.
o The right to merge in existing modules refers to those that are suitably licensed.
o Explicitly state the conclusion of the point about export controls.
o Imposing a license change constitutes revoking the old license.
Version 1.57: Add "Beyond Software" section.
Version 1.46: Clarify whose purpose is significant in the freedom to run the program for any purpose.
Version 1.41: Clarify wording about contract-based licenses.
Version 1.40: Explain that a free license must allow you to use other available free software to create your modifications.
Version 1.39: Note that it is acceptable for a license to require you to provide source for versions of the software you put into public use.
Version 1.31: Note that it is acceptable for a license to require you to identify yourself as the author of modifications. Other minor clarifications throughout the text.
Version 1.23: Address potential problems related to contract-based licenses.
Version 1.16: Explain why distribution of binaries is important.
Version 1.11: Note that a free license may require you to send a copy of versions you distribute to the author.

There are gaps in the version numbers shown above because there are other changes in this page that do not affect the definition as such. These changes are in other parts of the page. You can review the complete list of changes to the page through the cvsweb interface.

Proprietary software



Proprietary software is computer software licensed[citation needed] under exclusive legal right of the copyright holder.[citation needed] The licensee is given the right to use the software under certain conditions, while restricted from other uses, such as modification, further distribution, or reverse engineering.[citation needed] Complementary terms include free software,[citation needed] licensed by the owner under more permissive terms, and public domain software, which is not subject to copyright and can be used for any purpose. Proponents of free and open source software use proprietary or non-free to describe software that is not free or open source.[1][2] In the software industry, commercial software refers to software produced for sale, which is a related but distinct categorization. According to Eric S. Raymond, in the Jargon File, "In the language of hackers and users" it is used pejoratively, with the meaning of "inferior" and "a product not conforming to open-systems standards".[3]


Software becoming proprietary


Until the late 1960s, computers (huge and expensive mainframe machines kept in specially air-conditioned computer rooms) were usually supplied on a lease rather than purchase basis.[4][5] Service and all software available were usually supplied by manufacturers without separate charge until 1969. Software source code was usually provided. Users who developed software often made it available, without charge. Customers who purchased expensive mainframe hardware did not pay separately for software. In 1969, IBM led an industry change by starting to charge separately for (mainframe) software and services, and ceasing to supply source code.[6]

Legal basis


Further information: Software law, Software copyright, Software patent, and End-user license agreement

Most software is covered by copyright which, along with contract law, patents, and trade secrets, provides legal basis for its owner to establish exclusive rights.[7] A software vendor delineates the specific terms of use in an end-user license agreement (EULA). The user may agree to this contract in writing, interactively, called clickwrap licensing, or by opening the box containing the software, called shrink wrap licensing. License agreements are usually not negotiable.[citation needed] Software patents grant exclusive rights to algorithms, software features, or other patentable subject matter. Laws on software patents vary by jurisdiction and are a matter of ongoing debate. Vendors sometimes grant patent rights to the user in the license agreement.[8] For example, the algorithm for creating, or encoding, MP3s is patented; LAME is an MP3 encoder which is open source but illegal to use without obtaining a license for the algorithm it contains. Proprietary software vendors usually regard source code as a trade secret.[9] Free software licences and open-source licences use the same legal basis as proprietary software.[10] Free software companies and projects are also joining into patent pools like the Patent Commons and the Open Invention Network.

Limitations
License agreements do not override applicable copyright law or contract law. Provisions that conflict may not be enforceable.[citation needed] Some vendors say that software licensing is not a sale, and that limitations of copyright like the first-sale doctrine do not apply. The EULA for Microsoft Windows states that the software is licensed, not sold.[11]

Exclusive rights


The owner of proprietary software exercises certain exclusive rights over the software. The owner can restrict use, inspection of source code, modification of source code, and redistribution.

Use of the software

Further information: Copy protection, Damaged good, and Price discrimination

Vendors typically limit the number of computers on which software can be used, and prohibit the user from installing the software on extra computers.[citation needed] Restricted use is sometimes enforced through a technical measure, such as product activation, a product key or serial number, a hardware key, or copy protection. Vendors may also distribute versions that remove particular features, or versions which allow only certain fields of endeavor, such as non-commercial, educational, or non-profit use. Use restrictions vary by license:

Windows Vista Starter is restricted to running a maximum of three concurrent applications.
The retail edition of Microsoft Office Home and Student 2007 is limited to non-commercial use on up to three devices in one household.
Windows XP can be installed on one computer, and limits the number of network file sharing connections to 10.[12] The Home Edition disables features present in Windows XP Professional.
Many Adobe licenses are limited to one user, but allow the user to install a second copy on a home computer or laptop.[13]
iWork '09, Apple's productivity suite, is available in a five-user family pack, for use on up to five computers in a household.[citation needed]
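As a purely illustrative sketch of the "technical measure" mentioned above, the toy check below accepts a product key only when its last group matches a checksum of the preceding groups. The scheme is invented for this example and does not correspond to any vendor's real activation mechanism.

```python
# Toy product-key check: the final group must equal a checksum of the rest.
# This scheme is made up for illustration; real activation systems differ.
def key_is_valid(key):
    groups = key.upper().split("-")
    if len(groups) != 4:
        return False
    body, check = groups[:3], groups[3]
    expected = sum(ord(c) for g in body for c in g) % 10000
    return check == "%04d" % expected

if __name__ == "__main__":
    print(key_is_valid("AB12-CD34-EF56-0714"))  # True: 0714 is the checksum
    print(key_is_valid("AB12-CD34-EF56-0000"))  # False: checksum mismatch
```

Real schemes add server-side activation and cryptographic signatures, but the basic idea of refusing to run without a vendor-issued secret is the same.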

Inspection and modification of source code


See also: Closed source software, Open source, and Anti-features

Vendors typically distribute proprietary software in compiled form, usually the machine language understood by the computer's central processing unit. They typically retain the source code, or human-readable version of the software, written in a higher level programming language.[14] This scheme is often referred to as closed source.[15] By withholding source code, the software producer prevents the user from understanding how the software works and from changing how it works.[16] This practice is denounced by some critics, who argue that users should be able to study and change the software they use, for example, to remove secret or malicious features, or look for security vulnerabilities. Richard Stallman says that proprietary software commonly contains "malicious features, such as spying on the users, restricting the users, back doors, and imposed upgrades."[17] Some proprietary software vendors say that retaining the source code makes their software more secure, because the widely available code for open-source software makes it easier to identify security vulnerabilities.[18] Open source proponents pejoratively call this security through obscurity, and say that wide availability results in increased scrutiny of the source code, making open source software more secure.[19] While most proprietary software is closed-source, some vendors distribute the source code or otherwise make it available to customers.[citation needed] The source code is covered by a nondisclosure agreement or a license that allows, for example, study and modification, but not redistribution.[citation needed] The text-based email client Pine and certain implementations of Secure Shell are distributed with proprietary licenses that make the source code available.[citation needed]

Some governments fear that proprietary software may include defects or malicious features which would compromise sensitive information. In 2003 Microsoft established a Government Security Program (GSP) to allow governments to view source code and Microsoft security documentation, of which the Chinese government was an early participant.[20][21] The program is part of Microsoft's broader Shared Source Initiative which provides source code access for some products. The Reference Source License (Ms-RSL) and Limited Public License (Ms-LPL) are proprietary software licenses where the source code is made available. Software vendors sometimes use obfuscated code to impede users who would reverse engineer the software.[citation needed] This is particularly common with certain programming languages.[citation needed] For example, the bytecode for programs written in Java can be easily decompiled to somewhat usable code,[citation needed] and the source code for programs written in scripting languages such as PHP or JavaScript is available at run time.[22]
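The point about scripting languages can be seen directly: an interpreted program necessarily ships as source, and the running program can even read its own code back at run time. A minimal Python sketch, with a placeholder function, is shown below.

```python
# A program written in a scripting language ships as source code, so the
# running program can recover and display its own source. Illustrative only.
import inspect

def greet(name):
    """Placeholder function whose source we will print."""
    return "Hello, " + name

if __name__ == "__main__":
    print(greet("world"))
    print(inspect.getsource(greet))   # the human-readable source is available
```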

Redistribution
Further information: Shareware
See also: Freely redistributable software

Proprietary software vendors can prohibit users from sharing the software with others. A separate license is required for another party to use the software. In the case of proprietary software with source code available, the vendor may also prohibit customers from distributing their modifications to the source code. Shareware is closed-source software whose owner encourages redistribution at no cost, but which the user must sometimes pay to use after a trial period. The fee usually allows use by a single user or computer. In some cases, software features are restricted during or after the trial period, a practice sometimes called crippleware.

Interoperability with software and hardware


Further information: Interoperability of software

Proprietary file formats and protocols


Further information: Proprietary format and Proprietary protocol

Proprietary software often[citation needed] stores some of its data in file formats which are incompatible with other software, and may also communicate using protocols which are incompatible. Such formats and protocols may be restricted as trade secrets or subject to patents.[citation needed]

Proprietary APIs


A proprietary application programming interface (API) is a software library interface "specific to one device or, more likely to a number of devices within a particular manufacturer's product
range."[23] The motivation for using a proprietary API can be vendor lock-in or because standard APIs do not support the device's functionality.[23] The European Commission, in its March 24, 2004 decision on Microsoft's business practices,[24] quotes, in paragraph 463, Microsoft general manager for C++ development Aaron Contorer as stating in a February 21, 1997 internal Microsoft memo drafted for Bill Gates:
The Windows API is so broad, so deep, and so functional that most ISVs would be crazy not to use it. And it is so deeply embedded in the source code of many Windows apps that there is a huge switching cost to using a different operating system instead.

Early versions of the iPhone SDK were covered by a non-disclosure agreement. The agreement forbade independent developers from discussing the content of the interfaces. Apple discontinued the NDA in October 2008.[25]

Vendor lock-in


Further information: Vendor lock-in

A dependency on the future versions and upgrades for a proprietary software package can create vendor lock-in, entrenching a monopoly position.[26]

Software limited to certain hardware configurations


Proprietary software may also have licensing terms that limit the usage of that software to a specific set of hardware. Apple has such a licensing model for Mac OS X, an operating system which is limited to Apple hardware, both by licensing and various design decisions. This licensing model has been affirmed by the United States Court of Appeals.[27]

Abandonment by owners


Main article: Abandonware

Proprietary software which is no longer marketed by its owner and is used without permission by users is called abandonware and may include source code. Some abandonware has the source code released to the public domain either by its author or copyright holder and is therefore free software, and no longer proprietary software. If the proprietor of a software package should cease to exist, or decide to cease or limit production or support for a proprietary software package, recipients and users of the package may have no recourse if problems are found with the software. Proprietors can fail to improve and support software because of business problems.[28] When no other vendor can provide support for the software, the ending of support for older or existing versions of a software package may be done to force users to upgrade and pay for newer versions; or migrate to either competing systems with longer support life cycles or to FOSS-based systems.[29]

Pricing and economics

See also: Commercial software

Proprietary software is not synonymous with commercial software,[30][31] though the industry commonly confuses the terms,[32][33] as does the free software community.[34][35][citation needed] Proprietary software can be distributed at no cost or for a fee, and free software can be distributed at no cost or for a fee.[36] The difference is that whether or not proprietary software can be distributed, and what the fee would be, is at the proprietor's discretion. With free software, anyone who has a copy can decide whether, and how much, to charge for a copy or related services.[37] Proprietary software that comes for no cost is called freeware. Proponents of commercial proprietary software argue that requiring users to pay for software as a product increases funding or time available for the research and development of software. For example, Microsoft says that per-copy fees maximise the profitability of software development.[38] Proprietary software generally creates greater commercial activity than free software, especially in regard to market revenues.[39]

Similar terms


The founder of the free software movement, Richard Stallman, sometimes uses the term "user-subjugating software"[40] to describe proprietary software. Eben Moglen sometimes talks of "unfree software".[citation needed] The term "non-free" is often used by Debian developers to describe any software whose license does not comply with the Debian Free Software Guidelines, and they use "proprietary software" specifically for non-free software that provides no source code.[citation needed] The Open Source Initiative uses the terms "proprietary software" and "closed source software" interchangeably.[41][42] Semi-free software was used by the Free Software Foundation to describe software that is not free software, but comes with permission for individuals to use, copy, distribute, and modify either for non-profit purposes only or with a prohibition on redistributing modified copies or derived works.[43] Such software is also rejected by the Open Source Initiative and Debian. PGP is an example of a semi-free program. The Free Software Foundation classifies semi-free software as non-free software and no longer draws a distinction between semi-free software and proprietary software.

Examples
Well known examples of proprietary software include Microsoft Windows, Adobe Flash Player, PS3 OS, iTunes, Adobe Photoshop, Google Earth, Mac OS X, Skype, WinRAR, and some versions of Unix. Software distributions considered as proprietary may in fact incorporate a "mixed source" model including both free and non-free software in the same distribution.[44] Most if not all so-called proprietary UNIX distributions are mixed source software, bundling open source components like BIND, Sendmail, X Window System, DHCP, and others along with a purely proprietary kernel and system utilities.[45][46]

Some free software packages are also simultaneously available under proprietary terms. Examples include MySQL, Sendmail and ssh. The original copyright holders for a work of free software, even copyleft free software, can use dual-licensing to allow themselves or others to redistribute proprietary versions. Non-copyleft free software (i.e. software distributed under a permissive free software license or released to the public domain) allows anyone to make proprietary redistributions.[47][48] Free software that depends on proprietary software is considered "trapped" by the Free Software Foundation. This includes software written only for Microsoft Windows,[49] or software that could only run on Java, before it became free software.[50]

Internet Advantages

Faster Communication
The foremost goal of the Internet has always been speedy communication, and it has excelled far beyond expectations. Newer innovations are only going to make it faster and more reliable. You can now communicate in a fraction of a second with a person sitting on the other side of the world. For more personal and interactive communication, you can use chat services, video conferencing and so on, and there are plenty of messenger services on offer. With the help of such services, it has become very easy to establish a kind of global friendship where you can share your thoughts and explore other cultures.

Information Resources
Information is probably the biggest advantage the Internet offers. The Internet is a virtual treasure trove of information: any kind of information on any topic under the sun is available, and search engines like Google and Yahoo are at your service. There is a huge amount of information on just about every subject known to man, ranging from government law and services, trade fairs and conferences, market information, new ideas and technical support; the list is simply endless. Students and children are among the top users who surf the Internet for research, and today it is almost expected that students use the Internet for research or for gathering resources. Even teachers have started giving assignments that require extensive research on the Internet. You can also access the latest research in fields such as medicine and technology, and numerous web sites, such as America's Doctor, allow you to talk to doctors online.

Entertainment
Entertainment is another popular reason why many people prefer to surf the Internet. In fact, the Internet has become quite successful in capturing the multifaceted entertainment industry. Downloading games or just surfing celebrity websites are some of the uses people have discovered, and even celebrities are using the Internet effectively for promotional campaigns. Numerous games can be downloaded for free, and the online gaming industry has attracted dramatic and phenomenal attention from game lovers.

Social Networking
One cannot imagine an online life without Facebook or Twitter. Social networking has become so popular amongst youth that it might one day replace physical networking. It has evolved as a great medium to connect with millions of people with similar interests. Apart from finding long-lost friends, you can also look for jobs and business opportunities on forums, communities and so on. There are also chat rooms where users can meet new and interesting people; some of them may even end up finding their life partners.

Online Services
The Internet has made life very convenient. With numerous online services you can now perform all your transactions online: you can book tickets for a movie, transfer funds, or pay utility bills and taxes right from your home. Some travel websites even plan an itinerary to your preferences and take care of airline tickets, hotel reservations and so on.

E-commerce
The term e-commerce is used for any type of commercial or business deal that involves the transfer of information across the globe via the Internet. It has become associated with almost any kind of shopping or business transaction: name a product or service, and e-commerce will make it available at your doorstep. Websites such as eBay even allow you to bid for, buy, sell or auction items online, including homes.

Internet Disadvantages

Theft of Personal Information
If you use the Internet for online banking, social networking or other services, you risk theft of your personal information, such as your name, address and credit card number. Unscrupulous people can access this information through unsecured connections or by planting software, and then use your personal details for their benefit. Needless to say, this may land you in serious trouble.

Spamming
Spamming refers to sending unwanted e-mails in bulk, which serve no purpose and needlessly clog up the entire system. Such illegal activities can be very frustrating, as they make your Internet connection slower and less reliable.

Virus Threat
Internet users are often plagued by virus attacks on their systems. Virus programs are inconspicuous and may be activated simply by clicking a seemingly harmless link. Computers connected to the Internet are very prone to targeted virus attacks and may end up crashing.

Pornography
Pornography is perhaps the biggest disadvantage of the Internet. The Internet allows you to access and download millions of pornographic photos, videos and other X-rated material. Such unrestricted access to porn can be detrimental for children and teenagers, and it can even play havoc in the marital and social lives of adults.

Social Disconnect
Thanks to the Internet, people now often meet only on social networks. More and more people are becoming engulfed in the virtual world and drifting apart from their friends and family; even children prefer to play online games rather than going out and mingling with other kids. This may hamper healthy social development in children.

Thus, the Internet has the potential to make your life simple and convenient, as well as to wreak havoc in your life. Its influence is mostly dictated by the choices you make while you are online. With clever use, you can harness its unlimited potential.

Internet History
1969 - Birth of a Network

The Internet as we know it today, in the mid-1990s, traces its origins back to a Defense Department project in 1969. The subject of the project was wartime digital communications. At that time the telephone system was about the only theater-scale communications system in use. A major problem had been identified in its design: its dependence on switching stations that could be targeted during an attack. Would it be possible to design a network that could quickly reroute digital traffic around failed nodes? A possible solution had been identified in theory: build a "web" of datagram networks, called a "catenet", and use dynamic routing protocols to constantly adjust the flow of traffic through the catenet. The Defense Advanced Research Projects Agency (DARPA) launched the DARPA Internet Program.
1970s - Infancy

DARPA Internet, largely the plaything of academic and military researchers, spent more than a decade in relative obscurity. As Vietnam, Watergate, the Oil Crisis, and the Iranian Hostage Crisis rolled over the nation, several Internet research teams proceeded through a gradual evolution of protocols. In 1975, DARPA declared the project a success and handed its management over to the Defense Communications Agency. Several of today's key protocols (including IP and TCP) were stable by 1980, and adopted throughout ARPANET by 1983.
Mid 1980s - The Research Net

Let's outline key features, circa 1983, of what was then called ARPANET. A small computer was a PDP-11/45, and a PDP-11/45 does not fit on your desk. Some sites had a hundred computers attached to the Internet. Most had a dozen or so, probably with something like a VAX doing most of the work - mail, news, EGP routing. Users did their work on DEC VT-100 terminals. FORTRAN was the word of the day. Few companies had Internet access, relying instead on SNA and IBM mainframes. Instead, the Internet community was dominated by universities and military research sites. Its most popular service was the rapid email it made possible with distant colleagues. In August 1983, there were 562 registered ARPANET hosts (RFC 1296).

UNIX deserves at least an honorable mention, since almost all the initial Internet protocols were developed first for UNIX, largely due to the availability of kernel source (for a price) and the relative ease of implementation (relative to things like VMS or MVS). The University of California at Berkeley (UCB) deserves special mention, because their Computer Science Research Group (CSRG) developed the BSD variants of AT&T's UNIX operating system. BSD UNIX and its derivatives would become the most common Internet programming platform. Many key features of the Internet were already in place, including the IP and TCP protocols. ARPANET was fundamentally unreliable in nature, as the Internet is still today. This principle of unreliable delivery means that the Internet only makes a best-effort attempt to deliver packets. The network can drop a packet without any notification to sender or receiver. Remember, the Internet was designed for military survivability. The software running on either end must be prepared to recognize data loss, retransmitting data as often as necessary to achieve its ultimate delivery.
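To make the best-effort idea concrete, here is a minimal stop-and-wait sketch in Python (not from the original text; the address, port and acknowledgment format are placeholders): the sender transmits a datagram, waits briefly for an acknowledgment, and retransmits if nothing arrives - exactly the burden the paragraph above says the endpoints must carry.

import socket

def send_reliably(data, dest=("127.0.0.1", 9999), retries=5, timeout=1.0):
    # Best-effort network: the datagram below may be silently dropped, so we
    # detect presumed loss with a timeout and retransmit, up to a retry limit.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        for _ in range(retries):
            sock.sendto(data, dest)              # may be lost without notice
            try:
                ack, _ = sock.recvfrom(1024)     # wait for an acknowledgment
                if ack == b"ACK":
                    return True                  # delivered, as far as we can tell
            except socket.timeout:
                continue                         # assume loss; retransmit
        return False                             # give up after repeated loss
    finally:
        sock.close()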
Late 1980s - The PC Revolution

Driven largely by the development of the PC and LAN technology, subnetting was standardized in 1985 when RFC 950 was released. LAN technology made the idea of a "catenet" feasible - an internetwork of networks. Subnetting opened the possibilities of interconnecting LANs with WANs. The National Science Foundation (NSF) started the Supercomputer Centers program in 1986. Until then, supercomputers such as Crays were largely the playthings of large, well-funded universities and military research centers. NSF's idea was to make supercomputer resources available to those of more modest means by constructing five supercomputer centers around the country and building a network linking them with potential users. NSF decided to base their network on the Internet protocols, and NSFNET was born. For the next decade, NSFNET would be the core of the U.S. Internet, until its privatization and ultimate retirement in 1995. Domain naming was stable by 1987 when RFC 1034 was released. Until then, hostnames were mapped to IP addresses using static tables, but the Internet's exponential growth had made this practice infeasible. In the late 1980s, important advances tied poor network performance to poor TCP behavior, and a string of papers by the likes of Nagle and Van Jacobson (RFC 896, RFC 1072, RFC 1144, RFC 1323) presented key insights into TCP performance. The 1988 Internet Worm was the largest security failure in the history of the Internet. More information can be found in RFC 1135. All things considered, it could happen again.
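As a rough illustration of what RFC 950-style subnetting does with an address, the Python sketch below (addresses and mask chosen arbitrarily, not taken from the text) splits an IP address into its subnet and host parts with a subnet mask:

import ipaddress

addr = ipaddress.ip_address("192.168.37.141")
mask = ipaddress.ip_address("255.255.255.0")    # a 24-bit subnet mask

# The mask keeps the network/subnet bits and discards the host bits.
subnet = ipaddress.ip_address(int(addr) & int(mask))
host   = int(addr) & ~int(mask) & 0xFFFFFFFF

print(subnet)   # 192.168.37.0  (which LAN the address belongs to)
print(host)     # 141           (which host on that LAN)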
Early 1990s - Address Exhaustion and the Web

In the early 90s, the first address exhaustion crisis hit the Internet technical community. The present solution, CIDR, will sustain the Internet for a few more years by making more efficient use of IP's existing 32-bit address space. For a more lasting solution, IETF is looking at IPv6 and its 128-bit address space, but CIDR is here to stay.
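A brief sketch of the numbers at stake, using Python's standard ipaddress module (the prefixes below are illustrative, not historical allocations): CIDR lets a site receive a prefix sized to its needs rather than a whole classful block, lets contiguous prefixes be advertised as a single aggregate route, and the 128-bit IPv6 space dwarfs the 32-bit IPv4 space.

import ipaddress

# A /20 holds 4,096 addresses - closer to a mid-sized site's needs than a
# whole Class B (65,536 addresses) would have been.
print(ipaddress.ip_network("203.0.112.0/20").num_addresses)   # 4096

# Eight contiguous /24s can be aggregated and advertised as one /21 route,
# which is also how CIDR slows the growth of backbone routing tables.
nets = [ipaddress.ip_network(f"198.51.{i}.0/24") for i in range(96, 104)]
print(list(ipaddress.collapse_addresses(nets)))   # [IPv4Network('198.51.96.0/21')]

# The address spaces themselves:
print(2**32)    # IPv4: 4,294,967,296 addresses
print(2**128)   # IPv6: roughly 3.4e38 addresses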

Crisis aside, the World Wide Web (WWW) has been one of Internet's most exciting recent developments. The idea of hypertext has been around for more than a decade, but in 1989 a team at the European Center for Particle Research (CERN) in Switzerland developed a set of protocols for transferring hypertext via the Internet. In the early 1990s it was enhanced by a team at the National Center for Supercomputing Applications (NCSA) at the University of Illinois - one of NSF's supercomputer centers. The result was NCSA Mosaic, a graphical, point-and-click hypertext browser that made Internet easy. The resulting explosion in "Web sites" drove the Internet into the public eye.
Mid 1990s - The New Internet

Of at least as much interest as Internet's technical progress in the 1990s has been its sociological progress. It has already become part of the national vocabulary, and seems headed for even greater prominence. It has been accepted by the business community, with a resulting explosion of service providers, consultants, books, and TV coverage. It has given birth to the Free Software Movement. The Free Software Movement owes much to bulletin board systems, but really came into its own on the Internet, due to a combination of forces. The public nature of the Internet's early funding ensured that much of its networking software was non-proprietary. The emergence of anonymous FTP sites provided a distribution mechanism that almost anyone could use. Network newsgroups and mailing lists offered an open communication medium. Last but not least were individualists like Richard Stallman, who wrote EMACS, launched the GNU Project and founded the Free Software Foundation. In the 1990s, Linus Torvalds wrote Linux, the popular (and free) UNIX clone operating system.
[Begin soapbox]

The explosion of capitalist conservatism, combined with a growing awareness of Internet's business value, has led to major changes in the Internet community. Many of them have not been for the good. First, there seems to be a growing departure from Internet's history of open protocols, published as RFCs. Many new protocols are being developed in an increasingly proprietary manner. IGRP, a trademark of Cisco Systems, has the dubious distinction of being the most successful proprietary Internet routing protocol, capable only of operation between Cisco routers. Other protocols, such as BGP, are published as RFCs, but with important operational details omitted. The notoriously mis-named Open Software Foundation has introduced a whole suite of "open" protocols whose specifications are available - for a price - and not on the net. I am forced to wonder: 1) why do we need a new RPC? and 2) why won't OSF tell us how it works? People forget that businesses have tried to run digital communications networks in the past. IBM and DEC both developed proprietary networking schemes that only ran on their hardware. Several information providers did very well for themselves in the 80s, including LEXIS/NEXIS, Dialog, and Dow Jones. Public data networks were constructed by companies like Tymnet and run into every major US city. CompuServe and others built large bulletin board-like systems. Many of these services still offer a quality and depth of coverage unparalleled on the Internet (examine Dialog if you are skeptical of

this claim). But none of them offered nudie GIFs that anyone could download. None of them let you read through the RFCs and then write a Perl script to tweak the one little thing you needed to adjust. None of them gave birth to a Free Software Movement. None of them caught people's imagination. The very existence of the Free Software Movement is part of the Internet saga, because free software would not exist without the net. "Movements" tend to arise when progress offers us new freedoms and we find new ways to explore and, sometimes, to exploit them. The Free Software Movement has offered what would have been unimaginable when the Internet was formed - games, editors, windowing systems, compilers, networking software, and even entire operating systems available for anyone who wants them, without licensing fees, with complete source code, and all you need is Internet access. It also offers challenges, forcing us to ask what changes are needed in our society to support these new freedoms that have touched so many people. And it offers chances at exploitation, from the businesses using free software development platforms for commercial code, to the Internet Worm and the security risks of open systems. People wonder whether progress is better served through government funding or private industry. The Internet defies the popular wisdom of "business is better". Both business and government tried to build large data communication networks in the 1980s. Business depended on good market decisions; the government researchers based their system on openness, imagination and freedom. Business failed; Internet succeeded. Our reward has been its commercialization.
[End soapbox]

For the next few years, the Internet will almost certainly be content-driven. Although new protocols are always under development, we have barely begun to explore the potential of just the existing ones. Chief among these is the World Wide Web, with its potential for simple online access to almost any information imaginable. Yet even as the Internet intrudes into society, remember that over the last two decades "The Net" has developed a culture of its own, one that may collide with society's. Already business is making its pitch to dominate the Internet. Already Congress has deemed it necessary to regulate the Web. The big questions loom unanswered: How will society change the Internet... and how will the Internet change society?

Introduction

The Internet has revolutionized the computer and communications world like nothing before. The invention of the telegraph, telephone, radio, and computer set the stage for this unprecedented integration of capabilities. The Internet is at once a world-wide broadcasting capability, a mechanism for information dissemination, and a medium for collaboration and interaction between individuals and their computers without regard for geographic location. The Internet represents one of the most successful examples of the benefits of sustained investment and commitment to research and development of information infrastructure. Beginning with the early research in packet switching, the government, industry and academia have been partners in evolving and deploying this exciting new technology. Today, terms like "bleiner@computer.org" and "http://www.acm.org" trip lightly off the tongue of the random person on the street. 1 This is intended to be a brief, necessarily cursory and incomplete history. Much material currently exists about the Internet, covering history, technology, and usage. A trip to almost any bookstore will find shelves of material written about the Internet. 2 In this paper,3 several of us involved in the development and evolution of the Internet share our views of its origins and history. This history revolves around four distinct aspects. There is the technological evolution that began with early research on packet switching and the ARPANET (and related technologies), and where current research continues to expand the horizons of the infrastructure along several dimensions, such as scale, performance, and higher-level functionality. There is the operations and management aspect of a global and complex operational infrastructure. There is the social aspect, which resulted in a broad community of Internauts working together to create and evolve the technology. And there is the commercialization aspect, resulting in an extremely effective transition of research results into a broadly deployed and available information infrastructure. The Internet today is a widespread information infrastructure, the initial prototype of what is often called the National (or Global or Galactic) Information Infrastructure. Its history is complex and involves many aspects - technological, organizational, and community. And its influence reaches not only to the technical fields of computer communications but throughout society as we move toward increasing use of online tools to accomplish electronic commerce, information acquisition, and community operations.

Origins of the Internet


The first recorded description of the social interactions that could be enabled through networking was a series of memos written by J.C.R. Licklider of MIT in August 1962 discussing his "Galactic Network" concept. He envisioned a globally interconnected set of computers through which everyone could quickly access data and programs from any site. In spirit, the concept was very much like the Internet of today. Licklider was the first head of the computer research program at DARPA,4 starting in October 1962. While at DARPA he convinced his successors at DARPA, Ivan Sutherland, Bob Taylor, and MIT researcher Lawrence G. Roberts, of the importance of this networking concept. Leonard Kleinrock at MIT published the first paper on packet switching theory in July 1961 and the first book on the subject in 1964. Kleinrock convinced Roberts of the theoretical feasibility of communications using packets rather than circuits, which was a major step along the path towards computer networking. The other key step was to make the computers talk together. To explore this, in 1965 working with Thomas Merrill, Roberts connected the TX-2 computer in

Mass. to the Q-32 in California with a low speed dial-up telephone line creating the first (however small) wide-area computer network ever built. The result of this experiment was the realization that the time-shared computers could work well together, running programs and retrieving data as necessary on the remote machine, but that the circuit switched telephone system was totally inadequate for the job. Kleinrock's conviction of the need for packet switching was confirmed. In late 1966 Roberts went to DARPA to develop the computer network concept and quickly put together his plan for the "ARPANET", publishing it in 1967. At the conference where he presented the paper, there was also a paper on a packet network concept from the UK by Donald Davies and Roger Scantlebury of NPL. Scantlebury told Roberts about the NPL work as well as that of Paul Baran and others at RAND. The RAND group had written a paper on packet switching networks for secure voice in the military in 1964. It happened that the work at MIT (1961-1967), at RAND (1962-1965), and at NPL (1964-1967) had all proceeded in parallel without any of the researchers knowing about the other work. The word "packet" was adopted from the work at NPL and the proposed line speed to be used in the ARPANET design was upgraded from 2.4 kbps to 50 kbps. 5 In August 1968, after Roberts and the DARPA funded community had refined the overall structure and specifications for the ARPANET, an RFQ was released by DARPA for the development of one of the key components, the packet switches called Interface Message Processors (IMP's). The RFQ was won in December 1968 by a group headed by Frank Heart at Bolt Beranek and Newman (BBN). As the BBN team worked on the IMP's with Bob Kahn playing a major role in the overall ARPANET architectural design, the network topology and economics were designed and optimized by Roberts working with Howard Frank and his team at Network Analysis Corporation, and the network measurement system was prepared by Kleinrock's team at UCLA. 6 Due to Kleinrock's early development of packet switching theory and his focus on analysis, design and measurement, his Network Measurement Center at UCLA was selected to be the first node on the ARPANET. All this came together in September 1969 when BBN installed the first IMP at UCLA and the first host computer was connected. Doug Engelbart's project on "Augmentation of Human Intellect" (which included NLS, an early hypertext system) at Stanford Research Institute (SRI) provided a second node. SRI supported the Network Information Center, led by Elizabeth (Jake) Feinler and including functions such as maintaining tables of host name to address mapping as well as a directory of the RFC's. One month later, when SRI was connected to the ARPANET, the first host-to-host message was sent from Kleinrock's laboratory to SRI. Two more nodes were added at UC Santa Barbara and University of Utah. These last two nodes incorporated application visualization projects, with Glen Culler and Burton Fried at UCSB investigating methods for display of mathematical functions using storage displays to deal with the problem of refresh over the net, and Robert Taylor and Ivan Sutherland at Utah investigating methods of 3-D representations over the net. Thus, by the end of 1969, four host computers were connected together into the initial ARPANET, and the budding Internet was off the ground. 
Even at this early stage, it should be noted that the networking research incorporated both work on the underlying network and work on how to utilize the network. This tradition continues to this day.

Computers were added quickly to the ARPANET during the following years, and work proceeded on completing a functionally complete Host-to-Host protocol and other network software. In December 1970 the Network Working Group (NWG) working under S. Crocker finished the initial ARPANET Host-to-Host protocol, called the Network Control Protocol (NCP). As the ARPANET sites completed implementing NCP during the period 1971-1972, the network users finally could begin to develop applications. In October 1972, Kahn organized a large, very successful demonstration of the ARPANET at the International Computer Communication Conference (ICCC). This was the first public demonstration of the new network technology. It was also in 1972 that the initial "hot" application, electronic mail, was introduced. In March Ray Tomlinson at BBN wrote the basic email message send and read software, motivated by the need of the ARPANET developers for an easy coordination mechanism. In July, Roberts expanded its utility by writing the first email utility program to list, selectively read, file, forward, and respond to messages. From there email took off as the largest network application for over a decade. This was a harbinger of the kind of activity we see on the World Wide Web today, namely, the enormous growth of all kinds of "people-to-people" traffic.

The Initial Internetting Concepts


The original ARPANET grew into the Internet. Internet was based on the idea that there would be multiple independent networks of rather arbitrary design, beginning with the ARPANET as the pioneering packet switching network, but soon to include packet satellite networks, ground-based packet radio networks and other networks. The Internet as we now know it embodies a key underlying technical idea, namely that of open architecture networking. In this approach, the choice of any individual network technology was not dictated by a particular network architecture but rather could be selected freely by a provider and made to interwork with the other networks through a meta-level "Internetworking Architecture". Up until that time there was only one general method for federating networks. This was the traditional circuit switching method where networks would interconnect at the circuit level, passing individual bits on a synchronous basis along a portion of an end-to-end circuit between a pair of end locations. Recall that Kleinrock had shown in 1961 that packet switching was a more efficient switching method. Along with packet switching, special purpose interconnection arrangements between networks were another possibility. While there were other limited ways to interconnect different networks, they required that one be used as a component of the other, rather than acting as a peer of the other in offering end-to-end service. In an open-architecture network, the individual networks may be separately designed and developed and each may have its own unique interface which it may offer to users and/or other providers, including other Internet providers. Each network can be designed in accordance with the specific environment and user requirements of that network. There are generally no constraints on the types of network that can be included or on their geographic scope, although certain pragmatic considerations will dictate what makes sense to offer. The idea of open-architecture networking was first introduced by Kahn shortly after having arrived at DARPA in 1972. This work was originally part of the packet radio program, but subsequently became a separate program in its own right. At the time, the program was called "Internetting". Key to making the packet radio system work was a reliable end-end protocol that could maintain effective communication in the face of jamming and other radio interference, or

withstand intermittent blackouts such as those caused by being in a tunnel or blocked by the local terrain. Kahn first contemplated developing a protocol local only to the packet radio network, since that would avoid having to deal with the multitude of different operating systems, and continuing to use NCP. However, NCP did not have the ability to address networks (and machines) further downstream than a destination IMP on the ARPANET and thus some change to NCP would also be required. (The assumption was that the ARPANET was not changeable in this regard.) NCP relied on ARPANET to provide end-to-end reliability. If any packets were lost, the protocol (and presumably any applications it supported) would come to a grinding halt. In this model NCP had no end-end host error control, since the ARPANET was to be the only network in existence and it would be so reliable that no error control would be required on the part of the hosts. Thus, Kahn decided to develop a new version of the protocol which could meet the needs of an open-architecture network environment. This protocol would eventually be called the Transmission Control Protocol/Internet Protocol (TCP/IP). While NCP tended to act like a device driver, the new protocol would be more like a communications protocol. Four ground rules were critical to Kahn's early thinking:

- Each distinct network would have to stand on its own and no internal changes could be required to any such network to connect it to the Internet.
- Communications would be on a best effort basis. If a packet didn't make it to the final destination, it would shortly be retransmitted from the source.
- Black boxes would be used to connect the networks; these would later be called gateways and routers. There would be no information retained by the gateways about the individual flows of packets passing through them, thereby keeping them simple and avoiding complicated adaptation and recovery from various failure modes.
- There would be no global control at the operations level.

Other key issues that needed to be addressed were:


- Algorithms to prevent lost packets from permanently disabling communications and enabling them to be successfully retransmitted from the source.
- Providing for host-to-host "pipelining" so that multiple packets could be en route from source to destination at the discretion of the participating hosts, if the intermediate networks allowed it.
- Gateway functions to allow gateways to forward packets appropriately. This included interpreting IP headers for routing, handling interfaces, breaking packets into smaller pieces if necessary, etc.
- The need for end-end checksums, reassembly of packets from fragments and detection of duplicates, if any (a small checksum sketch follows this list).
- The need for global addressing.
- Techniques for host-to-host flow control.
- Interfacing with the various operating systems.

There were also other concerns, such as implementation efficiency and internetwork performance, but these were secondary considerations at first.
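The end-to-end checksum mentioned in the list above survives today as the 16-bit ones'-complement "Internet checksum" carried in IP, TCP and UDP headers. A minimal Python sketch along the lines of the RFC 1071 description (illustrative only, not a full protocol implementation):

def internet_checksum(data):
    # Pad odd-length data with a zero byte, sum the 16-bit big-endian words,
    # fold any carries back into 16 bits, then take the ones' complement.
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# A receiver that sums the data together with the transmitted checksum and
# complements the result gets 0 for an undamaged packet.
print(hex(internet_checksum(b"example payload")))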

Kahn began work on a communications-oriented set of operating system principles while at BBN and documented some of his early thoughts in an internal BBN memorandum entitled "Communications Principles for Operating Systems". At this point he realized it would be necessary to learn the implementation details of each operating system to have a chance to embed any new protocols in an efficient way. Thus, in the spring of 1973, after starting the

internetting effort, he asked Vint Cerf (then at Stanford) to work with him on the detailed design of the protocol. Cerf had been intimately involved in the original NCP design and development and already had the knowledge about interfacing to existing operating systems. So armed with Kahn's architectural approach to the communications side and with Cerf's NCP experience, they teamed up to spell out the details of what became TCP/IP. The give and take was highly productive and the first written version7 of the resulting approach was distributed at a special meeting of the International Network Working Group (INWG) which had been set up at a conference at Sussex University in September 1973. Cerf had been invited to chair this group and used the occasion to hold a meeting of INWG members who were heavily represented at the Sussex Conference. Some basic approaches emerged from this collaboration between Kahn and Cerf:

- Communication between two processes would logically consist of a very long stream of bytes (they called them octets). The position of any octet in the stream would be used to identify it.
- Flow control would be done by using sliding windows and acknowledgments (acks). The destination could select when to acknowledge and each ack returned would be cumulative for all packets received to that point.
- It was left open as to exactly how the source and destination would agree on the parameters of the windowing to be used. Defaults were used initially.
- Although Ethernet was under development at Xerox PARC at that time, the proliferation of LANs was not envisioned at the time, much less PCs and workstations. The original model was national level networks like ARPANET of which only a relatively small number were expected to exist. Thus a 32-bit IP address was used, of which the first 8 bits signified the network and the remaining 24 bits designated the host on that network (a small sketch of this split follows this list). This assumption, that 256 networks would be sufficient for the foreseeable future, was clearly in need of reconsideration when LANs began to appear in the late 1970s.
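A minimal sketch of that original 8-bit network / 24-bit host split (the address below is an arbitrary example, not one from the text):

def split_original_address(address):
    # 32-bit address: the top 8 bits named the network, the low 24 bits the host.
    network = address >> 24
    host = address & 0x00FFFFFF
    return network, host

addr = (10 << 24) | 66                 # host number 66 on network number 10
print(split_original_address(addr))    # (10, 66)
print(2 ** 8)                          # only 256 possible networks under this scheme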

The original Cerf/Kahn paper on the Internet described one protocol, called TCP, which provided all the transport and forwarding services in the Internet. Kahn had intended that the TCP protocol support a range of transport services, from the totally reliable sequenced delivery of data (virtual circuit model) to a datagram service in which the application made direct use of the underlying network service, which might imply occasional lost, corrupted or reordered packets. However, the initial effort to implement TCP resulted in a version that only allowed for virtual circuits. This model worked fine for file transfer and remote login applications, but some of the early work on advanced network applications, in particular packet voice in the 1970s, made clear that in some cases packet losses should not be corrected by TCP, but should be left to the application to deal with. This led to a reorganization of the original TCP into two protocols, the simple IP which provided only for addressing and forwarding of individual packets, and the separate TCP, which was concerned with service features such as flow control and recovery from lost packets. For those applications that did not want the services of TCP, an alternative called the User Datagram Protocol (UDP) was added in order to provide direct access to the basic service of IP. A major initial motivation for both the ARPANET and the Internet was resource sharing - for example allowing users on the packet radio networks to access the time sharing systems attached to the ARPANET. Connecting the two together was far more economical than duplicating these very expensive computers. However, while file transfer and remote login (Telnet) were very important applications, electronic mail has probably had the most significant impact of the

innovations from that era. Email provided a new model of how people could communicate with each other, and changed the nature of collaboration, first in the building of the Internet itself (as is discussed below) and later for much of society. There were other applications proposed in the early days of the Internet, including packet based voice communication (the precursor of Internet telephony), various models of file and disk sharing, and early "worm" programs that showed the concept of agents (and, of course, viruses). A key concept of the Internet is that it was not designed for just one application, but as a general infrastructure on which new applications could be conceived, as illustrated later by the emergence of the World Wide Web. It is the general purpose nature of the service provided by TCP and IP that makes this possible.
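That split between TCP's reliable byte stream and UDP's direct datagram access to IP is still visible in the ordinary sockets interface. A minimal Python sketch (host names and ports are placeholders chosen for illustration):

import socket

# TCP: a connected, reliable, ordered byte stream (the "virtual circuit" style).
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("example.org", 80))
tcp.sendall(b"HEAD / HTTP/1.0\r\nHost: example.org\r\n\r\n")
print(tcp.recv(200))        # TCP itself handles loss, ordering and retransmission
tcp.close()

# UDP: direct access to IP's best-effort datagram service; each sendto() is an
# independent packet that may be lost, duplicated or reordered.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello", ("127.0.0.1", 9999))
udp.close()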

Proving the Ideas


DARPA let three contracts to Stanford (Cerf), BBN (Ray Tomlinson) and UCL (Peter Kirstein) to implement TCP/IP (it was simply called TCP in the Cerf/Kahn paper but contained both components). The Stanford team, led by Cerf, produced the detailed specification and within about a year there were three independent implementations of TCP that could interoperate. This was the beginning of long term experimentation and development to evolve and mature the Internet concepts and technology. Beginning with the first three networks (ARPANET, Packet Radio, and Packet Satellite) and their initial research communities, the experimental environment has grown to incorporate essentially every form of network and a very broad-based research and development community. [REK78] With each expansion has come new challenges. The early implementations of TCP were done for large time sharing systems such as Tenex and TOPS 20. When desktop computers first appeared, it was thought by some that TCP was too big and complex to run on a personal computer. David Clark and his research group at MIT set out to show that a compact and simple implementation of TCP was possible. They produced an implementation, first for the Xerox Alto (the early personal workstation developed at Xerox PARC) and then for the IBM PC. That implementation was fully interoperable with other TCPs, but was tailored to the application suite and performance objectives of the personal computer, and showed that workstations, as well as large time-sharing systems, could be a part of the Internet. In 1976, Kleinrock published the first book on the ARPANET. It included an emphasis on the complexity of protocols and the pitfalls they often introduce. This book was influential in spreading the lore of packet switching networks to a very wide community. Widespread development of LANs, PCs and workstations in the 1980s allowed the nascent Internet to flourish. Ethernet technology, developed by Bob Metcalfe at Xerox PARC in 1973, is now probably the dominant network technology in the Internet, and PCs and workstations the dominant computers. This change from having a few networks with a modest number of time-shared hosts (the original ARPANET model) to having many networks has resulted in a number of new concepts and changes to the underlying technology. First, it resulted in the definition of three network classes (A, B, and C) to accommodate the range of networks. Class A represented large national scale networks (small number of networks with large numbers of hosts); Class B represented regional scale networks; and Class C represented local area networks (large number of networks with relatively few hosts).
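The class of an address was encoded in its leading bits, so software could tell from the first octet alone how the network/host split fell. A small Python sketch (the sample octets are arbitrary):

def address_class(first_octet):
    # The leading bit(s) of the first octet determined the class and the split.
    if first_octet < 128:
        return "A"   # 0xxxxxxx: 8-bit network, 24-bit host
    if first_octet < 192:
        return "B"   # 10xxxxxx: 16-bit network, 16-bit host
    if first_octet < 224:
        return "C"   # 110xxxxx: 24-bit network, 8-bit host
    return "D/E"     # multicast and experimental ranges

for octet in (10, 172, 192, 230):
    print(octet, address_class(octet))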

A major shift occurred as a result of the increase in scale of the Internet and its associated management issues. To make it easy for people to use the network, hosts were assigned names, so that it was not necessary to remember the numeric addresses. Originally, there were a fairly limited number of hosts, so it was feasible to maintain a single table of all the hosts and their associated names and addresses. The shift to having a large number of independently managed networks (e.g., LANs) meant that having a single table of hosts was no longer feasible, and the Domain Name System (DNS) was invented by Paul Mockapetris of USC/ISI. The DNS permitted a scalable distributed mechanism for resolving hierarchical host names (e.g. www.acm.org) into an Internet address.

The increase in the size of the Internet also challenged the capabilities of the routers. Originally, there was a single distributed algorithm for routing that was implemented uniformly by all the routers in the Internet. As the number of networks in the Internet exploded, this initial design could not expand as necessary, so it was replaced by a hierarchical model of routing, with an Interior Gateway Protocol (IGP) used inside each region of the Internet, and an Exterior Gateway Protocol (EGP) used to tie the regions together. This design permitted different regions to use a different IGP, so that different requirements for cost, rapid reconfiguration, robustness and scale could be accommodated. Not only the routing algorithm, but the size of the addressing tables, stressed the capacity of the routers. New approaches for address aggregation, in particular classless inter-domain routing (CIDR), have recently been introduced to control the size of router tables.

As the Internet evolved, one of the major challenges was how to propagate the changes to the software, particularly the host software. DARPA supported UC Berkeley to investigate modifications to the Unix operating system, including incorporating TCP/IP developed at BBN. Although Berkeley later rewrote the BBN code to more efficiently fit into the Unix system and kernel, the incorporation of TCP/IP into the Unix BSD system releases proved to be a critical element in dispersion of the protocols to the research community. Much of the CS research community began to use Unix BSD for their day-to-day computing environment. Looking back, the strategy of incorporating Internet protocols into a supported operating system for the research community was one of the key elements in the successful widespread adoption of the Internet.

One of the more interesting challenges was the transition of the ARPANET host protocol from NCP to TCP/IP as of January 1, 1983. This was a "flag-day" style transition, requiring all hosts to convert simultaneously or be left having to communicate via rather ad-hoc mechanisms. This transition was carefully planned within the community over several years before it actually took place and went surprisingly smoothly (but resulted in a distribution of buttons saying "I survived the TCP/IP transition"). TCP/IP had been adopted as a defense standard three years earlier, in 1980. This enabled defense to begin sharing in the DARPA Internet technology base and led directly to the eventual partitioning of the military and non-military communities. By 1983, ARPANET was being used by a significant number of defense R&D and operational organizations.
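The hierarchical name resolution that the DNS introduced (described at the start of this passage) is still what applications invoke whenever they look up a host. A minimal Python sketch using the standard library resolver:

import socket

# Resolve a hierarchical name (the article's own example) to addresses.
print(socket.gethostbyname("www.acm.org"))             # one IPv4 address

for family, _, _, _, sockaddr in socket.getaddrinfo("www.acm.org", 80):
    print(family, sockaddr)                             # every address the resolver returns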
The transition of ARPANET from NCP to TCP/IP permitted it to be split into a MILNET supporting operational requirements and an ARPANET supporting research needs. Thus, by 1985, Internet was already well established as a technology supporting a broad community of researchers and developers, and was beginning to be used by other communities for daily computer communications. Electronic mail was being used broadly across several

communities, often with different systems, but interconnection between different mail systems was demonstrating the utility of broad based electronic communications between people.

Transition to Widespread Infrastructure


At the same time that the Internet technology was being experimentally validated and widely used amongst a subset of computer science researchers, other networks and networking technologies were being pursued. The usefulness of computer networking - especially electronic mail - demonstrated by DARPA and Department of Defense contractors on the ARPANET was not lost on other communities and disciplines, so that by the mid-1970s computer networks had begun to spring up wherever funding could be found for the purpose. The U.S. Department of Energy (DoE) established MFENet for its researchers in Magnetic Fusion Energy, whereupon DoE's High Energy Physicists responded by building HEPNet. NASA Space Physicists followed with SPAN, and Rick Adrion, David Farber, and Larry Landweber established CSNET for the (academic and industrial) Computer Science community with an initial grant from the U.S. National Science Foundation (NSF). AT&T's free-wheeling dissemination of the UNIX computer operating system spawned USENET, based on UNIX' built-in UUCP communication protocols, and in 1981 Ira Fuchs and Greydon Freeman devised BITNET, which linked academic mainframe computers in an "email as card images" paradigm. With the exception of BITNET and USENET, these early networks (including ARPANET) were purpose-built - i.e., they were intended for, and largely restricted to, closed communities of scholars; there was hence little pressure for the individual networks to be compatible and, indeed, they largely were not. In addition, alternate technologies were being pursued in the commercial sector, including XNS from Xerox, DECNet, and IBM's SNA.8 It remained for the British JANET (1984) and U.S. NSFNET (1985) programs to explicitly announce their intent to serve the entire higher education community, regardless of discipline. Indeed, a condition for a U.S. university to receive NSF funding for an Internet connection was that "... the connection must be made available to ALL qualified users on campus." In 1985, Dennis Jennings came from Ireland to spend a year at NSF leading the NSFNET program. He worked with the community to help NSF make a critical decision - that TCP/IP would be mandatory for the NSFNET program. When Steve Wolff took over the NSFNET program in 1986, he recognized the need for a wide area networking infrastructure to support the general academic and research community, along with the need to develop a strategy for establishing such infrastructure on a basis ultimately independent of direct federal funding. Policies and strategies were adopted (see below) to achieve that end. NSF also elected to support DARPA's existing Internet organizational infrastructure, hierarchically arranged under the (then) Internet Activities Board (IAB). The public declaration of this choice was the joint authorship by the IAB's Internet Engineering and Architecture Task Forces and by NSF's Network Technical Advisory Group of RFC 985 (Requirements for Internet Gateways ), which formally ensured interoperability of DARPA's and NSF's pieces of the Internet. In addition to the selection of TCP/IP for the NSFNET program, Federal agencies made and implemented several other policy decisions which shaped the Internet of today.

Federal agencies shared the cost of common infrastructure, such as trans-oceanic circuits. They also jointly supported "managed interconnection points" for interagency traffic; the Federal Internet Exchanges (FIX-E and FIX-W) built for this purpose served as models for the Network Access Points and "*IX" facilities that are prominent features of today's Internet architecture. To coordinate this sharing, the Federal Networking Council9 was formed. The FNC also cooperated with other international organizations, such as RARE in Europe, through the Coordinating Committee on Intercontinental Research Networking, CCIRN, to coordinate Internet support of the research community worldwide.

This sharing and cooperation between agencies on Internet-related issues had a long history. An unprecedented 1981 agreement between Farber, acting for CSNET and the NSF, and DARPA's Kahn, permitted CSNET traffic to share ARPANET infrastructure on a statistical and no-metered-settlements basis. Subsequently, in a similar mode, the NSF encouraged its regional (initially academic) networks of the NSFNET to seek commercial, non-academic customers, expand their facilities to serve them, and exploit the resulting economies of scale to lower subscription costs for all. On the NSFNET Backbone - the national-scale segment of the NSFNET - NSF enforced an "Acceptable Use Policy" (AUP) which prohibited Backbone usage for purposes "not in support of Research and Education." The predictable (and intended) result of encouraging commercial network traffic at the local and regional level, while denying its access to national-scale transport, was to stimulate the emergence and/or growth of "private", competitive, long-haul networks such as PSI, UUNET, ANS CO+RE, and (later) others. This process of privately-financed augmentation for commercial uses was thrashed out starting in 1988 in a series of NSF-initiated conferences at Harvard's Kennedy School of Government on "The Commercialization and Privatization of the Internet" - and on the "com-priv" list on the net itself.

In 1988, a National Research Council committee, chaired by Kleinrock and with Kahn and Clark as members, produced a report commissioned by NSF titled "Towards a National Research Network". This report was influential on then Senator Al Gore, and ushered in high speed networks that laid the networking foundation for the future information superhighway. In 1994, a National Research Council report, again chaired by Kleinrock (and with Kahn and Clark as members again), entitled "Realizing The Information Future: The Internet and Beyond", was released. This report, commissioned by NSF, was the document in which a blueprint for the evolution of the information superhighway was articulated and which has had a lasting effect on the way we think about its evolution. It anticipated the critical issues of intellectual property rights, ethics, pricing, education, architecture and regulation for the Internet.

NSF's privatization policy culminated in April 1995, with the defunding of the NSFNET Backbone. The funds thereby recovered were (competitively) redistributed to regional networks to buy national-scale Internet connectivity from the now numerous, private, long-haul networks.

The backbone had made the transition from a network built from routers out of the research community (the "Fuzzball" routers from David Mills) to commercial equipment. In its 8 1/2 year lifetime, the Backbone had grown from six nodes with 56 kbps links to 21 nodes with multiple 45 Mbps links. It had seen the Internet grow to over 50,000 networks on all seven continents and outer space, with approximately 29,000 networks in the United States. Such was the weight of the NSFNET program's ecumenism and funding ($200 million from 1986 to 1995) - and the quality of the protocols themselves - that by 1990, when the ARPANET itself was finally decommissioned10, TCP/IP had supplanted or marginalized most other wide-area computer network protocols worldwide, and IP was well on its way to becoming THE bearer service for the Global Information Infrastructure.

The Role of Documentation


A key to the rapid growth of the Internet has been the free and open access to the basic documents, especially the specifications of the protocols. The beginnings of the ARPANET and the Internet in the university research community promoted the academic tradition of open publication of ideas and results. However, the normal cycle of traditional academic publication was too formal and too slow for the dynamic exchange of ideas essential to creating networks.

In 1969 a key step was taken by S. Crocker (then at UCLA) in establishing the Request for Comments (or RFC) series of notes. These memos were intended to be an informal, fast way to distribute and share ideas with other network researchers. At first the RFCs were printed on paper and distributed via snail mail. As the File Transfer Protocol (FTP) came into use, the RFCs were prepared as online files and accessed via FTP. Now, of course, the RFCs are easily accessed via the World Wide Web at dozens of sites around the world. SRI, in its role as Network Information Center, maintained the online directories. Jon Postel acted as RFC Editor as well as managing the centralized administration of required protocol number assignments, roles that he continued to play until his death, October 16, 1998.

The effect of the RFCs was to create a positive feedback loop, with ideas or proposals presented in one RFC triggering another RFC with additional ideas, and so on. When some consensus (or at least a consistent set of ideas) had come together, a specification document would be prepared. Such a specification would then be used as the base for implementations by the various research teams. Over time, the RFCs have become more focused on protocol standards (the "official" specifications), though there are still informational RFCs that describe alternate approaches, or provide background information on protocols and engineering issues. The RFCs are now viewed as the "documents of record" in the Internet engineering and standards community. The open access to the RFCs (for free, if you have any kind of a connection to the Internet) promotes the growth of the Internet because it allows the actual specifications to be used for examples in college classes and by entrepreneurs developing new systems.

Email has been a significant factor in all areas of the Internet, and that is certainly true in the development of protocol specifications, technical standards, and Internet engineering. The very early RFCs often presented a set of ideas developed by the researchers at one location to the rest of the community. After email came into use, the authorship pattern changed - RFCs were presented by joint authors with a common view, independent of their locations. Specialized email mailing lists have long been used in the development of protocol specifications, and they continue to be an important tool. The IETF now has in excess of 75 working groups, each working on a different aspect of Internet engineering. Each of these working groups has a mailing list to discuss one or more draft documents under development. When consensus is reached on a draft document it may be distributed as an RFC.

As the current rapid expansion of the Internet is fueled by the realization of its capability to promote information sharing, we should understand that the network's first role in information

sharing was sharing the information about its own design and operation through the RFC documents. This unique method for evolving new capabilities in the network will continue to be critical to future evolution of the Internet.
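Since the RFCs are now served over the Web as plain text, fetching one takes only a few lines. A sketch in Python; the URL layout below is an assumption about the RFC Editor's site, not something stated in this article:

from urllib.request import urlopen

# Fetch RFC 1034 (the DNS document mentioned earlier) as plain text.
# Assumed URL scheme: https://www.rfc-editor.org/rfc/rfc<number>.txt
with urlopen("https://www.rfc-editor.org/rfc/rfc1034.txt") as response:
    text = response.read().decode("utf-8", errors="replace")

print(text.splitlines()[0])   # show the opening line of the document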

Formation of the Broad Community


The Internet is as much a collection of communities as a collection of technologies, and its success is largely attributable to both satisfying basic community needs as well as utilizing the community in an effective way to push the infrastructure forward. This community spirit has a long history beginning with the early ARPANET. The early ARPANET researchers worked as a close-knit community to accomplish the initial demonstrations of packet switching technology described earlier. Likewise, the Packet Satellite, Packet Radio and several other DARPA computer science research programs were multi-contractor collaborative activities that heavily used whatever available mechanisms there were to coordinate their efforts, starting with electronic mail and adding file sharing, remote access, and eventually World Wide Web capabilities. Each of these programs formed a working group, starting with the ARPANET Network Working Group. Because of the unique role that ARPANET played as an infrastructure supporting the various research programs, as the Internet started to evolve, the Network Working Group evolved into Internet Working Group. In the late 1970s, recognizing that the growth of the Internet was accompanied by a growth in the size of the interested research community and therefore an increased need for coordination mechanisms, Vint Cerf, then manager of the Internet Program at DARPA, formed several coordination bodies - an International Cooperation Board (ICB), chaired by Peter Kirstein of UCL, to coordinate activities with some cooperating European countries centered on Packet Satellite research, an Internet Research Group which was an inclusive group providing an environment for general exchange of information, and an Internet Configuration Control Board (ICCB), chaired by Clark. The ICCB was an invitational body to assist Cerf in managing the burgeoning Internet activity. In 1983, when Barry Leiner took over management of the Internet research program at DARPA, he and Clark recognized that the continuing growth of the Internet community demanded a restructuring of the coordination mechanisms. The ICCB was disbanded and in its place a structure of Task Forces was formed, each focused on a particular area of the technology (e.g. routers, end-to-end protocols, etc.). The Internet Activities Board (IAB) was formed from the chairs of the Task Forces. It of course was only a coincidence that the chairs of the Task Forces were the same people as the members of the old ICCB, and Dave Clark continued to act as chair. After some changing membership on the IAB, Phill Gross became chair of a revitalized Internet Engineering Task Force (IETF), at the time merely one of the IAB Task Forces. As we saw above, by 1985 there was a tremendous growth in the more practical/engineering side of the Internet. This growth resulted in an explosion in the attendance at the IETF meetings, and Gross was compelled to create substructure to the IETF in the form of working groups. This growth was complemented by a major expansion in the community. No longer was DARPA the only major player in the funding of the Internet. In addition to NSFNet and the various US and international government-funded activities, interest in the commercial sector was beginning to grow. Also in 1985, both Kahn and Leiner left DARPA and there was a significant decrease in

Internet activity at DARPA. As a result, the IAB was left without a primary sponsor and increasingly assumed the mantle of leadership. The growth continued, resulting in even further substructure within both the IAB and IETF. The IETF combined Working Groups into Areas, and designated Area Directors. An Internet Engineering Steering Group (IESG) was formed of the Area Directors. The IAB recognized the increasing importance of the IETF, and restructured the standards process to explicitly recognize the IESG as the major review body for standards. The IAB also restructured so that the rest of the Task Forces (other than the IETF) were combined into an Internet Research Task Force (IRTF) chaired by Postel, with the old task forces renamed as research groups. The growth in the commercial sector brought with it increased concern regarding the standards process itself. Starting in the early 1980's and continuing to this day, the Internet grew beyond its primarily research roots to include both a broad user community and increased commercial activity. Increased attention was paid to making the process open and fair. This coupled with a recognized need for community support of the Internet eventually led to the formation of the Internet Society in 1991, under the auspices of Kahn's Corporation for National Research Initiatives (CNRI) and the leadership of Cerf, then with CNRI. In 1992, yet another reorganization took place. In 1992, the Internet Activities Board was reorganized and re-named the Internet Architecture Board operating under the auspices of the Internet Society. A more "peer" relationship was defined between the new IAB and IESG, with the IETF and IESG taking a larger responsibility for the approval of standards. Ultimately, a cooperative and mutually supportive relationship was formed between the IAB, IETF, and Internet Society, with the Internet Society taking on as a goal the provision of service and other measures which would facilitate the work of the IETF. The recent development and widespread deployment of the World Wide Web has brought with it a new community, as many of the people working on the WWW have not thought of themselves as primarily network researchers and developers. A new coordination organization was formed, the World Wide Web Consortium (W3C). Initially led from MIT's Laboratory for Computer Science by Tim Berners-Lee (the inventor of the WWW) and Al Vezza, W3C has taken on the responsibility for evolving the various protocols and standards associated with the Web. Thus, through the over two decades of Internet activity, we have seen a steady evolution of organizational structures designed to support and facilitate an ever-increasing community working collaboratively on Internet issues.

Commercialization of the Technology


Commercialization of the Internet involved not only the development of competitive, private network services, but also the development of commercial products implementing the Internet technology. In the early 1980s, dozens of vendors were incorporating TCP/IP into their products because they saw buyers for that approach to networking. Unfortunately they lacked both real information about how the technology was supposed to work and how the customers planned on using this approach to networking. Many saw it as a nuisance add-on that had to be glued on to their own proprietary networking solutions: SNA, DECNet, Netware, NetBios. The DoD had mandated the use of TCP/IP in many of its purchases but gave little help to the vendors regarding how to build useful TCP/IP products.

In 1985, recognizing this lack of information availability and appropriate training, Dan Lynch in cooperation with the IAB arranged to hold a three-day workshop for ALL vendors to come learn about how TCP/IP worked and what it still could not do well. The speakers came mostly from the DARPA research community who had both developed these protocols and used them in day-to-day work. About 250 vendor personnel came to listen to 50 inventors and experimenters. The results were surprises on both sides: the vendors were amazed to find that the inventors were so open about the way things worked (and what still did not work) and the inventors were pleased to listen to new problems they had not considered, but were being discovered by the vendors in the field. Thus a two-way discussion was formed that has lasted for over a decade.

After two years of conferences, tutorials, design meetings and workshops, a special event was organized that invited those vendors whose products ran TCP/IP well enough to come together in one room for three days to show off how well they all worked together and also ran over the Internet. In September of 1988 the first Interop trade show was born. 50 companies made the cut. 5,000 engineers from potential customer organizations came to see if it all did work as was promised. It did. Why? Because the vendors worked extremely hard to ensure that everyone's products interoperated with all of the other products - even with those of their competitors. The Interop trade show has grown immensely since then and today it is held in 7 locations around the world each year to an audience of over 250,000 people who come to learn which products work with each other in a seamless manner, learn about the latest products, and discuss the latest technology.

In parallel with the commercialization efforts that were highlighted by the Interop activities, the vendors began to attend the IETF meetings that were held 3 or 4 times a year to discuss new ideas for extensions of the TCP/IP protocol suite. Starting with a few hundred attendees mostly from academia and paid for by the government, these meetings now often exceed a thousand attendees, mostly from the vendor community and paid for by the attendees themselves. This self-selected group evolves the TCP/IP suite in a mutually cooperative manner. The reason it is so useful is that it is composed of all stakeholders: researchers, end users and vendors.

Network management provides an example of the interplay between the research and commercial communities. In the beginning of the Internet, the emphasis was on defining and implementing protocols that achieved interoperation. As the network grew larger, it became clear that the sometimes ad hoc procedures used to manage the network would not scale. Manual configuration of tables was replaced by distributed automated algorithms, and better tools were devised to isolate faults. In 1987 it became clear that a protocol was needed that would permit the elements of the network, such as the routers, to be remotely managed in a uniform way. Several protocols for this purpose were proposed, including the Simple Network Management Protocol or SNMP (designed, as its name would suggest, for simplicity, and derived from an earlier proposal called SGMP), HEMS (a more complex design from the research community) and CMIP (from the OSI community).
A series of meetings led to the decision that HEMS would be withdrawn as a candidate for standardization, in order to help resolve the contention, but that work on both SNMP and CMIP would go forward, with the idea that SNMP could be a more near-term solution and CMIP a longer-term approach. The market could choose the one it found more suitable. SNMP is now used almost universally for network-based management.
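The details of SNMP are beyond the scope of this history, but a small example may make the idea concrete. The sketch below issues a single SNMP GET for a device's standard sysDescr object - exactly the kind of uniform, remote read the protocol was designed for. It assumes the third-party pysnmp library, and the address 192.0.2.1 and the "public" community string are placeholders, not anything from the original text.

```python
# A minimal sketch, not any historical implementation: read one standard MIB
# object (sysDescr) from a managed device over SNMPv2c.
# Assumes the third-party pysnmp library; address and community are placeholders.
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
)

error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData("public", mpModel=1),           # SNMPv2c read-only community
        UdpTransportTarget(("192.0.2.1", 161)),       # the router or other agent
        ContextData(),
        ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
    )
)

if error_indication:                                   # e.g. timeout, unreachable agent
    print(error_indication)
else:
    for var_bind in var_binds:                         # OID = value pairs returned
        print(" = ".join(x.prettyPrint() for x in var_bind))
```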

In the last few years, we have seen a new phase of commercialization. Originally, commercial efforts mainly comprised vendors providing the basic networking products, and service providers offering the connectivity and basic Internet services. The Internet has now become almost a "commodity" service, and much of the latest attention has been on the use of this global information infrastructure for support of other commercial services. This has been tremendously accelerated by the widespread and rapid adoption of browsers and the World Wide Web technology, allowing users easy access to information linked throughout the globe. Products are available to facilitate the provisioning of that information and many of the latest developments in technology have been aimed at providing increasingly sophisticated information services on top of the basic Internet data communications.

History of the Future


On October 24, 1995, the FNC unanimously passed a resolution defining the term Internet. This definition was developed in consultation with members of the internet and intellectual property rights communities.

RESOLUTION: The Federal Networking Council (FNC) agrees that the following language reflects our definition of the term "Internet". "Internet" refers to the global information system that -- (i) is logically linked together by a globally unique address space based on the Internet Protocol (IP) or its subsequent extensions/follow-ons; (ii) is able to support communications using the Transmission Control Protocol/Internet Protocol (TCP/IP) suite or its subsequent extensions/follow-ons, and/or other IP-compatible protocols; and (iii) provides, uses or makes accessible, either publicly or privately, high level services layered on the communications and related infrastructure described herein.

The Internet has changed much in the two decades since it came into existence. It was conceived in the era of time-sharing, but has survived into the era of personal computers, client-server and peer-to-peer computing, and the network computer. It was designed before LANs existed, but has accommodated that new network technology, as well as the more recent ATM and frame switched services. It was envisioned as supporting a range of functions from file sharing and remote login to resource sharing and collaboration, and has spawned electronic mail and more recently the World Wide Web. But most important, it started as the creation of a small band of dedicated researchers, and has grown to be a commercial success with billions of dollars of annual investment.

One should not conclude that the Internet has now finished changing. The Internet, although a network in name and geography, is a creature of the computer, not the traditional network of the telephone or television industry. It will, indeed it must, continue to change and evolve at the speed of the computer industry if it is to remain relevant. It is now changing to provide new services such as real time transport, in order to support, for example, audio and video streams. The availability of pervasive networking (i.e., the Internet) along with powerful affordable computing and communications in portable form (i.e., laptop computers, two-way pagers, PDAs, cellular phones) is making possible a new paradigm of nomadic computing and communications. This evolution will bring us new applications - Internet telephone and, slightly further out, Internet television. It is evolving to permit more sophisticated forms of pricing and cost recovery, a perhaps painful requirement in this commercial world. It is changing to accommodate yet another generation of underlying network technologies with different characteristics and requirements, e.g. broadband residential access and satellites. New modes of access and new forms of service will spawn new applications, which in turn will drive further evolution of the net itself.

The most pressing question for the future of the Internet is not how the technology will change, but how the process of change and evolution itself will be managed. As this paper describes, the architecture of the Internet has always been driven by a core group of designers, but the form of that group has changed as the number of interested parties has grown. With the success of the Internet has come a proliferation of stakeholders - stakeholders now with an economic as well as an intellectual investment in the network. We now see, in the debates over control of the domain name space and the form of the next generation IP addresses, a struggle to find the next social structure that will guide the Internet in the future. The form of that structure will be harder to find, given the large number of concerned stakeholders. At the same time, the industry struggles to find the economic rationale for the large investment needed for the future growth, for example to upgrade residential access to a more suitable technology. If the Internet stumbles, it will not be because we lack for technology, vision, or motivation. It will be because we cannot set a direction and march collectively into the future.

An anecdotal history of the people and communities that brought about the Internet and the Web
(Last updated 24 March 2010)

A Brief History of the Internet by Walt Howe is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License. Based on a work at www.walthowe.com. You can also read this history in a Belarusian translation by Bohdan Zograf.
The Internet was the result of some visionary thinking by people in the early 1960s who saw great potential value in allowing computers to share information on research and development in scientific and military fields. J.C.R. Licklider of MIT first proposed a global network of computers in 1962, and moved over to the Defense Advanced Research Projects Agency (DARPA) in late 1962 to head the work to develop it. Leonard Kleinrock of MIT and later UCLA developed the theory of packet switching, which was to form the basis of Internet connections. Lawrence Roberts of MIT connected a Massachusetts computer with a California computer in 1965 over dial-up telephone lines. It showed the feasibility of wide area networking, but also showed that the telephone line's circuit switching was inadequate. Kleinrock's packet switching theory was confirmed. Roberts moved over to DARPA in 1966 and developed his plan for ARPANET. These visionaries and many more left unnamed here are the real founders of the Internet.

When the late Senator Ted Kennedy heard in 1968 that the pioneering Massachusetts company BBN had won the ARPA contract for an "interface message processor (IMP)," he sent a congratulatory telegram to BBN for their ecumenical spirit in winning the "interfaith message processor" contract.

The Internet, then known as ARPANET, was brought online in 1969 under a contract let by the renamed Advanced Research Projects Agency (ARPA) which initially connected four major computers at universities in the southwestern US (UCLA, Stanford Research Institute, UCSB, and the University of Utah). The contract was carried out by BBN of Cambridge, MA under Bob Kahn and went online in December 1969. By June 1970, MIT, Harvard, BBN, and Systems Development Corp (SDC) in Santa Monica, Cal. were added. By January 1971, Stanford, MIT's Lincoln Labs, Carnegie-Mellon, and Case-Western Reserve U were added. In months to come, NASA/Ames, Mitre, Burroughs, RAND, and the U of Illinois plugged in. After that, there were far too many to keep listing here.

Who was the first to use the Internet?


Charley Kline at UCLA sent the first packets on ARPANet as he tried to connect to Stanford Research Institute on Oct 29, 1969. The system crashed as he reached the G in LOGIN!

The Internet was designed in part to provide a communications network that would work even if some of the sites were destroyed by nuclear attack. If the most direct route was not available, routers would direct traffic around the network via alternate routes. The early Internet was used by computer experts, engineers, scientists, and librarians. There was nothing friendly about it. There were no home or office personal computers in those days, and anyone who used it, whether a computer professional or an engineer or scientist or librarian, had to learn to use a very complex system.
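To illustrate the rerouting idea in miniature (this is a toy sketch, not any actual ARPANET routing algorithm), the snippet below builds a four-node topology, finds a path with a breadth-first search, then repeats the search with one node marked as down and finds an alternate route. The node names are purely illustrative.

```python
from collections import deque

# A toy topology; each key lists its directly linked neighbours (illustrative only).
LINKS = {
    "UCLA": ["SRI", "UCSB"],
    "SRI":  ["UCLA", "UTAH"],
    "UCSB": ["UCLA", "UTAH"],
    "UTAH": ["SRI", "UCSB"],
}

def route(src, dst, down=frozenset()):
    """Breadth-first search for any path from src to dst that avoids 'down' nodes."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in LINKS[path[-1]]:
            if nxt not in seen and nxt not in down:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None                                   # no surviving route

print(route("UCLA", "UTAH"))                      # ['UCLA', 'SRI', 'UTAH']
print(route("UCLA", "UTAH", down={"SRI"}))        # ['UCLA', 'UCSB', 'UTAH'] - rerouted
```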

Did Al Gore invent the Internet?


According to a CNN transcript of an interview with Wolf Blitzer, Al Gore said, "During my service in the United States Congress, I took the initiative in creating the Internet." Al Gore was not yet in Congress in 1969 when ARPANET started or in 1974 when the term Internet first came into use. Gore was elected to Congress in 1976. In fairness, Bob Kahn and Vint Cerf acknowledge in a paper titled Al Gore and the Internet that Gore has probably done more than any other elected official to support the growth and development of the Internet from the 1970's to the present.

E-mail was adapted for ARPANET by Ray Tomlinson of BBN in 1972. He picked the @ symbol from the available symbols on his teletype to link the username and address. The telnet protocol, enabling logging on to a remote computer, was published as a Request for Comments (RFC) in 1972. RFC's are a means of sharing developmental work throughout the community. The ftp protocol, enabling file transfers between Internet sites, was published as an RFC in 1973, and from then on RFC's were available electronically to anyone who had use of the ftp protocol.

Libraries began automating and networking their catalogs in the late 1960s independent from ARPA. The visionary Frederick G. Kilgour of the Ohio College Library Center (now OCLC, Inc.) led networking of Ohio libraries during the '60s and '70s. In the mid 1970s more regional consortia from New England, the Southwest states, and the Middle Atlantic states, etc., joined with Ohio to form a national, later international, network. Automated catalogs, not very user-friendly at first, became available to the world, first through telnet or the awkward IBM variant TN3270 and only many years later, through the web. See The History of OCLC.
Ethernet, a protocol for many local networks, appeared in 1974, an outgrowth of Harvard student Bob Metcalfe's dissertation on "Packet Networks." The dissertation was initially rejected by the University for not being analytical enough. It later won acceptance when he added some more equations to it.

The Internet matured in the 70's as a result of the TCP/IP architecture first proposed by Bob Kahn at BBN and further developed by Kahn and Vint Cerf at Stanford and others throughout the 70's. It was adopted by the Defense Department in 1980, replacing the earlier Network Control Protocol (NCP), and universally adopted by 1983.

The Unix to Unix Copy Protocol (UUCP) was invented in 1978 at Bell Labs. Usenet was started in 1979 based on UUCP. Newsgroups, which are discussion groups focusing on a topic, followed, providing a means of exchanging information throughout the world. While Usenet is not considered part of the Internet, since it does not share the use of TCP/IP, it linked unix systems around the world, and many Internet sites took advantage of the availability of newsgroups. It was a significant part of the community building that took place on the networks.

Similarly, BITNET (Because It's Time Network) connected IBM mainframes around the educational community and the world to provide mail services beginning in 1981. Listserv software was developed for this network and later others. Gateways were developed to connect BITNET with the Internet and allowed exchange of e-mail, particularly for e-mail discussion lists. These listservs and other forms of e-mail discussion lists formed another major element in the community building that was taking place.

In 1986, the National Science Foundation funded NSFNet as a cross country 56 Kbps backbone for the Internet. They maintained their sponsorship for nearly a decade, setting rules for its noncommercial government and research uses. As the commands for e-mail, FTP, and telnet were standardized, it became a lot easier for nontechnical people to learn to use the nets. It was not easy by today's standards by any means, but it did open up use of the Internet to many more people, in universities in particular. Other departments besides the libraries, computer, physics, and engineering departments found ways to make good use of the nets--to communicate with colleagues around the world and to share files and resources.

While the number of sites on the Internet was small, it was fairly easy to keep track of the resources of interest that were available. But as more and more universities and organizations--and their libraries--connected, the Internet became harder and harder to track. There was more and more need for tools to index the resources that were available.

The first effort, other than library catalogs, to index the Internet was created in 1989, as Peter Deutsch and his crew at McGill University in Montreal created an archiver for ftp sites, which they named Archie. This software would periodically reach out to all known openly available ftp sites, list their files, and build a searchable index of the software. The commands to search Archie were unix commands, and it took some knowledge of unix to use it to its full capability.
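The core idea behind Archie - periodically pull directory listings from anonymous ftp sites and keep a searchable index of filenames - can be sketched in a few lines. This is only a conceptual illustration: the site names and the /pub directory below are placeholders, and Python's standard ftplib stands in for whatever Archie actually used.

```python
# A conceptual sketch of Archie-style indexing, not the original software.
import ftplib

SITES = ["ftp.example.org", "ftp.example.net"]   # placeholder anonymous ftp sites

index = {}                                        # filename -> [(site, path), ...]

for site in SITES:
    try:
        ftp = ftplib.FTP(site, timeout=30)
        ftp.login()                               # anonymous login
        for path in ftp.nlst("/pub"):             # list one well-known directory
            name = path.rsplit("/", 1)[-1]
            index.setdefault(name, []).append((site, path))
        ftp.quit()
    except ftplib.all_errors:
        continue                                  # skip unreachable sites

def search(term):
    """Return every (site, path) whose filename contains the search term."""
    term = term.lower()
    return [loc for name, locs in index.items() if term in name.lower()
            for loc in locs]

# e.g. search("gnu") -> all indexed locations of files with "gnu" in the name
```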

McGill University, which hosted the first Archie, found out one day that half the Internet traffic going into Canada from the United States was accessing Archie. Administrators were concerned that the University was subsidizing such a volume of traffic, and closed down Archie to outside access. Fortunately, by that time, there were many more Archies available.

At about the same time, Brewster Kahle, then at Thinking Machines Corp., developed his Wide Area Information Server (WAIS), which would index the full text of files in a database and allow searches of the files. Several versions with varying degrees of complexity and capability were developed, but the simplest of these were made available to everyone on the nets. At its peak, Thinking Machines maintained pointers to over 600 databases around the world which had been indexed by WAIS. They included such things as the full set of Usenet Frequently Asked Questions files, the full documentation of working papers such as RFC's by those developing the Internet's standards, and much more. Like Archie, its interface was far from intuitive, and it took some effort to learn to use it well.

Peter Scott of the University of Saskatchewan, recognizing the need to bring together information about all the telnet-accessible library catalogs on the web, as well as other telnet resources, brought out his Hytelnet catalog in 1990. It gave a single place to get information about library catalogs and other telnet resources and how to use them. He maintained it for years, and added HyWebCat in 1997 to provide information on web-based catalogs.

In 1991, the first really friendly interface to the Internet was developed at the University of Minnesota. The University wanted to develop a simple menu system to access files and information on campus through their local network. A debate followed between mainframe adherents and those who believed in smaller systems with client-server architecture. The mainframe adherents "won" the debate initially, but since the client-server advocates said they could put up a prototype very quickly, they were given the go-ahead to do a demonstration system. The demonstration system was called a gopher after the U of Minnesota mascot--the golden gopher. The gopher proved to be very prolific, and within a few years there were over 10,000 gophers around the world. It took no knowledge of unix or computer architecture to use: in a gopher system, you typed or clicked on a number to select the menu selection you wanted.

Gopher's usability was enhanced much more when the University of Nevada at Reno developed the VERONICA searchable index of gopher menus. It was purported to be an acronym for Very Easy Rodent-Oriented Netwide Index to Computerized Archives. A spider crawled gopher menus around the world, collecting links and retrieving them for the index. It was so popular that it was very hard to connect to, even though a number of other VERONICA sites were developed to ease the load. Similar indexing software was developed for single sites, called JUGHEAD (Jonzy's Universal Gopher Hierarchy Excavation And Display).
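The Gopher protocol that these menus and crawlers relied on was itself very simple: a client opens TCP port 70, sends a selector string, and receives a menu of tab-separated lines (an item-type character and display string, then the selector, host, and port), terminated by a line containing a single dot. A minimal client sketch follows; gopher.example.org is a placeholder host name, not a server mentioned in this article.

```python
# A minimal Gopher menu fetcher; gopher.example.org is a placeholder host.
import socket

def gopher_menu(host, selector="", port=70):
    """Fetch one Gopher menu and return (type, title, selector, host, port) tuples."""
    with socket.create_connection((host, port), timeout=30) as sock:
        sock.sendall(selector.encode("ascii") + b"\r\n")   # request a selector
        chunks = []
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            chunks.append(chunk)
    items = []
    for line in b"".join(chunks).decode("latin-1").splitlines():
        if not line:
            continue
        if line == ".":                    # a lone dot ends the menu
            break
        fields = line[1:].split("\t")      # display string, selector, host, port
        if len(fields) >= 4:
            items.append((line[0], fields[0], fields[1], fields[2], fields[3]))
    return items

# for kind, title, sel, hst, prt in gopher_menu("gopher.example.org"):
#     print(kind, title)
```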
Peter Deutsch, who developed Archie, always insisted that Archie was short for Archiver, and had nothing to do with the comic strip. He was disgusted when VERONICA and JUGHEAD appeared.

In 1989 another significant event took place in making the nets easier to use. Tim Berners-Lee and others at the European Laboratory for Particle Physics, more popularly known as CERN, proposed a new protocol for information distribution. This protocol, which became the World Wide Web in 1991, was based on hypertext--a system of embedding links in text to link to other text, which you have been using every time you selected a text link while reading these pages. Although started before gopher, it was slower to develop.

The development in 1993 of the graphical browser Mosaic by Marc Andreessen and his team at the National Center for Supercomputing Applications (NCSA) gave the protocol its big boost. Later, Andreessen moved to become the brains behind Netscape Corp., which produced the most successful graphical browser and server until Microsoft declared war and developed its Microsoft Internet Explorer.

MICHAEL DERTOUZOS 1936-2001


The early days of the web were a confused period as many developers tried to put their personal stamp on ways the web should develop. The web was threatened with becoming a mass of unrelated protocols that would require different software for different applications. The visionary Michael Dertouzos of MIT's Laboratory for Computer Science persuaded Tim Berners-Lee and others to form the World Wide Web Consortium in 1994 to promote and develop standards for the Web. Proprietary plug-ins still abound for the web, but the Consortium has ensured that there are common standards present in every browser. Read Tim Berners-Lee's tribute to Michael Dertouzos.

Since the Internet was initially funded by the government, it was originally limited to research, education, and government uses. Commercial uses were prohibited unless they directly served the goals of research and education. This policy continued until the early 90's, when independent commercial networks began to grow. It then became possible to route traffic across the country from one commercial site to another without passing through the government funded NSFNet Internet backbone.

Delphi was the first national commercial online service to offer Internet access to its subscribers. It opened up an email connection in July 1992 and full Internet service in November 1992. All pretenses of limitations on commercial use disappeared in May 1995 when the National Science Foundation ended its sponsorship of the Internet backbone, and all traffic relied on commercial networks. AOL, Prodigy, and CompuServe came online. Since commercial usage was so widespread by this time and educational institutions had been paying their own way for some time, the loss of NSF funding had no appreciable effect on costs. Today, NSF funding has moved beyond supporting the backbone and higher educational institutions to building the K-12 and local public library accesses on the one hand, and the research on the massive high volume connections on the other.

Microsoft's full scale entry into the browser, server, and Internet Service Provider market completed the major shift over to a commercially based Internet. The release of Windows 98 in June 1998 with the Microsoft browser well integrated into the desktop shows Bill Gates' determination to capitalize on the enormous growth of the Internet. Microsoft's success over the past few years has brought court challenges to their dominance. We'll leave it up to you whether you think these battles should be played out in the courts or the marketplace.

During this period of enormous growth, businesses entering the Internet arena scrambled to find economic models that work. Free services supported by advertising shifted some of the direct costs away from the consumer--temporarily. Services such as Delphi offered free web pages, chat rooms, and message boards for community building. Online sales have grown rapidly for such products as books and music CDs and computers, but the profit margins are slim when price comparisons are so easy, and public trust in online security is still shaky. Business models that have worked well are portal sites, which try to provide everything for everybody, and live auctions. AOL's acquisition of Time-Warner was the largest merger in history when it took place and shows the enormous growth of Internet business! The stock market has had a rocky ride, swooping up and down as the new technology companies, the dot-coms, encountered good news and bad. The decline in advertising income spelled doom for many dot-coms, and a major shakeout followed as the survivors searched for better business models.

A current trend with major implications for the future is the growth of high speed connections. 56K modems and the providers who supported them spread widely for a while, but this is the low end now. 56K is not fast enough to carry multimedia, such as sound and video, except in low quality. New technologies many times faster, such as cable modems and digital subscriber lines (DSL), are predominant now. Wireless has grown rapidly in the past few years, and travellers search for the wi-fi "hot spots" where they can connect while they are away from the home or office. Many airports, coffee bars, hotels and motels now routinely provide these services, some for a fee and some for free. A next big growth area is the surge towards universal wireless access, where almost everywhere is a "hot spot". Municipal wi-fi or city-wide access, WiMAX offering broader ranges than wi-fi, EV-DO, 4G, and other formats will joust for dominance in the USA in the years ahead. The battle is both economic and political.

Another trend that is rapidly affecting web designers is the growth of smaller devices to connect to the Internet. Small tablets, pocket PCs, smart phones, ebooks, game machines, and even GPS devices are now capable of tapping into the web on the go, and many web pages are not designed to work on that scale.

As the Internet has become ubiquitous, faster, and increasingly accessible to non-technical communities, social networking and collaborative services have grown rapidly, enabling people to communicate and share interests in many more ways. Sites like Facebook, Twitter, LinkedIn, YouTube, Flickr, Second Life, delicious, blogs, wikis, and many more let people of all ages rapidly share their interests of the moment with others everywhere. As Heraclitus said in the 5th century BC, "Nothing is permanent, but change!"
May you live in interesting times! (ostensibly an ancient Chinese curse)

For more information on Internet history, visit these sites:


Hobbes' Internet Timeline, 1993-2010, by Robert H Zakon. Significant dates in the history of the Internet.
A Brief History of the Internet from the Internet Society. Written by some of those who made it happen.
