
Most people will say Charles Babbage, but the real pioneer in practical computing as we know it today was

Konrad Zuse. Zuse is largely unknown in North America but is a celebrated computer pioneer in his native Germany. Zuse developed functioning program-controlled computing machinery as early as 1936 and went on to form a successful European computer business in the 1950s. Today, throughout the world, Konrad Zuse is almost unanimously accepted as the inventor and creator of the first freely programmable computer with a binary floating-point and switching/circuit system that really worked. This machine, called the Z3, was completed in his small workshop in Berlin-Kreuzberg in 1941. Konrad Zuse first started to consider the logical and technical principles of computers as far back as 1934, when he was still a student. He also created the world's first programming language (1942-1945/46), which he called the Plankalkül. In the past, scientists and engineers had many discussions about the components of a computer and who should be accepted as the true inventor of the computer. At the International Conference on the History of Computing (August 14-18, 1998), there was a panel session in which scientists discussed the question: who is the inventor of the computer? After a discussion lasting one and a half hours, the great majority named Konrad Zuse as the most admired computer pioneer.

Source(s):

I work with computers

Konrad Zuse
From Wikipedia, the free encyclopedia

Konrad Zuse
Konrad Zuse in 1992
Born: 22 June 1910, Berlin, German Empire
Died: 18 December 1995 (aged 85), Hünfeld, Germany
Residence: Germany
Fields: Computer science
Institutions: Aerodynamic Research Institute
Alma mater: Technical University of Berlin
Known for: Z3, Z4, Plankalkül, Calculating Space (cf. digital physics)
Notable awards: Werner von Siemens Ring (1964), Harry H. Goode Memorial Award (1965, together with George Stibitz), Great Cross of Merit (1972), Computer History Museum Fellow Award (1999)

Konrad Zuse (German pronunciation: [ˈkɔnʁat ˈtsuːzə]; 1910-1995) was a German civil engineer and computer pioneer. His greatest achievement was the world's first functional program-controlled Turing-complete computer, the Z3, which became operational in May 1941. Zuse was also noted for the S2 computing machine, considered the first process-controlled computer. He founded one of the earliest computer businesses in 1941, producing the Z4, which became the world's first commercial computer. In 1946, he designed the first high-level programming language, Plankalkül.[1] In 1969, Zuse suggested the concept of a computation-based universe in his book Rechnender Raum (Calculating Space). Much of his early work was financed by his family and commerce, but after 1939 he was given resources by the Nazi German government.[2] Due to World War II, Zuse's work went largely unnoticed in the United Kingdom and the United States. Possibly his first documented influence on a US company was IBM's option on his patents in 1946. There is a replica of the Z3, as well as the original Z4, in the Deutsches Museum in Munich. The Deutsches Technikmuseum in Berlin has an exhibition devoted to Zuse, displaying twelve of his machines, including a replica of the Z1 and several of Zuse's paintings.

What is an Operating System?


Not all computers have operating systems. The computer that controls the microwave oven in your kitchen, for example, doesn't need an operating system. It has one set of tasks to perform, very straightforward input to expect (a numbered keypad and a few pre-set buttons) and simple, never-changing hardware to control. For a computer like this, an operating system would be unnecessary baggage, driving up the development and manufacturing costs significantly and adding complexity where none is required. Instead, the computer in a microwave oven simply runs a single hard-wired program all the time.

What is an Operating System?

An operating system is the most important software that runs on a computer. It manages the computer's memory, processes, and all of its software and hardware. It also allows you to communicate with the computer without knowing how to speak the computer's "language." Without an operating system, a computer is useless.

Operating systems
Operating system ABCs
An operating system, or OS, is a software program that enables the computer hardware to communicate and operate with the computer software. Without a computer operating system, a computer would be useless.

Operating system types
As computers have progressed and developed, so have the operating systems. Below is a basic list of the different types of operating systems and a few examples of operating systems that fall into each category. Many computer operating systems will fall into more than one of the categories below.

GUI - Short for Graphical User Interface, a GUI operating system contains graphics and icons and is commonly navigated by using a computer mouse. See the GUI definition for a complete definition. Examples of GUI operating systems: System 7.x, Windows 98, Windows CE.

Multi-user - A multi-user operating system allows multiple users to use the same computer at the same time and at different times. See the multi-user definition for a complete definition. Examples of multi-user operating systems: Linux, Unix, Windows 2000.

Multiprocessing - An operating system capable of supporting and utilizing more than one computer processor. Examples of multiprocessing operating systems: Linux, Unix, Windows 2000.

Multitasking - An operating system that is capable of allowing multiple software processes to run at the same time. Examples of multitasking operating systems: Unix, Windows 2000.

Multithreading - Operating systems that allow different parts of a software program to run concurrently. Operating systems that fall into this category: Linux, Unix, Windows 2000.

Troubleshooting
Common questions and answers about operating systems in general can be found in the operating system Q&A section. Questions relating to a particular operating system can be found on that operating system's page: Linux and variants, MacOS, MS-DOS, IBM OS/2 Warp, Unix and variants, Windows CE, Windows 3.x, Windows 95, Windows 98, Windows 98 SE, Windows ME, Windows NT, Windows 2000, Windows XP, Windows Vista, Windows 7.

Operating system listing
Below is a listing of many of the different operating systems available today, the platforms they have been developed for, and who developed them.

Operating system - Platform - Developer
AIX and AIXL - Various - IBM
AmigaOS - Amiga - Commodore
BSD - Various - BSD
Caldera Linux - Various - SCO
Corel Linux - Various - Corel
Debian Linux - Various - GNU
DUnix - Various - Digital
DYNIX/ptx - Various - IBM
HP-UX - Various - Hewlett Packard
IRIX - Various - SGI
Kondara Linux - Various - Kondara
Linux - Various - Linus Torvalds
MAC OS 8 - Apple Macintosh - Apple
MAC OS 9 - Apple Macintosh - Apple
MAC OS 10 / MAC OS X - Apple Macintosh - Apple
Mandrake Linux - Various - Mandrake
MINIX - Various - MINIX
MS-DOS 1.x - IBM - Microsoft
MS-DOS 2.x - IBM - Microsoft
MS-DOS 3.x - IBM - Microsoft
MS-DOS 4.x - IBM - Microsoft
MS-DOS 5.x - IBM - Microsoft
MS-DOS 6.x - IBM - Microsoft
NEXTSTEP - Various - Apple
OSF/1 - Various - OSF
QNX - Various - QNX
Red Hat Linux - Various - Red Hat
SCO - Various - SCO
Slackware Linux - Various - Slackware
Sun Solaris - Various - Sun
SuSE Linux - Various - SuSE
System 1 - Apple Macintosh - Apple
System 2 - Apple Macintosh - Apple
System 3 - Apple Macintosh - Apple
System 4 - Apple Macintosh - Apple
System 6 - Apple Macintosh - Apple
System 7 - Apple Macintosh - Apple
System V - Various - System V
Tru64 Unix - Various - Digital
Turbolinux - Various - Turbolinux
Ultrix - Various - Ultrix
Unisys - Various - Unisys
Unix - Various - Bell Labs
UnixWare - Various - UnixWare
VectorLinux - Various - VectorLinux
Windows 2000 - IBM - Microsoft
Windows 2003 - IBM - Microsoft
Windows 3.X - IBM - Microsoft
Windows 7 - IBM - Microsoft
Windows 95 - IBM - Microsoft
Windows 98 - IBM - Microsoft
Windows CE - PDA - Microsoft
Windows ME - IBM - Microsoft
Windows NT - IBM - Microsoft
Windows Vista - IBM - Microsoft
Windows XP - IBM - Microsoft
Xenix - Various - Microsoft

An operating system (OS) is a set of programs that manage computer hardware resources and provide common services for application software. The operating system is a vital component of the system software in a computer system. Application programs require an operating system to function; the two are usually separate programs, but can be combined in simple systems. Time-sharing operating systems schedule tasks for efficient use of the system and may also include accounting for cost allocation of processor time, mass storage, printing, and other resources. For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between application programs and the computer hardware,[1][2] although the application code is usually executed directly by the hardware and will frequently make a system call to an OS function or be interrupted by it. Operating systems are found on almost any device that contains a computer, from cellular phones and video game consoles to supercomputers and web servers. Examples of popular modern operating systems include Android, BSD, iOS, Linux, Mac OS X, Microsoft Windows,[3] Windows Phone, and IBM z/OS. All of these, except Windows and z/OS, have roots in Unix.
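As a rough illustration of the OS acting as an intermediary, here is a minimal Python sketch (the file name and message are invented for the example): each call below is translated by the language runtime into a system call that the operating system services on the program's behalf.

```python
import os

# Ask the OS to create/open a file; the OS checks permissions and hands
# back a file descriptor (a small integer handle that the OS manages).
fd = os.open("example.txt", os.O_WRONLY | os.O_CREAT, 0o644)

# Ask the OS to write bytes to whatever device the file lives on.
os.write(fd, b"Hello from user space\n")

# Release the descriptor so the OS can reclaim it.
os.close(fd)

# Ask the OS which process ID it assigned to this program.
print("The OS gave this process the ID:", os.getpid())
```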

Central Processing Unit (CPU)


By Tim Fisher, About.com Guide

Intel Xeon E3-1200 CPU (Front and Back). Image: Intel.

What is a CPU?:
The Central Processing Unit (CPU) is responsible for interpreting and executing most of the commands from the computer's hardware and software. The CPU could be considered the "brains" of the computer.

The CPU is Also Known As:


processor, computer processor, microprocessor, central processor, "the brains of the computer"

Important CPU Facts:

Not all central processing units have pins on their bottom sides, but in the ones that do, the pins are easily bent. Take great care when handling, especially when installing onto the motherboard. Each motherboard supports only a certain range of CPU types so always check with your motherboard manufacturer before making a purchase.

Popular CPU Manufacturers:


Intel, AMD

CPU Description:
A modern CPU is usually small and square with many short, rounded, metallic connectors on its underside. Some older CPUs have pins instead of metallic connectors. The CPU attaches directly to a CPU "socket" (or sometimes a "slot") on the motherboard. The CPU is inserted into the socket pin-side down, and a small lever helps to secure the processor. After running even a short while, modern CPUs can get very hot. To help dissipate this heat, it is necessary to attach a heat sink and a fan directly on top of the CPU. Typically, these come bundled with a CPU purchase. Other, more advanced cooling options are also available, including water cooling kits and phase-change units.

Central Processing Unit


The Central Processing Unit (CPU) is the "brain" of the computer; it is the "compute" in computer. Without the CPU, you have no computer. AMD, IBM, Intel, Motorola, SGI and Sun are just a few of the companies that make most of the CPUs used for various kinds of computers, including home desktops, office computers, mainframes and supercomputers. Computer CPUs (processors) are composed of thin layers of thousands of transistors. Transistors are tiny, nearly microscopic bits of material that will block electricity at one voltage (they are a non-conductor and "resist" the flow of electricity) and permit electricity to pass through them at a different voltage (the material loses its resistance to electricity and becomes a conductor). The ability of these materials (called semiconductors) to transition from a non-conducting to a conducting state allows them to take two electrical inputs and produce a different output only when one or both inputs are switched on. A computer CPU is composed of millions (and soon billions) of transistors. Because CPUs are so small, they are often referred to as microprocessors. So, the terms processor, microprocessor and CPU are interchangeable. Modern CPUs are what are called "integrated chips". The idea behind an integrated chip is that several types of components are integrated into a single piece of silicon (a single CPU), such as one or more execution cores, an arithmetic logic unit (ALU), registers, instruction memory, cache memory and the input/output controller (bus controller).

Each transistor receives a set of inputs and produces an output. When one or more of the inputs receive electricity, the combined charge changes the state of the transistor internally, and you get a result out the other side. This simple effect of the transistor is what makes it possible for the computer to count and perform logical operations, all of which we call processing. A modern computer's CPU usually contains an execution core with two or more instruction pipelines, a data and address bus, a dedicated arithmetic logic unit (ALU, also called the math co-processor), and in some cases special high-speed memory for caching program instructions from RAM.

The CPUs in most PCs and servers are general-purpose integrated chips composed of several smaller dedicated-purpose components which together create the processing capabilities of the modern computer. For example, Intel makes the Pentium, while AMD makes the Athlon and the Duron (which has no memory cache).

Generations
From time to time, CPU manufacturers engineer new ways to do processing that require significant re-engineering of the current chip design. When they create a new design that changes the number of bits the chip can handle, or some other major way in which the chip performs its job, they are creating a new generation of processors. As of the time this tutorial was last updated (2008), there were seven generations of chips, with an eighth on the drawing board.

CPU Components
A lot of components go into building a modern computer processor and just what goes in changes with every generation as engineers and scientists find new, more efficient ways to do old tasks.

- Execution Core(s)
- Data Bus
- Address Bus
- Math Co-processor
- Instruction sets / Microcode
- Multimedia extensions
- Registers
- Flags
- Pipelining
- Memory Controller
- Cache Memory (L1, L2 and L3)

Measuring Speed: Bits, Cycles and Execution Cores

Bit Width
The first way of describing a processor is to say how many bits it processes in a single instruction or transports across the processor's internal bus in a single cycle (not exactly correct, but close enough). The number of bits used in the CPU's instructions and registers and how many bits the buses can transfer simultaneously is usually expressed in multiples of 8 bits. It is possible for the registers and the bus to have different sizes. Current chip designs are 64 bit chips (as of 2008). More bits usually means more processing capability and more speed.

Clock Cycles

The second way of describing a processor is to say how many cycles per second the chip operates at. This is how many times per second a charge of electricity passes through the chip. The more cycles, the faster the processor. Currently, chips operate in the billions of cycles per second range. When you're talking about billions of anything in computer terms, you're talking about "giga" something. When you're talking about how many cycles per second, you're talking about "hertz". Putting the two together, you get gigahertz. More clock cycles usually means more processing capability and more speed.

Execution Cores
The third way of describing a processor is to say how many execution cores are in the chip. The most advanced chips today have eight execution cores. More execution cores means you can get more work done at the same time, but it doesn't necessarily mean a single program will run faster. To put it another way, a processor with one execution core might be able to run your MP3 music, your web browser and a graphics program before it slows down enough that running more programs isn't worth it. A system with an 8-core processor could run all that plus ten more applications without even seeming to slow down (of course, this assumes you have enough RAM to load all of this software at the same time). More execution cores means more processing capability, but not necessarily more speed. As of 2008, the most advanced processors available are 64-bit processors with 8 cores, running as fast as 3-4 gigahertz. Intel has released quad-core 64-bit chips, as has AMD.
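A hedged sketch of the "more cores, more simultaneous work" point, using Python's standard multiprocessing module (the worker function and task sizes are invented for the example): a pool spreads independent tasks across however many logical cores the machine reports, which speeds up a batch of jobs without making any single job faster.

```python
import multiprocessing as mp

def busy_sum(n):
    # A CPU-bound task: sum the first n integers the slow way.
    total = 0
    for i in range(n):
        total += i
    return total

if __name__ == "__main__":
    cores = mp.cpu_count()
    print("This machine reports", cores, "logical cores")
    # Each task runs in its own process, so separate cores can work in parallel.
    with mp.Pool(processes=cores) as pool:
        results = pool.map(busy_sum, [2_000_000] * cores)
    print("Per-task results:", results[:2], "...")
```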
Multi-Processor Computers

And if you're still needing more processing power, some computers are designed to run more than one processor chip at the same time. Many companies that manufacture servers make models that accept two, four, eight, sixteen, or even thirty-two processors in a single chassis. The biggest supercomputers run hundreds of thousands of quad-core processors in parallel to do major calculations for such applications as thermonuclear weapons simulations, radioactive decay simulations, weather simulations, high-energy physics calculations and more.

CPU Speed Measurements


The main measurement quoted by manufacturers as a supposed indication of processing speed is the clock speed of the chip, measured in hertz. The theory goes that the higher the number of megahertz or gigahertz, the faster the processor. However, comparing raw clock speeds is not always a good comparison between chips. Counting how many instructions are processed per second (MIPS, BIPS, TIPS for millions, billions and trillions of instructions per second) is a better measurement. Still others use the number of mathematical calculations per second to rate the speed of a processor.

Of course, what measurement is most important and most helpful to you depends on what you use a computer for. If you primarily do intensive math calculations, measuring the number of calculations per second is most important. If you are measuring how fast the computer runs an application, then instructions per second are most important.
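As a rough, hedged illustration of "operations per second" as a metric, the following Python snippet times a fixed number of additions and reports a throughput figure. It measures interpreted Python operations rather than raw CPU instructions, so treat the printed number only as an example of the idea, not as a MIPS rating.

```python
import time

N = 5_000_000
start = time.perf_counter()
total = 0
for i in range(N):
    total += 1          # one simple operation, repeated N times
elapsed = time.perf_counter() - start

print(f"{N} additions in {elapsed:.3f} s "
      f"= about {N / elapsed / 1e6:.1f} million operations per second")
```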

Processor Manufacturers

- Advanced Micro Devices (AMD)
- Intel
- IBM
- Motorola
- Cyrix
- Texas Instruments

AMD and Intel have pretty much dominated the market. AMD and Intel chips are for IBM-compatible machines. Motorola chips are made for Macintoshes. Cyrix (another IBM-compatible chip maker) runs a distant fourth place in terms of number of chips sold. Today all chip manufacturers produce chips whose input and output are identical, though the internal architecture may be different. This means that though they may not be built the same way, they DO all run the same software. The CPU is built using logic gates, and contains a small number of programs called "microcode" built into the chip to perform certain basic processes (like reading data from the bus and writing to a device). Some current chips use a "reduced instruction set", or RISC, architecture. Chips can also be measured in terms of instructions processed per second (MIPS).

What is a CPU?
In: Computer Hardware
Answer:

CPU (central processing unit): the central unit in a computer containing the logic circuitry that performs the instructions of a computer's programs. The CPU performs arithmetical and logical operations on data held in the computer memory, the RAM. The RAM is seen as a vector that contains instructions and data provided by the computer programmer. The CPU relies on an operating system such as Windows or MacOS for input and output of data, interaction with the user, and storing information on the disk. Most of the CPUs made today are produced by Intel or AMD, and all of these use the same "instruction set", that is, the same way that instructions are coded for the CPU. There is some controversy about these CPUs, above all about the way that they "see" and address memory, which is considered highly inefficient. <><><> CPU stands for the Central Processing Unit of a computer system. The CPU can deal with many millions of calculations per second. Bytes of data travel about the computer on electronic pathways known as buses. Data from the CPU travels along these buses to other parts of the computer, telling them what to do. How quickly the CPU can deal with calculations is decided by the number of bytes that it can process at once (its bandwidth) and the number of instructions it can deal with during one second. The "clock speed" is like a metronome that determines the beat, and the instruction type determines how many "clock cycles" are needed per instruction. For example, incrementing a number held in a CPU register is much faster than incrementing a variable held in memory.

The instructions are provided by programmers, who code them in a formal computer language. The CPU needs an operating system to load instructions and show results, and to allow us to use the computer. You will find CPUs in microprocessors used in digital watches, cameras, and cell phones that work on just the same principles as those in "server systems" and "mainframes" used for databases and websites. However, faster and larger computers may have many CPUs that even share the same memory and are programmed to work together. When CPUs share the same memory, special precautions must be taken to avoid them interfering with one another. This requires that the cache memory held close to the CPU to improve speed is either shared (e.g. "dual core") or synchronised ("Scalable Coherent Interface"), which brings on a new level of complexity. Or, to put it really simply: the CPU is like the human brain, performing all of the calculations required to complete a program. Read more: http://wiki.answers.com/Q/What_is_a_CPU

Ada Lovelace: The First Computer Programmer



The First Computer Programmer


Ada Lovelace was the daughter of the poet Lord Byron. She was taught by Mary Somerville, a well-known researcher and scientific author, who introduced her to Charles Babbage in June 1833. Babbage was an English mathematician who first had the idea for a programmable computer. In 1842 and 1843, Ada translated the work of an Italian mathematician, Luigi Menabrea, on Babbage's Analytical Engine. Though mechanical, this machine was an important step in the history of computers; it was the design of a mechanical general-purpose computer. Babbage worked on it for many years until his death in 1871. However, because of financial, political, and legal issues, the engine was never built. The design of the machine was very modern; it anticipated the first completed general-purpose computers by about 100 years. When Ada translated the article, she added a set of notes which specified in complete detail a method for calculating certain numbers with the Analytical Engine, which has since been recognized by historians as the world's first computer program. She also saw possibilities in it that Babbage hadn't: she realised that the machine could compose pieces of music. The computer programming language "Ada", used in some aviation and military programs, is named after her.

Who is the first computer programmer?


Posted by Rean on Aug 5, 2011 in Cool, Useful. The first computer programmer is Ada Lovelace, born in the year 1815, daughter of the poet Lord Byron. At a young age she took an interest in Charles Babbage's Analytical Engine; Babbage is considered the father of the computer. But unlike today's programming languages that can create websites, desktop applications, and your favorite PlayStation games, the first software was in the form of notes. Only over a hundred years after it was written was her algorithm actually used to compute the Bernoulli numbers.

Too complex? Let's try again. Over one hundred years after Lady Ada Lovelace's passing, the notes she left about Babbage's Analytical Engine were republished; they perfectly fit what we now describe as a computer (Babbage's engine) and software (Lovelace's notes). These people were really centuries ahead of their time!

Ada Lovelace
From Wikipedia, the free encyclopedia

Ada Lovelace

Born: 10 December 1815, London
Died: 27 November 1852 (aged 36), Marylebone, London
Nationality: British
Title: Countess of Lovelace
Spouse: 1st Earl of Lovelace
Children: 12th Baron Wentworth, 15th Baroness Wentworth, 2nd Earl of Lovelace
Parents: 6th Baron Byron, 11th Baroness Wentworth

Augusta Ada King, Countess of Lovelace (10 December 1815 - 27 November 1852), born Augusta Ada Byron, was an English writer chiefly known for her work on Charles Babbage's early mechanical general-purpose computer, the Analytical Engine. Her notes on the engine include what is recognised as the first algorithm intended to be processed by a machine; thanks to this, she is sometimes considered the "World's First Computer Programmer".[1][2] She was the only legitimate child of the poet Lord Byron (with Anne Isabella Milbanke). She had no relationship with her father, who died when she was nine. As a young adult, she took an interest in mathematics, and in particular Babbage's work on the Analytical Engine. Between 1842 and 1843, she translated an article by Italian mathematician Luigi Menabrea on the engine, which she supplemented with a set of notes of her own. These notes contain what is considered the first computer program, that is, an algorithm encoded for processing by a machine. Though Babbage's engine was never built, Lovelace's notes are important in the early history of computers. She also foresaw the capability of computers to go beyond mere calculating or number-crunching, while others, including Babbage himself, focused only on these capabilities.[3]

The Advantages (Benefits) of Networking


You have undoubtedly heard the expression "the whole is greater than the sum of its parts." This phrase describes networking very well, and explains why it has become so popular. A network isn't just a bunch of computers with wires running between them. Properly implemented, a network is a system that provides its users with unique capabilities, above and beyond what the individual machines and their software applications can provide. Most of the benefits of networking can be divided into two generic categories: connectivity and sharing. Networks allow computers, and hence their users, to be connected together. They also allow for the easy sharing of information and resources, and cooperation between the devices in other ways. Since modern business depends so much on the intelligent flow and management of information, this tells you a lot about why networking is so valuable.

Here, in no particular order, are some of the specific advantages generally associated with networking:
- Connectivity and Communication: Networks connect computers and the users of those computers. Individuals within a building or work group can be connected into local area networks (LANs); LANs in distant locations can be interconnected into larger wide area networks (WANs). Once connected, it is possible for network users to communicate with each other using technologies such as electronic mail. This makes the transmission of business (or non-business) information easier, more efficient and less expensive than it would be without the network.

- Data Sharing: One of the most important uses of networking is to allow the sharing of data. Before networking was common, an accounting employee who wanted to prepare a report for her manager would have to produce it on her own PC, put it on a floppy disk, and then walk it over to the manager, who would transfer the data to his PC's hard disk. (This sort of shoe-based network was sometimes sarcastically called a sneakernet.) True networking allows thousands of employees to share data much more easily and quickly than this. More importantly, it makes possible applications that rely on the ability of many people to access and share the same data, such as databases, group software development, and much more. Intranets and extranets can be used to distribute corporate information between sites and to business partners.

- Hardware Sharing: Networks facilitate the sharing of hardware devices. For example, instead of giving each of 10 employees in a department an expensive color printer (or resorting to the sneakernet again), one printer can be placed on the network for everyone to share.

- Internet Access: The Internet is itself an enormous network, so whenever you access the Internet, you are using a network. The significance of the Internet on modern society is hard to exaggerate, especially for those of us in technical fields.

- Internet Access Sharing: Small computer networks allow multiple users to share a single Internet connection. Special hardware devices allow the bandwidth of the connection to be easily allocated to various individuals as they need it, and permit an organization to purchase one high-speed connection instead of many slower ones.

- Data Security and Management: In a business environment, a network allows the administrators to much better manage the company's critical data. Instead of having this data spread over dozens or even hundreds of small computers in a haphazard fashion as their users create it, data can be centralized on shared servers. This makes it easy for everyone to find the data, makes it possible for the administrators to ensure that the data is regularly backed up, and also allows for the implementation of security measures to control who can read or change various pieces of critical information.
- Performance Enhancement and Balancing: Under some circumstances, a network can be used to enhance the overall performance of some applications by distributing the computation tasks to various computers on the network.

- Entertainment: Networks facilitate many types of games and entertainment. The Internet itself offers many sources of entertainment, of course. In addition, many multi-player games exist that operate over a local area network. Many home networks are set up for this reason, and gaming across wide area networks (including the Internet) has also become quite popular. Of course, if you are running a business and have easily-amused employees, you might insist that this is really a disadvantage of networking and not an advantage!

Advantages
- quick retrieval of information
- able to communicate easily through the use of the internet
- able to store large amounts of data in different forms
- useful applications such as Word documents, Excel and PowerPoint reduce time
- reduces the cost and use of paper (such as emails being used rather than sending letters or memos)
- good form of entertainment

Disadvantages
- having to keep up to date with changing technology
- having to learn the different functions of the applications and the computer

A bit (a contraction of binary digit) is the basic unit of information in computing and telecommunications; it is the amount of information stored by a digital device or other physical system that exists in one of two possible distinct states. These may be the two stable states of a flip-flop, two positions of an electrical switch, two distinct voltage or current levels allowed by a circuit, two distinct levels of light intensity, two directions of magnetization or polarization, the orientation of reversible double-stranded DNA, etc. In computing, a bit can also be defined as a variable or computed quantity that can have only two possible values. These two values are often interpreted as binary digits and are usually denoted by the numerical digits 0 and 1. The two values can also be interpreted as logical values (true/false, yes/no), algebraic signs (+/−), activation states (on/off), or any other two-valued attribute. The correspondence between these values and the physical states of the underlying storage or device is a matter of convention, and different assignments may be used even within the same device or program. The length of a binary number may be referred to as its "bit-length." In information theory, one bit is typically defined as the uncertainty of a binary random variable that is 0 or 1 with equal probability,[1] or the information that is gained when the value of such a variable becomes known.[2] In quantum computing, a quantum bit or qubit is a quantum system that can exist in a superposition of two bit values, "true" and "false". The symbol for bit, as a unit of information, is either simply "bit" (recommended by the ISO/IEC standard 80000-13 (2008)) or lowercase "b" (recommended by the IEEE 1541 Standard (2002)).
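To make the "two states, interpreted by convention" idea concrete, here is a small Python sketch; the labels chosen for the states are arbitrary, exactly as the text says the assignment is a matter of convention.

```python
# One bit: a value that is either 0 or 1.
bit = 1

# The same two states can be read under different conventions.
as_logical = bool(bit)                 # True / False
as_switch  = "on" if bit else "off"    # on / off
as_sign    = "+" if bit else "-"       # algebraic sign

print(as_logical, as_switch, as_sign)

# Eight bits grouped together: the bit-length of a binary number.
value = 0b1011_0010
print(value, "has bit length", value.bit_length())

# Flip a single bit with XOR: two distinct states, toggled.
print(bin(value ^ 0b0000_0001))
```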

What is the meaning of URL?



x-dude

Best Answer - Chosen by Asker


In computing, a Uniform Resource Locator (URL) is a type of Uniform Resource Identifier (URI) that specifies where an identified resource is available and the mechanism for retrieving it. In popular usage and in many technical documents and verbal discussions it is often incorrectly used as a synonym for URI.[1] In popular language, a URL is also referred to as a Web address. The Uniform Resource Locator was created in 1990 by Tim Berners-Lee as part of the URI.[2] He regrets the format of the URL. Instead of being divided into the route to the server, separated by dots, and the file path, separated by slashes, he would have liked it to be one coherent hierarchical path. For example, http://www.serverroute.com/path/to/file would look like http://com/serverroute/www/path/to/file.

Syntax
Main article: URI scheme#Generic syntax
Every URL is made up of some combination of the following: the scheme name (commonly called protocol), followed by a colon; then, depending on the scheme, a hostname (alternatively, an IP address), a port number, and the pathname of the file to be fetched or the program to be run; then (for programs such as CGI scripts) a query string;[3][4] and, with HTML files, an optional anchor for where the page should start to be displayed.[5] The combined syntax looks like: resource_type://domain:port/filepathname?query_string#anchor

The scheme name, or resource type, defines its namespace, purpose, and the syntax of the remaining part of the URL. Most Web-enabled programs will try to dereference a URL according to the semantics of its scheme and a context. For example, a Web browser will usually dereference the URL http://example.org:80 by performing an HTTP request to the host example.org, at the port number 80. Dereferencing the URI mailto:bob@example.com will usually start an e-mail composer with the address bob@example.com in the To field. Other examples of scheme names include https:, gopher:, wais:, ftp:. URLs that specify https as a scheme (such as https://example.com/) denote a secure website.

The registered domain name or IP address gives the destination location for the URL. The domain google.com, or its IP address 72.14.207.99, is the address of Google's website. The hostname and domain name portion of a URL are case-insensitive, since the DNS is specified to ignore case: http://en.wikipedia.org/ and HTTP://EN.WIKIPEDIA.ORG/ both open the same page.

The port number is optional; if omitted, the default for the scheme is used. For example, if http://myvncserver.no-ip.org:5800 is typed into the address bar of a browser, it will connect to port 5800 of myvncserver.no-ip.org; this port is used by the VNC remote control program and would set up a remote control session. If the port number is omitted, a browser will connect to port 80, the default HTTP port.

The file path name is the location on the server of the file or program specified. In principle it is case-sensitive, but it may be treated as case-insensitive by some servers, especially those based on Microsoft Windows. If the server is case-sensitive and http://en.wikipedia.org/wiki/URL is the correct path, requesting the same path with different letter case will generally not return the same page.
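The parts just described can be pulled apart with Python's standard urllib.parse module; the example URL below is made up for the illustration.

```python
from urllib.parse import urlparse

url = "https://example.org:8080/path/to/page?item=42#section-2"
parts = urlparse(url)

print(parts.scheme)    # 'https'        -> the scheme name (protocol)
print(parts.hostname)  # 'example.org'  -> the host / registered domain
print(parts.port)      # 8080           -> explicit port (the https default would be 443)
print(parts.path)      # '/path/to/page' -> the file path name
print(parts.query)     # 'item=42'      -> the query string
print(parts.fragment)  # 'section-2'    -> the anchor/fragment
```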

Source: http://en.wikipedia.org/wiki/URL. I hope I helped.

What is the full meaning of www?


In: Computer Terminology, History of the Web, Search Engines
Answer:

World Wide Web


Read more: http://wiki.answers.com/Q/What_is_the_full_meaning_of_www

World Wide Web


From Wikipedia, the free encyclopedia
Not to be confused with the Internet.
World Wide Web

The Web's historic logo designed by Robert Cailliau
Inventor: Tim Berners-Lee,[1] Robert Cailliau
Launch year: 1990/1991
Company: CERN
Availability: Worldwide

The World Wide Web (abbreviated as WWW or W3,[2] and commonly known as the Web) is a system of interlinked hypertext documents accessed via the Internet. With a web browser, one can view web pages that may contain text, images, videos, and other multimedia, and navigate between them via hyperlinks. Using concepts from his earlier hypertext systems like ENQUIRE, British engineer and computer scientist Sir Tim Berners-Lee, now Director of the World Wide Web Consortium (W3C), wrote a proposal in March 1989 for what would eventually become the World Wide Web.[1] At CERN, a European research organization near Geneva situated on Swiss and French soil,[3] Berners-Lee and Belgian computer scientist Robert Cailliau proposed in 1990 to use hypertext "... to link and access information of various kinds as a web of nodes in which the user can browse at will",[4] and they publicly introduced the project in December.[5]


LAN - Local Area Network


By Bradley Mitchell, About.com Guide

Definition: A local area network (LAN) supplies networking capability to a group of computers in close proximity to each other such as in an office building, a school, or a home. A LAN is useful for sharing resources like files, printers, games or other applications. A LAN in turn often connects to other LANs, and to the Internet or other WAN.

Most local area networks are built with relatively inexpensive hardware such as Ethernet cables, network adapters, and hubs. Wireless LAN and other more advanced LAN hardware options also exist.

Specialized operating system software may be used to configure a local area network. For example, most flavors of Microsoft Windows provide a software package called Internet Connection Sharing (ICS) that supports controlled access to LAN resources. The term LAN party refers to a multiplayer gaming event where participants bring their own computers and build a temporary LAN.
Also Known As: local area network
Examples: The most common type of local area network is an Ethernet LAN. The smallest home LAN can have exactly two computers; a large LAN can accommodate many thousands of computers. Many LANs are divided into logical groups called subnets. An Internet Protocol (IP) "Class A" LAN can in theory accommodate more than 16 million devices organized into subnets.
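The subnet and "Class A can hold more than 16 million devices" points can be checked with Python's standard ipaddress module. The address ranges used here are the standard private examples, not tied to any particular LAN.

```python
import ipaddress

# A traditional "Class A" network: 8-bit network prefix, 24 host bits.
class_a = ipaddress.ip_network("10.0.0.0/8")
print(class_a.num_addresses - 2)   # usable hosts: 2**24 - 2 = 16,777,214

# A typical small home or office LAN subnet.
home_lan = ipaddress.ip_network("192.168.1.0/24")
print(home_lan.num_addresses - 2)  # 254 usable hosts

# Splitting a larger network into logical groups (subnets).
for subnet in ipaddress.ip_network("192.168.0.0/22").subnets(new_prefix=24):
    print(subnet)
```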

e-mail (electronic mail or email)



E-mail (electronic mail) is the exchange of computer-stored messages by telecommunication. (Some publications spell it email; we prefer the currently more established spelling of e-mail.) Email messages are usually encoded in ASCII text. However, you can also send non-text files, such as graphic images and sound files, as attachments sent in binary streams. E-mail was one of the first uses of the Internet and is still the most popular use. A large percentage of the total traffic over the Internet is e-mail. E-mail can also be exchanged between online service provider users and in networks other than the Internet, both public and private. E-mail can be distributed to lists of people as well as to individuals. A shared distribution list can be managed by using an e-mail reflector. Some mailing lists allow you to subscribe by sending a request to the mailing list administrator. A mailing list that is administered automatically is called a list server.

E-mail is one of the protocols included with the Transport Control Protocol/Internet Protocol (TCP/IP) suite of protocols. A popular protocol for sending e-mail is Simple Mail Transfer Protocol and a popular protocol for receiving it is POP3. Both Netscape and Microsoft include an e-mail utility with their Web browsers.
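As a hedged sketch of sending a message over SMTP with Python's standard smtplib and email libraries: the server name, credentials, and addresses below are placeholders, not real settings, and a receiving protocol such as POP3 or IMAP would be used separately to fetch mail.

```python
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"      # placeholder sender
msg["To"] = "bob@example.com"          # placeholder recipient
msg["Subject"] = "Hello over SMTP"
msg.set_content("This message body travels as the payload of an SMTP session.")

# Connect to a mail server (placeholder host/port) and hand the message over.
with smtplib.SMTP("mail.example.com", 587) as server:
    server.starttls()                                   # upgrade to an encrypted channel
    server.login("alice@example.com", "app-password")   # placeholder credentials
    server.send_message(msg)
```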

Email
From Wikipedia, the free encyclopedia

The at sign, a part of every SMTP email address[1]

Electronic mail, commonly known as email or e-mail, is a method of exchanging digital messages from an author to one or more recipients. Modern email operates across the Internet or other computer networks. Some early email systems required that the author and the recipient both be online at the same time, in common with instant messaging. Today's email systems are based on a store-and-forward model. Email servers accept, forward, deliver and store messages. Neither the users nor their computers are required to be online simultaneously; they need connect only briefly, typically to an email server, for as long as it takes to send or receive messages. An email message consists of three components, the message envelope, the message header, and the message body. The message header contains control information, including, minimally, an originator's email address and one or more recipient addresses. Usually descriptive information is also added, such as a subject header field and a message submission date/time stamp.

Originally a text-only (7-bit ASCII and others) communications medium, email was extended to carry multi-media content attachments, a process standardized in RFC 2045 through 2049. Collectively, these RFCs have come to be called Multipurpose Internet Mail Extensions (MIME). Electronic mail predates the inception of the Internet, and was in fact a crucial tool in creating it,[2] but the history of modern, global Internet email services reaches back to the early ARPANET. Standards for encoding email messages were proposed as early as 1973 (RFC 561). Conversion from ARPANET to the Internet in the early 1980s produced the core of the current services. An email sent in the early 1970s looks quite similar to a basic text message sent on the Internet today. Network-based email was initially exchanged on the ARPANET in extensions to the File Transfer Protocol (FTP), but is now carried by the Simple Mail Transfer Protocol (SMTP), first published as Internet standard 10 (RFC 821) in 1982. In the process of transporting email messages between systems, SMTP communicates delivery parameters using a message envelope separate from the message (header and body) itself.

Logic gate
From Wikipedia, the free encyclopedia

A logic gate is an idealized or physical device implementing a Boolean function, that is, it performs a logical operation on one or more logic inputs and produces a single logic output. Depending on the context, the term may refer to an ideal logic gate, one that has for instance zero rise time and unlimited fan-out, or it may refer to a non-ideal physical device.[1] (see Ideal and real op-amps for comparison) Logic gates are primarily implemented using diodes or transistors acting as electronic switches, but can also be constructed using electromagnetic relays (relay logic), fluidic logic, pneumatic logic, optics, molecules, or even mechanical elements. With amplification, logic gates can be cascaded in the same way that Boolean functions can be composed, allowing the construction of a physical model of all of Boolean logic, and therefore, all of the algorithms and mathematics that can be described with Boolean logic.

Contents: Gate functions; Complex functions; Electronic gates; Symbols; Universal logic gates; De Morgan equivalent symbols; Data storage; Three-state logic gates; History and development; Implementations; See also; References; Further reading; External links

Gate functions

All other types of Boolean logic gates (i.e., AND, OR, NOT, XOR, XNOR) can be created from a suitable network of NAND gates. Similarly all gates can be created from a network of NOR gates. For an input of 2 boolean variables, there are 16 possible boolean algebraic functions. These 16 functions are enumerated below, together with their outputs for each combination of input variables.

Venn diagrams for logic gates: the 16 two-input Boolean functions and their outputs for each combination of inputs.

INPUT A: 0 0 1 1
INPUT B: 0 1 0 1

Function | Output | Meaning
FALSE | 0 0 0 0 | Whatever A and B, the output is false. Contradiction.
A AND B | 0 0 0 1 | Output is true if and only if (iff) both A and B are true.
A ↛ B | 0 0 1 0 | A doesn't imply B. True if A but not B.
A | 0 0 1 1 | True whenever A is true.
A ↚ B | 0 1 0 0 | A is not implied by B. True if not A but B.
B | 0 1 0 1 | True whenever B is true.
A XOR B | 0 1 1 0 | True if A is not equal to B.
A OR B | 0 1 1 1 | True if A is true, or B is true, or both.
A NOR B | 1 0 0 0 | True if neither A nor B.
A XNOR B | 1 0 0 1 | True if A is equal to B.
NOT B | 1 0 1 0 | True if B is false.
A ← B | 1 0 1 1 | A is implied by B. False if not A but B, otherwise true.
NOT A | 1 1 0 0 | True if A is false.
A → B | 1 1 0 1 | A implies B. False if A but not B, otherwise true.
A NAND B | 1 1 1 0 | A and B are not both true.
TRUE | 1 1 1 1 | Whatever A and B, the output is true. Tautology.

The four functions denoted by arrows are the logical implication functions. These functions are not usually implemented as elementary circuits, but rather as combinations of a gate with an inverter at one input.
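The sixteen functions can also be enumerated mechanically. The short Python sketch below reproduces the table's output columns by treating each function as a 4-bit pattern over the inputs (0,0), (0,1), (1,0), (1,1); the name labels are only attached to the familiar gates.

```python
from itertools import product

# The four input combinations, in the same order as the table: 00, 01, 10, 11.
inputs = list(product([0, 1], repeat=2))

# Names for the familiar functions; the other eight patterns are left unlabeled.
names = {
    (0, 0, 0, 0): "FALSE",    (0, 0, 0, 1): "A AND B",
    (0, 1, 1, 0): "A XOR B",  (0, 1, 1, 1): "A OR B",
    (1, 0, 0, 0): "A NOR B",  (1, 0, 0, 1): "A XNOR B",
    (1, 1, 1, 0): "A NAND B", (1, 1, 1, 1): "TRUE",
}

print("Inputs (A, B):", inputs)
# Every 4-bit output pattern is one of the 16 possible functions of A and B.
for n in range(16):
    outputs = tuple((n >> (3 - i)) & 1 for i in range(4))
    print(outputs, names.get(outputs, ""))
```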

Complex functions


Logic circuits include such devices as multiplexers, registers, arithmetic logic units (ALUs), and computer memory, all the way up through complete microprocessors, which may contain more than 100 million gates. In practice, the gates are made from field-effect transistors (FETs), particularly MOSFETs (metal-oxide-semiconductor field-effect transistors). Compound logic gates AND-OR-Invert (AOI) and OR-AND-Invert (OAI) are often employed in circuit design because their construction using MOSFETs is simpler and more efficient than the sum of the individual gates.[2] In reversible logic, Toffoli gates are used.

Electronic gates


Main article: Logic family

The simplest form of electronic logic is diode logic. This allows AND and OR gates to be built, but not inverters, and so is an incomplete form of logic. Further, without some kind of amplification it is not possible to have such basic logic operations cascaded as required for more complex logic functions. To build a functionally complete logic system, relays, valves (vacuum tubes), or transistors can be used. The simplest family of logic gates using bipolar transistors is called resistor-transistor logic (RTL). Unlike diode logic gates, RTL gates can be cascaded indefinitely to produce more complex logic functions. These gates were used in early integrated circuits. For higher speed, the resistors used in RTL were replaced by diodes, leading to diode-transistor logic (DTL). Transistor-transistor logic (TTL) then supplanted DTL with the observation that one transistor could do the job of two diodes even more quickly, using only half the space. In virtually every type of contemporary chip implementation of digital systems, the bipolar transistors have been replaced by complementary field-effect transistors (MOSFETs) to reduce size and power consumption still further, thereby resulting in complementary metal-oxide-semiconductor (CMOS) logic.

For small-scale logic, designers now use prefabricated logic gates from families of devices such as the TTL 7400 series by Texas Instruments and the CMOS 4000 series by RCA, and their more recent descendants. Increasingly, these fixed-function logic gates are being replaced by programmable logic devices, which allow designers to pack a large number of mixed logic gates into a single integrated circuit. The field-programmable nature of programmable logic devices such as FPGAs has removed the 'hard' property of hardware; it is now possible to change the logic design of a hardware system by reprogramming some of its components, thus allowing the features or function of a hardware implementation of a logic system to be changed. Electronic logic gates differ significantly from their relay-and-switch equivalents. They are much faster, consume much less power, and are much smaller (all by a factor of a million or more in most cases). Also, there is a fundamental structural difference. The switch circuit creates a continuous metallic path for current to flow (in either direction) between its input and its output. The semiconductor logic gate, on the other hand, acts as a high-gain voltage amplifier, which sinks a tiny current at its input and produces a low-impedance voltage at its output. It is not possible for current to flow between the output and the input of a semiconductor logic gate. Another important advantage of standardized integrated circuit logic families, such as the 7400 and 4000 families, is that they can be cascaded. This means that the output of one gate can be wired to the inputs of one or several other gates, and so on. Systems with varying degrees of complexity can be built without great concern of the designer for the internal workings of the gates, provided the limitations of each integrated circuit are considered. The output of one gate can only drive a finite number of inputs to other gates, a number called the 'fanout limit'. Also, there is always a delay, called the 'propagation delay', from a change in input of a gate to the corresponding change in its output. When gates are cascaded, the total propagation delay is approximately the sum of the individual delays, an effect which can become a problem in high-speed circuits. Additional delay can be caused when a large number of inputs are connected to an output, due to the distributed capacitance of all the inputs and wiring and the finite amount of current that each output can provide.

Symbols

A synchronous 4-bit up/down decade counter symbol (74LS192) in accordance with ANSI/IEEE Std. 91-1984 and IEC Publication 60617-12.

There are two sets of symbols for elementary logic gates in common use, both defined in ANSI/IEEE Std 91-1984 and its supplement ANSI/IEEE Std 91a-1991. The "distinctive shape" set, based on traditional schematics, is used for simple drawings, and derives from MIL-STD-806 of the 1950s and 1960s. It is sometimes unofficially described as "military", reflecting its origin. The "rectangular shape" set, based on IEC 60617-12 and other early industry standards, has rectangular outlines for all types of gate, and allows representation of a much wider range of devices than is possible with the traditional symbols. The IEC's system has been adopted by other standards, such as EN 60617-12:1999 in Europe and BS EN 60617-12:1999 in the United Kingdom. The goal of IEEE Std 91-1984 was to provide a uniform method of describing the complex logic functions of digital circuits with schematic symbols. These functions were more complex than simple AND and OR gates. They could range from medium-scale circuits such as a 4-bit counter to large-scale circuits such as a microprocessor. IEC 617-12 and its successor IEC 60617-12 do not explicitly show the "distinctive shape" symbols, but do not prohibit them.[1] These are, however, shown in ANSI/IEEE 91 (and 91a) with this note: "The distinctive-shape symbol is, according to IEC Publication 617, Part 12, not preferred, but is not considered to be in contradiction to that standard." This compromise was reached between the respective IEEE and IEC working groups to permit the IEEE and IEC standards to be in mutual compliance with one another. A third style of symbols was in use in Europe and is still preferred by some; see the table de:Logikgatter#Typen von Logikgattern und Symbolik in the German wiki. In the 1980s, schematics were the predominant method to design both circuit boards and custom ICs known as gate arrays. Today custom ICs and the field-programmable gate array are typically designed with Hardware Description Languages (HDL) such as Verilog or VHDL.
Truth tables for the elementary gates (the original table also showed each gate's distinctive-shape symbol, rectangular-shape symbol, and Boolean expression):

AND gate (output A AND B):
A B | OUT
0 0 | 0
0 1 | 0
1 0 | 0
1 1 | 1

OR gate (output A OR B):
A B | OUT
0 0 | 0
0 1 | 1
1 0 | 1
1 1 | 1

NOT gate (output NOT A):
A | OUT
0 | 1
1 | 0

In electronics a NOT gate is more commonly called an inverter. The circle on the symbol is called a bubble, and is used in logic diagrams to indicate a logic negation between the external logic state and the internal logic state (1 to 0 or vice versa). On a circuit diagram it must be accompanied by a statement asserting that the positive logic convention or negative logic convention is being used (high voltage level = 1 or high voltage level = 0, respectively). The wedge is used in circuit diagrams to directly indicate an active-low (high voltage level = 0) input or output without requiring a uniform convention throughout the circuit diagram. This is called Direct Polarity Indication. See IEEE Std 91/91A and IEC 60617-12. Both the bubble and the wedge can be used on distinctive-shape and rectangular-shape symbols on circuit diagrams, depending on the logic convention used. On pure logic diagrams, only the bubble is meaningful.

NAND gate (output A NAND B):
A B | OUT
0 0 | 1
0 1 | 1
1 0 | 1
1 1 | 0

NOR gate (output A NOR B):
A B | OUT
0 0 | 1
0 1 | 0
1 0 | 0
1 1 | 0

XOR gate (output A XOR B):
A B | OUT
0 0 | 0
0 1 | 1
1 0 | 1
1 1 | 0

XNOR gate (output A XNOR B):
A B | OUT
0 0 | 1
0 1 | 0
1 0 | 0
1 1 | 1

Two more gates are the exclusive-OR or XOR function and its inverse, exclusive-NOR or XNOR. The two input Exclusive-OR is true only when the two input values are different, false if they are equal, regardless of the value. If there are more than two inputs, the gate generates a true at its output if the number of trues at its input is odd ([2]). In practice, these gates are built from combinations of simpler logic gates.

Universal logic gates


For more details on the theoretical basis, see functional completeness.

The 7400 chip, containing four NANDs. The two additional pins supply power (+5 V) and connect the ground.

Charles Sanders Peirce (winter of 1880-81) showed that NOR gates alone (or alternatively NAND gates alone) can be used to reproduce the functions of all the other logic gates, but his work on it was unpublished until 1933.[3] The first published proof was by Henry M. Sheffer in 1913, so the NAND logical operation is sometimes called the Sheffer stroke; the logical NOR is sometimes called Peirce's arrow.[4] Consequently, these gates are sometimes called universal logic gates.[5]
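A minimal Python sketch of the universality claim: NOT, AND and OR built only from a NAND function, then checked against every input combination.

```python
def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):          # NOT from a single NAND with both inputs tied together
    return nand(a, a)

def and_(a, b):       # AND = NOT(A NAND B)
    return not_(nand(a, b))

def or_(a, b):        # OR via De Morgan: A OR B = NAND(NOT A, NOT B)
    return nand(not_(a), not_(b))

for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
    assert not_(a) == (1 - a)
print("AND, OR and NOT reproduced from NAND alone")
```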

De Morgan equivalent symbols


By use of De Morgan's theorem, an AND function is identical to an OR function with negated inputs and outputs. Likewise, an OR function is identical to an AND function with negated inputs and outputs. Similarly, a NAND gate is equivalent to an OR gate with negated inputs, and a NOR gate is equivalent to an AND gate with negated inputs. This leads to an alternative set of symbols for basic gates that use the opposite core symbol (AND or OR) but with the inputs and outputs negated or inverted.

Use of these alternative symbols can make logic circuit diagrams much clearer and help to show accidental connection of an active high output to an active low input or vice versa. Any connection that has logic negations at both ends can be replaced by a negationless connection and a suitable change of gate, or vice versa. Any connection that has a negation at one end and no negation at the other can be made easier to interpret by instead using the De Morgan equivalent symbol at either of the two ends. When negation or polarity indicators on both ends of a connection match, there is no logic negation in that path (effectively, bubbles "cancel"), making it easier to follow logic states from one symbol to the next. This is commonly seen in real logic diagrams - thus the reader must not get into the habit of associating the shapes exclusively as OR or AND shapes, but should also take into account the bubbles at both inputs and outputs in order to determine the "true" logic function indicated.

All logic relations can be realized by using NAND gates (this can also be done using NOR gates). De Morgan's theorem is most commonly used to transform all logic gates to NAND gates or NOR gates. This is done mainly because it is easy to buy logic gates in bulk and because many electronics labs stock only NAND and NOR gates.
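A quick truth-table check of De Morgan's theorem can be written in a few lines of Python; this is only a sketch confirming the equivalences stated above.

    from itertools import product

    for a, b in product((0, 1), repeat=2):
        # NOT(A AND B) == NOT(A) OR NOT(B)
        assert (1 - (a & b)) == ((1 - a) | (1 - b))
        # NOT(A OR B) == NOT(A) AND NOT(B)
        assert (1 - (a | b)) == ((1 - a) & (1 - b))
    print("De Morgan equivalences hold for all input combinations")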

Data storage


Main article: Sequential logic

Logic gates can also be used to store data. A storage element can be constructed by connecting several gates in a "latch" circuit. More complicated designs that use clock signals and that change only on a rising or falling edge of the clock are called edge-triggered "flip-flops". The combination of multiple flip-flops in parallel, to store a multiple-bit value, is known as a register. When using any of these gate setups the overall system has memory; it is then called a sequential logic system since its output can be influenced by its previous state(s). These logic circuits are known as computer memory. They vary in performance, based on factors of speed, complexity, and reliability of storage, and many different types of designs are used based on the application.
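As an illustration of how cross-coupled gates can hold a bit, here is a hedged Python sketch of an SR latch built from two NOR gates; the function and signal names are illustrative only.

    def nor(a, b):
        return 0 if (a or b) else 1

    def sr_latch(s, r, q_prev):
        """One settling step of a NOR-based SR latch (S = set, R = reset)."""
        q, q_bar = q_prev, 1 - q_prev
        for _ in range(4):                 # iterate until the feedback loop settles
            q_new = nor(r, q_bar)
            q_bar_new = nor(s, q)
            if (q_new, q_bar_new) == (q, q_bar):
                break
            q, q_bar = q_new, q_bar_new
        return q

    q = sr_latch(s=1, r=0, q_prev=0)       # set: Q becomes 1
    q = sr_latch(s=0, r=0, q_prev=q)       # both inputs low: the latch remembers 1
    assert q == 1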

Three-state logic gates

Main article: Tri-state buffer
A tristate buffer can be thought of as a switch. If B is on, the switch is closed. If B is off, the switch is open.

Three-state, or 3-state, logic gates are a type of logic gate with three possible output states: high (H), low (L) and high-impedance (Z). The high-impedance state plays no role in the logic, which remains strictly binary. These devices are used on buses (such as the data bus of a CPU) to allow multiple chips to send data. A group of three-state outputs driving a line with a suitable control circuit is basically equivalent to a multiplexer, which may be physically distributed over separate devices or plug-in cards. In electronics, a high output means the output is sourcing current from the positive power terminal (positive voltage). A low output means the output is sinking current to the negative power terminal (zero voltage). High impedance means that the output is effectively disconnected from the circuit.
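The high-impedance state can be modelled in software as "not driving the line at all". The sketch below (names are illustrative) shows how several tri-state outputs can share one bus line as long as only one enable is active.

    def tristate(data, enable):
        """Return the driven level, or None to represent high impedance (Z)."""
        return data if enable else None

    def bus_resolve(outputs):
        driven = [v for v in outputs if v is not None]
        if len(driven) > 1:
            raise ValueError("bus contention: more than one driver enabled")
        return driven[0] if driven else None   # None means the bus is floating

    # Chip A drives a 1 onto the bus while chip B is disabled (Z).
    line = bus_resolve([tristate(1, enable=True), tristate(0, enable=False)])
    assert line == 1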

History and development


In an 1886 letter, Charles Sanders Peirce described how logical operations could be carried out by electrical switching circuits.[6] Starting in 1898, Nikola Tesla filed for patents of devices containing electro-mechanical logic gate circuits (see List of Tesla patents). Eventually, vacuum tubes replaced relays for logic operations. Lee De Forest's modification of the Fleming valve in 1907 could be used as an AND logic gate. Ludwig Wittgenstein introduced a version of the 16-row truth table as proposition 5.101 of Tractatus Logico-Philosophicus (1921). Claude E. Shannon introduced the use of Boolean algebra in the analysis and design of switching circuits in 1937. Walther Bothe, inventor of the coincidence circuit, received part of the 1954 Nobel Prize in Physics for the first modern electronic AND gate, built in 1924. Active research is taking place in molecular logic gates.

Implementations
Main article: unconventional computing

Since the 1990s, most logic gates have been made of CMOS transistors (i.e. NMOS and PMOS transistors are used). Often millions of logic gates are packaged in a single integrated circuit. There are several logic families with different characteristics (power consumption, speed, cost, size), such as RDL (resistor-diode logic), RTL (resistor-transistor logic), DTL (diode-transistor logic), TTL (transistor-transistor logic) and CMOS (complementary metal oxide semiconductor).

There are also sub-variants, e.g. standard CMOS logic versus advanced types that still use CMOS technology but include optimizations to avoid loss of speed due to slower PMOS transistors. Non-electronic implementations are varied, though few of them are used in practical applications. Many early electromechanical digital computers, such as the Harvard Mark I, were built from relay logic gates, using electro-mechanical relays. Logic gates can be made using pneumatic devices, such as the Sorteberg relay, or mechanical logic gates, including on a molecular scale.[7] Logic gates have been made out of DNA (see DNA nanotechnology)[8] and used to create a computer called MAYA (see MAYA II). Logic gates can be made from quantum mechanical effects (though quantum computing usually diverges from boolean design). Photonic logic gates use non-linear optical effects.


Which is the smallest addressable unit of memory?




What is the smallest unit in a computer's memory?

A bit, which has the value 0 or 1. This is typically represented by a high voltage or no voltage; the exact voltages depend on the system being used.

What is the smallest addressable unit of storage in a computer?

typically an 8-bit byte

What is a unit of computer memory?

In ascending order: bit, byte (8 bits), word (2 bytes), double-word (4 bytes), kibibyte, mebibyte, gibibyte, tebibyte, pebibyte (which gets you into the quadrillions of bytes).

What the meaning of www in internet?

WWW = World Wide Web

What is the full meaning of the abbreviation-WWW?

World Wide Web

Microsoft Access Data Types




The following table shows the Microsoft Access data types, the data types used to create tables, and the ODBC SQL data types.

Microsoft Access data type | Data type (CREATE TABLE) | ODBC SQL data type
BIGBINARY[1] | LONGBINARY | SQL_LONGVARBINARY
BINARY | BINARY | SQL_BINARY
BIT | BIT | SQL_BIT
COUNTER | COUNTER | SQL_INTEGER
CURRENCY | CURRENCY | SQL_NUMERIC
DATE/TIME | DATETIME | SQL_TIMESTAMP
GUID | GUID | SQL_GUID
LONG BINARY | LONGBINARY | SQL_LONGVARBINARY
LONG TEXT | LONGTEXT | SQL_LONGVARCHAR[2], SQL_WLONGVARCHAR[3]
MEMO | LONGTEXT | SQL_LONGVARCHAR[2], SQL_WLONGVARCHAR[3]
NUMBER (FieldSize=SINGLE) | SINGLE | SQL_REAL
NUMBER (FieldSize=DOUBLE) | DOUBLE | SQL_DOUBLE
NUMBER (FieldSize=BYTE) | UNSIGNED BYTE | SQL_TINYINT
NUMBER (FieldSize=INTEGER) | SHORT | SQL_SMALLINT
NUMBER (FieldSize=LONG INTEGER) | LONG | SQL_INTEGER
NUMERIC | NUMERIC | SQL_NUMERIC
OLE | LONGBINARY | SQL_LONGVARBINARY
TEXT | VARCHAR | SQL_VARCHAR[1], SQL_WVARCHAR[2]
VARBINARY | VARBINARY | SQL_VARBINARY
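As a rough sketch of how these data types are used when creating a table through ODBC, the Python example below assumes the pyodbc package and the Microsoft Access ODBC driver are installed; the file path, table name and columns are made up for illustration.

    import pyodbc

    # Hypothetical database file; adjust the path to an existing .accdb or .mdb file.
    conn = pyodbc.connect(
        r"Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:\data\example.accdb"
    )
    cur = conn.cursor()
    # COUNTER, VARCHAR, CURRENCY and LONGTEXT are Access CREATE TABLE types from the table above.
    cur.execute(
        "CREATE TABLE Contacts (ID COUNTER, Name VARCHAR(50), Balance CURRENCY, Notes LONGTEXT)"
    )
    conn.commit()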

Database

A database is an organized collection of data, today typically in digital form. The data are typically organized to model relevant aspects of reality (for example, the availability of rooms in hotels) in a way that supports processes requiring this information (for example, finding a hotel with vacancies). The term database is correctly applied to the data and their supporting data structures, and not to the database management system (DBMS). The database data collection together with the DBMS is called a database system. The term database system implies that the data is managed to some level of quality (measured in terms of accuracy, availability, usability, and resilience), and this in turn often implies the use of a general-purpose database management system (DBMS).[1] A general-purpose DBMS is typically a complex software system that meets many usage requirements, and the databases that it maintains are often large and complex.

Databases are now so widely used that virtually every technology and product relies on databases and DBMSs for its development and commercialization, or even has them embedded in it. Organizations and companies, from small to large, depend heavily on databases for their operations. Well-known DBMSs include Oracle, IBM DB2, Microsoft SQL Server, Microsoft Access, PostgreSQL, MySQL, and SQLite. A database is not generally portable across different DBMSs, but different DBMSs can inter-operate to some degree by using standards such as SQL and ODBC to support a single application together. A DBMS also needs to provide effective run-time execution to properly support (e.g., in terms of performance, availability, and security) as many end-users as needed.

One way to classify databases involves the type of their contents, for example: bibliographic, document-text, statistical, or multimedia objects. Another way is by their application area, for example: accounting, music compositions, movies, banking, manufacturing, or insurance. The term database may also be narrowed to specify particular aspects of the organized collection of data; it may refer to the logical database, to the physical database as data content in computer data storage, or to many other database sub-definitions.

What is a Database?
By Mike Chapple, About.com Guide

Databases are designed to offer an organized mechanism for storing, managing and retrieving information. They do so through the use of tables. If you're familiar with spreadsheets like Microsoft Excel, you're probably already accustomed to storing data in tabular form. It's not much of a stretch to make the leap from spreadsheets to databases. Let's take a look.

Database Tables
Just like Excel tables, database tables consist of columns and rows. Each column contains a different type of attribute and each row corresponds to a single record. For example, imagine that we were building a database table that contained names and telephone numbers. We'd probably set up columns named FirstName, LastName and TelephoneNumber. Then we'd simply start adding rows underneath those columns that contained the data we're planning to store. If we were building a table of contact information for our business that has 50 employees, we'd wind up with a table that contains 50 rows.
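To make the table idea concrete, here is a small Python sketch using the built-in sqlite3 module (SQLite is one of the DBMSs mentioned above). The column names mirror the example; the inserted names and numbers are made up.

    import sqlite3

    conn = sqlite3.connect(":memory:")          # throwaway in-memory database
    conn.execute("""
        CREATE TABLE Contacts (
            FirstName       TEXT,
            LastName        TEXT,
            TelephoneNumber TEXT
        )
    """)
    conn.executemany(
        "INSERT INTO Contacts VALUES (?, ?, ?)",
        [("Ada", "Lovelace", "555-0100"), ("Alan", "Turing", "555-0101")],
    )
    # Retrieving records with a query is where a database starts to outgrow a spreadsheet.
    for row in conn.execute("SELECT LastName, TelephoneNumber FROM Contacts ORDER BY LastName"):
        print(row)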

Databases and Spreadsheets


At this point, you're probably asking yourself an obvious question: if a database is so much like a spreadsheet, why can't I just use a spreadsheet? Databases are actually much more powerful than spreadsheets in the way you're able to manipulate data. Here are just a few of the actions that you can perform on a database that would be difficult if not impossible to perform on a spreadsheet:

M150 Data, Computing and Information


Differences between data and information

The interchange of the words data and information is widespread, but M150 should help you to develop a clearer understanding of the differences between the two.

Data

- Facts, statistics used for reference or analysis.
- Numbers, characters, symbols, images etc., which can be processed by a computer.
- Data must be interpreted, by a human or machine, to derive meaning.
- "Data is a representation of information" *
- From the Latin 'datum', meaning "that which is given".
- Data is plural, datum singular. (M150 adopts the general use of data as singular. Not everyone agrees.)

Information

- Knowledge derived from study, experience (by the senses), or instruction.
- Communication of intelligence.
- "Information is any kind of knowledge that is exchangeable amongst people, about things, facts, concepts, etc., in some context." *
- "Information is interpreted data" *

What is the difference between data and information in computer terms?



Data is the raw material for data processing. Data relates to facts, events and transactions. Data refers to unprocessed information. Information is data that has been processed in such a way as to be meaningful to the person who receives it. It is anything that is communicated.

For example, researchers who conduct a market research survey might ask members of the public to complete questionnaires about a product or a service. These completed questionnaires are data; they are processed and analyzed in order to prepare a report on the survey. This resulting report is information.
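As a toy illustration of the distinction, the Python sketch below turns raw questionnaire responses (data) into a summary a reader can act on (information); the ratings are made-up values.

    from collections import Counter

    # Raw data: individual questionnaire ratings from 1 (poor) to 5 (excellent).
    responses = [5, 4, 4, 2, 5, 3, 4, 5, 1, 4]

    # Processing the data yields information: a summary with meaning for the reader.
    average = sum(responses) / len(responses)
    distribution = Counter(responses)
    print(f"{len(responses)} responses, average rating {average:.1f}")
    print("Rating counts:", dict(sorted(distribution.items())))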

database management system (DBMS)



A database management system (DBMS), sometimes just called a database manager, is a program that lets one or more computer users create and access data in a database. The DBMS manages user requests (and requests from other programs) so that users and other programs are free from having to understand where the data is physically located on storage media and, in a multi-user system, who else may also be accessing the data. In handling user requests, the DBMS ensures the integrity of the data (that is, making sure it continues to be accessible and is consistently organized as intended) and security (making sure only those with access privileges can access the data). The most typical DBMS is a relational database management system (RDBMS). A standard user and program interface is the Structured Query Language (SQL). A newer kind of DBMS is the object-oriented database management system (ODBMS).

A DBMS can be thought of as a file manager that manages data in databases rather than files in file systems. In IBM's mainframe operating systems, the nonrelational data managers were (and are, because these legacy application systems are still used) known as access methods. A DBMS is usually an inherent part of a database product. On PCs, Microsoft Access is a popular example of a single- or small-group user DBMS. Microsoft's SQL Server is an example of a DBMS that serves database requests from multiple (client) users. Other popular DBMSs (these are all RDBMSs, by the way) are IBM's DB2, Oracle's line of database management products, and Sybase's products. IBM's Information Management System (IMS) was one of the first DBMSs. A DBMS may be used by or combined with transaction managers, such as IBM's Customer Information Control System (CICS).

Database Management System


By Mike Chapple, About.com Guide

Definition: A database management system (DBMS) is the software that allows a computer to perform database functions of storing, retrieving, adding, deleting and modifying data. Relational database management systems (RDBMS) implement the relational model of tables and relationships. Also Known As: DBMS Examples: Microsoft Access, MySQL, Microsoft SQL Server, Oracle and FileMaker Pro are all examples of database management systems.

What Is the Difference Between Intel Pentium and AMD Athlon?



Most of the central processing units (CPUs) found in computers are manufactured by either Intel or AMD (Advanced Micro Devices). Though Intel processors are better known and many consider them to be more powerful, AMD processors offer consumers a lower-cost yet powerful alternative to Intel's CPUs. Intel and AMD's flagship processors, the Pentium and Athlon, respectively, are comparable in many ways. Nevertheless, there are some key differences between the two brands of processors.

History of Intel Pentium


The Intel Pentium processor was first introduced in 1993. Being the first Pentium processor, it was simply called the Pentium Processor. The Pentium name, however, has come to represent several subsequent CPU models, including the Pentium 2, Pentium 3, Pentium 4 and Pentium Dual-Core CPUs. Each succeeding CPU model has improved on the last in multiple ways, most notably by increasing the CPU's processing speed and memory cache size. The "Pentium" name helped Intel gain a near-monopoly over the CPU market for a large part of the 1990s.

History of AMD Athlon


Intel's near-monopoly was shattered when AMD introduced its Athlon line of processors in 1999. Like the Intel Pentium brand, Athlon has branded several AMD processors, including the Athlon XP, Athlon X2 and Athlon 64. Though technically "inferior" (on paper) to their Intel counterparts, these processors offer consumers the same, if not better, performance through a few technological differences.

Memory Cache
One of the key differences between Intel Pentium and AMD Athlon processors is the way they each store and access the CPU memory. This difference is also what accounts for the AMD Athlon processor's relatively lower price and comparable performance (with technically lower specs) with the Intel Pentium. The Intel Pentium processors store their memory in an L2 (level 2) cache that is roughly double the size of the cache found in comparable AMD processors. The L2 cache is a memory bank that stores and transmits data to the L1 (level 1) cache which, in turn, stores and transmits data to the processor itself; the larger the L2 cache, the faster the processing speed. AMD Athlon processors, although they have roughly half the L2 cache space, are able to match this speed by integrating the memory cache into the processor itself. This technological decision allows AMD Athlon processors to access their cache data much quicker than Intel Pentium processors, even though the cache is smaller.

CPU Benchmark Test Results


When the original AMD Athlon processor (now dubbed "Athlon Classic") was released, Intel was already on its third Pentium processor, the Pentium 3. To get an idea of how these CPUs stacked up to one another, a CPU benchmark test was conducted. The results were as follows and show the comparable nature of the two processors:

CPU marks (the higher, the better): Athlon 54.6; Pentium 3 48.2
FPU marks (the higher, the better): Athlon 3270; Pentium 3 3340
MIPs (Million Instructions Per Second): Athlon 1973; Pentium 3 1795
MFLOPs (Mega Floating-Point Operations Per Second): Athlon 797; Pentium 3 892
Bits Per Second: Athlon 1254; Pentium 3 1586

What is the difference between AMD and Intel processors?


Here are some very simple things that are different between them:

Intel processors: Intel was the first major brand for desktop CPUs, and it has outlasted many competitors. The CPUs it manufactures are always a bit more expensive than AMD's. Because of its market share, Intel was able to force some "gadgets" onto the market. Until 2002, all CPUs were classified by their speed (for example 2 GHz, i.e. 2000 MHz). Intel lost some influence on the desktop CPU market because of AMD's techniques: AMD relied not only on pure speed but on a more specific command kernel, so its CPUs (e.g. the AMD Athlon XP) ran slower but provided the same results as a faster Intel CPU (an AMD at 1.666 GHz was comparable to an Intel at 1.8 GHz). Intel provides most of the "normal" server CPUs today, while private users often choose AMD for their machines to save costs. Intel no longer classifies its CPUs by pure clock speed; it introduced new model numbers that represent the speed, the advanced features, the cache, etc. of a certain CPU (and, conveniently for Intel, you can no longer compare them to AMD as easily as before).

AMD: AMD concentrated on the PC and consumer market and cut the costs of its CPUs in order to be more competitive. Nowadays AMD CPUs run hotter than Intel CPUs, so you will need better cooling and your system will be a bit louder. AMD has provided a dual-core CPU for PCs for longer than Intel, and I guess they have more experience with these features.

For gaming systems it is more a question of taste and money than a real difference. Both platforms have their advantages:

You cannot do anything wrong, as long as you try to get a CPU with a big cache (1 MB at least - so NO Celeron or Sempron); the rest is a question of RAM and the video card used. Both chips are very similar, but due to marketing they seem very different from each other.

The primary "marketing" difference is the way they state their speeds. The Intel chips (Pentium, Celeron) use MHz as a speed factor, e.g. "This Pentium 4 chip runs at 3000 MHz" (i.e. a Pentium 4 3.0). The AMD chips do not use a megahertz rating for speed, because AMD believes doing so would make its chips seem slower than their Intel counterparts. This is due to the difference in instruction set handling that AMD uses, and is too technical to really get into here. Instead you may see an Athlon XP 1600, etc. Apply the following formula to determine the "Pentium-equivalent" MHz speed:

MHz = (XP rating / 1.5) + (500 / 1.5)

For example, with an Athlon XP 1600, the math breaks down like this:

MHz = (1600 / 1.5) + (500 / 1.5)
MHz = 1066.66 + 333.33
MHz = 1399.99

which would be equivalent to the last incarnation of the Pentium 3 (at 1400 MHz).

Because Intel has enjoyed longer commercial success than AMD, you can typically find comparable AMD chips for half the cost. This has to do with brand loyalty and advertising costs, as well as the types of memory used with the two chips. Intel uses the more expensive RDRAM while the AMD chips use DDR RAM. That said, RDRAM is faster (533 MHz versus the 333 MHz of the DDR RAM) and helps to soften the blow of the cost of the chips to the consumer.
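Expressed as code, the rating-to-MHz rule above looks like the Python sketch below; it simply restates the formula as given, not an official AMD conversion.

    def athlon_xp_equivalent_mhz(xp_rating):
        """Approximate 'Pentium-equivalent' MHz from an Athlon XP model rating."""
        return (xp_rating / 1.5) + (500 / 1.5)

    print(athlon_xp_equivalent_mhz(1600))   # ~1400 MHz, i.e. roughly a 1.4 GHz Pentium 3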

Another user said:

The main difference between AMD and Intel processors is that AMD processors have a 10-step execution process, which doesn't allow as fast a clock, but AMD counters that by being able to do more operations per clock cycle. Intel's, on the other hand, have a 20-step execution process, which allows much higher clock speeds but fewer operations per clock. That's why a 2.08 GHz AMD Barton can perform at levels like a 2.8 GHz Pentium 4. Another difference between the processors is the sockets they use. AMD's generally use Socket A (462 pins) while Intel generally uses Socket 478 (478 pins). Other differences include supported chipsets: AMD motherboards generally use SiS, VIA, or nForce chipsets while Intel motherboards use Intel, SiS, and most recently ATI chipsets. AMD is a favorite for overclockers because with minor modifications the FSB multiplier can be unlocked, while Intel multipliers cannot be unlocked (not to say that AMD is used exclusively by overclockers, because it is not, but this ability to be unlocked is a plus). Also, as of now, Intel processors have the largest L3 cache at 2 MB while the largest for AMD is 1 MB.

How do I change my MAC address?


Although physical MAC (Media Access Control) addresses are permanent by design, several mechanisms allow modification, or "spoofing", of the MAC address that is reported by the operating system. This can be useful for privacy reasons, for instance when connecting to a Wi-Fi hotspot, or to ensure interoperability. Some internet service providers bind their service to a specific MAC address; if the user then changes their network card or intends to install a router, the service won't work anymore. Changing the MAC address of the new interface will solve the problem. Similarly, some software licenses are bound to a specific MAC address. Changing the MAC address in this way is not permanent: after a reboot, it will revert to the MAC address physically stored in the card. A MAC address is 48 bits in length. As a MAC address can be changed, it can be unwise to rely on this as a single method of authentication. IEEE 802.1x is an emerging standard better suited to authenticating devices at a low level.

Mac OS X
Under Mac OS X, the MAC address can be altered in a fashion similar to the Linux and FreeBSD methods:

    sudo ifconfig en0 lladdr 00:01:02:03:04:05

or

    sudo ifconfig en0 ether 00:01:02:03:04:05

This must be done as the superuser and only works for the computer's ethernet card. Instructions on spoofing AirPort Extreme (2.0) cards are available here. There are not, as of yet, any known ways to spoof original AirPort (1.0) cards. The AirPort Extreme MAC address can also be changed easily with SpoofMac.

Windows
Under Windows XP, the MAC address can be changed in the Ethernet adapter's Properties menu, in the Advanced tab, as "MAC Address", "Locally Administered Address", "Ethernet Address" or "Network Address". The exact name depends on the Ethernet driver used; not all drivers support changing the MAC address in this way.

However, a better solution - requiring administrative user rights - is to edit the registry keys under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}. Here, settings for each network interface can be found. The contents of the string value called 'NetworkAddress' will be used to set the MAC address of the adapter the next time it is enabled. Resetting the adapter can be accomplished in a script with the freely available command-line utility devcon from Microsoft, or from the adapter's context menu in the Network Connections control panel applet. There is also a tool to change the MAC address for all cards (even those that can't be changed through the adapter's Properties menu): SMAC MAC Address Changer. Note: to check your MAC address easily on a Windows XP box, go to Run, type CMD, then type "ipconfig /all" without quotation marks in the command prompt. The number under Physical Address is the MAC address. If multiple IPs are displayed, you should look under the label "Ethernet adapter x", where x is the name of your connection (which is Local Area Connection by default).
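For scripting the same change, here is a hedged Python sketch using the standard winreg module; the 0000 subkey index and the MAC value are placeholders that must be matched to the correct adapter, and administrative rights are required.

    import winreg

    # Placeholder values: adjust the adapter subkey (e.g. "0000") and the MAC to suit your system.
    ADAPTER_SUBKEY = r"SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}\0000"
    NEW_MAC = "000102030405"   # 12 hex digits, no separators

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, ADAPTER_SUBKEY, 0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "NetworkAddress", 0, winreg.REG_SZ, NEW_MAC)
    # The adapter must then be disabled and re-enabled (or the machine rebooted) for the change to apply.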

Router
The method to change the MAC address of a router varies with the router. Not all routers have the ability to change their MAC address. The feature is often referred to as "clone MAC address". This takes the MAC address of one of the machines on your network and replaces the router's existing MAC address with it. Some routers support the option to manually enter the MAC address.

How can I change my MAC address for my computer?



Frank

Best Answer - Chosen by Voters


Method 1: This depends on the type of Network Interface Card (NIC) you have. If you have a card that doesn't support Clone MAC address, then you have to go to the second method.
1. Go to Start -> Settings -> Control Panel and double click on Network and Dial-up Connections.
2. Right click on the NIC whose MAC address you want to change and click on Properties.
3. Under the General tab, click on the Configure button.

4. Click on the Advanced tab.
5. Under the Property section, you should see an item called Network Address or "Locally Administered Address"; click on it.
6. On the right side, under Value, type in the new MAC address you want to assign to your NIC. Usually this value is entered without the - between the MAC address numbers.
7. Go to the command prompt and type ipconfig /all or net config rdr to verify the changes. If the changes have not materialized, then use the second method.
8. If successful, reboot your system.

Method 2: This method requires some knowledge of the Windows Registry. If you are not familiar with the Windows Registry, just use the SMAC tool to change the MAC addresses, or consult a technical person before you attempt the following steps. Also, make sure you have a good backup of your registry.
a. Go to the command prompt and type ipconfig /all, and:
   I. Record the Description for the NIC you want to change.
   II. Record the Physical Address for the NIC you want to change. The Physical Address is the MAC address.
b. Go to the command prompt and type net config rdr.
c. Remember the long number (GUID) inside the { }. For example, in the net config rdr output, for MAC address 00C095ECB793 you should remember {1C9324AD-ADB7-4920-B02D-AB281838637A}. You can copy and paste it into Notepad; that's probably the easiest way.
d. Go to Start -> Run and type regedt32 to start the registry editor. Do not use Regedit.
e. Make a BACKUP of your registry in case you make a mistake in the following steps. To do this:
   I. Click on HKEY_LOCAL_MACHINE in the Local Machine sub-window.
   II. Click on the root key HKEY_LOCAL_MACHINE.
   III. Click on the drop-down menu Registry -> Save Subtree As and save the backup registry to a file. Keep this file in a safe place.
f. Go to HKEY_LOCAL_MACHINE\SYSTEM\CurrentContro and double click on it to expand the tree. The subkeys are 4-digit numbers, which represent particular network adapters. You should see it starts with 0000, then 0001, 0002, 0003 and so on.
g. Go through each subkey starting with 0000. Click on 0000 and check the DriverDesc keyword on the right to see if that's the NIC whose MAC address you want to change. The DriverDesc should match the Description you recorded in step (a.I). If you are not 100% sure about the DriverDesc, you can verify it by checking whether the NetCfgInstanceID keyword value matches the GUID from step (c). If there is no match, then move on to 0001, 0002, 0003, and so on, until you find the one you want. Usually 0000 contains the first NIC you installed on the computer. In this demonstration, 0000 is the NIC selected. (See figure 3.)
h. Once you have selected the subkey (i.e. 0000), check whether a keyword "NetworkAddress" exists in the right side of the window. (See figure 3.)

I. If "NetworkAddress" keyword does not exist, then create this new keyword: i. Click on the drop down menu Edit -> Add Value. ii. In the Add Value window, enter the following value then click OK. Value Name: = NetworkAddress Data Type: = REG_SZ iii. String Editor window will pop up at this time (see figure 5.) iv. Enter the new MAC address you want to modify. Then click OK. (There should not be any "-" in this address. Your entry should only consist of 12 digits as seen in the figure 5.) II. If "NetworkAddress" keyword exists, make sure it shows the keyword type is REG_SZ, and it should show as NetworkAddress:REG_SZ: . This keyword might not have a value at this time. i. Double click on the keyword NetworkAddress and the String Editor window will pop up. ii. Enter the new MAC address you want to modify. Then click OK. (There should not be any "-" in this address. Your entry should only consist of 12 digits.) j. There are 2 ways to make the new MAC address active. Method I does not require a system reboot: I. Goto Start->Setting->Control Panel, and double click on "Network Neighborhood". WARNING: Make sure you understand that you WILL lose the network connection after completing step "ii." below, and if you have a DHCP client, you will get a new IP address after completing step "iii." i. Select the Network Adaptor you just changed the MAC address. ii. Right click on the selected Network Adaptor and click "Disable." Verify the status colu


How to determine the APM version.



To determine the version of APM in Windows, click Start / Settings / Control Panel, double-click the System icon, and click the Device Manager tab. Within Device Manager, click the + next to System and double-click Advanced Power Management. If you are running Windows 95 and do not have Advanced Power Management, you must install Power Management support on your computer. If you are running Windows 98, you may need to double-click on Intel xxxxx Power Management Controller. Once in Advanced Power Management, click the Settings tab; an APM version should be listed. Answered by deep at 7:19 AM on October 27, 2007



How to enter the BIOS or CMOS setup


Issue
How to enter the BIOS or CMOS setup.

Solution
Note: This document doesn't help users who cannot enter the BIOS or CMOS setup because of a password. Because of the wide variety of computer and BIOS manufacturers over the evolution of computers, there are numerous ways to enter the BIOS or CMOS Setup. Below is a listing of the majority of these methods, as well as other recommendations for entering the BIOS setup.

New computers
Thankfully, computers that have been manufactured in the last few years will allow you to enter the CMOS by pressing one of the below five keys during boot. Usually it's one of the first two.

F1 F2 DEL ESC F10

A user will know when to press this key when they see a message similar to the below example as the computer is booting. Some older computers may also display a flashing block to indicate when to press the F1 or F2 keys.

Press <F2> to enter BIOS setup

Tip: If your computer is new and you are unsure of what key to press when the computer is booting, try pressing and holding one or more keys on the keyboard. This will cause a stuck key error, which may allow you to enter the BIOS setup. Once you've successfully entered the CMOS setup, you should see a screen similar to the below example.

Older computers
Unlike the computers of today, older computers (before 1995) had numerous different methods of entering the BIOS setup. Below is a listing of general key sequences that may have had to be pressed as the computer was booting.

CTRL + ALT + ESC CTRL + ALT + INS CTRL + ALT + ENTER CTRL + ALT + S PAGE UP KEY PAGE DOWN KEY

ACER BIOS
If your computer is unable to boot or you wish to restore the BIOS back to bootable settings and your computer uses an ACER BIOS, press and hold the F10 key as you turn on the computer. While continuing to hold the F10 key, you should hear two beeps indicating that the settings have been restored.

AMI BIOS
Older AMI BIOS could be restored back to bootable settings by pressing and holding the Insert key as the computer is booting.

BIOS or CMOS diskettes
Early 486, 386, and 286 computers may have required a floppy disk in order to enter the BIOS setup. These diskettes are known as ICU, BBU, and SCU disks. Because these diskettes are unique to your computer manufacturer, you must obtain the diskettes from them. See the computer manufacturers list for contact information.

Early IBM computers
Some models of early IBM computers required that the user press and hold both mouse buttons as the computer was booting in order to enter the BIOS setup.

Other suggestions
Finally, if none of the above suggestions help get you into your CMOS setup, you can cause a stuck key error, which will usually cause the CMOS setup prompt to appear and remain until you press a key to continue. To do this, press and hold any key on the keyboard and do not let go (you may get several beeps as you're doing this). Keep holding the key until the computer stops booting and you're prompted with an option to enter setup or to press another key to continue booting.

What is the difference between BIOS and CMOS?


What is the difference between BIOS and CMOS? Is there anyone who can explain the difference to me in detail?



Best Answer - Chosen by Voters


BIOS is the proper name of the program / code that controls all of the basic hardware and loads the operating system. CMOS is a term for the type of chip that the BIOS stores its settings on. Some people mistakenly call the BIOS this, but a CMOS chip can be used for many different purposes besides holding BIOS settings.

What's the difference between BIOS and CMOS?


Many people use the terms BIOS (basic input/output system) and CMOS (complementary metal oxide semiconductor) interchangeably, but in actuality, they are distinct, though related, components of a computer. The BIOS is the program that starts a computer up, and the CMOS is where the BIOS stores the date, time, and system configuration details it needs to start the computer.

The BIOS is a small program that controls the computer from the time it powers on until the time the operating system takes over. The BIOS is firmware, which means it cannot store variable data. CMOS is a type of memory technology, but most people use the term to refer to the chip that stores variable data for startup. A computer's BIOS will initialize and control components like the floppy and hard drive controllers and the computer's hardware clock, but the specific parameters for startup and initializing components are stored in the CMOS.

Computer POST and beep codes



POST ABCs
The computer power-on self-test (POST) tests the computer to make sure it meets the necessary system requirements and that all hardware is working properly before starting the remainder of the boot process. If the computer passes the POST, the computer will give a single beep (with some computer BIOS manufacturers it may beep twice) as it starts and will continue to start normally. However, if the computer fails the POST, the computer will either not beep at all or will generate a beep code that tells the user the source of the problem. If you're receiving an irregular POST or a beep code not mentioned below, follow the POST troubleshooting steps to determine the failing hardware component.


AMI BIOS beep codes
Below are the AMI BIOS beep codes that can occur. However, because of the wide variety of different computer manufacturers with this BIOS, the beep codes may vary.

Beep Code | Description
1 short | DRAM refresh failure
2 short | Parity circuit failure
3 short | Base 64K RAM failure
4 short | System timer failure
5 short | Process failure
6 short | Keyboard controller Gate A20 error
7 short | Virtual mode exception error
8 short | Display memory Read/Write test failure
9 short | ROM BIOS checksum failure
10 short | CMOS shutdown Read/Write error
11 short | Cache Memory error
1 long, 3 short | Conventional/Extended memory failure
1 long, 8 short | Display/Retrace test failed

AWARD BIOS beep codes
Below are Award BIOS beep codes that can occur. However, because of the wide variety of different computer manufacturers with this BIOS, the beep codes may vary.

Beep Code | Description
1 long, 2 short | Indicates a video error has occurred and the BIOS cannot initialize the video screen to display any additional information
Any other beep(s) | RAM problem

If the BIOS detects any other correctable hardware issue, it will display a message instead.

IBM BIOS beep codes
Below are general IBM BIOS beep codes that can occur. However, because of the wide variety of models shipping with this BIOS, the beep codes may vary.

Beep Code | Description
No Beeps | No Power, Loose Card, or Short
1 Short Beep | Normal POST, computer is ok
2 Short Beep | POST error, review screen for error code
Continuous Beep | No Power, Loose Card, or Short
Repeating Short Beep | No Power, Loose Card, or Short
One Long and one Short Beep | Motherboard issue
One Long and Two Short Beeps | Video (Mono/CGA Display Circuitry) issue
One Long and Three Short Beeps | Video (EGA) Display Circuitry
Three Long Beeps | Keyboard or Keyboard card error
One Beep, Blank or Incorrect Display | Video Display Circuitry

Macintosh startup tones


Tones | Error
Error Tone (two sets of different tones) | Problem with logic board or SCSI bus
Startup tone, drive spins, no video | Problem with video controller
Powers on, no tone | Logic board problem
High Tone, four higher tones | Problem with SIMM

Phoenix BIOS beep codes
Below are the beep codes for Phoenix BIOS Q3.07 or 4.x.

Beep Code | Description and what to check
1-1-1-3 | Verify Real Mode.
1-1-2-1 | Get CPU type.
1-1-2-3 | Initialize system hardware.
1-1-3-1 | Initialize chipset registers with initial POST values.
1-1-3-2 | Set in POST flag.
1-1-3-3 | Initialize CPU registers.
1-1-4-1 | Initialize cache to initial POST values.
1-1-4-3 | Initialize I/O.
1-2-1-1 | Initialize Power Management.
1-2-1-2 | Load alternate registers with initial POST values.
1-2-1-3 | Jump to UserPatch0.
1-2-2-1 | Initialize keyboard controller.
1-2-2-3 | BIOS ROM checksum.
1-2-3-1 | 8254 timer initialization.
1-2-3-3 | 8237 DMA controller initialization.
1-2-4-1 | Reset Programmable Interrupt Controller.
1-3-1-1 | Test DRAM refresh.
1-3-1-3 | Test 8742 Keyboard Controller.
1-3-2-1 | Set ES segment to register to 4 GB.
1-3-3-1 | Autosize DRAM.
1-3-3-3 | Clear 512K base RAM.
1-3-4-1 | Test 512 base address lines.
1-3-4-3 | Test 512K base memory.
1-4-1-3 | Test CPU bus-clock frequency.
1-4-2-4 | Reinitialize the chipset.
1-4-3-1 | Shadow system BIOS ROM.
1-4-3-2 | Reinitialize the cache.
1-4-3-3 | Autosize cache.
1-4-4-1 | Configure advanced chipset registers.
1-4-4-2 | Load alternate registers with CMOS values.
2-1-1-1 | Set Initial CPU speed.
2-1-1-3 | Initialize interrupt vectors.
2-1-2-1 | Initialize BIOS interrupts.
2-1-2-3 | Check ROM copyright notice.
2-1-2-4 | Initialize manager for PCI Options ROMs.
2-1-3-1 | Check video configuration against CMOS.
2-1-3-2 | Initialize PCI bus and devices.
2-1-3-3 | Initialize all video adapters in system.
2-1-4-1 | Shadow video BIOS ROM.
2-1-4-3 | Display copyright notice.
2-2-1-1 | Display CPU type and speed.
2-2-1-3 | Test keyboard.
2-2-2-1 | Set key click if enabled.
2-2-2-3 | Enable keyboard.
2-2-3-1 | Test for unexpected interrupts.
2-2-3-3 | Display prompt Press F2 to enter SETUP.
2-2-4-1 | Test RAM between 512 and 640k.
2-3-1-1 | Test expanded memory.
2-3-1-3 | Test extended memory address lines.
2-3-2-1 | Jump to UserPatch1.
2-3-2-3 | Configure advanced cache registers.
2-3-3-1 | Enable external and CPU caches.
2-3-3-3 | Display external cache size.
2-3-4-1 | Display shadow message.
2-3-4-3 | Display non-disposable segments.
2-4-1-1 | Display error messages.
2-4-1-3 | Check for configuration errors.
2-4-2-1 | Test real-time clock.
2-4-2-3 | Check for keyboard errors.
2-4-4-1 | Set up hardware interrupts vectors.
2-4-4-3 | Test coprocessor if present.
3-1-1-1 | Disable onboard I/O ports.
3-1-1-3 | Detect and install external RS232 ports.
3-1-2-1 | Detect and install external parallel ports.
3-1-2-3 | Re-initialize onboard I/O ports.
3-1-3-1 | Initialize BIOS Data Area.
3-1-3-3 | Initialize Extended BIOS Data Area.
3-1-4-1 | Initialize floppy controller.
3-2-1-1 | Initialize hard-disk controller.
3-2-1-2 | Initialize local-bus hard-disk controller.
3-2-1-3 | Jump to UserPatch2.
3-2-2-1 | Disable A20 address line.
3-2-2-3 | Clear huge ES segment register.
3-2-3-1 | Search for option ROMs.
3-2-3-3 | Shadow option ROMs.
3-2-4-1 | Set up Power Management.
3-2-4-3 | Enable hardware interrupts.
3-3-1-1 | Set time of day.
3-3-1-3 | Check key lock.
3-3-3-1 | Erase F2 prompt.
3-3-3-3 | Scan for F2 key stroke.
3-3-4-1 | Enter SETUP.
3-3-4-3 | Clear in-POST flag.
3-4-1-1 | Check for errors.
3-4-1-3 | POST done; prepare to boot operating system.
3-4-2-1 | One beep.
3-4-2-3 | Check password (optional).
3-4-3-1 | Clear global descriptor table.
3-4-4-1 | Clear parity checkers.
3-4-4-3 | Clear screen (optional).
3-4-4-4 | Check virus and backup reminders.
4-1-1-1 | Try to boot with INT 19.
4-2-1-1 | Interrupt handler error.
4-2-1-3 | Unknown interrupt error.
4-2-2-1 | Pending interrupt error.
4-2-2-3 | Initialize option ROM error.
4-2-3-1 | Shutdown error.
4-2-3-3 | Extended Block Move.
4-2-4-1 | Shutdown 10 error.
4-3-1-3 | Initialize the chipset.
4-3-1-4 | Initialize refresh counter.
4-3-2-1 | Check for Forced Flash.
4-3-2-2 | Check HW status of ROM.
4-3-2-3 | BIOS ROM is OK.
4-3-2-4 | Do a complete RAM test.
4-3-3-1 | Do OEM initialization.
4-3-3-2 | Initialize interrupt controller.
4-3-3-3 | Read in bootstrap code.
4-3-3-4 | Initialize all vectors.
4-3-4-1 | Boot the Flash program.
4-3-4-2 | Initialize the boot device.
4-3-4-3 | Boot code was read OK.

What is the difference between BIOS and CMOS?


Question
What is the difference between BIOS and CMOS?

Answer
Often the BIOS and CMOS can be confused because instructions may either indicate to enter the BIOS Setup or the CMOS Setup. Although the setup for BIOS and CMOS is the same, the BIOS and CMOS on the motherboard are not.

If you have already read the above BIOS and CMOS definition links, you should now know that the BIOS and CMOS are two different components on the motherboard. The BIOS on the motherboard contains the instructions for how the computer boots and is only modified or updated with BIOS updates; the CMOS is powered by a CMOS battery, contains your system settings, and is modified and changed by entering the CMOS Setup. Although the setup is often referred to as the BIOS and CMOS setup, we suggest you only refer to the setup as "CMOS Setup", as that is more appropriate. Computer Hope often refers to the setup as BIOS and CMOS Setup to help users who are looking for one instead of the other.


How do I determine what version of USB I'm using?


Question
How do I determine what version of USB I'm using?

Answer
Is the USB device or are the USB ports on the computer labeled as USB Hi-Speed? Many devices and computers using USB 2.0 will indicate that products are USB 2.0 ready or USB 2.0 Hi-Speed. Computers and devices labeled Hi-Speed are USB 2.0 compatible.

Did you buy your computer before 2001? USB 2.0 was introduced in 2001; if you purchased your computer before 2001, it is likely that you are using USB 1.1 or 1.0.

Microsoft Windows Device Manager lists USB as Hi-Speed. Compatible versions of Microsoft Windows will list USB 2.0 ports and USB devices as "Enhanced" in the Device Manager.

What versions of Windows support USB 2.0?

Product documentation
If, after following the above recommendations, you are still unable to determine what version of USB your computer has, refer to the USB device or motherboard documentation.

What versions of Windows support USB 2.0?


Question
What versions of Windows support USB 2.0?

Answer
Microsoft Windows ME, Windows 2000, Windows XP, Windows Vista, and all future versions of Windows support USB 2.0. Users running Microsoft Windows 2000 can find USB 2.0 support in Service Pack 4 and through the Windows update site. Microsoft Windows ME and Windows XP users can obtain USB 2.0 drivers and support by visiting the Windows update site. Note: If USB 2.0 support is not detected on the computer the download option for the drivers will not be available.

What is the difference between VRAM and DRAM?



Answer:
DRAM (Dynamic RAM) used on video cards is the same technology as the main system RAM in most computers. The 'dynamic' part refers to the fact that this type of memory must be refreshed many times per second or it will 'forget' the data it is storing. This means that DRAM has a duty cycle (a period during which the RAM is being refreshed and can't respond to external requests like reads/writes), unlike SRAM (Static RAM), which does not require refreshing and thus is available at all times. DRAM, however, requires fewer discrete components for each bit stored, so it physically takes less silicon and is cheaper to manufacture.

An additional limitation of DRAM is that it can do only one thing at a time - it can either be read from or written to. There are two data transfer steps occurring on your video card. The first is to transfer data from the CPU to video RAM. The second is to transfer the video RAM data to the RAMDAC, which produces the video signal you see on your screen. The maximum amount of data which you can pump in and out of your video memory in one second is your 'video bandwidth'. Thus, the read and write operations must share the available video bandwidth, which means that the DRAM has to service both read requests from the RAMDAC and write requests from the CPU. At high pixel addressabilities and colour depths, an enormous amount of extra data has to be moved to and from the video memory, and as a result, DRAM boards may run out of bandwidth. This means that you may not be able to refresh your monitor fast enough to avoid flicker.
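As a rough worked example of why bandwidth runs out, the Python sketch below estimates the refresh traffic alone for a given resolution, colour depth and refresh rate; the figures are illustrative, and real cards have additional overhead.

    def refresh_bandwidth_mb_per_s(width, height, bits_per_pixel, refresh_hz):
        """Bytes the RAMDAC must read per second just to repaint the screen, in MB/s."""
        bytes_per_frame = width * height * bits_per_pixel / 8
        return bytes_per_frame * refresh_hz / 1_000_000

    # 1024x768 at 24-bit colour and 75 Hz needs roughly 177 MB/s of read bandwidth,
    # before any CPU writes to video memory are counted.
    print(round(refresh_bandwidth_mb_per_s(1024, 768, 24, 75)))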

VRAM is a special type of DRAM which is dual-ported. It still has a duty cycle, but it can be written to and read from at the same time. In practice, this means that you get double the bandwidth out of 60 ns VRAM that you would out of 60 ns DRAM (if implemented correctly on the video card).
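To put some rough numbers on this, the sketch below is a back-of-the-envelope illustration (the 250 MB/s total bandwidth figure and the display settings are assumed for the example, not measured from any particular card). It estimates how much bandwidth the RAMDAC alone consumes just repainting the screen, and how much is left over for CPU writes on single-ported DRAM versus dual-ported VRAM.

    # Rough, illustrative estimate of video-memory bandwidth use
    # (all figures are assumed example values, not measurements).

    def ramdac_bandwidth(width, height, bits_per_pixel, refresh_hz):
        """Bytes per second the RAMDAC must read just to repaint the screen."""
        return width * height * (bits_per_pixel / 8) * refresh_hz

    # Example: 1024x768, 24-bit colour, 75 Hz refresh.
    screen_read = ramdac_bandwidth(1024, 768, 24, 75)   # about 177 MB/s

    # Assume a card whose memory can move roughly 250 MB/s in total
    # (a made-up round figure for illustration).
    total_bw = 250e6

    # Single-ported DRAM: RAMDAC reads and CPU writes share the same port.
    dram_left_for_cpu = total_bw - screen_read

    # Dual-ported VRAM: the serial port feeds the RAMDAC, so (ideally) the
    # full random-access bandwidth stays available for CPU writes.
    vram_left_for_cpu = total_bw

    print(f"RAMDAC needs {screen_read / 1e6:.0f} MB/s just for refresh")
    print(f"DRAM leaves ~{dram_left_for_cpu / 1e6:.0f} MB/s for the CPU")
    print(f"VRAM leaves ~{vram_left_for_cpu / 1e6:.0f} MB/s for the CPU")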

What is the difference between an integrated and a non-integrated system board?


Answer: A "system board" is another name for motherboard. Therefore integrated and nonintegrated system boards are two types of motherboards. An integrated system board has multiple components integrated into the board itself. These may include the CPU, video card, sound card, and various controller cards. A non-integrated system board uses installable components and expansion cards. For example, a non-integrated system board may allow you to upgrade the video card by removing the old one and installing a new one. Non-integrated motherboards typically have several PCI expansion slots as well. Most laptop use fully integrated system boards, since they provide a smaller form factor than nonintegrated boards. Desktop computers often use non-integrated motherboards, though they may contain some integrated parts. For example, most modern motherboards used in desktop computers have an integrated sound card and controller cards. Some may even have an integrated processor and video card as well. Since non-integrated system boards often have some integrated components, it is difficult to define the exact difference between the two types. In fact, it may be more accurate to refer to a nonintegrated system board as a partially integrated system board. Still, when it comes to technical specifications, it can be helpful to know what the difference is between integrated and nonintegrated motherboards. While there is no official definition of either type, here is what I have found: In most cases, the difference comes down to the video card. If a motherboard has an integrated graphics processor, it is generally considered to be an integrated motherboard. If the graphics processor resides on a removable card, then the motherboard is considered to be non-integrated. Therefore, when you see the terms "integrated" or "non-integrated" in technical specifications, you can at least tell what type of graphics processor the system uses.

2 different formats

DVD+R(W) and DVD-R(W) are two different, competing DVD formats. Since most DVD drives and DVD players now support both formats, they have co-existed and will continue to co-exist, even though the plus (+) format is supposed to be better.

DVD-R is an older format, so older DVD players (think 2004 or earlier) may only be able to play DVD-R format.

They both also have double layer discs, called DVD-R DL (or DVD-R9) and DVD+R DL (or DVD+R9), which have 8.5GB capacity compared to the single-layer 4.7GB.

Lots of things

DVD-R is the original recordable format, developed in 1997, and is the only recording type approved by the DVD Forum, the "official body" of DVD technology. DVD+R was introduced in 2002 by a coalition of corporations known as the DVD+RW Alliance. The difference is in how data is recorded: DVD+R uses the ADIP method of tracking and speed control rather than the LPP (land pre-pit) method used by DVD-R. ADIP tracks better at high speeds because it is less affected by interference and noise. DVD+R also uses a different way of marking errors on the disc, which makes it better able to handle multi-session linking and buffer under-run issues. Both of these mean you have a lower chance of turning a DVD+R disc into a "coaster." Most DVD players can handle both formats with ease. Personally, I have found that I get fewer errors and more reliable playback with DVD+R than DVD-R, although this may be biased, as I burn more DVD+R than DVD-R because of my earlier experience.

What is the difference between DVD-R and DVD+R?


What is the difference between DVD-R and DVD+R? I have DVD-R discs and burned a movie on the computer, but it does not play on my laptop, which is two years old (a Toshiba), although it does play on my friend's laptop (a Sony VAIO).


Paultech

Best Answer - Chosen by Voters


DVD-R (pronounced "DVD dash R") and DVD+R (pronounced "DVD plus R") are nearly identical formats. The discs look the same and are both supported by most DVD-ROM drives and DVD burners. The only difference between the formats is the way they determine the location of the laser beam on the disc. DVD-R discs use tiny marks along the grooves in the discs, called land prepits, to determine the laser position. DVD+R discs do not have land prepits, but instead measure the "wobble frequency" as the laser moves toward the outside of the disc.

The DVD-R format was developed by Pioneer and was released in the second half of 1997. DVD+R was developed by Sony and Philips and was introduced in 2002. Companies that support DVD-R include Pioneer, Toshiba, Hitachi, and Panasonic, while companies that support DVD+R include Sony, Philips, Hewlett-Packard, Ricoh, and Yamaha. However, most of these companies now develop hybrid DVD drives that support both DVD-R and DVD+R formats; these are known as DVD±R or DVD±RW drives.

When looking for media for your DVD drive, make sure it ends in "-R" if you have a DVD-R drive or "+R" if you have a DVD+R drive. If you have a DVD±R drive, you can use either format. DVD-R is still more popular than DVD+R, but since both are widely supported, it should not matter which format you choose.

What are some examples of computer peripheral devices?

Answer: A computer peripheral, or peripheral device, is an external object that provides input and output for the computer. Some common input devices include:

keyboard, mouse, joystick, pen tablet, MIDI keyboard, scanner, digital camera, video camera, microphone

Some common output devices include:


monitor, projector, TV screen, printer, plotter, speakers

There are also devices that function as both input and output devices, such as:

external hard drives, media card readers, digital camcorders, digital mixers, MIDI equipment

While these are some of the more common peripherals, there are many other kinds as well. Just remember that any external device that provides input to the computer or receives output from the computer is considered a peripheral.

Some examples of computer peripheral devices?



Sudhir Singh Profile | Q&A



Hi. A computer peripheral, or peripheral device, is an external object that provides input and output for the computer. Some common input devices include: keyboard, mouse, joystick, pen tablet, MIDI keyboard, scanner, digital camera, video camera, and microphone. Some common output devices include: monitor, projector, TV screen, printer, plotter, and speakers. There are also devices that function as both input and output devices, such as: external hard drives, media card readers, digital camcorders, digital mixers, and MIDI equipment.

Common Computer Peripherals


Input Peripherals

Computer systems are capable of handling thousands of calculations per second. However, in order for a computer to have something to process, the computer must receive instructions from an input device.

Some examples of input peripheral devices are keyboards, computer mice, touchscreens, and bar-code readers.

Output Peripherals

Once a computer has processed information, the information must be sent to an output device. Some examples of output devices are computer monitors, printers, plotters, and computer speakers.

Read more: Examples of Computer Peripherals | eHow.com http://www.ehow.com/list_6741538_examples-computerperipherals.html#ixzz1sZjdLuV2

What is the difference between memory and hard disk space?


Answer: Memory and disk space are perhaps the most widely confused terms in the computing world. To truly understand how your computer works, you must first understand what memory and disk space are.

The hard disk, sometimes called the "hard drive" (which is actually the mechanism that holds the hard disk), is a spindle of magnetic discs that can hold several gigabytes of data. Disk space therefore refers to how much space you have available on your hard disk for storing files. When you save a document or install a new program, it is stored on your hard disk. The more files you download, install, or save, the fuller the disk becomes.

Memory, on the other hand, is not the same as disk space. Memory refers to the random access memory (RAM) inside your computer: small modules that hold several memory chips side by side. Your computer uses memory (RAM) to hold actively running programs, including the operating system. For example, the operating system's interface and other processes are loaded into memory when the computer boots up. When you open a program like Microsoft Word, it is loaded into memory as well; when you quit the program, that memory is freed up for other programs. RAM can be accessed several hundred times faster than a hard drive, which is why active programs are loaded into RAM from the hard drive.

Because most data on the hard disk does not need to be loaded into system memory at the same time, computers typically have much more hard disk space than memory. For example, a computer may come with a 200 GB hard drive but only 1 GB of RAM. So if your computer tells you there is not enough space to install a program, you will need to delete files you don't need from your hard disk or buy an additional hard drive. If your computer says there is not enough memory to run a certain program, you will need to upgrade your memory by buying more RAM. Knowing the difference between these two types of hardware can save you precious time and money.
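As a practical illustration of the distinction, the short sketch below reports both figures on the machine it runs on. It uses Python's standard shutil module for disk space; for RAM it assumes the third-party psutil package is installed (total RAM is not exposed portably by the standard library).

    # Report free disk space (standard library) and installed RAM
    # (assumes the third-party "psutil" package: pip install psutil).
    import shutil
    import psutil

    GB = 1024 ** 3

    usage = shutil.disk_usage("/")        # use "C:\\" instead of "/" on Windows
    mem = psutil.virtual_memory()

    print(f"Disk: {usage.total / GB:.1f} GB total, {usage.free / GB:.1f} GB free")
    print(f"RAM:  {mem.total / GB:.1f} GB total, {mem.available / GB:.1f} GB available")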

What is the difference between memory and hard disk space?



Jigs

Best Answer - Chosen by Voters


Memory usually refers to RAM (Random Access Memory). It is basically a small storage space used by applications and is temporary in nature; it is cleared every time you reboot your computer. A typical PC's memory ranges from 128 MB to 1 GB. The hard disk, on the other hand, is a permanent storage device which you can use for storing all your files, music, photos, and so on. Hard disk capacity normally ranges from 40 GB to more than 100 GB.

What is the difference between an AGP and a PCI graphics card?


Answer: The biggest difference between AGP and PCI graphics cards is that AGP cards can access the system memory to help with complex operations such as texture mapping. PCI cards can only access the memory available on the actual card. AGP doesn't share bandwidth with other devices, whereas PCI cards do. AGP also makes pipelined requests, which means it can execute multiple instructions at one time. PCI cards are not pipelined, which means each instruction has to finish before the next one is run. So, with all these great advantages of AGP, you'd think it would be the clear winner in performance, right? Well, not quite. Tests of similar AGP and PCI graphics cards show they perform almost the same (typically measured in frames per second). The area where AGP really shines is in high-resolution tests, where the direct access to the system memory is most beneficial. If you're installing an AGP or PCI card in your computer, the AGP slot is usually the shortest and should be brown. The PCI slots are slightly longer and are colored white. The actual size of the cards can vary as much as a few inches, though the pins on the bottom of the card should match the correct slot.

What is the difference between AGP and PCI graphics cards?

Answer: AGP (Accelerated Graphics Port) cards serve the same purpose as PCI graphics cards. However, the AGP interface, which was created by our buddies at Intel, has become the more popular choice for PCs. Part of the reason for this is that AGP cards manage memory better than PCI cards. The AGP interface can actually use your computer's standard memory as well as the video memory to help boost video performance. So, if you have an AGP slot in your computer, I'd go with an AGP card. If you only have PCI slots, however, a PCI graphics card isn't going to be much different.

What is the difference between AGP and PCI graphics cards?



jBoogie

Best Answer - Chosen by Asker


AGP, or Accelerated Graphics Port, succeeded PCI, or Peripheral Component Interconnect, as the technology for connecting graphics cards to a computer's motherboard. PCI is still used to connect other devices such as Ethernet and sound cards. AGP slots provide a dedicated link between the card and the CPU, providing faster communication between them. There are other enhancements as well; see the source for more. But the bottom line is that AGP cards perform much better. AGP is now being phased out in favor of PCIe, or PCI Express, which is even faster and can be used as the bus for all the devices in a machine.

What do the terms Mk2 and Mk3 mean?


Answer: Mk is short for "mark." Therefore, Mk2 is pronounced "Mark Two" and Mk3 is pronounced "Mark Three." Mark 2, also written Mark II, refers to the second version of a product; Mark 3, also written Mark III, is the third version. Likewise, Mark 4, Mark 5, and so on can be used to identify additional versions. The "Mark" naming scheme is typically used for hardware products and is similar to the version numbers used by software programs. By adding "Mk2" or "Mk3" after a product name, manufacturers can keep the same name when a product is upgraded. This is useful for brand recognition and helps companies continue successful product lines through incremental upgrades. The "Mark" labels were traditionally used to identify different versions of military equipment, but they have become more common for consumer products as well. Examples of products that use "Mk" or "Mark" in their names include digital cameras (such as the professional line of Canon cameras) and audio equipment (such as MOTU digital audio hardware). So if you ever see a product name followed by "Mk2," you'll know it's the second version of the product. Then you can compare it to the first version and see all the amazing improvements that have been made.

IP addresses and Subnetting


IP addresses & subnetting - an overview
IP addresses
  What is an IP address?
  Classes of IP addresses
  Globally routable and private network IP addresses
Subnetting
  What is subnetting?
  How does subnetting work?
  Subnet masks
  Calculating a network number using a subnet mask
  Calculating a broadcast address using a subnet mask
  Prefix length notation (CIDR notation)
  Calculating a subnet mask
  Defining subnet numbers
  The fast track to the advantages of subnetting
  List of subnet masks

Test your knowledge

IP addresses & subnetting - an overview


The following gives an introduction to IP addresses and subnetting on local area networks. If you want to find out about the advantages of using private network IP addresses on your local area network, or what subnetting can do for you, the explanation is here. You can also find the recipe for calculating a subnet mask, a network address, and a broadcast address. However, the course also offers a fast track to getting the advantages of subnetting on local area networks without having to do all the calculations yourself. If this is what you are looking for, you may want to jump directly to the last chapter in this course: The fast track to the advantages of subnetting.
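If you would rather check your own calculations than work through them by hand, the short sketch below uses Python's standard ipaddress module to derive the subnet mask, network address, broadcast address, and usable host count for a given prefix. The 192.168.10.0/26 block is just an arbitrary example chosen for illustration.

    # Derive the basic subnet figures for an address block, using only the
    # Python standard library. The block below is an arbitrary example.
    import ipaddress

    net = ipaddress.ip_network("192.168.10.0/26")

    print("Subnet mask:      ", net.netmask)            # 255.255.255.192
    print("Network address:  ", net.network_address)    # 192.168.10.0
    print("Broadcast address:", net.broadcast_address)  # 192.168.10.63
    print("Usable hosts:     ", net.num_addresses - 2)  # 62

    # A plain Class C (/24) network gives 2**8 - 2 = 254 usable hosts.
    print("Hosts in a /24:   ",
          ipaddress.ip_network("192.168.0.0/24").num_addresses - 2)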

Test your knowledge


1. What is a network number?
   - The part of an IP address that all hosts on a network share
   - The part of an IP address which networks share
   - The part of an IP address which no hosts on a network share

2. What is a host number?
   - The part of an IP address which all hosts on a network share
   - The part of an IP address which networks share
   - The part of an IP address which no hosts on a network share

3. How many hosts can you set up on a Class C network (without subnetting)?
   - 254
   - 256
   - 16,384

4. A '/8' is also referred to as?
   - A class A network
   - A class B network
   - A class C network

5. What is a private network IP address?
   - The IP address of a secret server on the Internet
   - An IP address which is included in the routing tables on the Internet
   - An IP address which is NOT included in the routing tables on the Internet

6. You are setting up a LAN with 20 hosts. Which of the following private network IP address blocks does it make the most sense to choose your IP addresses from?
   - 10.0.0.0-10.255.255.255
   - 172.16.0.0-172.31.255.255
   - 192.168.0.0-192.168.255.255

7. What is subnetting?
   - The division of a physical network into two or more physical networks
   - The division of a logical network into two or more physical networks
   - The division of a physical network into one or more logical networks

8. How does subnetting work?
   - Bits from the network portion of the IP address are borrowed to designate the subnetwork
   - Bits from the host portion of the IP address are borrowed to designate the subnetwork
   - An additional cable is attached to the server's LAN port

9. What is a subnet mask?
   - A deciphering key used to determine which part of an IP address constitutes the Host and Network portions respectively
   - The network number of a subnet
   - The network number of a network before it is subnetted

10. What do the 0's in a subnet mask (written in its binary form) mean?
   - They indicate that this part in the corresponding IP address is the network portion
   - They indicate that this part in the corresponding IP address is the host portion
   - They indicate that the network has no subnets
