Introduction to Computer
In computing, an input device is any peripheral (piece of computer hardware equipment) used to
provide data and control signals to an information processing system such as a computer or other
information appliance. Examples of input devices include keyboards, mice, scanners, digital cameras
and joysticks.
Keyboard:
In computing, a keyboard is a typewriter-style device that uses an arrangement of buttons or keys
to act as mechanical levers or electronic switches. Following the decline of punch cards and paper tape,
interaction via teleprinter-style keyboards became the main input device for computers.
A keyboard typically has characters engraved or printed on the keys and each press of a key typically
corresponds to a single written symbol. However, to produce some symbols requires pressing and
holding several keys simultaneously or in sequence. While most keyboard keys produce letters,
numbers or signs (characters), other keys or simultaneous key presses can produce actions or execute
computer commands.
Despite the development of alternative input devices, such as the mouse, touch screen, pen devices,
character recognition and voice recognition, the keyboard remains the most commonly used device for
direct (human) input of alphanumeric data into computers.
Mouse: In computing, a mouse is a pointing device that detects two-dimensional motion relative to a
surface. This motion is typically translated into the motion of a pointer on a display, which allows for
fine control of a graphical user interface.
Physically, a mouse consists of an object held in one's hand, with one or more buttons. Mice often also
feature other elements, such as touch surfaces and "wheels", which enable additional control and
dimensional input.
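The translation of relative motion into pointer motion described above can be sketched in a few lines. This is an illustrative model only, not a real driver API; the screen resolution and motion values are invented:

```python
# A minimal sketch of how relative mouse motion reports (dx, dy) are
# accumulated into an absolute, clamped pointer position on a display.

SCREEN_W, SCREEN_H = 1920, 1080  # assumed display resolution

def move_pointer(pos, dx, dy):
    """Apply one relative motion report and clamp to the screen edges."""
    x = min(max(pos[0] + dx, 0), SCREEN_W - 1)
    y = min(max(pos[1] + dy, 0), SCREEN_H - 1)
    return (x, y)

pos = (960, 540)                                  # start at screen centre
for dx, dy in [(30, -10), (-5, 25), (2000, 0)]:   # three motion reports
    pos = move_pointer(pos, *((dx, dy)))
print(pos)  # the oversized final dx is clamped to the right edge
```

Clamping is why the pointer stops at the screen edge no matter how far the physical mouse travels.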
Scanner: A scanner is an input device that reads a physical document or object into the computer. In computing, an image
scanner—often abbreviated to just scanner, although the term is ambiguous out of context (barcode
scanner, CAT scanner, etc.)—is a device that optically scans images, printed text, handwriting, or an
object, and converts it to a digital image. Commonly used in offices are variations of the desktop
flatbed scanner where the document is placed on a glass window for scanning. Hand-held scanners,
where the device is moved by hand, have evolved from text scanning "wands" to 3D scanners used for
industrial design, reverse engineering, test and measurement, orthotics, gaming and other applications.
Mechanically driven scanners that move the document are typically used for large-format documents,
where a flatbed design would be impractical.
Light Pen: A light pen is a computer input device in the form of a light-sensitive wand used in
conjunction with a computer's CRT display. It allows the user to point to displayed objects or draw on
the screen in a similar way to a touch screen but with greater positional accuracy. It was long thought
that a light pen could work with any CRT-based display, but not with LCDs (though Toshiba and Hitachi
displayed a similar idea at the "Display 2006" show in Japan) and other display technologies. However,
in 2011 Fairlight Instruments released its Fairlight CMI-30A, which uses a 17" LCD monitor with light
pen control.
A light pen detects a change of brightness of nearby screen pixels when scanned by cathode ray tube
electron beam and communicates the timing of this event to the computer. Since a CRT scans the entire
screen one pixel at a time, the computer can keep track of the expected time of scanning various
locations on screen by the beam and infer the pen's position from the latest timestamp.
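The timing inference described above can be sketched briefly. The scan timings below are invented for illustration and do not correspond to any real CRT:

```python
# A rough sketch of recovering a light pen's position from the moment
# the CRT beam passes under it. Timing constants are made up.

SCREEN_W, SCREEN_H = 640, 480
PIXEL_TIME = 1                     # assumed time units per pixel
LINE_TIME = SCREEN_W * PIXEL_TIME  # time to scan one full line

def pen_position(t):
    """Given the time t since the frame started at which the pen saw
    the beam, recover the (x, y) pixel the beam was lighting."""
    y = t // LINE_TIME                  # full lines scanned so far
    x = (t % LINE_TIME) // PIXEL_TIME   # offset within the current line
    return (x, y)

# If the pen fires 1300 time units into the frame, the beam was at:
print(pen_position(1300))  # (20, 2): pixel 20 of line 2
```

Because the beam visits every pixel at a predictable time, a single timestamp is enough to identify the pixel, which is why the light pen needs no position sensor of its own.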
Touchscreen: A touchscreen is an electronic visual display that the user can control through simple or
multi-touch gestures by touching the screen with a special stylus/pen and/or one or more fingers. Some
touchscreens work with ordinary or specially coated gloves, while others respond only to a special
stylus/pen. The user can use the touchscreen to react to what is displayed and to control how it is
displayed (for example by zooming the text size).
The touchscreen enables the user to interact directly with what is displayed, rather than using a mouse,
touchpad, or any other intermediate device (other than a stylus, which is optional for most modern
touchscreens).
Touchscreens are common in devices such as game consoles, all-in-one computers, tablet computers,
and smartphones. They can also be attached to computers or, as terminals, to networks. They also play
a prominent role in the design of digital appliances such as personal digital assistants (PDAs), satellite
navigation devices, mobile phones, and video games and some books (Electronic books).
Joystick: A joystick is an input device consisting of a stick that pivots on a base and reports its angle
or direction to the device it is controlling. A joystick, also known as the control column, is the
principal control device in the cockpit of many civilian and military aircraft, either as a center stick or
side-stick. It often has supplementary switches to control various aspects of the aircraft's flight.
Joysticks are often used to control video games, and usually have one or more push-buttons whose state
can also be read by the computer. A popular variation of the joystick used on modern video game
consoles is the analog stick. Joysticks are also used for controlling machines such as cranes, trucks,
underwater unmanned vehicles, wheelchairs, surveillance cameras, and zero turning radius lawn
mowers. Miniature finger-operated joysticks have been adopted as input devices for smaller electronic
equipment such as mobile phones.
Output Devices: An output device is any piece of computer hardware used to communicate the results
of data processing carried out by an information processing system (such as a computer), converting the
electronically generated information into human-readable form.
VDU / CRT: A monitor or a display is an electronic visual display for computers. The monitor
comprises the display device, circuitry and an enclosure. The display device in modern monitors is
typically a thin film transistor liquid crystal display (TFT-LCD) thin panel, while older monitors used a
cathode ray tube (CRT) about as deep as the screen size. Originally, computer monitors were used for
data processing while television receivers were used for entertainment. From the 1980s onwards,
computers (and their monitors) have been used for both data processing and entertainment, while
televisions have implemented some computer functionality. The common aspect ratio of televisions,
and then computer monitors, has also changed from 4:3 to 16:9 (and 16:10).
Printer: It is a peripheral which makes a persistent human-readable representation of graphics or text
on paper or similar physical media. Individual printers are designed to support local and network users
at the same time. Some printers can print documents stored on memory cards or from digital cameras
and scanners.
Consumer and some commercial printers are designed for low-volume, short-turnaround print jobs;
requiring virtually no setup time to achieve a hard copy of a given document. However, printers are
generally slow devices (30 pages per minute is considered fast, and many inexpensive consumer
printers are far slower than that), and the cost per page is relatively high. This is
offset by the on-demand convenience and project management costs being more controllable compared
to an out-sourced solution. The printing press remains the machine of choice for high-volume,
professional publishing. However, as printers have improved in quality and performance, many jobs
which used to be done on printing presses are now done by print on demand or by users on local
printers; see desktop publishing. Local printers are also increasingly taking over the process of
photofinishing as digital photo printers become commonplace. The world's first computer printer was a
19th-century mechanically driven apparatus invented by Charles Babbage for his difference engine.
1. Line printers print an entire line of text at a time. They include the following –
a) Drum printers, where a horizontally mounted rotating drum carries the entire character
set of the printer repeated in each printable character position. The IBM 1132 printer is an
example of a drum printer.
b) Chain or train printers, where the character set is arranged multiple times around a
linked chain or a set of character slugs in a track traveling horizontally past the print line.
The IBM 1403 is perhaps the most popular, and came in both chain and train varieties.
The band printer is a later variant where the characters are embossed on a flexible steel
band. The LP27 from Digital Equipment Corporation is a band printer.
c) Bar printers, where the character set is attached to a solid bar that moves horizontally
along the print line, such as the IBM 1443.
2. Impact Printers rely on a forcible impact to transfer ink to the media. An impact printer uses a
print head that strikes an ink ribbon, pressing the ribbon against the paper.
They are as follows –
a) Dot Matrix Printer, is a type of computer printing which uses a print head that runs back
and forth, or in an up and down motion, on the page and prints by impact, striking an ink-
soaked cloth ribbon against the paper, much like the print mechanism on a typewriter.
However, unlike a typewriter or daisy wheel printer, letters are drawn out of a dot matrix,
and thus, varied fonts and arbitrary graphics can be produced. The common serial dot
matrix printers use a horizontally moving print head. The print head can be thought of as
featuring a single vertical column of seven or more pins approximately the height of a
character box. In reality, the pins are arranged in up to four vertically and/or horizontally
slightly displaced columns in order to increase the dot density and print speed through
interleaving without causing the pins to jam. Thereby, up to 48 pins can be used to form
the characters of a line while the print head moves horizontally.
b) Daisy Wheel Printer, is an impact printing technology invented in 1969 by David S. Lee
at Diablo Data Systems. It uses interchangeable pre-formed type elements, each with
typically 96 glyphs, to generate high-quality output comparable to premium typewriters
such as the IBM Selectric, but two to three times faster. Daisy wheel printing was used in
electronic typewriters, word processors and computers from 1972. The daisy wheel is
considered to be so named because of its resemblance to the daisy flower.
3. Laser printing is an electrostatic digital printing process that rapidly produces high quality text
and graphics by passing a laser beam over a charged drum to define a differentially charged
image. The drum then selectively collects charged toner and transfers the image to paper, which
is then heated to permanently fix the image. As with digital photocopiers and multifunction
printers (MFPs), laser printers employ a xerographic printing process, but differ from analog
photocopiers in that the image is produced by the direct scanning of the medium across the
printer's photoreceptor. Hence, it is a much faster process than analog photocopying. A
laser beam, typically from an aluminum gallium arsenide semiconductor laser, projects an image
of the page to be printed onto an electrically charged rotating drum coated with selenium or,
more common in modern printers, organic photoconductors. Photoconductivity allows charge to
leak away from the areas exposed to light. Powdered ink particles are then electrostatically
picked up by the drum's charged areas, which have not been exposed to the laser beam. The
drum then prints the image onto paper by direct contact and heat, which fuses the ink to the
paper.
4. Inkjet printing is a type of computer printing that creates a digital image by propelling droplets
of ink onto paper, plastic, or other substrates. Inkjet printers are the most commonly used type of
printer and range from small inexpensive consumer models to very large professional machines
that can cost tens of thousands of dollars, or more.
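The dot-matrix principle described in (2a) above, where each character is built from a small grid of pin strikes, can be illustrated with a short sketch. The 5x7 glyph below is invented for illustration, not taken from any real printer's character ROM:

```python
# An illustrative sketch of the dot-matrix idea: a character is a grid
# of dots, and the print head fires pins column by column as it moves.

GLYPH_A = [   # a made-up 5x7 bitmap for the letter "A"
    "01110",
    "10001",
    "10001",
    "11111",
    "10001",
    "10001",
    "10001",
]

def print_glyph(glyph):
    """Render the glyph the way pin strikes would build it on paper."""
    for row in glyph:
        print("".join("#" if dot == "1" else " " for dot in row))

print_glyph(GLYPH_A)
```

Because the glyph is just data rather than a fixed piece of type, varied fonts and arbitrary graphics can be produced, which is exactly the advantage over daisy wheel printing noted above.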
Storage Devices –
A data storage device is a device for recording (storing) information (data). Recording can be done
using virtually any form of energy, spanning from manual muscle power in handwriting, to acoustic
vibrations in phonographic recording, to electromagnetic energy modulating magnetic tape and optical
discs.
A storage device may hold information, process information, or both. A device that only holds
information is a recording medium. Devices that process information (data storage equipment) may
either access a separate portable (removable) recording medium or use a permanent component to store
and retrieve information.
Electronic data storage requires electrical power to store and retrieve that data. Most storage devices
that do not require vision and a brain to read data fall into this category. Electromagnetic data may be
stored in either an analog data or digital data format on a variety of media. This type of data is
considered to be electronically encoded data, whether or not it is electronically stored in a
semiconductor device, for it is certain that a semiconductor device was used to record it on its medium.
Most electronically processed data storage media (including some forms of computer data storage) are
considered permanent (non-volatile) storage, that is, the data will remain stored when power is
removed from the device. In contrast, most electronically stored information within most types of
semiconductor microcircuits (computer chips) is volatile memory, for it vanishes if power is removed.
With the exception of barcodes and OCR data, electronic data storage is easier to revise and may be
more cost effective than alternative methods due to smaller physical space requirements and the ease of
replacing (rewriting) data on the same medium. However, the durability of methods such as printed
data is still superior to that of most electronic storage media. The durability limitations may be
overcome with the ease of duplicating (backing-up) electronic data.
a. CD-ROM is a pre-pressed optical compact disc which contains data. The name is an
acronym which stands for "Compact Disc Read-Only Memory". Computers can read
CD-ROMs, but cannot write to them, as CD-ROMs are neither writable nor erasable. Until
the mid-2000s, CD-ROMs were popularly used to distribute software for computers and
video game consoles. Some CDs, called enhanced CDs, hold both computer data and
audio with the latter capable of being played on a CD player, while data (such as software
or digital video) is only usable on a computer (such as ISO 9660 format PC CD-ROMs).
b. DVD, also called "digital video disc" or "digital versatile disc", is a digital optical
disc storage format, invented and developed by Philips, Sony, Toshiba, and Panasonic in
1995. DVDs can be played in many types of players, including DVD players. DVDs offer
higher storage capacity than compact discs while having the same dimensions. Pre-
recorded DVDs are mass-produced using molding machines that physically stamp data
onto the DVD. Such discs are known as DVD-ROM, because data can only be read and
not written or erased. Blank recordable DVD discs (DVD-R and DVD+R) can be
recorded once using a DVD recorder and then function as a DVD-ROM. Rewritable
DVDs (DVD-RW, DVD+RW, and DVD-RAM) can be recorded and erased multiple
times.
Memory –
It refers to the physical devices used to store programs (sequences of instructions) or data (e.g. program
state information) on a temporary or permanent basis for use in a computer or other digital electronic
device. The term primary memory is used for the information in physical systems which function at
high-speed (i.e. RAM), as a distinction from secondary memory, which consists of physical devices for
program and data storage which are slow to access but offer higher memory capacity. Primary memory
stored on secondary memory is called "virtual memory". An archaic synonym for memory is store.
The term "memory", meaning primary memory is often associated with addressable semiconductor
memory, i.e. integrated circuits consisting of silicon-based transistors, used for example as primary
memory but also other purposes in computers and other digital electronic devices. There are two main
types of semiconductor memory: volatile and non-volatile. Examples of non-volatile memory are flash
memory (sometimes used as secondary, sometimes primary computer memory) and
ROM/PROM/EPROM/EEPROM memory (used for firmware such as boot programs). Examples of
volatile memory are primary memory (typically dynamic RAM, DRAM), and fast CPU cache memory
(typically static RAM, SRAM, which is fast but energy-consuming and offers lower memory capacity
per area unit than DRAM).
[Diagram: memory hierarchy. Memory divides into Primary Memory and Secondary Memory; EPROM and EEPROM are shown as examples.]
Software –
Computer software, or simply software, also known as computer programs, is the non-tangible
component of computers. Computer software contrasts with computer hardware, which is the physical
component of computers. Computer hardware and software require each other and neither can be
realistically used without the other.
Computer software includes all computer programs regardless of their architecture; for example,
executable files, libraries and scripts are computer software. These all share a common property:
software consists of clearly defined instructions that, upon execution, instruct hardware to perform the
tasks for which it is designed. Software is stored in computer memory and cannot be touched, just as a
3D model shown in an illustration cannot be touched.
At the lowest level, executable code consists of machine language instructions specific to an individual
processor – typically a central processing unit (CPU). A machine language consists of groups of binary
values signifying processor instructions that change the state of the computer from its preceding state.
For example, an instruction may change the value stored in a particular storage location inside the
computer – an effect that is not directly observable to the user. An instruction may also (indirectly)
cause something to appear on a display of the computer system – a state change which should be
visible to the user. The processor carries out the instructions in the order they are provided, unless it is
instructed to "jump" to a different instruction, or interrupted.
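The fetch-execute behaviour just described, sequential execution unless a jump intervenes, can be sketched with a toy machine. The three-operation instruction set below is invented purely for illustration:

```python
# A toy model of the execution loop: instructions run in order unless a
# jump changes the program counter (pc). The instruction set is made up.

def run(program):
    pc, acc = 0, 0               # program counter and one register
    while pc < len(program):
        op, arg = program[pc]
        if op == "ADD":          # changes internal state (not directly visible)
            acc += arg
            pc += 1
        elif op == "JUMP":       # transfers control to another instruction
            pc = arg
        elif op == "PRINT":      # a state change visible to the user
            print(acc)
            pc += 1
    return acc

# ADD 5, ADD 5, PRINT the accumulator, then jump past the end to halt.
run([("ADD", 5), ("ADD", 5), ("PRINT", None), ("JUMP", 4)])
```

Note how the ADD instructions change state invisibly while PRINT makes a change the user can see, mirroring the two kinds of effects described above.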
Software is usually written in high-level programming languages that are easier and more efficient for
humans to use (closer to natural language) than machine language.[2] High-level languages are
compiled or interpreted into machine language object code. Software may also be written in a low-level
assembly language, essentially, a vaguely mnemonic representation of a machine language using a
natural language alphabet. Assembly language is converted into object code via an assembler.
It is of following types –
1. System Software - A generic term referring to the computer programs used to start and run
computer systems and networks. It includes –
a. Operating systems – An operating system (OS) is a collection of software that
manages computer hardware resources and provides common services for
computer programs. The operating system is an essential component of the system
software in a computer system. Application programs usually require an operating
system to function.
b. Device drivers – A device driver (commonly referred to as simply a driver) is a
computer program that operates or controls a particular type of device that is
attached to a computer. A driver provides a software interface to hardware
devices, enabling operating systems and other computer programs to access
hardware functions without needing to know precise details of the hardware being
used.
c. Middleware – It is computer software that provides services to software
applications beyond those available from the operating system. It can be described
as "software glue". Middleware makes it easier for software developers to implement communication
and input/output, so they can focus on the specific purpose of their application.
d. Utility Software – It is system software designed to help analyze, configure,
optimize or maintain a computer. Utility software usually focuses on how the
computer infrastructure (including the computer hardware, operating system,
application software and data storage) operates. Due to this focus, utilities are
often rather technical and targeted at people with an advanced level of computer
knowledge - in contrast to application software, which allows users to do things
like creating text documents, playing video games, listening to music or viewing
websites.
e. Shells and windowing systems – In computing, a shell is a user interface for
access to an operating system's services. Generally, operating system shells use
either a command-line interface (CLI) or graphical user interface (GUI). Mac OS
and Windows are widely used operating systems with GUIs. Similarly,
Windowing system (or window system) is a type of graphical user interface
(GUI) which implements the WIMP (windows, icons, menus, pointer) paradigm
for a user interface. Each currently running application is assigned a usually
resizable and usually rectangular shaped surface of the display to present its
graphical user interface to the user.
2. Application Software – It is the general designation of computer programs for performing user
tasks. Application software may have a general purpose (word processing, web browsers) or
have a specific purpose (accounting, truck scheduling). Since the development and near-
universal adoption of the web, an important distinction has emerged between web
applications — written with HTML, JavaScript and other web-native technologies and typically
requiring one to be online and running a web browser, and the more traditional native
applications written in whatever languages are available for one's particular type of computer.
There has been contentious debate in the computing community regarding web applications
replacing native applications for many purposes, especially on mobile devices such as smart
phones and tablets. Web apps have indeed greatly increased in popularity for some uses, but the
advantages of native applications make them unlikely to disappear soon, if ever. Furthermore, the two
can be complementary, and even integrated. Application software can also be seen as being
either horizontal or vertical. Horizontal applications are more popular and widespread, because
they are general purpose, for example word processors or databases. Vertical applications are
niche products, designed for a particular type of industry or business, or department within an
organization. Integrated suites of software will try to handle every specific aspect possible of,
for example, manufacturing or banking systems, or accounting, or customer service. There are
many types of application software:
An application suite consists of multiple applications bundled together. They usually have
related functions, features and user interfaces, and may be able to interact with each other, e.g.
open each other's files. Business applications often come in suites, e.g. Microsoft Office,
LibreOffice and iWork, which bundle together a word processor, a spreadsheet, etc.; but suites
exist for other purposes, e.g. graphics or music.
Enterprise software addresses the needs of an entire organization's processes and data flow,
across most or all departments, often in a large distributed environment. (Examples include
financial systems, customer relationship management (CRM) systems and supply chain
management software). Departmental Software is a sub-type of enterprise software with a focus
on smaller organizations and/or groups within a large organization. (Examples include travel
expense management and IT Helpdesk.)
Enterprise infrastructure software provides common capabilities needed to support enterprise
software systems. (Examples include databases, email servers, and systems for managing
networks and security.)
Information worker software lets users create and manage information, often for individual
projects within a department, in contrast to enterprise management. Examples include time
management, resource management, documentation, analytical, and collaborative tools. Word
processors, spreadsheets, email and blog clients, personal information system, and individual
media editors may aid in multiple information worker tasks.
Content access software is used primarily to access content without editing, but may include
software that allows for content editing. Such software addresses the needs of individuals and
groups to consume digital entertainment and published digital content. (Examples include media
players, web browsers, and help browsers.)
Educational software is related to content access software, but has the content and/or features
adapted for use by educators or students. For example, it may deliver evaluations (tests), track
progress through material, or include collaborative capabilities.
Simulation software simulates physical or abstract systems for either research, training or
entertainment purposes.
Media development software generates print and electronic media for others to consume, most
often in a commercial or educational setting. This includes graphic-art software, desktop
publishing software, multimedia development software, HTML editors, digital-animation
editors, digital audio and video composition, and many others.
Product engineering software is used in developing hardware and software products. This
includes computer-aided design (CAD), computer-aided engineering (CAE), computer language
editing and compiling tools, integrated development environments, and application programmer
interfaces.
Compilers –
A compiler is a computer program (or set of programs) that transforms source code written in a
programming language (the source language) into another computer language (the target language,
often having a binary form known as object code). The most common reason for wanting to transform
source code is to create an executable program.
The name "compiler" is primarily used for programs that translate source code from a high-level
programming language to a lower level language (e.g., assembly language or machine code). If the
compiled program can run on a computer whose CPU or operating system is different from the one on
which the compiler runs, the compiler is known as a cross-compiler. A program that translates from a
low level language to a higher level one is a decompiler. A program that translates between high-level
languages is usually called a language translator, source to source translator, or language converter. A
language rewriter is usually a program that translates the form of expressions without a change of
language. A compiler is likely to perform many or all of the following operations: lexical analysis,
preprocessing, parsing, semantic analysis (Syntax-directed translation), code generation, and code
optimization.
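The first of the phases listed above, lexical analysis, can be sketched in a few lines: the raw source string is split into categorized tokens before any parsing happens. The token set here is a minimal invented one, not that of any real compiler:

```python
# A tiny lexer: the first phase of compilation, turning source text
# into a stream of (kind, text) tokens. Token categories are made up.

import re

TOKEN_SPEC = [
    ("NUMBER", r"\d+"),          # integer literals
    ("IDENT",  r"[A-Za-z_]\w*"), # identifiers
    ("OP",     r"[+\-*/=]"),     # single-character operators
    ("SKIP",   r"\s+"),          # whitespace, discarded below
]

def tokenize(source):
    """Turn source text into (kind, text) tokens, dropping whitespace."""
    pattern = "|".join(f"(?P<{name}>{rx})" for name, rx in TOKEN_SPEC)
    return [(m.lastgroup, m.group())
            for m in re.finditer(pattern, source)
            if m.lastgroup != "SKIP"]

print(tokenize("x = 2 + 40"))
# [('IDENT', 'x'), ('OP', '='), ('NUMBER', '2'), ('OP', '+'), ('NUMBER', '40')]
```

The later phases (parsing, semantic analysis, code generation) then consume this token stream rather than raw characters, which is what makes them tractable.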
Interpreters –
In computer science, an interpreter is a computer program that directly executes, i.e. performs,
instructions written in a programming or scripting language, without previously batch-compiling them
into machine language. An interpreter generally uses one of the following strategies for program
execution:
1. parse the source code and perform its behavior directly
2. translate source code into some efficient intermediate representation and immediately execute
this
3. explicitly execute stored precompiled code made by a compiler which is part of the interpreter
system
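Strategy 1 can be illustrated with a toy interpreter for a made-up one-assignment-per-line language; Python's built-in eval stands in for the "perform its behavior directly" step:

```python
# A minimal direct interpreter: each source line is parsed and its
# effect is performed immediately, with no compilation step. The
# one-assignment-per-line language is invented for illustration.

def interpret(script):
    env = {}
    for line in script.splitlines():
        name, _, expr = line.partition("=")
        # Evaluate the right-hand side against the current variables;
        # this is the "perform directly" step of strategy 1.
        env[name.strip()] = eval(expr, {}, env)
    return env

print(interpret("a = 2\nb = a * 21"))  # {'a': 2, 'b': 42}
```

Because each line takes effect as soon as it is read, an error on line 10 is only discovered when line 10 runs, a characteristic difference from the compile-then-execute model of the previous section.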
Assemblers –
An assembler is a program which creates object code by translating combinations of mnemonics and
syntax for operations and addressing modes into their numerical equivalents. This representation
typically includes an operation code ("opcode") as well as other control bits. The assembler also
calculates constant expressions and resolves symbolic names for memory locations and other entities.
The use of symbolic references is a key feature of assemblers, saving tedious calculations and manual
address updates after program modifications. Most assemblers also include macro facilities for
performing textual substitution—e.g., to generate common short sequences of instructions as inline,
instead of called subroutines. Some assemblers may also be able to perform some simple types of
instruction set-specific optimizations. One concrete example of this may be the ubiquitous x86
assemblers from various vendors. Most of them are able to perform jump-instruction replacements
(long jumps replaced by short or relative jumps) in any number of passes, on request. Others may even
do simple rearrangement or insertion of instructions, such as some assemblers for RISC architectures
that can help optimize a sensible instruction scheduling to exploit the CPU pipeline as efficiently as
possible. Like early programming languages such as FORTRAN, Algol, COBOL and Lisp, assemblers
have been available since the 1950s and the first generations of text based computer interfaces.
However, assemblers came first as they are far simpler to write than compilers for high-level
languages. This is because each mnemonic along with the addressing modes and operands of an
instruction translates rather directly into the numeric representations of that particular instruction,
without much context or analysis. There have also been several classes of translators and
semi-automatic code generators with properties similar to both assembly and high-level languages, with
speedcode as perhaps one of the better known examples.
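The symbolic-name resolution described above is typically done in two passes, which can be sketched as follows. The three-instruction set and its numeric encoding are invented for illustration, not taken from any real architecture:

```python
# A toy two-pass assembler: pass 1 records the address of each label,
# pass 2 emits numeric opcodes with symbolic operands resolved.

OPCODES = {"LOAD": 0x01, "ADD": 0x02, "JMP": 0x03}  # invented encoding

def assemble(lines):
    labels, code = {}, []
    # Pass 1: assign an address to every label.
    addr = 0
    for line in lines:
        if line.endswith(":"):
            labels[line[:-1]] = addr
        else:
            addr += 2            # each instruction: one opcode + one operand
    # Pass 2: translate mnemonics, resolving symbolic operands.
    for line in lines:
        if line.endswith(":"):
            continue
        op, arg = line.split()
        value = labels.get(arg)
        code += [OPCODES[op], value if value is not None else int(arg)]
    return code

print(assemble(["loop:", "LOAD 7", "ADD 1", "JMP loop"]))  # [1, 7, 2, 1, 3, 0]
```

The two passes are what free the programmer from the "tedious calculations and manual address updates" mentioned above: inserting an instruction simply shifts the label addresses computed in pass 1.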
Computer Languages –
A programming language is an artificial language designed to communicate instructions to a
machine, particularly a computer. Programming languages can be used to create programs that control
the behavior of a machine and/or to express algorithms. It can be defined as “A computer programming
language is a language used to write computer programs, which involve a computer performing some
kind of computation or algorithm and possibly control external devices such as printers, disk drives,
robots and so on.”
Languages are of following types –
1. High Level Language – In computer science, a high-level programming language is a
programming language with strong abstraction from the details of the computer. In comparison
to low-level programming languages, it may use natural language elements, be easier to use, or
may automate (or even hide entirely) significant areas of computing systems (e.g. memory
management), making the process of developing a program simpler and more understandable
relative to a lower-level language. The amount of abstraction provided defines how "high-level"
a programming language is. Ex: FORTRAN.
2. Low Level Language – In computer science, a low-level programming language is a
programming language that provides little or no abstraction from a computer's instruction set
architecture. Generally this refers to either machine code or assembly language. The word "low"
refers to the small or nonexistent amount of abstraction between the language and machine
language; because of this, low-level languages are sometimes described as being "close to the
hardware". Low-level languages can be converted to machine code without using a compiler or
interpreter, and the resulting code runs directly on the processor. A program written in a low-
level language can be made to run very quickly, and with a very small memory footprint; an
equivalent program in a high-level language will be more heavyweight. Low-level languages are
simple, but are considered difficult to use, due to the numerous technical details which must be
remembered.
Computer Generations
The UNIVAC and ENIAC computers are examples of first-generation computing devices. The
UNIVAC was the first commercially produced computer; its first unit was delivered to the U.S.
Census Bureau in 1951.
Transistors replaced vacuum tubes and ushered in the second generation of computers. The transistor
was invented in 1947 but did not see widespread use in computers until the late 1950s. The transistor
was far superior to the vacuum tube, allowing computers to become smaller, faster, cheaper, more
energy-efficient and more reliable than their first-generation predecessors. Though the transistor still
generated a great deal of heat that subjected the computer to damage, it was a vast improvement over
the vacuum tube. Second-generation computers still relied on punched cards for input and printouts for
output. Second-generation computers moved from cryptic binary machine language to symbolic, or
assembly, languages, which allowed programmers to specify instructions in words. High-level
programming languages were also being developed at this time, such as early versions of COBOL and
FORTRAN. These were also the first computers that stored their instructions in their memory, which
moved from a magnetic drum to magnetic core technology. The first computers of this generation were
developed for the atomic energy industry.
The development of the integrated circuit was the hallmark of the third generation of computers.
Transistors were miniaturized and placed on silicon chips (silicon is a semiconductor), which drastically
increased the speed and efficiency of computers. Instead of punched cards and printouts, users
interacted with third generation computers through keyboards and monitors and interfaced with an
operating system, which allowed the device to run many different applications at one time with a
central program that monitored the memory. Computers for the first time became accessible to a mass
audience because they were smaller and cheaper than their predecessors.
The microprocessor brought the fourth generation of computers, as thousands of integrated circuits
were built onto a single silicon chip. What in the first generation filled an entire room could now fit in
the palm of the hand. The Intel 4004 chip, developed in 1971, located all the components of the
computer—from the central processing unit and memory to input/output controls—on a single chip. In
1981 IBM introduced its first computer for the home user, and in 1984 Apple introduced the
Macintosh. Microprocessors also moved out of the realm of desktop computers and into many areas of
life as more and more everyday products began to use microprocessors. As these small computers
became more powerful, they could be linked together to form networks, which eventually led to the
development of the Internet. Fourth generation computers also saw the development of GUIs, the
mouse and handheld devices.
Fifth Generation (Present and Beyond) Artificial Intelligence
Fifth generation computing devices, based on artificial intelligence, are still in development, though
there are some applications, such as voice recognition, that are being used today. The use of parallel
processing and superconductors is helping to make artificial intelligence a reality. Quantum
computation and molecular and nanotechnology will radically change the face of computers in years to
come. The goal of fifth-generation computing is to develop devices that respond to natural language
input and are capable of learning and self-organization.
Number System –
A numeral system (or system of numeration) is a writing system for expressing numbers, that is, a
mathematical notation for representing numbers of a given set, using digits or other symbols in a
consistent manner. It can be seen as the context that allows the symbols "11" to be interpreted as the
binary symbol for three, the decimal symbol for eleven, or a symbol for other numbers in different
bases.
Conversion from Decimal to Binary – To convert a base-10 integer numeral to its base-2
(binary) equivalent, the number is divided by two, and the remainder is the least-significant bit.
The (integer) quotient is again divided by two; its remainder is the next least significant bit. This
process repeats until the quotient becomes zero. The reverse conversion, from binary to decimal,
multiplies each bit by its corresponding power of two and sums the results, e.g. –
(10010101101)₂ = 1×2^10 + 0×2^9 + 0×2^8 + 1×2^7 + 0×2^6 + 1×2^5 + 0×2^4 + 1×2^3 + 1×2^2 + 0×2^1 + 1×2^0 = (1197)₁₀
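The repeated-division procedure and the sum-of-powers check above can be sketched in Python (the function names are illustrative, not part of the original text):

```python
def decimal_to_binary(n):
    """Repeatedly divide by 2; the remainders, read in reverse, are the bits."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))   # remainder = next least-significant bit
        n //= 2                   # integer quotient feeds the next step
    return "".join(reversed(bits))

def binary_to_decimal(bits):
    """Multiply each bit by its power of two and sum the results."""
    return sum(int(b) * 2 ** i for i, b in enumerate(reversed(bits)))

print(decimal_to_binary(1197))            # → 10010101101
print(binary_to_decimal("10010101101"))   # → 1197
```

Running both directions on the same number, as above, is a quick way to confirm the conversion is correct.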
Binary to Hexadecimal – To convert a binary number into its hexadecimal equivalent, divide it
into groups of four bits, starting from the right. If the number of bits isn't a multiple of four,
simply insert extra 0 bits at the left. This addition of zeros at the left is called padding.
(C0E7)₁₆ = (12 × 16^3) + (0 × 16^2) + (14 × 16^1) + (7 × 16^0) = (12 × 4096) + (0 × 256) + (14 × 16) +
(7 × 1) = (49,383)₁₀
The decimal equivalent (49,383)₁₀ can then be converted to binary by repeatedly dividing it by 2.
The conversion can also be done directly by writing each hexadecimal digit's four-bit binary
equivalent: C = 1100, 0 = 0000, E = 1110, 7 = 0111, so (C0E7)₁₆ = (1100000011100111)₂.