
BU : Information Technology For Business (1st Semester)

UNIT 1. COMPUTER HARDWARE and Software

Basics of computer hardware

A computer is a programmable machine (or, more precisely, a programmable sequential state machine). There are two basic kinds of computers: analog and digital. Analog computers are analog devices; that is, they have continuous states rather than discrete numbered states. An analog computer can represent fractional or irrational values exactly, with no round-off. Analog computers are almost never used outside of experimental settings. A digital computer is a programmable clocked sequential state machine. A digital computer uses discrete states. A binary digital computer uses two discrete states, such as positive/negative, high/low, or on/off, to represent the binary digits zero and one.

WHAT ARE COMPUTERS USED FOR?

Computers are used for a wide variety of purposes. Data processing is commercial and financial work. This includes such things as billing, shipping and receiving, inventory control, and similar business-related functions, as well as the electronic office. Scientific processing is using a computer to support science. This can be as simple as gathering and analyzing raw data and as complex as modelling natural phenomena (weather and climate models, thermodynamics, nuclear engineering, etc.). Multimedia includes content creation (composing music, performing music, recording music, editing film and video, special effects, animation, illustration, laying out print materials, etc.) and multimedia playback (games, DVDs, instructional materials, etc.).

PARTS OF A COMPUTER

The classic crude oversimplification of a computer is that it contains three elements: the processor unit, memory, and I/O (input/output). The borders between those three terms are highly ambiguous, non-contiguous, and erratically shifting.

A slightly less crude oversimplification divides a computer into five elements: arithmetic and logic subsystem, control subsystem, main storage, input subsystem, and output subsystem.

The subsections that follow cover the processor (including arithmetic and logic, and control), main storage, external storage, and an input/output overview (input and output).

PROCESSOR

The processor is the part of the computer that actually does the computations. This is sometimes called an MPU (for main processor unit) or CPU (for central processing unit or central processor unit). A processor typically contains an arithmetic/logic unit (ALU), a control unit (including processor flags, a flag register, or status register), internal buses, and sometimes special function units (the most common special function unit being a floating point unit for floating point arithmetic). Some computers have more than one processor. This is called multi-processing. The major kinds of digital processors are CISC, RISC, DSP, and hybrid.

CISC

CISC stands for Complex Instruction Set Computer. Mainframe computers and minicomputers were CISC processors, with manufacturers competing to offer the most useful instruction sets. Many of the first two generations of microprocessors were also CISC.

In many computer applications, programs written in assembly language exhibit the shortest execution times. Assembly language programmers often know the computer architecture more intimately, and can write more efficient programs than compilers can generate from high level language (HLL) code. The disadvantage of this method of increasing program performance is the diverging cost of computer hardware and software. On one hand, it is now possible to construct an entire computer on a single chip of semiconductor material, its cost being very small compared to the cost of a programmer's time. On the other hand, assembly language programming is perhaps the most time-consuming method of writing software.

One way to decrease software costs is to provide assembly language instructions that perform complex tasks similar to those existing in HLLs. These tasks, such as the character select instruction, can be executed in one powerful assembly language instruction. A result of this philosophy is that computer instruction sets become relatively large, with many complex, special purpose, and often slow instructions. Another way to decrease software costs is to program in an HLL, and then let a compiler translate the program into assembly language. This method does not always produce the most efficient code. It has been found that it is extremely difficult to write an efficient optimizing compiler for a computer that has a very large instruction set. How can the HLL program execute more quickly? One approach is to narrow the semantic distance between the HLL concepts and the underlying architectural concepts. This closing of the semantic gap supports lower software costs, since the computer more closely matches the HLL, and is therefore easier to program in an HLL.

DSP

DSP stands for Digital Signal Processing. DSP is used primarily in dedicated devices, such as MODEMs, digital cameras, graphics cards, and other specialty devices.

HYBRID

Hybrid processors combine elements of two or three of the major classes of processors.

RISC

RISC stands for Reduced Instruction Set Computer. RISC came about as a result of academic research that showed that a small, well-designed instruction set running compiled programs at high speed could perform more computing work than a CISC running the same programs (although very expensive hand-optimized assembly language favored CISC).
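To make the CISC/RISC contrast concrete, the following sketch (with hypothetical, simplified instruction names, not any real instruction set) shows how a single high-level statement might map onto one complex CISC instruction versus a short sequence of simple RISC-style load/store instructions; the RISC version needs more instructions, but each one is simple enough to execute quickly.

# A minimal sketch, not a real compiler: it emits illustrative instruction
# strings for the statement  x = y + z  under two hypothetical instruction sets.

def compile_cisc(dest, a, b):
    # one memory-to-memory instruction does everything
    return [f"ADDM {dest}, {a}, {b}"]

def compile_risc(dest, a, b):
    # load/store approach: only LOAD and STORE touch memory
    return [
        f"LOAD  R1, {a}",       # fetch first operand into a register
        f"LOAD  R2, {b}",       # fetch second operand into a register
        "ADD   R3, R1, R2",     # operate only on registers
        f"STORE R3, {dest}",    # write the result back to memory
    ]

print(compile_cisc("x", "y", "z"))   # ['ADDM x, y, z']
print(compile_risc("x", "y", "z"))   # four simple instructions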

Some designers began to question whether computers with complex instruction sets are as fast as they could be, given the capabilities of the underlying technology. A few designers hypothesized that increased performance should be possible through a streamlined design and instruction set simplicity. Thus, research efforts began in order to investigate how processing performance could be increased through simplified architectures. This is the root of the reduced instruction set computer (RISC) design philosophy.

Seymour Cray has been credited with some of the very early RISC concepts. In an effort to design a very high speed vector processor (the CDC 6600), a simple instruction set with pipelined execution was chosen. The CDC 6600 computer was register based, and all operations used data from registers local to the arithmetic units. Cray realized that all operations must be simplified for maximal performance: one complication or bottleneck in processing can cause all other operations to have degraded performance.

Starting in the mid 1970s, the IBM 801 research team investigated the effect of a small instruction set and optimizing compiler design on computer performance. They performed dynamic studies of the frequency of use of different instructions in actual application programs. In these studies, they found that approximately 20 percent of the available instructions were used 80 percent of the time. Also, the complexity of the control unit necessary to support rarely used instructions slows the execution of all instructions. Thus, through careful study of program characteristics, one can specify a smaller instruction set consisting only of instructions which are used most of the time and execute quickly.

The first major university RISC research project was at the University of California, Berkeley (UCB), where David Patterson, Carlos Séquin, and a group of graduate students investigated the effective use of VLSI in microprocessor design. To fit a powerful processor on a single chip of silicon, they looked at ways to simplify the processor design. Much of the circuitry of a modern computer CPU is dedicated to the decoding of instructions and to controlling their execution. Microprogrammed CISC computers typically dedicate over half of their circuitry to the control section. However, UCB researchers realized that a small instruction set requires a smaller area for control circuitry, and the area saved could be used by other CPU functions to boost performance. Extensive studies of application programs were performed to determine what kind of instructions are typically used, how often they execute, and what kind of CPU resources are needed to support them. These studies indicated that a large register set enhanced performance, and pointed to specific instruction classes that should be optimized for better performance. The UCB research effort produced two RISC designs that are widely referenced in the literature. These two processors developed at UCB can be referred to as the UCB-RISC I and UCB-RISC II. The mnemonics RISC and CISC emerged at this time.

Shortly after the UCB group began its work, researchers at Stanford University (SU), under the direction of John Hennessy, began looking into the relationship between computers and compilers. Their research evolved into the design and implementation of optimizing compilers and single-cycle instruction sets. Since this research pointed to the need for single-cycle instruction sets, issues related to complex, deep pipelines were also investigated.
This research resulted in a RISC processor for VLSI that can be referred to as the SU-MIPS. The result of these initial investigations was the establishment of a design philosophy for a new type of von Neumann architecture computer. Reduced instruction set computer design resulted in computers that execute instructions faster than other computers built of the same technology. It was seen that a study of the target application programs is vital in designing the instruction set and datapath. Also, it was made evident that all facets of a computer design must be considered together.

The design of reduced instruction set computers does not rely upon inclusion of a set of required features, but rather is guided by a design philosophy. Since there is no strict definition of what constitutes a RISC design, a significant controversy exists in categorizing a computer as RISC or CISC. The RISC philosophy can be stated as follows: the effective speed of a computer can be maximized by migrating all but the most frequently used functions into software, thereby

simplifying the hardware, and allowing it to be faster. Therefore, included in hardware are only those performance features that are pointed to by dynamic studies of HLL programs. The same philosophy applies to the instruction set design, as well as to the design of all other on-chip resources. Thus, a resource is incorporated in the architecture only if its incorporation is justified by its frequency of use, as seen from the language studies, and if its incorporation does not slow down other resources that are more frequently used.

Common features of this design philosophy can be observed in several examples of RISC design. The instruction set is based upon a load/store approach. Only load and store instructions access memory. No arithmetic, logic, or I/O instruction operates directly on memory contents. This is the key to single-cycle execution of instructions. Operations on register contents are always faster than operations on memory contents (memory references usually take multiple cycles; however, references to cached or buffered operands may be as rapid as register references, if the desired operand is in the cache, and the cache is on the CPU chip).

Simple instructions and simple addressing modes are used. This simplification results in an instruction decoder that is small, fast, and relatively easy to design. It is easier to develop an optimizing compiler for a small, simple instruction set than for a complex instruction set. With few addressing modes, it is easier to map instructions onto a pipeline, since the pipeline can be designed to avoid a number of computation-related conflicts. Little or no microcode is found in many RISC designs. The absence of microcode implies that there is no complex micro-CPU within the instruction decode/control section of a CPU.

Pipelining is used in all RISC designs to provide simultaneous execution of multiple instructions. The depth of the pipeline (number of stages) depends upon how execution tasks are subdivided, and the time required for each stage to perform its operation. A carefully designed memory hierarchy is required for increased processing speed. This hierarchy permits fetching of instructions and operands at a rate that is high enough to prevent pipeline stalls. A typical hierarchy includes high-speed registers, cache, and/or buffers located on the CPU chip, and complex memory management schemes to support off-chip cache and memory devices.

Most RISC designs include an optimizing compiler as an integral part of the computer architecture. The compiler provides an interface between the HLL and the machine language. Optimizing compilers provide a mechanism to prevent or reduce the number of pipeline faults by reorganizing code. The reorganization part of many compilers moves code around to eliminate redundant or useless statements, and to present instructions to the pipeline in the most efficient order. All instructions typically execute in the minimum possible number of CPU cycles. In some RISC designs, only load/store instructions require more than one cycle in which to execute. If all instructions take the same amount of time, the pipeline can be designed to recover from faults more easily, and it is easier for the compiler to reorganize and optimize the instruction sequence.

BUSES

A bus is a set (group) of parallel lines that information (data, addresses, instructions, and other information) travels on inside a computer. Information travels on buses as a series of electrical pulses, each pulse representing a one bit or a zero bit (there are trinary, or three-state, buses, but they are rare). Some writers use the term buss with a double s.

The size or width of a bus is how many bits it carries in parallel. Common bus sizes are: 4 bits, 8 bits, 12 bits, 16 bits, 24 bits, 32 bits, 64 bits, 80 bits, 96 bits, and 128 bits. The speed of a bus is how fast it moves data along the path. This is usually measured in megahertz (MHz), or millions of cycles per second. The capacity of a bus is how much data it can carry in a second. In theory this is determined by multiplying the size of the bus by the speed of the bus, but in practice there are many factors that slow down a bus, including wait cycles (waiting for memory or another device to have information ready).
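As a quick illustration of that size-times-speed calculation (theoretical peak only, ignoring wait cycles and other overhead), the short sketch below computes the bandwidth of a few bus configurations; the 32-bit, 33 MHz case corresponds to the classic PCI bus described later in this unit.

# Theoretical peak bus capacity = width (bits) x clock rate, expressed here in MB/s.
# Real buses deliver less because of wait cycles, arbitration, and protocol overhead.

def bus_capacity_mb_per_s(width_bits, clock_mhz, transfers_per_clock=1):
    bits_per_second = width_bits * clock_mhz * 1_000_000 * transfers_per_clock
    return bits_per_second / 8 / 1_000_000   # convert bits per second to megabytes per second

print(bus_capacity_mb_per_s(32, 33))      # ~132 MB/s  (32-bit PCI at 33 MHz)
print(bus_capacity_mb_per_s(64, 66))      # ~528 MB/s  (64-bit PCI at 66 MHz)
print(bus_capacity_mb_per_s(32, 66, 2))   # ~528 MB/s  (a double-pumped 32-bit, 66 MHz bus, e.g. AGP 2X)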

Some recent buses move two sets of data through the bus during each cycle (one after the other). This is called double pumping the bus.

An internal bus is a bus inside the processor, moving data, addresses, instructions, and other information between registers and other internal components or units. An external bus is a bus outside of the processor (but inside the computer), moving data, addresses, and other information between major components (including cards) inside the computer. A bus master is a combination of circuits, control microchips, and internal software that controls the movement of information over a bus. The internal software (if any) is contained inside the bus master and is separate from the main processor.

A processor bus is a bus inside the processor. Some processor designs simplify the internal structure by having one or two processor buses. In a single processor bus system, all information is carried around inside the processor on one processor bus. In a dual processor bus system, there is a source bus dedicated to moving source data and a destination bus dedicated to moving results. An alternative approach is to have a lot of small buses that connect various units inside the processor. While this design is more complex, it also has the potential of being faster, especially if there are multiple units within the processor that can perform work simultaneously (a form of parallel processing).

A system bus connects the main processor with its primary support components, in particular connecting the processor to its memory. Depending on the computer, a system bus may also have other major components connected.

A data bus carries data. Most processors have internal data buses that carry information inside the processor and external data buses that carry information back and forth between the processor and memory.

An address bus carries address information. In most processors, memory is connected to the processor with separate address and data buses. The processor places the requested memory address on the address bus for memory or the memory controller (if there is more than one chip or bank of memory, there will be a memory controller that controls the banks of memory for the processor). If the processor is writing data to memory, then it will assert a write signal and place the data on the data bus for transfer to memory. If the processor is reading data from memory, then it will assert a read signal and wait for data from memory to arrive on the data bus.

In some small processors the data bus and address bus will be combined into a single bus. This is called multiplexing. Special signals indicate whether the multiplexed bus is being used for data or address. This is at least twice as slow as separate buses, but greatly reduces the complexity and cost of support circuits, an important factor in the earliest days of computers, in the early days of microprocessors, and for small embedded processors (such as in a microwave oven, where speed is unimportant, but cost is a major factor).

An instruction bus is a specialized data bus for fetching instructions from memory. The very first computers had separate storage areas for data and programs (instructions). John von Neumann introduced the von Neumann architecture, which combined both data and instructions into a single memory, simplifying computer architecture. The difference between data and instructions was a matter of interpretation.
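The read/write handshake on separate address and data buses described above can be sketched roughly as follows. This is a toy model with invented names, not any real bus protocol: the processor places an address on the address bus, asserts a read or write signal, and the data travels over the data bus.

# Toy model of a memory bank attached to separate address and data buses.
# 'write_signal' plays the role of the processor's read/write control line.

class MemoryBank:
    def __init__(self, size):
        self.cells = [0] * size

    def cycle(self, address_bus, data_bus=None, write_signal=False):
        if write_signal:                      # processor asserted "write"
            self.cells[address_bus] = data_bus
            return None
        return self.cells[address_bus]        # processor asserted "read"

ram = MemoryBank(1024)
ram.cycle(address_bus=16, data_bus=99, write_signal=True)   # write 99 to address 16
value = ram.cycle(address_bus=16)                           # read it back
print(value)                                                # -> 99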
In the 1970s, some processors implemented hardware systems for dynamically mapping which parts of memory were for code (instructions) and which parts were for data, along with hardware to ensure that data was never interpreted as code and that code was never interpreted as data. This isolation of code and data helped prevent crashes or other problems from runaway code that started wiping out other programs by incorrectly writing data over code (either from the same program or, worse, from some other user's software). In a more recent innovation, supercomputers and other powerful processors added separate buses for fetching data and instructions. This speeds up the processor by allowing the processor to fetch the next instruction (or group of instructions) at the same time that it is reading or writing data from the current or preceding instruction.

A memory bus is a bus that connects a processor to memory, connects a processor to a memory controller, or connects a memory controller to a memory bank or memory chip. A cache bus is a bus that connects a processor to its internal (L1 or Level 1) or external (L2 or Level 2) memory cache or caches. An I/O bus (for input/output) is a bus that connects a processor to its support devices (such as internal hard drives, external media, expansion slots, or peripheral ports). Typically the connection is to controllers rather than directly to devices. A graphics bus is a bus that connects a processor to a graphics controller or graphics port. A local bus is a bus for items closely connected to the processor that can run at or near the same speed as the processor itself.

BUS STANDARDS

ISA (Industry Standard Architecture) is a bus system for IBM PCs and PC clones. The original standard, from 1981, was an 8 bit bus that ran at 4.77 MHz. In 1984, with the introduction of the IBM AT computer (which used the 80286 processor, introduced by Intel in 1982), ISA was expanded to a 16 bit bus that ran at 8.3 MHz.

MCA (Micro Channel Architecture) is a 32 bit bus introduced in 1987 by IBM with the PS/2 computer that used the Intel 80386 processor. IBM attempted to license the MCA bus to other manufacturers, but they rejected it because of the lack of ability to use the wide variety of existing ISA devices. IBM continues to use a modern variation of MCA in some of its server computers.

EISA (Extended Industry Standard Architecture) is a 32 bit bus running at 8.3 MHz created by the clone industry in response to the MCA bus. EISA is backwards compatible, so that ISA devices can be connected to it. EISA also can automatically set adaptor card configurations, freeing users from having to manually set jumper switches.

NuBus is a 32 bit bus created by Texas Instruments and used in the Macintosh II and other 680x0 based Macintoshes. NuBus supports automatic configuration (for plug and play).

VL bus (VESA Local bus) was created in 1992 by the Video Electronics Standards Association for the Intel 80486 processor. The VL bus is 32 bits and runs at 33 MHz. The VL bus requires use of manually set jumper switches.

PCI (Peripheral Component Interconnect) is a bus created by Intel in 1993. PCI is available in both a 32 bit version running at 33 MHz and a 64 bit version running at 66 MHz. PCI supports automatic configuration (for plug and play). PCI automatically checks data

transfers for errors. PCI uses a burst mode, increasing bus efficiency by sending several sets of data to one address.

DIB (Dual Independent Bus) was created by Intel to increase the performance of frontside L2 cache. SECC (Single Edge Contact Cartridge) was created by Intel for high speed backside L2 cache.

AGP (Accelerated Graphics Port) was created by Intel to increase performance by separating video data from the rest of the data on PCI I/O buses. AGP is 32 bits and runs at 66 MHz. AGP 2X double pumps the data, doubling the amount of throughput at the same bus width and speed. AGP 4X runs four sets of data per clock, quadrupling the throughput.

DRDRAM was a memory bus created by Rambus to increase the speed of connections between the processor and memory. DRDRAM is a 33 bit bus running at 800 MHz: 16 bits are for data, with the other 17 bits reserved for address functions.

REGISTER SET

Registers are fast memory, almost always connected to circuitry that allows various arithmetic, logical, control, and other manipulations, as well as possibly setting internal flags. Most early computers had only one data register that could be used for arithmetic and logic instructions. Often there would be additional special purpose registers set aside either for temporary fast internal storage or assigned to logic circuits to implement certain instructions. Some early computers had one or two address registers that pointed to a memory location for memory accesses (a pair of address registers typically would act as source and destination pointers for memory operations). Computers soon had multiple data registers, address registers, and sometimes other special purpose registers. Some computers have general purpose registers that can be used for both data and address operations.

Every digital computer using a von Neumann architecture has a register (called the program counter) that points to the next executable instruction. Many computers have additional control registers for implementing various control capabilities. Often some or all of the internal flags are combined into a flag or status register.

Accumulators are registers that can be used for arithmetic, logical, shift, rotate, or other similar operations. The first computers typically only had one accumulator. Many times there were related special purpose registers that contained the source data for an accumulator. Accumulators were eventually replaced with data registers and general purpose registers. Data registers are used for temporary scratch storage of data, as well as for data manipulations (arithmetic, logic, etc.). In some processors, all data registers act in the same manner, while in other processors different operations are performed on specific registers.

Address registers store the addresses of specific memory locations. Often many integer and logic operations can be performed on address registers directly (to allow for computation of addresses). Sometimes the contents of address register(s) are combined with other special purpose registers to compute the actual physical address. This allows for hardware support of features such as virtual memory.

General purpose registers can be used as either data or address registers.

Control registers control some aspect of processor operation. The most universal control register is the program counter. Almost every digital computer ever made uses a program counter. The program counter points to the memory location that stores the next executable instruction. Branching is implemented by making changes to the program counter. Some processor designs allow software to directly change the program counter, but usually software only indirectly changes the program counter (for example, a JUMP instruction will insert the operand into the program counter). An assembler has a location counter, which is an internal pointer to the address (first byte) of the next location in storage (for instructions, data areas, constants, etc.) while the source code is being converted into object code.

An arithmetic/logic unit (ALU) performs integer arithmetic and logic operations. It also performs shift and rotate operations and other specialized operations. Usually floating point arithmetic is performed by a dedicated floating point unit (FPU), which may be implemented as a co-processor. Control units are in charge of the computer. Control units fetch and decode machine instructions. Control units may also control some external devices. Some common kinds of buses are the system bus, a data bus, an address bus, a cache bus, a memory bus, and an I/O bus.

MAIN STORAGE

Main storage is also called memory or internal memory (to distinguish it from external memory, such as hard drives). RAM is Random Access Memory, and is the basic kind of internal memory. RAM is called random access because the processor or computer can access any location in memory (as contrasted with sequential access devices, which must be accessed in order). RAM has been made from reed relays, transistors, integrated circuits, magnetic core, or anything that can hold and store binary values (one/zero, plus/minus, open/close, positive/negative, high/low, etc.). Most modern RAM is made from integrated circuits. At one time the most common kind of memory in mainframes was magnetic core, so many older programmers will refer to main memory as core memory even when the RAM is made from more modern technology. Static RAM is called static because it will continue to hold and store information even when power is removed. Magnetic core and reed relays are examples of static memory. Dynamic RAM is called dynamic because it loses all data when power is removed.

Transistors and integrated circuits are examples of dynamic memory. It is possible to have battery back-up for devices that are normally dynamic to turn them into static memory.

ROM is Read Only Memory (it is also random access, but only for reads). ROM is typically used to store things that will never change for the life of the computer, such as low level portions of an operating system. Some processors (or variations within processor families) might have RAM and/or ROM built into the same chip as the processor (normally used for processors used in standalone devices, such as arcade video games, ATMs, microwave ovens, car ignition systems, etc.). EPROM is Erasable Programmable Read Only Memory, a special kind of ROM that can be erased and reprogrammed with specialized equipment (but not by the processor it is connected to). EPROMs allow makers of industrial devices (and other similar equipment) to have the benefits of ROM, yet also allow for updating or upgrading the software without having to buy new ROM and throw out the old (the EPROMs are collected, erased and rewritten centrally, then placed back into the machines).

Registers and flags are a special kind of memory that exists inside a processor. Typically a processor will have several internal registers that are much faster than main memory. These registers usually have specialized capabilities for arithmetic, logic, and other operations. Registers are usually fairly small (8, 16, 32, or 64 bits for integer data, address, and control registers; 32, 64, 96, or 128 bits for floating point registers). Some processors separate integer data and address registers, while other processors have general purpose registers that can be used for both data and address purposes. A processor will typically have one to 32 data or general purpose registers (processors with separate data and address registers typically split the register set in half). Many processors have special floating point registers (and some processors have general purpose registers that can be used for either integer or floating point arithmetic). Flags are single bit memory used for testing, comparison, and conditional operations (especially conditional branching).

EXTERNAL STORAGE

External storage (also called auxiliary storage) is any storage other than main memory. In modern times this is mostly hard drives and removable media (such as floppy disks, Zip disks, optical media, etc.). With the advent of USB and FireWire hard drives, the line between permanent hard drives and removable media is blurred. Other kinds of external storage include tape drives, drum drives, paper tape, and punched cards. Random access or indexed access devices (such as hard drives, removable media, and drum drives) provide an extension of memory (although usually accessed through logical file systems). Sequential access devices (such as tape drives, paper tape punch/readers, or dumb terminals) provide for off-line storage of large amounts of information (or back-ups of data) and are often called I/O devices (for input/output).

INPUT/OUTPUT OVERVIEW

Most external devices are capable of both input and output (I/O). Some devices are inherently input-only (also called read-only) or inherently output-only (also called write-only). Regardless of whether a device is I/O, read-only, or write-only, external devices can be classified as block or character devices. A character device is one that inputs or outputs data in a stream of characters, bytes, or bits.
Character devices can further be classified as serial or parallel. Examples of character devices include printers, keyboards, and mice. A serial device streams data as a series of bits, moving data one bit at a time. Examples of serial devices include printers and MODEMs.

A parallel device streams data in a small group of bits simultaneously. Usually the group is a single eight-bit byte (or possibly seven or nine bits, with the possibility of various control or parity bits included in the data stream). Each group usually corresponds to a single character of data. Rarely there will be a larger group of bits (word, longword, doubleword, etc.). The most common parallel device is a printer (although most modern printers have both a serial and a parallel connection, allowing greater connection flexibility).

A block device moves large blocks of data at once. This may be physically implemented as a serial or parallel stream of data, but the entire block gets transferred as a single packet of data. Most block devices are random access (that is, information can be read or written from blocks anywhere on the device). Examples of random access block devices include hard disks, floppy disks, and drum drives. Examples of sequential access block devices include magnetic tape drives and high speed paper tape readers.

INPUT

Input devices are devices that bring information into a computer. Pure input devices include such things as punched card readers, paper tape readers, keyboards, mice, drawing tablets, touchpads, trackballs, and game controllers. Devices that have an input component include magnetic tape drives, touchscreens, and dumb terminals.

OUTPUT

Output devices are devices that bring information out of a computer. Pure output devices include such things as card punches, paper tape punches, LED displays (for light emitting diodes), monitors, printers, and pen plotters. Devices that have an output component include magnetic tape drives, combination paper tape reader/punches, teletypes, and dumb terminals.
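As a rough illustration of the character-versus-block distinction described above, the sketch below reads the same data first one byte at a time (the way a character or stream device delivers it) and then in fixed-size blocks (the way a block device transfers it). The 1300-byte test file and the 512-byte block size are arbitrary choices for the example.

import tempfile

# Character-style access: one byte per transfer.
def count_character_transfers(path):
    transfers = 0
    with open(path, "rb") as f:
        while f.read(1):            # one byte at a time
            transfers += 1
    return transfers

# Block-style access: one fixed-size block per transfer.
def count_block_transfers(path, block_size=512):
    transfers = 0
    with open(path, "rb") as f:
        while f.read(block_size):   # a whole block at a time
            transfers += 1
    return transfers

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 1300)            # a small 1300-byte test file

print(count_character_transfers(f.name))  # 1300 transfers, one per byte
print(count_block_transfers(f.name))      # 3 transfers of up to 512 bytes each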

Hardware

The hardware is made up of the parts of the computer itself, including the Central Processing Unit (CPU) and related microchips and micro-circuitry, keyboards, monitors, case and drives (hard, CD, DVD, floppy, optical, tape, etc.). Other extra parts, called peripheral components or devices, include the mouse, printers, modems, scanners, digital cameras and cards (sound, colour, video), etc. Together they are often referred to as a personal computer.

Central Processing Unit - Though the term relates to a specific chip or processor, a CPU's performance is determined by the rest of the computer's circuitry and chips. Currently the Pentium chip or processor, made by Intel, is the most common CPU, though there are many other companies that produce processors for personal computers. Examples are the CPUs made by Motorola and AMD.

With faster processors the clock speed becomes more important. Compared to some of the first computers, which operated at below 30 megahertz (MHz), the Pentium chips began at 75 MHz in the 1990s. Speeds now exceed 3000 MHz, or 3 gigahertz (GHz), and different chip manufacturers use different measuring standards (check your local computer store for the latest speeds). Whether you are able to upgrade to a faster chip depends on the motherboard, the circuit board that the chip is housed in. The motherboard contains the circuitry and connections that allow the various components to communicate with each other.

Keyboard - The keyboard is used to type information into the computer, or input information. There are many different keyboard layouts and sizes, with the most common for Latin-based languages being the QWERTY layout (named for the first 6 keys). The standard keyboard has 101 keys. Notebooks have embedded keys accessible by special keys or by pressing key combinations (CTRL or Command and P, for example). Ergonomically designed keyboards are intended to make typing easier. Hand held devices have various and different keyboard configurations and touch screens. Some of the keys have a special use. They are referred to as command keys. The 3 most common are the Control (CTRL), Alternate (Alt) and Shift keys, though there can be more (the Windows key, for example, or the Command key). Each key on a standard keyboard has one or two characters. Press the key to get the lower character and hold Shift to get the upper.

Removable Storage and/or Disk Drives - All disks need a drive to get information off the disk - or read - and put information on the disk - or write. Each drive is designed for a specific type of disk, whether it is a CD, DVD, hard disk or floppy. Often the terms 'disk' and 'drive' are used to describe the same thing, but it helps to understand that the disk is the storage device which contains computer files - or software - and the drive is the mechanism that runs the disk. Digital flash drives work slightly differently, as they use memory cards to store information, so there are no moving parts. Digital cameras also use flash memory cards to store information, in this case photographs. Hand held devices use digital drives and many also use removable or built-in memory cards.

Mouse - Most modern computers today are run using a mouse-controlled pointer. Generally, if the mouse has two buttons, the left one is used to select objects and text and the right one is used to access menus. If the mouse has one button (Mac, for instance) it controls all the activity, and a mouse with a third button can be used by specific software programs. One type of mouse has a round ball under the bottom of the mouse that rolls and turns two wheels which control the direction of the pointer on the screen. Another type of mouse uses an optical system to track the movement of the mouse. Laptop computers use touch pads, buttons and other devices to control the pointer. Hand helds use a combination of devices to control the pointer, including touch screens.

Note: It is important to clean the mouse periodically, particularly if it becomes sluggish. A ball-type mouse has a small circular panel that can be opened, allowing you to remove the

ball. Lint can be removed carefully with a toothpick or tweezers, and the ball can be washed with mild detergent. A build-up will accumulate on the small wheels in the mouse. Use a small instrument or fingernail to scrape it off, taking care not to scratch the wheels. Track balls can be cleaned much like a mouse, and touch-pads can be wiped with a clean, damp cloth. An optical mouse can accumulate material from the surface that it is in contact with, which can be removed with a fingernail or small instrument.

Monitors - The monitor shows information on the screen when you type. This is called outputting information. When the computer needs more information it will display a message on the screen, usually through a dialog box. Monitors come in many types and sizes. The resolution of the monitor determines the sharpness of the screen. The resolution can be adjusted to control the screen's display. Most desktop computers use a monitor with a cathode ray tube or liquid crystal display. Most notebooks use a liquid crystal display monitor. To get the full benefit of today's software with full colour graphics and animation, computers need a colour monitor with a display or graphics card.

Printers - The printer takes the information on your screen and transfers it to paper, or a hard copy. There are many different types of printers with various levels of quality. The three basic types of printer are: dot matrix, inkjet, and laser.

Dot matrix printers work like a typewriter, transferring ink from a ribbon to paper with a series, or 'matrix', of tiny pins. Ink jet printers work like dot matrix printers but fire a stream of ink from a cartridge directly onto the paper. Laser printers use the same technology as a photocopier, using heat to transfer toner onto paper.

Modem - A modem is used to translate information transferred through telephone lines, cable, satellite or line-of-sight wireless. The term stands for modulate and demodulate: the modem changes the signal from digital, which computers use, to analog, which telephones use, and then back again. Digital modems transfer digital information directly without changing it to analog.

Modems are measured by the speed at which the information is transferred, commonly called the baud rate. Originally modems worked at speeds below 2400 baud, but today analog speeds of 56,000 are standard. Cable, wireless or digital subscriber lines can transfer information much faster, with rates of 300,000 baud and up. Modems also use error correction, which corrects for transmission errors by constantly checking whether the information was received properly or not, and compression, which allows for faster data transfer rates. Information is transferred in packets. Each packet is checked for errors and is re-sent if there is an error.

Anyone who has used the Internet has noticed that at times the information travels at different speeds. Depending on the amount of information that is being transferred, the information will arrive at its destination at different times. The amount of information that can travel through a line is limited. This limit is called bandwidth.
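To see what these rates mean in practice, the short sketch below estimates how long a given amount of data takes to transfer at a few connection speeds (treating the quoted rates as bits per second and ignoring error correction, compression, and protocol overhead; the 8 Mbit/s broadband figure is just an illustrative value, not one quoted above).

# Rough transfer-time estimate: time = data size (bits) / line rate (bits per second).

def transfer_time_seconds(size_bytes, rate_bits_per_second):
    return (size_bytes * 8) / rate_bits_per_second

one_megabyte = 1_000_000  # bytes

print(transfer_time_seconds(one_megabyte, 56_000))     # ~142.9 s on a 56k analog modem
print(transfer_time_seconds(one_megabyte, 300_000))    # ~26.7 s on a 300k line
print(transfer_time_seconds(one_megabyte, 8_000_000))  # ~1.0 s on an 8 Mbit/s broadband link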

There are many more variables involved in communication technology using computers, much of which is covered in the section on the Internet.

Scanners - Scanners allow you to transfer pictures and photographs to your computer. A scanner 'scans' the image from the top to the bottom, one line at a time, and transfers it to the computer as a series of bits, or a bitmap. You can then take that image and use it in a paint program, send it out as a fax or print it. With optional Optical Character Recognition (OCR) software you can convert printed documents such as newspaper articles to text that can be used in your word processor. Most scanners use TWAIN software that makes the scanner accessible to other software applications.

Digital cameras allow you to take digital photographs. The images are stored on a memory chip or disk that can be transferred to your computer. Some cameras can also capture sound and video.

Case - The case houses the microchips and circuitry that run the computer. Desktop models usually sit under the monitor and tower models beside it. They come in many sizes, including desktop, mini, midi, and full tower. There is usually room inside to expand or add components at a later time. By removing the cover of the case you may find plate-covered, empty slots that allow you to add cards. There are various types of slots, including IDE, ISA, USB, PCI and FireWire slots. Depending on the type, notebook computers may have room to expand. Most notebooks also have connections or ports that allow expansion or connection to exterior, peripheral devices such as a monitor, portable hard drives or other devices.

Cards - Cards are components added to computers to increase their capability. When adding a peripheral device make sure that your computer has a slot of the type needed by the device. Sound cards allow computers to produce sound like music and voice. The older sound cards were 8 bit, then 16 bit, then 32 bit. Though the human ear can't distinguish the fine difference between sounds produced by the more powerful sound cards, they allow for more complex music and music production.

Colour cards allow computers to produce colour (with a colour monitor, of course). The first colour cards were 2 bit, which produced 4 colours [CGA]. It was amazing what could be done with those 4 colours. Next came 4 bit, allowing for 16 colours [EGA and VGA]. Then came 16 bit, allowing for 65,536 colours, and then 24 bit, which allows for almost 17 million colours, and now 32 bit and higher allow monitors to display almost a billion separate colours.

Video cards allow computers to display video and animation. Some video cards allow computers to display television as well as capture frames from video. A video card with a digital video camera allows computer users to produce live video. A high speed connection is required for effective video transmission.

Network cards allow computers to connect together to communicate with each other. Network cards have connections for cable, thin wire or wireless networks.

Cables connect internal components to the motherboard, which is a board with a series of electronic pathways and connections allowing the CPU to communicate with the other components of the computer.
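The colour counts quoted above all follow the same rule: n bits per pixel allow 2 to the power n distinct colours. A quick check of the bit depths mentioned above:

# Number of displayable colours for a given colour depth: 2 ** bits_per_pixel.
for bits in (2, 4, 16, 24):
    print(f"{bits:>2}-bit colour -> {2 ** bits:,} colours")

# Output:
#  2-bit colour -> 4 colours
#  4-bit colour -> 16 colours
# 16-bit colour -> 65,536 colours
# 24-bit colour -> 16,777,216 colours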

Memory - Memory can be very confusing but is usually one of the easiest pieces of hardware to add to your computer. It is common to confuse chip memory with disk storage. An example of the difference between memory and storage would be the difference between a table where the actual work is done (memory) and a filing cabinet where the finished product is stored (disk). To add a bit more confusion, the computer's hard disk can be used as temporary memory when the program needs more than the chips can provide.

Random Access Memory or RAM is the memory that the computer uses to temporarily store the information as it is being processed. The more information being processed, the more RAM the computer needs. One of the first home computers used 64 kilobytes of RAM memory (the Commodore 64). Today's modern computers need a minimum of 64 MB (recommended 128 MB or more) to run Windows or OS 10 with modern software. RAM memory chips come in many different sizes and speeds and can usually be expanded. Older computers came with 512 KB of memory which could be expanded to a maximum of 640 KB. In most modern computers the memory can be expanded by adding or replacing the memory chips, depending on the processor you have and the type of memory your computer uses. Memory chips range in size from 1 MB to 4 GB. As computer technology changes, the type of memory changes as well, making old memory chips obsolete. Check your computer manual to find out what kind of memory your computer uses before purchasing new memory chips.

Computer hardware

A computer contains several major subsystems such as the Central Processing Unit (CPU), memory, and peripheral device controllers. These components all plug into a "Bus". The bus is essentially a communications highway; all the other components work together by transferring data over the bus. The active part of the computer, the part that does calculations and controls all the other parts, is the "Central Processing Unit" (CPU). The Central Processing Unit (CPU) contains electronic clocks that control the timing of all operations; electronic circuits that carry out arithmetic operations like addition and multiplication; circuits that identify and execute the instructions that make up a program; and circuits that fetch the data from memory. Instructions and data are stored in main memory; the CPU fetches them as needed.

Peripheral device controllers look after input devices, like keyboards and mice, output devices, like printers and graphics displays, and storage devices, like disks. The CPU and peripheral controllers work together to transfer information between the computer and its users. Sometimes, the CPU will arrange for data to be taken from an input device, transferred through the controller, moved over the bus and loaded directly into the CPU. Data being output follows the same route in reverse, moving from the CPU, over the bus, through a controller and out to a device. In other cases, the CPU may get a device controller to move data directly into, or out of, main memory.

CPU AND INSTRUCTIONS

The CPU of a modern small computer is physically implemented as a single silicon "chip". This chip will have engraved on it the million or more transistors and the interconnecting wiring that define the CPU's circuits. The chip will have one hundred or more pins around its rim -- some of these pins are connection points for the signal lines from the bus; others will be the points where electrical power is supplied to the chip.
Although physically a single component, the CPU is logically made up from a number of subparts. The three most important, which will be present in every CPU, are the timing and control circuits, the arithmetic logic unit (ALU), and the registers.

Timing and control circuits

The timing and control circuits are the heart of the system. A controlling circuit defines the computer's basic processing cycle:

repeat
    fetch next instruction from memory
    decode instruction (i.e. determine which data manipulation circuit is to be activated)
    fetch from memory any additional data that are needed
    execute the instruction (feed the data to the appropriate manipulation circuit)
until "halt" instruction has been executed

Along with the controlling "fetch-decode-execute" circuit, the timing and control component of the CPU contains the circuits for decoding instructions and decoding addresses (i.e. working out the location in memory of required data elements).

The arithmetic logic unit (ALU) contains the circuits that manipulate data. There will be circuits for arithmetic operations like addition and multiplication. Often there will be different versions of such circuits: one version for integer numbers and a second for real numbers. Other circuits will implement comparison operations that permit a program to check whether one data value is greater than or less than some other value. There will also be "logic" circuits that directly manipulate bit pattern data.

While most data are kept in memory, CPUs are designed to hold a small amount of data in "registers" (data stores) in the CPU itself. It is normal for main memory to be large enough to hold millions of data values; the CPU may only have space for something like 16 values. A CPU register will hold as many bits as a "word" in the computer's memory. Most current CPUs have registers that each store 32 bits of data. The circuits in the ALU often are organized so that some or all of their inputs and outputs must come from, or go to, CPU registers. Data values have to be fetched from memory and stored temporarily in CPU registers. Only then can they be combined using an ALU circuit, with the result again going to a register. If the result is from the final step in a calculation, it gets stored back into main memory.

While some of the CPU registers are used for data values that are being manipulated, others may be reserved for calculations that the CPU has to do when it is working out where in memory particular data values are to be stored. CPU designs vary with respect to their use of registers, but, commonly, a CPU will have 8 or more "data" registers and another 8 "address" registers. Programmers who write in low-level "assembly languages" deal with these registers directly: assembly language code defines details such as how data should be moved to specific data registers and how addresses are to be calculated and saved temporarily in address registers. Generally, programmers working with high level languages do not need to deal with registers, but, when necessary, a programmer can find out how the CPU registers are used in their code.

In addition to the main data and address registers, the CPU contains many other registers. The ALU will contain numerous registers for holding temporary values that are generated as arithmetic operations are performed. The timing and control component contains a number of registers that hold control information. The Program Counter (PC) holds the address of the memory location containing the next instruction to be executed. The Instruction Register (IR) holds the bit pattern that represents the current instruction; different parts of the bit pattern are used to encode the "operation code" and the address of any data required.

Most CPUs have a "flags" register. The individual bits in this register record various status data. Typically, one bit is used to indicate whether the CPU is executing code from an ordinary program or code that forms part of the controlling Operating System (OS) program. (The OS code has privileges; it can do things which ordinary programs can not do, like change settings of peripheral device controllers. When the OS-mode bit is not set, these privileged instructions can not be executed.) Commonly, another group of bits in the flags register will be used to record the result of comparison instructions performed by the ALU. One bit in the flags register would be set if the comparator circuits found two values to be equal; a different bit would be set if the first of the two values was greater than the second.

Ultimately, a program has to be represented as a sequence of instructions in memory. Each instruction specifies either one data manipulation step or a control action. Normally, instructions are executed in sequence. The machine is initialized with the program counter holding the memory location of the first instruction from the program, and then the fetch-decode-execute cycle is started. The CPU sends a fetch request to memory specifying the location given by the PC (program counter); it receives back the instruction and stores this in the IR. The PC is then updated so that it holds the address of the next instruction in sequence. The instruction in the IR is then decoded and executed. Sometimes, execution of the instruction will change the contents of the PC; this is how "branch" instructions work (these instructions allow a program to do things like skip over processing steps that aren't required for particular data, or go back to the start of some code that must be repeated many times).
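The flags register just described is simply a word in which individual bits carry status information. A minimal sketch follows; the bit positions and names are invented for illustration and are not taken from any particular CPU.

# Hypothetical flag bit positions within a status word.
FLAG_ZERO    = 0b0001   # last comparison found the two values equal
FLAG_GREATER = 0b0010   # first value was greater than the second
FLAG_OS_MODE = 0b0100   # CPU is executing operating-system code

def compare(flags, a, b):
    # Clear the comparison bits, then set them the way a comparator circuit would.
    flags &= ~(FLAG_ZERO | FLAG_GREATER)
    if a == b:
        flags |= FLAG_ZERO
    elif a > b:
        flags |= FLAG_GREATER
    return flags

flags = 0
flags = compare(flags, 7, 7)
print(bool(flags & FLAG_ZERO))      # True  -- the "equal" bit is set
flags = compare(flags, 9, 2)
print(bool(flags & FLAG_GREATER))   # True  -- the "greater than" bit is set
print(bool(flags & FLAG_OS_MODE))   # False -- not running privileged OS code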

A CPU is characterized by its instruction repertoire: the set of instructions that can be interpreted by the circuits in the timing and control unit and executed using the arithmetic logic unit. The Motorola 68000 CPU chip can serve as an example. The 68000 (and its variants like the 68030 and 68040) were popular CPU chips in the 1980s, being used in the Macintosh computers and the Sun3 workstations. Instructions are usually given short "mnemonic" names, names that have been chosen to remind one of the effect achieved by the instruction, like ADD and CLeaR. Different CPU architectures, e.g. the Motorola 68000 and Intel-086 architectures, have different instruction sets. There will be lots of instructions that are common (ADD, SUB, etc.), but each architecture will have its own special instructions that are not present on the other. Even when both architectures have similar instructions, e.g. the compare and conditional branch instructions, there may be differences in how these work.

An instruction is represented by a set of bits. A few CPUs have fixed size instructions; on such machines, every instruction is 16 bits, or 32 bits, or whatever. Most CPUs allow for different sizes of instructions. An instruction will be at least 16 bits in size, but may have an additional 16, 32, or more bits. The first few bits of an instruction form the "Op-code" (operation code). These bits identify the data manipulation or control operation required. Again CPUs vary; some use a fixed size op-code, most have several different layouts for instructions with differing numbers of bits allocated for the op-code. If a CPU uses a fixed size op-code, decoding is simple: the timing and control component will implement a form of multiway switch.
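The following sketch pulls the pieces above together: a toy machine with a program counter, an instruction register, a made-up fixed-size instruction layout (4-bit op-code, 12-bit operand in a 16-bit word), and a fetch-decode-execute loop whose decode step is exactly the kind of multiway switch just described. The instruction set and encoding are invented for illustration and do not correspond to the 68000 or any real CPU.

# Toy machine: 16-bit instruction words, top 4 bits = op-code, low 12 bits = operand address.
LOAD, ADD, STORE, JUMP, HALT = 1, 2, 3, 4, 15   # hypothetical op-codes

def run(memory):
    pc, ir, acc = 0, 0, 0                 # program counter, instruction register, accumulator
    while True:
        ir = memory[pc]                   # fetch the instruction pointed to by the PC
        pc += 1                           # PC now points to the next instruction in sequence
        opcode, operand = ir >> 12, ir & 0x0FFF   # decode: split op-code and operand fields
        if opcode == LOAD:                # execute: a multiway switch on the op-code
            acc = memory[operand]
        elif opcode == ADD:
            acc += memory[operand]
        elif opcode == STORE:
            memory[operand] = acc
        elif opcode == JUMP:              # a branch simply overwrites the PC
            pc = operand
        elif opcode == HALT:
            return memory

# Program: load memory[100], add memory[101], store the sum in memory[102], halt.
memory = [0] * 4096
memory[0] = (LOAD  << 12) | 100
memory[1] = (ADD   << 12) | 101
memory[2] = (STORE << 12) | 102
memory[3] = (HALT  << 12)
memory[100], memory[101] = 40, 2
print(run(memory)[102])                   # -> 42

Note that the instructions and the data live in the same memory list, in the spirit of the von Neumann architecture discussed earlier: what makes a word an instruction is simply that the PC points at it.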
A computer contains several major subsystems such as the Central Processing Unit (CPU), memory, and peripheral device controllers. These components all plug into a "bus". The bus is essentially a communications highway; all the other components work together by transferring data over the bus. The active part of the computer, the part that does calculations and controls all the other parts, is the Central Processing Unit (CPU). The CPU contains electronic clocks that control the timing of all operations; electronic circuits that carry out arithmetic operations like addition and multiplication; circuits that identify and execute the instructions that make up a program; and

circuits that fetch data from memory. Instructions and data are stored in main memory; the CPU fetches them as needed. Peripheral device controllers look after input devices, like keyboards and mice, output devices, like printers and graphics displays, and storage devices like disks. The CPU and peripheral controllers work together to transfer information between the computer and its users. Sometimes, the CPU will arrange for data to be taken from an input device, transferred through the controller, moved over the bus and loaded directly into the CPU. Data being output follow the same route in reverse, moving from the CPU, over the bus, through a controller and out to a device. In other cases, the CPU may get a device controller to move data directly into, or out of, main memory.

CPU AND INSTRUCTIONS
The CPU of a modern small computer is physically implemented as a single silicon "chip". This chip will have engraved on it the million or more transistors and the interconnecting wiring that define the CPU's circuits. The chip will have one hundred or more pins around its rim; some of these pins are connection points for the signal lines from the bus, others are the points where electrical power is supplied to the chip. Although physically a single component, the CPU is logically made up of a number of subparts. The three most important, present in every CPU, are the timing and control circuits, the arithmetic logic unit, and the registers.

Timing and control circuits
The timing and control circuits are the heart of the system. A controlling circuit defines the computer's basic processing cycle:

repeat
    fetch next instruction from memory
    decode instruction (i.e. determine which data manipulation circuit is to be activated)
    fetch from memory any additional data that are needed
    execute the instruction (feed the data to the appropriate manipulation circuit)
until a "halt" instruction has been executed

Along with the controlling "fetch-decode-execute" circuit, the timing and control component of the CPU contains the circuits for decoding instructions and decoding addresses (i.e. working out the location in memory of required data elements).

The arithmetic logic unit (ALU) contains the circuits that manipulate data. There will be circuits for arithmetic operations like addition and multiplication; often there will be different versions of such circuits, one for integer numbers and a second for real numbers. Other circuits implement comparison operations that permit a program to check whether one data value is greater than or less than some other value. There will also be "logic" circuits that directly manipulate bit-pattern data.

While most data are kept in memory, CPUs are designed to hold a small amount of data in "registers" (data stores) in the CPU itself. It is normal for main memory to be large enough to hold millions of data values; the CPU may only have space for something like 16 values. A CPU register will hold as many bits as a "word" in the computer's memory; most current CPUs have registers that each store 32 bits of data. The circuits in the ALU are often organized so that some or all of their inputs and outputs must come from, or go to, CPU registers. Data values have to be fetched from memory and stored temporarily in CPU registers; only then can they be combined using an ALU circuit, with the result again going to a register. If the result is from the final step in a calculation, it gets stored back into main memory.
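Comparison circuits like these typically report their result by setting individual bits in the CPU's flags register (described earlier), which a conditional branch instruction then tests. The sketch below models that idea in software; the flag names and bit positions are invented for the example and do not correspond to any particular CPU.

# Illustrative model of comparison results being recorded as flag bits.
# Flag names and bit positions are invented for the example.

ZERO_FLAG    = 0b01   # set when the two compared values are equal
GREATER_FLAG = 0b10   # set when the first value is greater than the second

def compare(a, b):
    """Stand-in for the ALU comparator: returns a flags bit pattern."""
    flags = 0
    if a == b:
        flags |= ZERO_FLAG
    if a > b:
        flags |= GREATER_FLAG
    return flags

flags = compare(7, 5)
if flags & GREATER_FLAG:          # what a "branch if greater" instruction tests
    print("first value is greater")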
While some of the CPU registers are used for data values that are being manipulated, others may be reserved for calculations that the CPU has to do when it is working out where in memory particular data values are to be stored. CPU designs vary with respect to their use of registers, but commonly a CPU will have 8 or more "data" registers and another 8 "address" registers. Programmers who write in low-level "assembly languages" work with these registers directly: assembly language code defines details such as how data should be moved to specific data registers and how addresses are to be calculated and saved temporarily in address registers. Generally, programmers working with high-level languages need not concern themselves with registers, but, when necessary, a programmer can find out how the CPU registers are used in their code. In addition to the main data and address registers, the CPU contains many other registers. The ALU will contain numerous registers for holding temporary values that are generated as arithmetic operations are performed. The timing and control component contains a

number of registers that hold control information. The Program Counter (PC) holds the address of the memory location containing the next instruction to be executed. The Instruction Register (IR) holds the bit pattern that represents the current instruction; different parts of the bit pattern are used to encode the "operation code" and the address of any data required. Most CPUs have a "flags" register. The individual bits in this register record various status data. Typically, one bit is used to indicate whether the CPU is executing code from an ordinary program or code that forms part of the controlling Operating System (OS) program.

MEMORY AND DATA
Computers have two types of memory: ROM (Read Only Memory) and RAM (normal read-write memory). The acronym RAM, rather than RWM, is standard; it actually stands for "Random Access Memory". Its origin is very old: it was used to distinguish main memory (where data values can be accessed in any order, hence "randomly") from secondary storage like tapes (where data can only be accessed in sequential order). ROM memory is generally used to hold parts of the code of the computer's operating system. Some computers have small ROM memories that contain only a minimal amount of code, just sufficient to load the operating system from a disk storage unit. Other machines have larger ROM memories that store substantial parts of the operating system code along with other code, such as code for generating graphics displays. Most of the memory on a computer will be RAM. RAM is used to hold the rest of the code and data for the operating system, and the code and data for the program(s) being run on the computer. Memory sizes may be quoted in bits, bytes, or words:

Bit: a single 0 or 1 data value
Byte: a group of 8 bits
Word: the width of the computer's primary data paths

BUS
A computer's bus can be viewed as consisting of about one hundred parallel wires. Some of these wires carry timing signals, others carry control signals, another group carries a bit-pattern code that identifies the component (CPU, memory, peripheral controller) that is to deal with the data, and other wires carry signals encoding the data. Signals are sent over the bus by setting voltages on the different wires (the voltages are small, like 0 volts and 1 volt). When a voltage is applied to a wire the effect propagates along that wire at close to the speed of light; since the bus is only a few inches long, the signals are detectable essentially instantaneously by all attached components. Transmission of information is controlled by clocks that put timing signals on some of the wires. Information signals are encoded onto the bus, held for a few clock ticks to give all components a chance to recognize them and, if appropriate, take action, then the signals are cleared. The clock that controls the bus may be "ticking" at more than one hundred million ticks per second. The "plugs" that attach components to the bus incorporate quite sophisticated circuits. These circuits interpret the patterns of 0/1 voltages set on the control and address lines; thus memory can recognize a signal as "saying" something like "store the data at address xxx", while a disk control unit can recognize a message like "get ready to write to the disk block identified by these data bits". In addition, these circuits deal with "bus arbitration".
Sometimes, two or more components may want to put signals on the bus at exactly the same time; the bus arbitration circuitry resolves such conflicts by giving one component precedence (the other component waits a few hundred-millionths of a second and then gets the next chance to send its data).

PERIPHERALS
There are two important groups of input/output (I/O) devices: devices that provide data storage, like disks and tapes, and devices that connect the computer system to the external world (keyboards, printers, displays, sensors). The storage devices record data using the same bit-pattern encodings as used in the memory and CPU. These devices work with blocks of thousands of bytes: storage space is allocated in these large units, and data transfers are in units of blocks. The other I/O devices transfer only one, or sometimes two,

bytes of data at a time. Their controllers have two parts: one part attaches to the bus and has some temporary storage registers where data are represented as bit patterns; a second part has to convert between the internal bit representation of data and its external representation. External representations vary: sensors and effectors (used to monitor and control machinery in factories) use voltage levels, while devices like simple keyboards and printers may work with timed pulses of current.

Disks and tapes
Most personal computers have two or three different types of disk storage unit. There will be some form of permanently attached disk (the main "hard disk"), some form of exchangeable disk storage (a "floppy disk" or possibly some kind of cartridge-style hard disk), and there may be a CD-ROM drive for read-only CD disks. CD disks encode 0 and 1 data bits as spots with different reflectivity. The data can be read by a laser beam that is either reflected or not reflected according to the setting of each bit of data; the reflected light gets converted into a voltage pulse, and hence the recorded 0/1 data values get back into the form needed in the computer circuits. Currently, optical storage is essentially read-only: once data have been recorded they can't be changed. Read-write optical storage is just beginning to become available at reasonable prices.

Most disks use magnetic recording. The disks themselves may be made of thin plastic sheets (floppy disks), or ceramics or steel (hard disks). Their surfaces are covered in a thin layer of magnetic oxide. Spots of this magnetic oxide can be magnetically polarized. If a suitably designed wire coil is moved across the surface, the polarized spots induce different currents in the coil, allowing data to be read back from the disk. New data can be written by moving a coil across the surface with a sufficiently strong current flowing to induce a new magnetic spot with the required polarity. There is no limit on the number of times that data can be rewritten on magnetic disks. The bits are recorded in "tracks"; these form concentric rings on the surface of the disk. Disks have hundreds of these tracks. (On simple disk units, all tracks hold the same number of bits; since the outermost tracks are slightly longer, their bits are spaced further apart than those on the innermost tracks. More elaborate disks can arrange to have the bits recorded at the same density; the outer tracks then hold more bits than the inner tracks.) Tracks are too large a unit of storage: they can hold tens of thousands of bits. Storage on a track is normally broken down into "blocks" or sectors. At one time, programmers could select the size of blocks; a programmer would ask for some tracks of disk space and then arrange the breakdown into blocks for themselves. Nowadays, the operating system program that controls most of the operations of a computer will mandate a particular block size, typically in the range 512 bytes to 4096 bytes (sometimes more). The blocks of bytes written to the disk will contain these data bytes along with a few control bytes added by the disk controller circuitry. These control bytes are handled entirely by the disk controller circuits.
They provide housekeeping information used to identify blocks, and include extra checking information that can be used, when reading data, to verify that the data bits are still the same as when originally recorded. The disk controller may identify blocks by a combination of block number and track number, or may number all blocks sequentially. If sequential numbering is used and each track holds, say, 16 blocks, then track 0 contains blocks 0 to 15, track 1 contains blocks 16, 17, and so on. Each block holds 512 data bytes plus the extra control bytes used by the disk controller.
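The mapping between a sequential block number and a (track, block-within-track) position is a simple calculation. The sketch below assumes a hypothetical disk with 16 blocks per track; the figure is invented purely for illustration.

# Illustrative conversion between a sequential block number and its
# (track, block-within-track) position, assuming 16 blocks per track.

BLOCKS_PER_TRACK = 16   # assumed geometry for the example

def to_sequential(track, block_in_track):
    return track * BLOCKS_PER_TRACK + block_in_track

def to_track_and_block(sequential_number):
    return divmod(sequential_number, BLOCKS_PER_TRACK)

print(to_sequential(1, 0))        # 16 -- the first block of track 1
print(to_track_and_block(17))     # (1, 1) -- the second block of track 1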

Before data blocks can be read or written, the read/write head mechanism must be moved to the correct track. The read/write head contains the coil that detects or induces magnetism. It is moved by a stepping motor that can align it accurately over a specific track. Movements of the read/write heads are, in computer terms, relatively slow: it can take a hundredth of a second to adjust the position of the heads. (The operation of moving the heads to the required track is called "seeking"; details of disk performance commonly include information on "average seek times".) Once the head is aligned above the required track, it is still necessary for the spinning disk to bring the required block under the read/write head (the disk controller reads its control information from the blocks as they pass under the head and so "knows" when the required block is arriving). When the block arrives under the read/write head, the recorded 0/1 bit values can be read and copied to wherever else they are needed. The read circuitry in the disk reassembles the bits into bytes; these then get transferred over the bus to main memory (or, sometimes, into a CPU register).

Disks may have their own private cache memories. Again, these are "hidden" stores where commonly accessed data can be kept for faster access. A disk may have cache storage sufficient to hold the contents of a few disk blocks (i.e. several thousand bytes). As well as being sent across the bus to memory, all the bytes of a block being read can be stored in the local disk cache. If a program asks the disk for a block whose contents are already in the cache, the required bytes can be read from the cache and sent to main memory without waiting for the mechanical parts of the disk.

Commonly, hard disks have several disk platters mounted on a single central spindle, with read/write heads for each platter. Data can be recorded on both sides of the disk platters (though often the topmost and bottommost surfaces are unused). The read/write heads are all mounted on the same stepping motor mechanism and move together between the disk platters.

The disk controller will have several registers (similar to CPU registers) and circuitry for performing simple additions on (binary) integer numbers. (A cache memory in the controller is optional; currently, most disk controllers don't have these caches.) One register (or group of registers) holds the disk address or "block number" of the data block that must be transferred. Another register holds a byte count; this is initialized to the block size and decremented as each byte is transferred, and the controller stops trying to read bits when this counter reaches zero. The controller will have a special register used for grouping bits into bytes before they get sent over the bus. Yet another register holds the address of the (byte) location in memory that is to hold the next byte read from the disk (or the next byte to be written to disk). Errors can occur with disk transfers: the magnetic oxide surface may have been damaged, or the read process may fail to retrieve the data. The circuits in the disk can detect this but need some way of passing the information to the program that wanted the data. This is where the flags register in the disk controller gets used. If something goes wrong, bits are set in the flags register to identify the problem. A program doing a disk transfer will check the contents of the flags register when the transfer is completed and can attempt some recovery action if the data transfer was erroneous.
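The registers just listed (block number, byte counter, memory destination address, and flags) can be modelled in a few lines of code. The sketch below is a simplified illustration with invented names and a fake block store; it mimics a controller copying one block into "memory" and setting an error bit if the block cannot be read.

# Simplified model of a disk controller's registers during one block transfer.
# All names and the fake "platter" data are invented for illustration.

BLOCK_SIZE = 512
ERROR_FLAG = 0b1

platter = {7: bytes(range(256)) * 2}        # pretend block 7 holds 512 bytes
memory = bytearray(4096)                    # pretend main memory

controller = {
    "block_number": 7,                      # which block to transfer
    "byte_counter": BLOCK_SIZE,             # decremented as bytes are moved
    "destination": 1024,                    # where in memory the bytes go
    "flags": 0,                             # error/status bits
}

block = platter.get(controller["block_number"])
if block is None:                           # e.g. a damaged or missing block
    controller["flags"] |= ERROR_FLAG
else:
    addr = controller["destination"]
    for byte in block:                      # "direct memory access" style copy
        memory[addr] = byte
        addr += 1
        controller["byte_counter"] -= 1     # stop condition: counter reaches zero
        if controller["byte_counter"] == 0:
            break

print(controller["flags"], controller["byte_counter"])   # 0 0 on success

A program would inspect the flags field after the transfer, exactly as the text describes.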
This elaborate circuitry and register setup allows a disk controller to work with a fair degree of autonomy. The data transfer process starts with the CPU sending a request over the bus to the disk controller; the request causes the disk unit to load its block number register and to start its heads seeking to the appropriate track. It may take the disk a hundredth of a second to get its heads positioned. During this time, the CPU can execute tens of thousands of instructions. Ideally, the CPU will be able to get on with other work,

which it can do provided that it can work with other data that have been read earlier. At one time, programmers were responsible for trying to organize data transfers so that the CPU would be working on one block of data while the next block was being read; nowadays, this is largely the responsibility of the controlling OS program. When the disk finds the block it informs the CPU, which responds by providing details of where the data are to be stored in memory. The disk controller can then transfer successive bytes read from the disk into successive locations in memory. A transfer that works like this is said to be using "direct memory access" (DMA). When the transfer is complete, the disk controller sends another signal to the CPU.

Files on disk are made up out of blocks. For example, a text file with twelve thousand characters would need twenty-four 512-byte blocks. (The last block would only contain a few characters from the file; it would be filled out with either space characters or just random characters.) Programmers don't choose the blocks used for their files. The operating system is responsible for choosing the blocks used for each file, and for recording details for future reference in a file directory, itself held in a fixed set of disk blocks. The data in these blocks form a table of entries, with each entry specifying a file name, file size (in bytes actually used and complete blocks allocated), and some record of which blocks are allocated. The allocation scheme shown in Figure 1.14 uses a group of contiguous blocks to make up each individual file. This makes it easy to record details of allocated blocks; the directory need only record the file size and the first block number. In addition to the table of entries describing allocated files, the directory structure would contain a record of which blocks were allocated and which were free and therefore available for use if another file had to be created. One simple scheme uses a map with one bit for each block; the bit is set if the block is allocated.

Tapes
Tapes are now of minor importance as storage devices for users' files. Mostly they are used for "archival" storage, recording data that are no longer of active interest but may be required again later. There are many different requirements for archival data. For example, government taxation offices typically stipulate that companies keep full financial record data for the past seven years, but only the current year's data will be of interest to a company; so a company will have its current data on disk and hold the data for the other six years on tapes. Apart from archival storage, the main use of tapes is for backup of disk units. All the data on a computer's disks will be copied to tape each night (or maybe just weekly). The tapes can be stored somewhere safe, remote from the main computer site. If there is a major accident destroying the disks, the essential data can be retrieved from tape and loaded on some other computer system. The tape units used for most of the last 45 years are physically a bit like large reel-to-reel tape recorders. The tapes are about half an inch wide and two thousand feet in length, and are run from their reel, through tensioning devices, across read/write heads, to a take-up reel. The read/write heads record 9 separate data tracks; these 9 tracks are used to record the 8 bits of a byte along with an extra check bit. Successive bytes are written along the length of the tape; an inch of tape could pack in as much as a few thousand bytes.
Data are written to tape in blocks of hundreds, or thousands, of bytes. (On disks, the block sizes are now usually chosen by the operating system, while the size of tape blocks is program selectable.) Blocks have to be separated by gaps where no data are recorded; these "inter-record gaps" have to be large (i.e. half an inch or so) and they tend to reduce the storage capacity of a tape. Files are written to tape as sequences of blocks. Special "end of file" patterns can be recorded on tape to delimit different files. A tape unit can find a file (identified by number) by counting end-of-file marks, and then can read its successive data blocks. Data transfers are inherently sequential: block 0 of a file must be read before the tape unit can find block 1. Files cannot usually be rewritten to the same bit of tape; writing to a tape effectively destroys all data previously recorded further along the tape (the physical lengths of data blocks, inter-record gaps, file marks etc. vary a little with the tension on the tape, so there is no guarantee that subsequent data won't be overwritten). All the processes using tapes, like skipping to file marks, sequential reads etc., are slow. Modern "streamer" tape units used for backing up the

data on disks use slightly different approaches, but again they are essentially a sequential medium. Although transfer rates can be high, the time taken to search for files is considerable. Transfers of individual files are inconvenient; these streamer tapes are most effective when used to save (and, if necessary, restore) all the data on a disk.

Other I/O devices
A keyboard and a printer are representative of simple input/output (I/O) peripheral devices. Such devices transfer a single data character, encoded as an 8-bit pattern, to or from the computer. When a key is pressed on the keyboard, internal electronics identifies which key was pressed and hence identifies the appropriate bit pattern to send to the computer. When a printer receives a bit pattern from the computer, its circuitry works out how to type or print the appropriate character. The connection between such a device and its controller will have at least two wires; if there are only two wires, the bits of a byte are sent serially. Many personal computers have controllers for "parallel ports"; these have a group of 9 or more wires which can carry a reference voltage and eight signal voltages (and so can effectively transmit all the bits of a byte at the same time).

The simplest way of organizing input from such a device is to have the CPU wait until data arrive from the device. In outline, the program code would be:

repeat
    ask the device for its status
until the device replies "ready"
read data

The "repeat" loop would be encoded using three instructions. The first would send a message on the bus requesting the status of the device's ready flag; this instruction would cause the CPU to wait until the reply signal was received from the device. The second instruction would test the status data retrieved. The third instruction would be a conditional jump going back to the start of the loop. When a key is pressed on the keyboard, the hardware in the keyboard identifies the key and sends its ASCII code as a sequence of voltage pulses. These pulses are interpreted by the controller, which assembles the correct bit pattern in the controller's data register. When all 8 bits have been obtained, the controller sets the ready flag. The next request to the device for its status will get a "ready" reply. The program can then continue with a "read data register" request, which copies the contents of the device's data register into a CPU register. If the character is to be stored in memory, another sequence of instructions has to be executed to determine the memory address and then copy the bits from the CPU register into memory.

A wait loop like this is easy to code, but makes very poor use of CPU power. The CPU would spend almost all its time waiting for input characters (most computer users type very few characters per second). There are other approaches to organizing input from these low-speed, character-oriented devices. These alternative approaches are considerably more complex. They rely on the device sending an "interrupt signal" at the same time as it sets its ready flag. The program running on the CPU is organized so that it gets on with other work; when the interrupt signal is received, this other work is temporarily suspended while the character data is read.
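The busy-wait ("polling") loop just described maps directly onto code. The sketch below fakes a device with a ready flag and a data register; the class and its fields are invented for illustration and stand in for the controller's real hardware registers.

import random

# Fake device controller: a ready flag plus a one-byte data register.
# Both fields are stand-ins for hardware registers; names are invented.
class FakeKeyboardController:
    def __init__(self):
        self.ready = False
        self.data_register = 0

    def poll_status(self):
        # Pretend a key press eventually arrives.
        if random.random() < 0.1:
            self.data_register = ord("A")
            self.ready = True
        return self.ready

    def read_data_register(self):
        self.ready = False          # reading clears the ready flag
        return self.data_register

device = FakeKeyboardController()

# The busy-wait loop: ask for status until "ready", then read the data.
while not device.poll_status():
    pass                            # the CPU does nothing useful here

print(chr(device.read_data_register()))   # 'A'

The wasted iterations of the while loop are exactly the wasted CPU time that interrupt-driven input avoids.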
Visual displays used for computer output have an array of data elements, one element for each pixel on the screen. If the screen is black and white, a single-bit data element will suffice for each pixel; the pixels of colour screens require at least one byte of storage each. The memory used for the visual display may be part of the main memory of the computer, or may be a separate memory unit. Programs get information displayed on a screen by setting the appropriate data elements in the memory used by the visual display. Programs can access the data element for each pixel, but setting individual pixels is laborious. Usually, the operating system of the computer will help by providing routines that can be used to draw lines, colour in rectangles, and show images of letters and digits.

There are many other input and output devices that can be attached to computers. Computers have numerous clock devices. As well as the high-frequency clocks that control the internal operations of the CPU and the bus, there will be clocks that record the time of day and, possibly, serve as a form of "alarm clock" timer. The time-of-day clock will tick at about 60 times per second; at each tick, a counter gets incremented. An alarm clock timer can be told to send a signal when a particular amount of time has elapsed.

"Analog-to-Digital" (A-to-D) converters change external voltages ("analog" data) into bit patterns that represent numbers ("digital" data). A-to-Ds allow computers to work with all kinds of input. The input voltage can come from a photomultiplier/detector system (allowing light intensities to be measured), or from a thermocouple

(measurements of temperature), a pressure transducer, or anything else that can generate a voltage. This allows computers to monitor all kinds of external devices, everything from signals in the nerves of a frog's leg to neutron fluxes in a nuclear reactor. Joystick control devices may incorporate simple forms of A-to-D converters. (Controllers for mice are simpler. Movement of a mouse pointer causes wheels to turn inside the mouse assembly. On each complete revolution, these wheels send a single voltage pulse to the mouse controller. This counts the pulses and stores the counts in its data registers; the CPU can read these data registers and find how far the mouse has moved in the x and y directions since it was last checked.) The controller for an A-to-D is similar to the controllers for other simple input devices, except that the data register will have more bits. A one-byte data register can only represent numbers in the range 0 to 255, and usually an accuracy of one part in 250 is insufficient. Larger data registers are used, e.g. 12 bits for measurements that need to be accurate to one part in four thousand, or 16 bits for an accuracy of one part in thirty thousand. On a 12-bit register, the value 0 would encode a minimum (zero) input, while 4095 would represent the upper limit of the measured range. The external interface parts of the A-to-D will allow different measurement ranges to be set, e.g. 0 to 1 volt, 0 to 5 volts, -5 to 5 volts. An A-to-D unit will often have several inputs; instructions from the CPU direct the controller to select a particular input for the next measurement.

A "Digital-to-Analog" (D-to-A) converter is the output equivalent of an A-to-D input. A D-to-A has a data register that can be loaded with some binary number by the CPU; the D-to-A converts the number into a voltage. The voltage can then be used, for example, to control power to a motor. Using an A-to-D for input and a D-to-A for output, a computer program can do things like monitor temperatures in reactor vessels in a chemical plant and control the heaters so that the temperature remains within a required range.

Often, there is a need for a computer to monitor, or possibly control, simple two-state devices: door locks (open or locked), valves (open or shut), on/off control lights, etc. There are various forms of input device where the data register has each bit wired so that it indicates the state of one of the monitored devices. Similarly, in the corresponding output device, the CPU can load a data register with the on/off state for the controlled devices; a change of the setting of a bit in the control register causes actuators to open or close the corresponding physical device.
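The relationship between a 12-bit A-to-D reading and the voltage it represents, described a few paragraphs above, is a simple proportion. The sketch below assumes a hypothetical converter configured for a 0 to 5 volt range; the function name and range values are invented for illustration.

# Converting a raw 12-bit A-to-D reading (0..4095) into a voltage, assuming
# the converter has been configured for a 0 to 5 volt measurement range.

RANGE_MIN_VOLTS = 0.0
RANGE_MAX_VOLTS = 5.0
MAX_READING = 4095          # largest value a 12-bit register can hold

def reading_to_volts(raw_reading):
    fraction = raw_reading / MAX_READING
    return RANGE_MIN_VOLTS + fraction * (RANGE_MAX_VOLTS - RANGE_MIN_VOLTS)

print(reading_to_volts(0))                # 0.0 -- bottom of the measured range
print(reading_to_volts(4095))             # 5.0 -- top of the measured range
print(round(reading_to_volts(2048), 3))   # roughly mid-range, about 2.501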

Software, by definition, is the collection of computer programs, procedures and documentation that performs different tasks on a computer system. The term 'software' was first used by John Tukey in 1958. At the most basic level, computer software consists of machine language: groups of binary values that specify processor instructions. The processor instructions change the state of the computer hardware in a predefined sequence. Briefly, computer software is the language in which a computer speaks. There are different types of computer software.

Major Types of Software
Programming Software: This is one of the most commonly known and popularly used forms of computer software. It comes in the form of tools that assist a programmer in writing computer programs. Computer programs are sets of logical instructions that make a computer system perform certain tasks. The tools that help programmers in instructing a computer system include text editors, compilers and interpreters.
System Software: It helps in running the computer hardware and the computer system. System software is a collection of operating systems, device drivers, servers, windowing systems and utilities. System software helps an application programmer in abstracting away from hardware, memory and other internal complexities of a computer.
Application Software: It enables end users to accomplish certain specific tasks. Business software, databases and educational software are some forms of application software. Word processors, which are dedicated to specialized tasks to be performed by the user, are other examples of application software.
Apart from these three basic types of software, there are some other well-known forms of computer software, such as inventory management software, ERP, utility software and accounting software.
Inventory Management Software: This type of software helps an organization in tracking its goods and materials on the basis of quality as well as quantity. Warehouse inventory management functions encompass internal warehouse movements and storage. Inventory software helps a company organize inventory and optimize the flow of goods in the organization, leading to improved customer service.
Utility Software: Also known as service routines, utility software helps in the management of computer hardware and application software. It performs a small range of tasks. Disk defragmenters, system utilities and virus scanners are typical examples of utility software.
Data Backup and Recovery Software: Ideal data backup and recovery software provides functionality beyond simple copying of data files; it often lets the user specify what is to be backed up and when. Backup and recovery software preserves the original organization of files and allows easy retrieval of the backed-up data.

System software
System software provides the basic functions for computer usage and helps run the computer hardware and system. It includes a combination of the following:

Device drivers
Operating systems
Servers
Utilities
Window systems

System software is responsible for managing a variety of independent hardware components so that they can work together harmoniously. Its purpose is to unburden the application software programmer from the often complex details of the particular computer being used, including such accessories as communications devices, printers, device readers, displays and keyboards, and also to partition the computer's resources, such as memory and processor time, in a safe and stable manner. System software is software that controls a computer and runs applications on it, providing the environment for application software. Application software is the software in application programs (also called computer programs or software programs) that allows the user to perform tasks, play games, listen to music, and otherwise make use of the computer. System software is a collection of programs, and each type has a number of component parts, most notably the operating system (OS). Also included in system software are utilities, device drivers, and language translators. Utilities include a variety of specialized programs that can be applied across application programs. Basic utilities include troubleshooting and/or diagnostic programs. Other utilities may include backup programs, file compression, uninstall, and antivirus programs. Device drivers are needed for every peripheral connected to the computer, from the mouse and keyboard to the printer and/or scanner. Device drivers for basic components like the mouse and keyboard are included in the system, while others may be supplied by the peripheral manufacturer.

Operating system
An operating system, or OS, is a software program that enables the computer hardware to communicate and operate with the computer software. Without an operating system, a computer would be useless.

Operating system types
As computers have progressed and developed, so have the operating systems. Below is a basic list of the different categories of operating system and a few examples of operating systems that fall into each of them. Many computer operating systems will fall into more than one of the categories below.

GUI - Short for Graphical User Interface, a GUI operating system contains graphics and icons and is commonly navigated by using a computer mouse.

Multi-user - A multi-user operating system allows multiple users to use the same computer at the same time and/or at different times. Some examples of multi-user operating systems:
Linux
Unix
Windows 2000

Multiprocessing - An operating system capable of supporting and utilizing more than one computer processor. Some examples of multiprocessing operating systems:
Linux
Unix
Windows 2000

Multitasking - An operating system that is capable of allowing multiple software processes to run at the same time. Some examples of multitasking operating systems:
Unix
Windows 2000

Multithreading - Operating systems that allow different parts of a software program to run concurrently. Operating systems that fall into this category include:
Linux
Unix
Windows 2000
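Multithreading, the last category above, can be illustrated with a short program in which two parts of the same program run concurrently. The sketch below uses Python's standard threading module purely as an illustration of the idea; the task names are invented.

import threading
import time

# Two parts of one program running concurrently: a worker thread counts
# while the main thread continues with its own (pretend) work.

def count_to(n):
    for i in range(1, n + 1):
        time.sleep(0.01)            # pretend to do some work
        print(f"worker: {i}")

worker = threading.Thread(target=count_to, args=(3,))
worker.start()                      # the worker runs alongside the main thread

for step in ("reading input", "updating display"):
    time.sleep(0.01)
    print(f"main thread: {step}")

worker.join()                       # wait for the worker to finish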

Programming software
Programming software usually provides tools to assist a programmer in writing computer programs and software in different programming languages in a more convenient way. The tools include:

Compilers
Debuggers
Interpreters
Linkers
Text editors

An Integrated Development Environment (IDE) is a single application that attempts to manage all these functions.

Application software
Application software is developed to aid in any task that benefits from computation. It is a broad category, and encompasses software of many kinds, including web browsers. This category includes:

Business software
Computer-aided design
Databases
Decision-making software
Educational software
Image editing
Industrial automation
Mathematical software
Medical software
Molecular modeling software
Quantum chemistry and solid state physics software
Simulation software
Spreadsheets
Telecommunications (i.e., the Internet and everything that flows on it)
Video editing software
Video games
Word processing

APPLICATION SOFTWARE CLASSIFICATION
Application software falls into two general categories: horizontal applications and vertical applications. Horizontal applications are the most popular and widespread, used across departments or companies; vertical applications are niche products, designed for a particular type of business or for one division in a company. There are many types of application software:

An application suite consists of multiple applications bundled together. They usually have related functions, features and user interfaces, and may be able to interact with each other, e.g. open each other's files. Business applications often come in suites, e.g. Microsoft Office, OpenOffice.org and iWork, which bundle together a word processor, a spreadsheet, etc.; but suites exist for other purposes, e.g. graphics or music.
Enterprise software addresses the needs of organization processes and data flow, often in a large distributed environment. (Examples include financial systems, customer relationship management (CRM) systems and supply-chain management software.) Note that departmental software is a sub-type of enterprise software, with a focus on smaller organizations or groups within a large organization. (Examples include travel expense management and IT helpdesk software.)
Enterprise infrastructure software provides common capabilities needed to support enterprise software systems. (Examples include databases, email servers, and systems for managing networks and security.)
Information worker software addresses the needs of individuals to create and manage information, often for individual projects within a department, in contrast to enterprise management. Examples include time management, resource management, documentation tools, and analytical and collaborative software. Word processors, spreadsheets, email and blog clients, personal information systems, and individual media editors may aid in multiple information worker tasks.
Content access software is software used primarily to access content without editing, but may include software that allows for content editing. Such software addresses the needs of individuals and groups to consume digital entertainment and published digital content. (Examples include media players, web browsers, help browsers and games.)
Educational software is related to content access software, but has the content and/or features adapted for use by educators or students. For example, it may deliver evaluations (tests), track progress through material, or include collaborative capabilities.
Simulation software is computer software for simulating physical or abstract systems for research, training or entertainment purposes.
Media development software addresses the needs of individuals who generate print and electronic media for others to consume, most often in a commercial or educational setting. This includes graphic-art software, desktop publishing software, multimedia development software, HTML editors, digital-animation editors, digital audio and video composition software, and many others.[2]
Mobile applications (mobile apps) run on hand-held devices such as smart phones, tablet computers, portable media players, personal digital assistants and enterprise digital assistants; see mobile application development.

Product engineering software is used in developing hardware and software products. This includes computer-aided design (CAD), computer-aided engineering (CAE), computer language editing and compiling tools, integrated development environments, and application programmer interfaces.
A command-line interface is one in which you type in commands to make the computer do something. You have to know the commands and what they do, and type them correctly. DOS and Unix are examples of operating systems that provide command-driven interfaces. A graphical user interface (GUI) is one in which you select command choices from various menus, buttons and icons using a mouse. Microsoft Windows, Mac OS and Ubuntu are common examples of operating systems that bundle one or more graphical user interfaces. A third-party server-side application is one that the user may choose to install in his or her account on a social media site or other Web 2.0 website, for example a Facebook app.

Applications can also be classified by computing platform.

INFORMATION WORKER SOFTWARE


Enterprise resource planning
Accounting software
Task and scheduling
Field service management
Data management
  o Contact management
  o Spreadsheet
  o Personal database
Documentation
  o Document automation/assembly
  o Word processing
  o Desktop publishing software
  o Diagramming software
  o Presentation software
  o Email
  o Blog
Reservation systems
Financial software
  o Day trading software
  o Banking software
  o Clearing systems
  o Arithmetic software

Content access software

Electronic media software
  o Web browser
  o Media players
  o Hybrid editor players

Entertainment software

Digital pets
Screen savers
Video games
  o Arcade games
  o Video game console emulator
  o Personal computer games
  o Console games
  o Mobile games

Educational software

Classroom management
Learning/training management software
Reference software
Sales readiness software
Survey management

Enterprise infrastructure software


Business workflow software
Database management system (DBMS) software
Digital asset management (DAM) software
Document management software
Geographic information system (GIS) software

Simulation software

Computer simulators
  o Scientific simulators
  o Social simulators
  o Battlefield simulators
  o Emergency simulators
  o Vehicle simulators
      Flight simulators
      Driving simulators
  o Simulation games
      Vehicle simulation games

Media development software


Image organizer
Media content creating/editing
  o 3D computer graphics software
  o Animation software
  o Graphic art software
  o Image editing software
      Raster graphics editor
      Vector graphics editor
  o Video editing software
  o Sound editing software
      Digital audio editor
  o Music sequencer
      Scorewriter
  o Hypermedia editing software
      Web development software
  o Game development tool

Computer software is defined as a set of programs and procedures that are intended to perform some tasks on a computer system. A software program is a set of instructions aimed at changing the state of the computer hardware. At the lowest level, software is in the form of an assembly language, a set of instructions in a machine-understandable form. At the highest level, software is in the form of high-level languages, which are compiled or interpreted into machine language code.

Major Types of Software
Computer software systems are classified into three main types, namely system software, programming software and application software. System software comprises device drivers, operating systems, servers and other such software components, which help the programmer abstract away from the memory and hardware features of the system. Programming software assists the programmer in writing programs by providing him/her with tools such as editors, compilers, linkers, debuggers and more. Application software, one of the most important types of software, is used to achieve certain specific tasks.

What is Application Software?
Application software applies the capabilities of a computer directly to a dedicated task. Application software is able to manipulate text, numbers and graphics. It can be in the form of software focused on a certain single task like word processing, spreadsheets, or playing audio and video files.

Different Types of Application Software
Word Processing Software: This software enables users to create and edit documents. The most popular examples of this type of software are MS Word, WordPad, Notepad and some other text editors.
Database Software: A database is a structured collection of data. A computer database relies on database software to organize the data and enable database users to carry out database operations. Database software allows users to store and retrieve data from databases. Examples are Oracle, MS Access, etc.
Spreadsheet Software: Excel, Lotus 1-2-3 and Apple Numbers are some examples of spreadsheet software. Spreadsheet software allows users to perform calculations. It simulates paper worksheets by displaying multiple cells that make up a grid.
Multimedia Software: This software allows users to create and play audio and video media. Audio converters, players, burners, and video encoders and decoders are some forms of multimedia software. Examples of this type of software include RealPlayer and Media Player.
Presentation Software: The software that is used to display information in the form of a slide show is known as presentation software. This type of software includes three functions, namely editing that allows insertion and formatting of text, methods to include graphics in the text, and a facility for executing the slide show. Microsoft PowerPoint is the best example of presentation software.

A programming language is an artificial language designed to communicate instructions to a machine, particularly a computer. Programming languages can be used to create programs that control the behavior of a machine and/or to express algorithms precisely. The earliest programming languages predate the invention of the computer, and were used to direct the behavior of machines such as Jacquard looms and player pianos. Thousands of

different programming languages have been created, mainly in the computer field, with many more being created every year. Most programming languages describe computation in an imperative style, i.e. as a sequence of commands, although some languages, such as those that support functional programming or logic programming, use alternative forms of description. A programming language is usually split into the two components of syntax (form) and semantics (meaning). Some languages are defined by a specification document (for example, the C programming language is specified by an ISO standard), while other languages, such as Perl 5 and earlier, have a dominant implementation that is used as a reference. A programming language is a notation for writing programs, which are specifications of a computation or algorithm.[1] Some, but not all, authors restrict the term "programming language" to those languages that can express all possible algorithms. A number of traits are often considered important for what constitutes a programming language. Markup languages like XML, HTML or troff, which define structured data, are not generally considered programming languages. Programming languages may, however, share syntax with markup languages if a computational semantics is defined; XSLT, for example, is a Turing-complete XML dialect. Moreover, LaTeX, which is mostly used for structuring documents, also contains a Turing-complete subset.

The term computer language is sometimes used interchangeably with programming language. However, the usage of both terms varies among authors, including the exact scope of each. One usage describes programming languages as a subset of computer languages: in this vein, languages used in computing that have a different goal than expressing computer programs are generically designated computer languages. For instance, markup languages are sometimes referred to as computer languages to emphasize that they are not meant to be used for programming. Another usage regards programming languages as theoretical constructs for programming abstract machines, and computer languages as the subset thereof that runs on physical computers, which have finite hardware resources. John C. Reynolds emphasizes that formal specification languages are just as much programming languages as are the languages intended for execution. He also argues that textual and even graphical input formats that affect the behavior of a computer are programming languages, despite the fact that they are commonly not Turing-complete, and remarks that ignorance of programming language concepts is the reason for many flaws in input formats.

ELEMENTS
All programming languages have some primitive building blocks for the description of data and the processes or transformations applied to them (like the addition of two numbers or the selection of an item from a collection). These primitives are defined by syntactic and semantic rules which describe their structure and meaning respectively. A programming language's surface form is known as its syntax. Most programming languages are purely textual; they use sequences of text including words, numbers, and punctuation, much like written natural languages. On the other hand, there are some programming languages which are more graphical in nature, using visual relationships between symbols to specify a program. The syntax of a language describes the possible combinations of symbols that form a syntactically correct program. The meaning given to a combination of symbols is handled by semantics (either formal or hard-coded in a reference implementation). Since most languages are textual, the discussion here concerns textual syntax.
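The distinction between syntax and semantics can be shown with a tiny example. The snippet below is illustrative only and uses Python as the example language.

# Syntax: the rules for forming a valid statement.
# Semantics: what a well-formed statement actually means when executed.

total = 2 + 3          # syntactically correct; its semantics: bind the value 5 to "total"
print(total)           # 5

# The next line follows the syntax rules for an expression statement,
# but its semantics depend on a name that has never been defined, so
# executing it would raise a NameError. (Left commented out for that reason.)
# print(undefined_name)

# A syntax error, by contrast, is rejected before the program runs at all:
# total = 2 +          # incomplete expression: not a valid sentence of the language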

UNIT 2
Information Systems and Strategic Implication

A system (from Latin systema, in turn from Greek systēma, "whole compounded of several parts or members, system", literally "composition") is a set of interacting or interdependent components forming an integrated whole. A system is a set of elements (often called 'components' instead) and relationships which are different from relationships of the set or its elements to other elements or sets. Fields that study the general properties of systems include systems theory, cybernetics, dynamical systems, thermodynamics and complex systems. They investigate the abstract properties of systems' matter and organization, looking for concepts and principles that are independent of domain, substance, type, or temporal scale. Most systems share common characteristics, including:

Systems have structure, defined by components/elements and their composition;
Systems have behavior, which involves inputs, processing and outputs of material, energy, information, or data;
Systems have interconnectivity: the various parts of a system have functional as well as structural relationships to each other;
Systems may have some functions or groups of functions.

The term system may also refer to a set of rules that governs structure and/or behavior.

IS or CBIS Components
The five components that must come together in order to produce a Computer-Based Information System (CBIS) are:
1. Hardware: The term hardware refers to machinery. This category includes the computer itself, which is often referred to as the central processing unit (CPU), and all of its support equipment. Among the support equipment are input and output devices, storage devices and communications devices.
2. Software: The term software refers to computer programs and the manuals (if any) that support them. Computer programs are machine-readable instructions that direct the circuitry within the hardware parts of the CBIS to function in ways that produce useful information from data. Programs are generally stored on some input/output medium, often a disk or tape.
3. Data: Data are facts that are used by programs to produce useful information. Like programs, data are generally stored in machine-readable form on disk or tape until the computer needs them.
4. Procedures: Procedures are the policies that govern the operation of a computer system. "Procedures are to people what software is to hardware" is a common analogy used to illustrate the role of procedures in a CBIS.
5. People: Every CBIS needs people if it is to be useful. Often the most overlooked element of a CBIS is the people, probably the component that most influences the success or failure of information systems.

Why Information Systems Are Important

An understanding of the effective and responsible use and management of information systems and technologies is important for managers, business professionals, and other knowledge workers in today's internetworked enterprises. Information systems play a vital role in the e-business and e-commerce operations, enterprise collaboration and management, and strategic success of businesses that must operate in an internetworked global environment. Thus, the field of information systems has become a major functional area of business administration.

An IS Framework for Business Professionals
This framework includes (1) foundation concepts: fundamental behavioral, technical, business, and managerial concepts such as system components and functions, or competitive strategies; (2) information technologies: concepts, developments, or management issues regarding hardware, software, data management, networks, and other technologies; (3) business applications: major uses of IT for business processes, operations, decision making, and strategic/competitive advantage; (4) development processes: how end users and IS specialists develop and implement business/IT solutions to problems and opportunities arising in business; and (5) management challenges: how to effectively and ethically manage the IS function and IT resources to achieve top performance and business value in support of the business strategies of the enterprise.

An Information System Model
An information system uses the resources of people, hardware, software, data, and networks to perform input, processing, output, storage, and control activities that convert data resources into information products. Data are first collected and converted to a form that is suitable for processing (input). Then the data are manipulated and converted into information (processing), stored for future use (storage), or communicated to their ultimate user (output) according to correct processing procedures (control).
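The input-processing-output-storage-control cycle just described can be pictured as a tiny program. The example below is a purely illustrative toy; the function names and the sales figures are invented and simply mirror the five activities of the model.

# Toy illustration of the information system model:
# input -> processing -> storage -> output, with a simple control check.

raw_records = ["120", "85", "300"]            # data resources arriving as text

def collect_input(records):
    return [int(r) for r in records]          # input: convert data to a processable form

def process(values):
    return {"total_sales": sum(values),       # processing: turn data into information
            "average_sale": sum(values) / len(values)}

storage = {}                                  # storage: keep information for future use

def output(report):
    for name, value in report.items():        # output: communicate to the user
        print(f"{name}: {value}")

values = collect_input(raw_records)
if all(v >= 0 for v in values):               # control: check the processing is valid
    report = process(values)
    storage["latest_report"] = report
    output(report)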

TYPES OF INFORMATION SYSTEM
For most businesses, there are a variety of requirements for information. Senior managers need information to help with their business planning. Middle management need more detailed information to help them monitor and control business activities. Employees with operational roles need information to help them carry out their duties. As a result, businesses tend to have several "information systems" operating at the same time. This section highlights the main categories of information system and provides some examples to help you distinguish between them.

The main kinds of information systems in business are described briefly below.

Executive Support Systems
An Executive Support System ("ESS") is designed to help senior management make strategic decisions. It gathers, analyses and summarises the key internal and external information used in the business. A good way to think about an ESS is to imagine the senior management team in an aircraft cockpit, with the instrument panel showing them the status of all the key business activities. ESS typically involve lots of data analysis and modelling tools, such as "what-if" analysis, to help strategic decision-making.

Management Information Systems
A management information system ("MIS") is mainly concerned with internal sources of information. MIS usually take data from the transaction processing systems (see below) and summarise it into a series of management reports. MIS reports tend to be used by middle management and operational supervisors.

Decision-Support Systems
Decision-support systems ("DSS") are specifically designed to help management make decisions in situations where there is uncertainty about the possible outcomes of those decisions. DSS comprise tools and techniques to help gather relevant information and analyse the options and alternatives. DSS often involve the use of complex spreadsheets and databases to create "what-if" models.

Knowledge Management Systems
Knowledge Management Systems ("KMS") exist to help businesses create and share information. These are typically used in a business where employees create new knowledge and expertise, which can then be shared by other people in the organisation to create further commercial opportunities. Good examples include firms of lawyers, accountants and management consultants. KMS are built around systems which allow efficient categorisation and distribution of knowledge. For example, the knowledge itself might be contained in word processing documents, spreadsheets, PowerPoint presentations, internet pages and so on. To share the knowledge, a KMS would use group collaboration systems such as an intranet.

Transaction Processing Systems
As the name implies, Transaction Processing Systems ("TPS") are designed to process routine transactions efficiently and accurately. A business will have several (sometimes many) TPS; for example:
Billing systems to send invoices to customers
Systems to calculate the weekly and monthly payroll and tax payments
Production and purchasing systems to calculate raw material requirements
Stock control systems to process all movements into, within and out of the business

Office Automation Systems
Office Automation Systems are systems that try to improve the productivity of employees who need to process data and information. Perhaps the best example is the wide range of software systems that exist to improve the productivity of employees working in an office (e.g. Microsoft Office XP), or systems that allow employees to work from home or whilst on the move.

An information system is a collection of hardware, software, data, people and procedures that are designed to generate information that supports the day-to-day, short-range, and long-range activities of users in an organization. Information systems generally are classified into five categories: office information systems, transaction processing systems, management information systems, decision support systems, and expert systems. The following sections present each of these information systems.

1. Office Information Systems

An office information system, or OIS (pronounced oh-eye-ess), is an information system that uses hardware, software and networks to enhance work flow and facilitate communications among employees. With an office information system, also described as office automation, employees perform tasks electronically using computers and other electronic devices, instead of manually. With an office information system, for example, a registration department might post the class schedule on the Internet and e-mail students when the schedule is updated. In a manual system, the registration department would photocopy the schedule and mail it to each student's house. An office information system supports a range of business office activities such as creating and distributing graphics and/or documents, sending messages, scheduling, and accounting. All levels of users from executive management to nonmanagement employees utilize and benefit from the features of an OIS. The software an office information system uses to support these activities includes word processing, spreadsheets, databases, presentation graphics, e-mail, Web browsers, Web page authoring, personal information management, and groupware. Office information systems use communications technology such as voice mail, facsimile (fax), videoconferencing, and electronic data interchange (EDI) for the electronic exchange of text, graphics, audio, and video. An office information system also uses a variety of hardware, including computers equipped with modems, video cameras, speakers, and microphones; scanners; and fax machines.

2. Transaction Processing Systems

A transaction processing system (TPS) is an information system that captures and processes data generated during an organization's day-to-day transactions. A transaction is a business activity such as a deposit, payment, order or reservation. Clerical staff typically perform the activities associated with transaction processing, which include the following:

1. Recording a business activity such as a student's registration, a customer's order, an employee's timecard or a client's payment.

2. Confirming an action or triggering a response, such as printing a student's schedule, sending a thank-you note to a customer, generating an employee's paycheck or issuing a receipt to a client.

3. Maintaining data, which involves adding new data, changing existing data, or removing unwanted data.
Transaction processing systems were among the first computerized systems developed to process business data, a function originally called data processing. Usually, the TPS

computerized an existing manual system to allow for faster processing, reduced clerical costs and improved customer service. The first transaction processing systems usually used batch processing. With batch processing, transaction data is collected over a period of time and all transactions are processed later, as a group. As computers became more powerful, system developers built online transaction processing systems. With online transaction processing (OLTP) the computer processes transactions as they are entered. When you register for classes, your school probably uses OLTP. The registration administrative assistant enters your desired schedule and the computer immediately prints your statement of classes. The invoices, however, often are printed using batch processing, meaning all student invoices are printed and mailed at a later date. Today, most transaction processing systems use online transaction processing. Some routine processing tasks such as calculating paychecks or printing invoices, however, are performed more effectively on a batch basis. For these activities, many organizations still use batch processing techniques.

3. Management Information Systems

While computers were ideal for routine transaction processing, managers soon realized that the computer's capability of performing rapid calculations and data comparisons could produce meaningful information for management. Management information systems thus evolved out of transaction processing systems. A management information system, or MIS (pronounced em-eye-ess), is an information system that generates accurate, timely and organized information so managers and other users can make decisions, solve problems, supervise activities, and track progress. Because it generates reports on a regular basis, a management information system sometimes is called a management reporting system (MRS). Management information systems often are integrated with transaction processing systems. To process a sales order, for example, the transaction processing system records the sale, updates the customer's account balance, and makes a deduction from inventory. Using this information, the related management information system can produce reports that recap daily sales activities; list customers with past due account balances; graph slow or fast selling products; and highlight inventory items that need reordering. A management information system focuses on generating information that management and other users need to perform their jobs. An MIS generates three basic types of information: detailed, summary and exception. Detailed information typically confirms transaction processing activities. A Detailed Order Report is an example of a detail report. Summary information consolidates data into a format that an individual can review quickly and easily. To help synopsize information, a summary report typically contains totals, tables, or graphs. An Inventory Summary Report is an example of a summary report. Exception information filters data to report information that is outside of a normal condition. These conditions, called the exception criteria, define the range of what is considered normal activity or status. An example of an exception report is an Inventory Exception Report that notifies the purchasing department of items it needs to reorder. Exception reports help managers save time because they do not have to search through a detailed report for exceptions.
Instead, an exception report brings exceptions to the manager's attention in an easily identifiable form. Exception reports thus help managers focus on situations that require immediate decisions or actions.
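To make the exception-report idea concrete, the following minimal Python sketch filters an inventory list against each item's reorder point; the item names and figures are invented for illustration only, not taken from the text:

    # Minimal sketch of an Inventory Exception Report: only items whose stock level
    # falls outside the normal condition (at or below the reorder point) are listed.
    # All data values are illustrative.

    inventory = [
        {"item": "A4 paper",  "on_hand": 120, "reorder_point": 200},
        {"item": "Toner",     "on_hand": 35,  "reorder_point": 20},
        {"item": "Envelopes", "on_hand": 15,  "reorder_point": 50},
    ]

    def exception_report(records):
        # The exception criteria define what counts as normal; everything else is reported.
        return [r for r in records if r["on_hand"] <= r["reorder_point"]]

    for r in exception_report(inventory):
        print(f"REORDER: {r['item']} (on hand {r['on_hand']}, reorder point {r['reorder_point']})")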

4. Decision Support Systems

Transaction processing and management information systems provide information on a regular basis. Frequently, however, users need information not provided in these reports to help them make decisions. A sales manager, for example, might need to determine how high to set yearly sales quotas based on increased sales and lowered product costs. Decision support systems help provide information to support such decisions. A decision support system (DSS) is an information system designed to help users reach a decision when a decision-making situation arises. A variety of DSSs exist to help with a range of decisions. A decision support system uses data from internal and/or external sources. Internal sources of data might include sales, manufacturing, inventory, or financial data from an organization's database. Data from external sources could include interest rates, population trends, and costs of new housing construction or raw material pricing. Users of a DSS, often managers, can manipulate the data used in the DSS to help with decisions. Some decision support systems include query languages, statistical analysis capabilities, spreadsheets, and graphics that help you extract data and evaluate the results. Some decision support systems also include capabilities that allow you to create a model of the factors affecting a decision. A simple model for determining the best product price, for example, would include factors for the expected sales volume at each price level. With the model, you can ask what-if questions by changing one or more of the factors and viewing the projected results. Many people use application software packages to perform DSS functions. Using spreadsheet software, for example, you can complete simple modeling tasks or what-if scenarios. A special type of DSS, called an executive information system (EIS), is designed to support the information needs of executive management. Information in an EIS is presented in charts and tables that show trends, ratios, and other managerial statistics. Because executives usually focus on strategic issues, EISs rely on external data sources such as the Dow Jones News/Retrieval service or the Internet. These external data sources can provide current information on interest rates, commodity prices, and other leading economic indicators. To store all the necessary decision-making data, DSSs or EISs often use extremely large databases, called data warehouses. A data warehouse stores and manages the data required to analyze historical and current business circumstances.

5. Expert Systems

An expert system is an information system that captures and stores the knowledge of human experts and then imitates human reasoning and decision-making processes for those who have less expertise. Expert systems are composed of two main components: a knowledge base and inference rules. A knowledge base is the combined subject knowledge and experiences of the human experts. The inference rules are a set of logical judgments applied to the knowledge base each time a user describes a situation to the expert system. Although expert systems can help decision-making at any level in an organization, nonmanagement employees are the primary users who utilize them to help with job-related decisions. Expert systems also have successfully resolved such diverse problems as diagnosing illnesses, searching for oil and making soup.
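The two components named above, a knowledge base and inference rules, can be illustrated with a toy Python sketch. The facts and rules here are invented and are vastly simpler than a real expert system; the point is only to show the knowledge base being consulted by an inference step each time a user describes a situation:

    # Toy expert system: a knowledge base of expert conclusions plus a simple
    # inference rule that matches the user's described situation against it.
    # Purely illustrative; real expert systems use far richer rule languages.

    knowledge_base = {
        frozenset({"fever", "cough"}): "possible flu - recommend rest and fluids",
        frozenset({"engine_noise", "low_oil_pressure"}): "possible worn bearings - inspect engine",
    }

    def infer(observed):
        # Inference: fire the first rule whose conditions are all present in the situation.
        for conditions, conclusion in knowledge_base.items():
            if conditions.issubset(observed):
                return conclusion
        return "no matching expertise - refer to a human expert"

    print(infer({"fever", "cough", "headache"}))   # -> possible flu - recommend rest and fluids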

Expert systems are one part of an exciting branch of computer science called artificial intelligence. Artificial intelligence (AI) is the application of human intelligence to computers. AI technology can sense your actions and, based on logical assumptions and prior experience, will take the appropriate action to complete the task. AI has a variety of capabilities, including speech recognition, logical reasoning, and creative responses. Experts predict that AI eventually will be incorporated into most computer systems and many individual software applications. Many word processing programs already include speech recognition.

Integrated Information Systems

With today's sophisticated hardware, software and communications technologies, it often is difficult to classify a system as belonging uniquely to one of the five information system types discussed. Much of today's application software supports transaction processing and generates management information. Other applications provide transaction processing, management information, and decision support. Although expert systems still operate primarily as separate systems, organizations increasingly are consolidating their information needs into a single, integrated information system.

STRATEGIC INFORMATION SYSTEM

The term "SIS" was first introduced into the field of information systems in 1982-83 by Dr. Charles Wiseman, President of a newly formed consultancy called "Competitive Applications" (cf. NY State records for consultancies formed in 1982), who gave a series of public lectures. It considers organisational concepts, structure, the value chain and the five forces model. Strategic information systems are those computer systems that implement business strategies; they are those systems where information services resources are applied to strategic business opportunities in such a way that the computer systems have an impact on the organization's products and business operations. Strategic information systems are always systems that are developed in response to corporate business initiative. The ideas in several well-known cases came from Information Services people, but they were directed at specific corporate business thrusts. In other cases, the ideas came from business operational people, and Information Services supplied the technological capabilities to realize profitable results. Most information systems are looked on as support activities to the business. They mechanize operations for better efficiency, control, and effectiveness, but they do not, in themselves, increase corporate profitability. They are simply used to provide management with sufficient dependable information to keep the business running smoothly, and they are used for analysis to plan new directions. Strategic information systems, on the other hand, become an integral and necessary part of the business, and directly influence market share, earnings, and all other aspects of marketplace profitability. They may even bring in new products, new markets, and new ways of doing business. They directly affect the competitive stance of the organization, giving it an advantage against the competitors. Most literature on strategic information systems emphasizes the dramatic breakthroughs in computer systems, such as American Airlines' Sabre system and American Hospital Supply's terminals in customer offices. These, and many other highly successful approaches, are most attractive to think about, and it is always possible that an equivalent success may be attained in your organization.
There are many possibilities for strategic information systems, however, which may not be dramatic breakthroughs, but

which will certainly become a part of corporate decision making and will increase corporate profitability. The development of any strategic information system always enhances the image of Information Services in the organization, and leads to information management having a more participatory role in the operation of the organization.

Strategic systems are information systems that are developed in response to corporate business initiative. They are intended to give competitive advantage to the organization. They may deliver a product or service that is at a lower cost, that is differentiated, that focuses on a particular market segment, or is innovative. Some of the key ideas of leading writers on the subject are summarized here. These include Michael Porter's Competitive Advantage and the Value Chain, Charles Wiseman's Strategic Perspective View and the Strategic Planning Process, F. Warren McFarlan's Competitive Strategy with examples of Information Services roles, and Gregory Parsons' Information Technology Management at the industry level, at the firm level, and at the strategy level.

The three general types of information systems that are developed and in general use are financial systems, operational systems, and strategic systems. These categories are not mutually exclusive and, in fact, they always overlap to some extent. Well-directed financial systems and operational systems may well become the strategic systems for a particular organization.

Financial systems are the basic computerization of the accounting, budgeting, and finance operations of an organization. These are similar and ubiquitous in all organizations because the computer has proven to be ideal for the mechanization and control of financial systems; these include the personnel systems, because the headcount control and payroll of a company is of prime financial concern. Financial systems should be one of the bases of all other systems because they give a common, controlled measurement of all operations and projects, and can supply trusted numbers for indicating departmental or project success. Organizational planning must be tied to financial analysis. There is always a greater opportunity to develop strategic systems when the financial systems are in place, and required figures can be readily retrieved from them.

Operational systems, or services systems, help control the details of the business. Such systems will vary with each type of enterprise. They are the computer systems that operational managers need to help run the business on a routine basis. They may be useful but mundane systems that simply keep track of inventory, for example, and print out reorder points and cost allocations. On the other hand, they may have a strategic perspective built into them, and may handle inventory in a way that dramatically impacts profitability. A prime example of this is the American Hospital Supply inventory control system installed on customer premises. Where the great majority of inventory control systems simply smooth the operations and give adequate cost control, this well-known hospital system broke through with a new vision of the use of an operational system for competitive advantage. The great majority of operational systems for which many large and small computer systems have been purchased, however, simply help to manage and automate the business.
They are important and necessary, but can only be put into the "strategic" category if they have a pronounced impact on the profitability of the business. There must be computer capacity planning, technology forecasting, and personnel performance planning. It is more likely that those in the organization with entrepreneurial vision will conceive of strategic plans when such basic operational capabilities are in place and are well managed. Operational systems, then, are those that keep the organization operating under control and most cost effectively. Any of them may be changed to strategic systems if they are viewed with strategic vision. They are fertile

grounds for new business opportunities. Strategic systems are those that link business and computer strategies. They may be systems where a new business thrust has been envisioned and its advantages can be best realized through the use of information technology. They may be systems where new computer technology has been made available on the market, and planners with an entrepreneurial spirit perceive how the new capabilities can quickly gain competitive advantage. They may be systems where operational management people and Information Services people have brainstormed together over business problems, and have realized that a new competitive thrust is possible when computer methods are applied in a new way. Some of the more common ways of thinking about gaining competitive advantage are:

- Deliver a product or a service at a lower cost. This does not necessarily mean the lowest cost, but simply a cost related to the quality of the product or service that will be both attractive in the marketplace and will yield sufficient return on investment. The cost considered is not simply the data processing cost, but is the overall cost of all corporate activities for the delivery of that product or service. There are many operational computer systems that have given internal cost savings and other internal advantages, but they cannot be thought of as strategic until those savings can be translated into a better competitive position in the market.

- Deliver a product or service that is differentiated. Differentiation means the addition of unique features to a product or service that are competitively attractive in the market. Generally such features will cost something to produce, and so they will be the selling point, rather than the cost itself. Seldom does a lowest-cost product also have the best differentiation. A strategic system helps customers to perceive that they are getting some extras for which they will willingly pay.

- Focus on a specific market segment. The idea is to identify and create market niches that have not been adequately filled. Information technology is frequently able to provide the capabilities of defining, expanding, and filling a particular niche or segment. The application would be quite specific to the industry.

- Innovation. Develop products or services through the use of computers that are new and appreciably different from other available offerings. Examples of this are automatic credit card handling at service stations, and automatic teller machines at banks. Such innovative approaches not only give new opportunities to attract customers, but also open up entirely new fields of business so that their use has very elastic demand.

Almost any data processing system may be called "strategic" if it aligns the computer strategies with the business strategies of the organization, and there is close cooperation in its development between the Information Services people and operational business managers. There should be an explicit connection between the organization's business plan and its systems plan to provide better support of the organization's goals and objectives, and closer management control of the critical information systems. Many organizations that have done substantial work with computers since the 1950s have long used the term "strategic planning" for any computer developments that are going to directly affect the conduct of their business. Not included are budgeting or annual planning, the planning of developing Information Services facilities, and the many "housekeeping" tasks that are required in any corporation. Definitely included in strategic planning are any information systems that will be used by operational management to conduct the business more profitably. Strategic systems, thus, attempt to match Information Services resources to strategic business opportunities where the computer systems will have an impact on the products and the business operations. Planning for strategic systems is not defined by calendar cycles or routine reporting. It is defined by

the effort required to impact the competitive environment and the strategy of a firm at the point in time that management wants to move on the idea. Effective strategic systems can only be accomplished, of course, if the capabilities are in place for the routine basic work of gathering data, evaluating possible equipment and software, and managing the routine reporting of project status. The calendarized planning and operational work is absolutely necessary as a base from which a strategic system can be planned and developed when a priority situation arises. When a new strategic need becomes apparent, Information Services should have laid the groundwork to be able to accept the task of meeting that need.

Porter's Competitive Advantage

Dr. Michael E. Porter, Professor of Business Administration, Harvard Business School, has addressed his ideas in two keystone books. Competitive Strategy: Techniques for Analyzing Industries and Competitors and his newer book, Competitive Advantage, present a framework for helping firms actually create and sustain a competitive advantage in their industry in either cost or differentiation. Dr. Porter's theories on competitive advantage are not tied to information systems, but are used by others to involve information services technologies. Porter's books give techniques for getting a handle on the possible average profitability of an industry over time. The analysis of these forces is the base for estimating a firm's relative position and competitive advantage. In any industry, the sustained average profitability of competitors varies widely. The problem is to determine how a business can outperform the industry average and attain a sustainable competitive advantage. It is possible that the answer lies in information technology together with good management. Porter claims that the principal types of competitive advantage are low cost producer, differentiation, and focus. A firm has a competitive advantage if it is able to deliver its product or service at a lower cost than its competitors. If the quality of its product is satisfactory, this will translate into higher margins and higher returns. Another advantage is gained if the firm is able to differentiate itself in some way. Differentiation leads to offering something that is both unique and desired, and translates into a premium price. Again, this will lead to higher margins and superior performance.

It seems that the two types of competitive advantage, lower cost and differentiation, are mutually exclusive. To get lower cost, you sacrifice uniqueness. To get a premium price, there must be extra cost involved in the process. To be a superior performer, however, you must go for competitive advantage in either cost or differentiation. Another point of Porter's is that competitive advantage is gained through a strategy based on scope. It is necessary to look at the breadth of a firm's activities, and narrow the competitive scope to gain focus in either an industry segment, a geographic area, a customer type, and so on. Competitive advantage is most readily gained by defining the competitive scope in which the firm is operating, and concentrating on it.

VALUE CHAIN

Based on these ideas of type and scope, Porter gives a useful tool for analysis which he calls the Value Chain. This value chain gives a framework on which a useful analysis can be hung. The basic notion is that to understand competitive advantage in any firm, one cannot look at the firm as a whole. It is necessary to identify the specific activities which the firm performs to do business. Each firm is a collection of the things that it does that all add up to the product being delivered to the customer. These activities are numerous and are unique to every industry, but it is only in these activities where cost advantage or differentiation can be gained. The basic idea is that the firm's activities can be divided into nine generic types. Five are the primary activities, which are the activities that

create the product, market it and deliver it; four are the support activities that cross between the primary activities. The primary activities are:

- Inbound logistics, which includes the receipt and storage of material, and the general management of supplies.

- Operations, which are the manufacturing steps or the service steps.

- Outbound logistics, which are the activities associated with collecting, storing, and physically distributing the product to buyers. In some companies this is a significant cost, and buyers value speed and consistency.

- Marketing and sales, which includes customer relations, order entry, and price management.

- After-sales service, which covers the support of the product in the field, installation, customer training, and so on.

The support activities are shown across the whole chain because they are a part of all of the firm's operations. They are not directed to the customer, but they allow the firm to perform its primary activities. The four generic types of support activities are:

- Procurement, which includes the contracting for and purchase of raw materials, or any items used by the enterprise. Part of procurement is in the purchasing department, but it is also spread throughout the organization.

- Technology development, which may simply cover operational procedures, or may be involved with the use of complex technology. Today, sophisticated technology is pervasive, and cuts across all activities; it is not just an R&D function.

- Human resource management, which is the recruiting, training, and development of people. Obviously, this cuts across every other activity.

- Firm infrastructure, which is a considerable part of the firm, including the accounting department, the legal department, the planning department, government relations, and so on.

The basic idea is that competitive advantage grows out of the firm's ability to perform these activities either less expensively than its competitors, or in a unique way. Competitive advantage should be linked precisely to these specific activities, and not thought of broadly at a firm-wide level. This is an attractive way of thinking for most Information Services people, as it is, fundamentally, the systems analysis approach. Computer people are trained to reduce systems to their components, look for the best application for each component, then put together an interrelated system. Information technology is also pervasive throughout all parts of the value chain. Every activity that the firm performs has the potential to embed information technology because it involves information processing. As information technology moves away from repetitive transaction processing and permeates all activities in the value chain, it will be in a better position to be useful in gaining competitive advantage. Porter emphasizes what he calls the linkages between the activities that the firm performs. No activities in a firm are independent, yet each department is managed separately. It is most important to understand the cost linkages that are involved so that the firm may get an overall optimization of production rather than departmental optimizations. A typical linkage might be that if more is spent in procurement, less is spent in operations. If more testing is done in operations, after-sales service costs will be lower. Multifunctional coordination is crucial to competitive advantage, but it is often difficult to see. Insights into linkages give the ability to have overall optimization. Any strategic information system must be analyzed across all departments in the organization.

Cost and Competitive Advantage

Cost leadership is one of Porter's two types of competitive advantage. The cost leader delivers a product of acceptable quality at the lowest possible cost. It attempts to open up a significant and sustainable cost gap over all other competitors. The cost advantage is achieved through superior position in relation to the key cost drivers. Cost leadership translates into above-average profits if the cost leader can command the average prices in the industry. On the other hand, cost leaders must maintain quality that is close to, or equal to, that of the competition. Achieving cost leadership usually requires trade-offs with differentiation. The two are usually incompatible. Note that a firm's relative cost position cannot be understood by viewing the firm as a whole. Overall cost grows out of the cost of performing discrete activities. Cost position is determined by the cumulative cost of performing all value activities. To sustain cost advantage, Porter gives a number of cost drivers which must be understood in detail because the sustainability of cost advantage in an activity depends on the cost drivers of that activity. Again, this type of detail is best obtained by classical systems analysis methods. Some of the cost drivers which must be analyzed, understood, and controlled are:

- Scale. The appropriate type of scale must be found. Policies must be set to reinforce economies of scale in scale-sensitive activities.

- Learning. The learning curve must be understood and managed. As the organization tries to learn from competitors, it must strive to keep its own learning proprietary.

- Capacity utilization. Cost can be controlled by the leveling of throughput.

- Linkages. Linkages should be exploited within the value chain. Work with suppliers and channels can reduce costs.

- Interrelationships. Shared activities can reduce costs.

- Integration. The possibilities for integration or de-integration should be examined systematically.

- Timing. If the advantages of being the first mover or a late mover are understood, they can be exploited.

- Policies. Policies that enhance the low-cost position or differentiation should be emphasized.

- Location. When viewed as a whole, the location of individual activities can be optimized.

- Institutional factors. Institutional factors should be examined to see whether their change may be helpful.

Care must be taken in the evaluation and perception of cost drivers because there are pitfalls if the thinking is incremental and indirect activities are ignored. Even though the manufacturing activities, for example, are obvious candidates for analyses, they should not have exclusive focus. Linkages must be exploited and cross-subsidies avoided. Porter gives five steps to achieving cost leadership:

1. Identify the appropriate value chain and assign costs and assets to it.

2. Identify the cost drivers of each value activity and see how they interact.

3. Determine the relative costs of competitors and the sources of cost differences.

4. Develop a strategy to lower relative cost position through controlling cost drivers or reconfiguring the value chain.

5. Test the cost reduction strategy for sustainability.

Differentiation Advantage

Differentiation is the second of Porter's two types of competitive advantage. In the differentiation strategy, one or more characteristics that are widely valued by buyers are selected. The purpose is to achieve and sustain performance that is superior to any competitor in satisfying those buyer needs. A differentiator selectively adds costs in areas that are important to the buyer. Thus, successful differentiation leads to premium prices, and these lead to above-average profitability if there is approximate cost parity. To achieve this, efficient forms of differentiation must be picked, and costs must be reduced in areas that are irrelevant to the buyer's needs. Buyers are like sellers in that they have their own value chains. The product being sold will represent one purchased input, but the seller may affect the buyer's activities in other ways. Differentiation can lower the buyer's cost and improve the buyer's performance, and thus create value, or competitive advantage, for the buyer. The buyer may not be able to assess all the value that a firm provides, but it looks for signals of value, or perceived value. A few typical factors which may lower the buyer's costs are:

- Less idle time
- Lower risk of failure
- Lower installation costs
- Faster processing time
- Lower labor costs
- Longer useful life, and so on.

Porter points out that differentiation is usually costly, depending on the cost drivers of the activities involved. A firm must find forms of differentiation where it has a cost advantage in differentiating. Differentiation is achieved by enhancing the sources of uniqueness. These may be found throughout the value chain, and should be signaled to the buyer. The cost of differentiation can be turned to advantage if the less costly sources are exploited and the cost drivers are controlled. The emphasis must be on getting a sustainable cost advantage in differentiating. Efforts must be made to change the buyer's criteria by reconfiguring the value chain to be unique in new ways, and by preemptively responding to changing buyer or channel circumstances. Differentiation will not work if there is too much uniqueness, or uniqueness that the buyers do not value. The buyer's ability to pay a premium price, the signaling criteria, and the segments important to the buyer must all be understood. Also, there cannot be over-reliance on sources of differentiation that competitors can emulate cheaply or quickly. Porter lists seven steps to achieving differentiation:

1. Determine the identity of the real buyer.

2. Understand the buyer's value chain, and the impact of the seller's product on it.

3. Determine the purchasing criteria of the buyer.

4. Assess possible sources of uniqueness in the firm's value chain.

5. Identify the cost of these sources of uniqueness.

6. Choose the value activities that create the most valuable differentiation for the buyer relative to the costs incurred.

7. Test the chosen differentiation strategy for sustainability.

Focus Strategies for Advantage. Porter's writings also discuss focus strategies. He emphasizes that a company that attempts to completely satisfy every buyer does not have a

strategy. Focusing means selecting targets and optimizing the strategies for them. Focus strategies further segment the industry. They may be imitated, but can provide strategic openings. Clearly, multiple generic strategies may be implemented, but internal inconsistencies can then arise, and the distinctions between the focused entities may become blurred. Porter's work is directed towards competitive advantage in general, and is not specific to strategic information systems. It has been reviewed here at some length, however, because his concepts are frequently referred to in the writings of others who are concerned with strategic information systems. The value chain concept has been widely adopted, and the ideas of low cost and differentiation are accepted. This section, therefore, is an introduction into a further discussion of strategic information systems. The implementation of such systems tends to be an implementation of the factors elucidated by Porter.

Wiseman's Strategic Perspective View

Charles Wiseman has applied the current concepts of strategic information systems in work at GTE and other companies, and in his consulting work as President of Competitive Applications, Inc. His book, Strategy and Computers: Information Systems as Competitive Weapons, extends Porter's thinking in many practical ways in the information systems area, and discusses many examples of strategic systems. Wiseman emphasizes that companies have begun to use information systems strategically to reap significant competitive advantage. He feels that the significance of these computer-based products and services does not lie in their technological sophistication or in the format of the reports they produce; rather, it is found in the role played by these information systems in the firm's planning and implementation in gaining and maintaining competitive advantage. Wiseman points out that although the use of information systems may not always lead to competitive advantage, it can serve as an important tool in the firm's strategic plan. Strategic systems must not be left to haphazard discovery. Those who would be competitive leaders must develop a systematic approach for identifying strategic information systems (SIS) opportunities. Both business management and information management must be involved. He emphasizes that information technology is now in a position to be exploited competitively. A framework must be developed for identifying SIS opportunities. There will certainly be competitive response, so one should proceed with strategic thrusts based on information technology. These moves are just as important as other strategic thrusts, such as acquisition, geographical expansion, and so on. It is necessary to plan rationally about acquisition, major alliances with other firms, and other strategic thrusts. IBM's Business Systems Planning (BSP) and MIT's Critical Success Factor (CSF) methodologies are ways to develop information architectures and to identify conventional information systems, which are primarily used for planning and control purposes. To identify SIS, a new model or framework is needed. The conventional approach works within the perceived structures of the organization. An effective SIS approach arises from the forging of new alliances that expand the horizon of expectation. Such an approach is most difficult to attain, and can only work with top management support. Innovations, however, frequently come simply from a new look at existing circumstances, from a new viewpoint.
Information Services people must start to look systematically at application opportunities related to managers. Wiseman believes that the range of opportunities is limited by the framework adopted. He contrasts the framework for Conventional IS Opportunities (Figure No. 4) with the framework for Strategic IS Opportunities (Figure No. 5). In the conventional view, there are two information system

thrusts: to automate the basic processes of the firm, or to satisfy the information needs of managers, professionals, or others. There are three generic targets: strategic planning, management control, and operational control. In this perspective, there are, thus, six generic opportunity areas. In the strategic view of IS opportunities, there are five strategic information thrusts and three strategic targets. This gives fifteen generic opportunity areas, and opens up the range and perspective of management vision. Sustainable competitive advantage can mean many things to different firms. Competitive advantage may be with respect to a supplier, a customer, or a rival. It may exist because of a lower price, because of desirable features, or because of the various resources that a firm possesses. Sustainability is also highly relative, depending upon the business. In established businesses, it may refer to years, and the experience that the firm develops may be quite difficult to emulate. In other industries, a lead of a few weeks or months may be all that is necessary. There is an advantage in looking at Figure No. 5 as a study group, and brainstorming through it to find out what information may be needed to do a job better. One can find competitive advantage in information systems when the subjects are broken down to specifics.

Strategic Thrusts. Wiseman uses the term strategic thrusts for the moves that companies make to gain or maintain some kind of competitive edge, or to reduce the competitive edge of one of the strategic targets. Information technology can be used to support or to shape one or more of these thrusts. Examining the possibilities of these thrusts takes imagination, and it is helped by understanding what other firms have done in similar situations. This is why so many examples are presented in the literature. Analogy is important. There is no question that there is considerable overlap between conventional information systems and strategic information systems. Systems are complex and a great deal of data is involved. The idea is to look at this complexity in a new light, and see where competitive advantage might possibly be gained. Note that Wiseman takes Porter's three generic categories, low cost producer, differentiation, and focus, and extends them to five categories: differentiation, cost, innovation, growth, and alliance. Cost may be a move that not only reduces costs, but also reduces the costs of selected strategic targets so that you will benefit from preferential treatment. A strategic cost thrust may also aim at achieving economies of scale. The examples always seem obvious when they are described, but the opportunities can usually only be uncovered by considerable search. Innovation is another strategic thrust that can be supported or shaped by information technology in either product or process. In many financial firms, the innovative product is really an information system. Innovation requires rapid response to opportunities to be successful, but this carries with it the question of considerable risk. There can be no innovation without risk, whether information systems are included or not. Innovation, however, can achieve advantage in product or process that results in a fundamental transformation in the way that type of business is conducted. Growth achieves an advantage by expansion in volume or geographical distribution. It may also come from product-line diversification. Information systems can be of considerable help in the management of rapid growth.

UNIT 3. FUNCTIONAL AND ENTERPRISE SYSTEMS

A Transaction Processing System (TPS) is a type of information system that collects, stores, modifies and retrieves the data transactions of an enterprise. A transaction is any event that passes the ACID test in which data is generated or modified before storage in an information system. The success of commercial enterprises depends on the reliable processing of transactions to ensure that customer orders are met on time, and that partners and suppliers are paid and can make payment. The field of transaction processing, therefore, has become a vital part of effective business management, led by such organisations as the Association for Work Process Improvement and the Transaction Processing Performance Council. Transaction processing systems offer enterprises the means to rapidly process transactions to ensure the smooth flow of data and the progression of processes throughout the enterprise. Typically, a TPS will exhibit the following characteristics:

Rapid Processing. The rapid processing of transactions is vital to the success of any enterprise, now more than ever, in the face of advancing technology and customer demand for immediate action. TPS systems are designed to process transactions virtually instantly to ensure that customer data is available to the processes that require it.

Reliability. Similarly, customers will not tolerate mistakes. TPS systems must be designed to ensure that not only do transactions never slip past the net, but that the systems themselves remain operational permanently. TPS systems are therefore designed to incorporate comprehensive safeguards and disaster recovery systems. These measures keep the failure rate well within tolerance levels.

Standardisation. Transactions must be processed in the same way each time to maximise efficiency. To ensure this, TPS interfaces are designed to acquire identical data for each transaction, regardless of the customer.

Controlled Access. Since TPS systems can be such a powerful business tool, access must be restricted to only those employees who require their use. Restricted access to the system ensures that employees who lack the skills and ability to control it cannot influence the transaction process.

Transaction Processing Qualifiers

In order to qualify as a TPS, transactions made by the system must pass the ACID test. The ACID test refers to the following four prerequisites:

Atomicity. Atomicity means that a transaction is either completed in full or not at all. For example, if funds are transferred from one account to another, this only counts as a bona fide transaction if both the withdrawal and deposit take place. If one account is debited and the

other is not credited, it does not qualify as a transaction. TPS systems ensure that transactions take place in their entirety.

Consistency. TPS systems exist within a set of operating rules (or integrity constraints). If an integrity constraint states that all transactions in a database must have a positive value, any transaction with a negative value would be refused.

Isolation. Transactions must appear to take place in isolation. For example, when a fund transfer is made between two accounts the debiting of one and the crediting of another must appear to take place simultaneously. The funds cannot be credited to an account before they are debited from another.

Durability. Once transactions are completed they cannot be undone. To ensure that this is the case even if the TPS suffers failure, a log will be created to document all completed transactions. These four conditions ensure that TPS systems carry out their transactions in a methodical, standardised and reliable manner.

Types of Transactions

While the transaction process must be standardised to maximise efficiency, every enterprise requires a tailored transaction process that aligns with its business strategies and processes. For this reason, there are two broad types of transaction:

Batch Processing. Batch processing is a resource-saving transaction type that stores data for processing at pre-defined times. Batch processing is useful for enterprises that need to process large amounts of data using limited resources. Examples of batch processing include credit card transactions, for which the transactions are processed monthly rather than in real time. Credit card transactions need only be processed once a month in order to produce a statement for the customer, so batch processing saves IT resources from having to process each transaction individually.

Real-Time Processing. In many circumstances the primary factor is speed. For example, when a bank customer withdraws a sum of money from his or her account it is vital that the transaction be processed and the account balance updated as soon as possible, allowing both the bank and customer to keep track of funds.
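Before turning to backup procedures, the ACID properties described above can be illustrated with a short Python sketch using the standard sqlite3 module. The accounts table and account numbers are invented for illustration; the point is that either both legs of a transfer are committed together (atomicity) or the whole transaction is rolled back when an integrity rule is violated (consistency):

    import sqlite3

    # Sketch of an atomic funds transfer: the debit and the credit are committed
    # together, or neither is. Table layout and account numbers are illustrative only.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (number TEXT PRIMARY KEY, balance REAL)")
    conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                     [("A-100", 500.0), ("B-200", 100.0)])
    conn.commit()

    def transfer(conn, source, target, amount):
        try:
            with conn:  # opens a transaction; commits on success, rolls back on any error
                conn.execute("UPDATE accounts SET balance = balance - ? WHERE number = ?",
                             (amount, source))
                (balance,) = conn.execute("SELECT balance FROM accounts WHERE number = ?",
                                          (source,)).fetchone()
                if balance < 0:
                    raise ValueError("insufficient funds")  # violates an integrity constraint
                conn.execute("UPDATE accounts SET balance = balance + ? WHERE number = ?",
                             (amount, target))
        except ValueError:
            pass  # the rollback has already undone the debit; no half-finished transfer remains

    transfer(conn, "A-100", "B-200", 700.0)   # rejected: rolled back in full
    transfer(conn, "A-100", "B-200", 200.0)   # accepted: both legs committed
    print(conn.execute("SELECT * FROM accounts ORDER BY number").fetchall())
    # [('A-100', 300.0), ('B-200', 300.0)]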

Backup procedures

Since business organizations have become very dependent on TPSs, a breakdown in their TPS may stop the business' regular routines and thus stop its operation for a certain amount of time. In order to prevent data loss and minimize disruptions when a TPS breaks

down, a well-designed backup and recovery procedure is put into use. The recovery process can rebuild the system when it goes down.

Recovery process

A TPS may fail for many reasons. These reasons could include a system failure, human error, hardware failure, incorrect or invalid data, computer viruses, software application errors or natural or man-made disasters. As it's not possible to prevent all TPS failures, a TPS must be able to cope with failures. The TPS must be able to detect and correct errors when they occur. When the system fails, a TPS will go through a recovery of the database; this involves the backup, the journal, checkpoints, and the recovery manager:

Journal: A journal maintains an audit trail of transactions and database changes. Transaction logs and database change logs are used: a transaction log records all the essential data for each transaction, including data values, time of transaction and terminal number; a database change log contains before and after copies of records that have been modified by transactions. (A minimal sketch of such a change log appears after this list.)

Checkpoint: The purpose of checkpointing is to provide a snapshot of the data within the database. A checkpoint, in general, is any identifier or other reference that identifies the state of the database at a point in time. Modifications to database pages are performed in memory and are not necessarily written to disk after every update. Therefore, periodically, the database system must perform a checkpoint to write these in-memory updates to the storage disk. Writing these updates to the storage disk creates a point in time from which the database system can apply the changes contained in a transaction log during recovery after an unexpected shutdown or crash of the database system. If a checkpoint is interrupted and a recovery is required, then the database system must start recovery from a previous successful checkpoint. Checkpointing can be either transaction-consistent or non-transaction-consistent (also called fuzzy checkpointing). Transaction-consistent checkpointing produces a persistent database image that is sufficient to recover the database to the state that was externally perceived at the moment the checkpointing started: all modifications made by transactions that were committed at that moment are fully present, although the image doesn't necessarily include the very latest committed transactions. Non-transaction-consistent checkpointing results in a persistent database image that is insufficient on its own to perform a recovery of the database state; it is not necessarily a consistent database, and can't be recovered to one without all the log records generated for transactions that were open when the checkpoint was taken, so additional information, typically contained in transaction logs, is needed. Depending on the type of database management system implemented, a checkpoint may incorporate indexes, storage pages (user data), or both. If no indexes are incorporated into the checkpoint, indexes must be created when the database is restored from the checkpoint image.

Recovery Manager: A recovery manager is a program which restores the database to a correct condition from which transaction processing can be restarted.
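The change log described above can be sketched in a few lines of Python; the record layout, key names and terminal number are invented purely for illustration:

    # Sketch of a database change log: each entry stores before and after images of
    # the modified record, so that recovery can later undo or redo the change.
    import datetime

    database = {"CUST-01": {"balance": 100.0}}   # stands in for the stored master data
    change_log = []                              # the journal of database changes

    def update_record(key, new_values, terminal="T-07"):
        before = dict(database[key])             # before image
        database[key].update(new_values)
        change_log.append({
            "time": datetime.datetime.now().isoformat(),
            "terminal": terminal,
            "key": key,
            "before": before,
            "after": dict(database[key]),        # after image
        })

    update_record("CUST-01", {"balance": 250.0})
    print(change_log[-1]["before"], "->", change_log[-1]["after"])
    # {'balance': 100.0} -> {'balance': 250.0}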

Depending on how the system failed, there can be two different recovery procedures used. Generally, the procedure involves restoring data that has been collected from a backup device and then running the transaction processing again. Two types of recovery are backward recovery and forward recovery:

Backward recovery: used to undo unwanted changes to the database. It reverses the changes made by transactions which have been aborted. It involves the logic of reprocessing each transaction, which is very time-consuming.

Forward recovery: it starts with a backup copy of the database. The transactions recorded in the journal between the time the backup was made and the failure are then reprocessed. It's much faster and more accurate.
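Both directions can be sketched in a few lines of Python, assuming a backup copy of the data and a journal of before/after images like the one shown earlier (all names and values are illustrative):

    # Forward recovery: start from the backup copy and redo the logged changes.
    # Backward recovery: take the current state and undo aborted changes by
    # restoring their "before" images, most recent first.

    def forward_recovery(backup_copy, journal):
        database = {k: dict(v) for k, v in backup_copy.items()}
        for entry in journal:                       # replay in the original order
            database[entry["key"]] = dict(entry["after"])
        return database

    def backward_recovery(database, journal, aborted_entries):
        for entry in reversed(journal):             # undo most recent changes first
            if entry in aborted_entries:
                database[entry["key"]] = dict(entry["before"])
        return database

    backup = {"CUST-01": {"balance": 100.0}}
    journal = [{"key": "CUST-01", "before": {"balance": 100.0}, "after": {"balance": 250.0}}]

    print(forward_recovery(backup, journal))                                      # redo: balance 250.0
    print(backward_recovery({"CUST-01": {"balance": 250.0}}, journal, journal))   # undo: balance 100.0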

Types of back-up procedures

There are two main types of back-up procedures: grandfather-father-son and partial backups.

Grandfather-father-son: This procedure refers to at least three generations of backup master files; the most recent backup is the son, the previous one the father, and the oldest backup is the grandfather. It's commonly used for a batch transaction processing system with a magnetic tape. If the system fails during a batch run, the master file is recreated by using the son backup and then restarting the batch. However, if the son backup fails, is corrupted or destroyed, then the next generation up, the father backup, is required. Likewise, if that fails, then the grandfather backup is required. Of course, the older the generation, the more the data may be out of date. Organizations can have up to twenty generations of backup.

Partial backups: This only occurs when parts of the master file are backed up. The master file is usually backed up to magnetic tape at regular times; this could be daily, weekly or monthly. Completed transactions since the last backup are stored separately and are called journals, or journal files. The master file can be recreated from the journal files on the backup tape if the system fails.

Updating in a batch

This is used when transactions are recorded on paper (such as bills and invoices) or when they are stored on a magnetic tape. Transactions will be collected and updated as a batch when it's convenient or economical to process them. Historically, this was the most common method, as the information technology did not exist to allow real-time processing. The two stages in batch processing are:

- Collecting and storing the transaction data in a transaction file - this involves sorting the data into sequential order.

- Processing the data by updating the master file - which can be difficult: this may involve data additions, updates and deletions that may need to happen in a certain order. If an error occurs, then the entire batch fails.
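The classical sequential batch update behind these two stages can be sketched in Python as below; the record layout, keys and file contents are invented for illustration, and a real system would read the master and transaction files from tape or disk rather than from in-memory lists:

    # Sketch of a sequential batch update: transactions are sorted into the same key
    # order as the master file, then both are read in one pass, applying additions,
    # updates and deletions. An invalid transaction causes the entire batch to fail.
    # Assumes every update/delete matches an existing master record.

    master = [{"key": 1, "qty": 10}, {"key": 3, "qty": 5}]        # already in key order
    transactions = [{"key": 3, "op": "update", "qty": 8},
                    {"key": 1, "op": "delete"},
                    {"key": 2, "op": "add", "qty": 4}]

    def batch_update(master, transactions):
        txs = sorted(transactions, key=lambda t: t["key"])        # stage 1: sort into sequence
        new_master, m = [], 0
        for t in txs:                                             # stage 2: one sequential pass
            while m < len(master) and master[m]["key"] < t["key"]:
                new_master.append(master[m])
                m += 1
            if t["op"] == "add":
                new_master.append({"key": t["key"], "qty": t["qty"]})
            elif t["op"] == "update":
                new_master.append({"key": t["key"], "qty": t["qty"]})
                m += 1
            elif t["op"] == "delete":
                m += 1
            else:
                raise ValueError("invalid transaction - entire batch fails")
        return new_master + master[m:]

    print(batch_update(master, transactions))
    # [{'key': 2, 'qty': 4}, {'key': 3, 'qty': 8}]  (key 1 deleted, key 2 added, key 3 updated)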

Updating in batch requires sequential access - since it uses a magnetic tape, this is the only way to access the data. A batch run will start at the beginning of the tape and read it in the order it was stored; it's very time-consuming to locate specific transactions. The information technology used includes a secondary storage medium which can store large quantities of data inexpensively (thus the common choice of a magnetic tape). The software used to collect data does not have to be online - it doesn't even need a user interface.

Updating in real-time

This is the immediate processing of data. It provides instant confirmation of a transaction. It may involve a large number of users who are simultaneously performing transactions to change data. Because of advances in technology (such as the increase in the speed of data transmission and larger bandwidth), real-time updating is now possible. The steps in a real-time update involve the sending of transaction data to an online database in a master file. The person providing the information is usually able to help with error correction and receives confirmation of the transaction completion. Updating in real-time uses direct access of data. This occurs when data are accessed without accessing previous data items. The storage device stores data in a particular location based on a mathematical procedure, which is then used to calculate an approximate location of the data. If data are not found at this location, the system searches through successive locations until they are found. The information technology used could be a secondary storage medium that can store large amounts of data and provide quick access (thus the common choice of a magnetic disk). It requires a user-friendly interface, as rapid response time is important.

Reservation Systems

Reservation systems are used for any type of business where a service or a product is set aside for a customer to use at a future time.

Marketing information system

A marketing information system is a set of procedures and practices employed in analyzing and assessing marketing information, gathered continuously from sources inside and outside of a firm. Timely marketing information provides a basis for decisions such as product development or improvement, pricing, packaging, distribution, media selection, and promotion. It covers the processes associated with collecting, analyzing, and reporting marketing research information. The system used may be as simple as a manually tabulated consumer survey or as complex as a computer system for tracking the distribution and redemption of cents-off coupons in terms of where the consumer got the coupon, where he redeemed it, what he purchased with it and in what size or quantity he purchased it, and how the sales volume for each product was affected in each area or store where coupons were available. Marketing information provides input to marketing decisions including product improvements, price and packaging changes, copywriting, media buying, distribution, and so forth. Accounting systems are a part of marketing information systems, providing product sales and profit information. A Marketing Information System can be defined as 'People, equipment and procedures to gather, sort, analyze, evaluate and distribute needed, timely and accurate information to marketing decision makers' (Gary Armstrong, 2008).

A marketing information system (MIS) consists of people, equipment and procedures to gather, sort, analyze, evaluate and distribute needed, timely and accurate information to marketing decision makers. The MIS begins and ends with marketing managers. First, it interacts with these managers to assess their information needs. Next, it develops the needed information from internal company records, marketing intelligence activities and the marketing research process. Information analysis processes the information to make it more useful. Finally, the MIS distributes information to managers in the right form at the right time to help them in marketing planning, implementation and control.

DEVELOPING INFORMATION
The information needed by marketing managers comes from internal company records, marketing intelligence and marketing research. The information analysis system then processes this information to make it more useful for managers.

Internal Records
Internal records information consists of information gathered from sources within the company to evaluate marketing performance and to detect marketing problems and opportunities. Most marketing managers use internal records and reports regularly, especially for making day-to-day planning, implementation and control decisions.

Example: Office World offers shoppers a free membership card when they make their first purchase at the store. The card entitles shoppers to discounts on selected items, but it also provides valuable information to the chain. Since Office World encourages customers to use the card with each purchase, it can track what customers buy, where and when. Using this information, it can track the effectiveness of promotions, trace customers who have defected to other stores and keep in touch with them if they relocate.

Information from internal records is usually quicker and cheaper to obtain than information from other sources, but it also presents some problems. Because internal information was collected for other purposes, it may be incomplete or in the wrong form for making marketing decisions. For example, accounting department sales and cost data used for preparing financial statements need adapting for use in evaluating product, sales force or channel performance.

Marketing Intelligence
Marketing intelligence is everyday information about developments in the changing marketing environment that helps managers prepare marketing plans. The marketing intelligence system determines the intelligence needed, collects it by searching the environment and delivers it to the marketing managers who need it. Marketing intelligence comes from many sources. Much intelligence comes from the company's own personnel - executives, engineers and scientists, purchasing agents and the sales force. But company people are often busy and fail to pass on important information. The company must 'sell' its people on their importance as intelligence gatherers, train them to spot new developments and urge them to report intelligence back to the company. The company must also persuade suppliers, resellers and customers to pass along important intelligence. Some information on competitors comes from what they say about themselves in annual reports, speeches, press releases and advertisements. The company can also learn about competitors from what others say about them in business publications and at trade shows.

Or the company can watch what competitors do - buying and analyzing competitors' products, monitoring their sales and checking for new patents. Companies also buy intelligence information from outside suppliers. Some companies set up an office to collect and circulate marketing intelligence. The staff scan relevant publications, summarize important news and send news bulletins to marketing managers. They develop a file of intelligence information and help managers evaluate new information. These services greatly improve the quality of information available to marketing managers. The methods used to gather competitive information range from the ridiculous to the illegal; managers routinely shred documents because wastepaper baskets can be an information source.

COMPONENTS OF A MARKETING INFORMATION SYSTEM
A marketing information system (MIS) is intended to bring together disparate items of data into a coherent body of information. An MIS is, as will shortly be seen, more than raw data or information suitable for the purposes of decision making. An MIS also provides methods for interpreting the information it provides. Moreover, as Kotler's definition says, an MIS is more than a system of data collection or a set of information technologies: "A marketing information system is a continuing and interacting structure of people, equipment and procedures to gather, sort, analyse, evaluate, and distribute pertinent, timely and accurate information for use by marketing decision makers to improve their marketing planning, implementation, and control". Figure 9.1 illustrates the major components of an MIS, the environmental factors monitored by the system and the types of marketing decision which the MIS seeks to underpin.

Figure 9.1 The marketing information system and its subsystems

The explanation of this model of an MIS begins with a description of each of its four main constituent parts: the internal reporting system, marketing research system, marketing intelligence system and marketing models. It is suggested that whilst the MIS varies in its degree of sophistication - with many in the industrialised countries being computerised and few in the developing countries being so - a fully fledged MIS should have these components, whatever the methods (and technologies) of collecting, storing, retrieving and processing the data.

Internal reporting systems: All enterprises which have been in operation for any period of time have a wealth of information. However, this information often remains under-utilised because it is compartmentalised, either in the form of an individual entrepreneur or in the functional departments of larger businesses. That is, information is usually categorised according to its nature, so that there are, for example, financial, production, manpower, marketing, stockholding and logistical data. Often the entrepreneur, or the various personnel working in the functional departments holding these pieces of data, do not see how it could help decision makers in other functional areas. Similarly, decision makers can fail to appreciate how information from other functional areas might help them and therefore do not request it. The internal records that are of immediate value to marketing decisions are: orders received, stockholdings and sales invoices. These are but a few of the internal records that can be used by marketing managers, but even this small set of records is capable of generating a great deal of information. Below is some of the information that can be derived from sales invoices:

sales by customer account
sales by country or industry
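The comparisons described in the next paragraph can be sketched very simply; the following Python fragment uses invented record layouts and figures to derive a customer-service level from orders and invoices and to check stockholdings against current demand.

# Illustrative only: record layouts and figures are invented.
orders   = [{"product": "maize", "qty": 120}, {"product": "rice", "qty": 80}]
invoices = [{"product": "maize", "qty": 100}, {"product": "rice", "qty": 80}]
stock    = {"maize": 300, "rice": 60}

ordered  = sum(o["qty"] for o in orders)
invoiced = sum(i["qty"] for i in invoices)
service_level = invoiced / ordered                 # share of ordered quantity actually invoiced
print(f"Customer service level: {service_level:.0%}")

for product in stock:
    demand = sum(o["qty"] for o in orders if o["product"] == product)
    cover = stock[product] / demand if demand else float("inf")
    print(f"{product}: {cover:.1f} periods of current demand held in stock")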

By comparing orders received with invoices an enterprise can establish the extent to which it is providing an acceptable level of customer service. In the same way, comparing stockholding records with orders received helps an enterprise ascertain whether its stocks are in line with current demand patterns.

Marketing research systems: The general topic of marketing research has been the prime subject of the textbook and only a little more needs to be added here. Marketing research is a proactive search for information. That is, the enterprise which commissions these studies does so to solve a perceived marketing problem. In many cases, data is collected in a purposeful way to address a well-defined problem (or a problem which can be defined and solved within the course of the study). The other form of marketing research centres not around a specific marketing problem but is an attempt to continuously monitor the marketing environment. These monitoring or tracking exercises are continuous marketing research studies, often involving panels of farmers, consumers or distributors from which the same data is collected at regular intervals. Whilst the ad hoc study and continuous marketing research differ in their orientation, both are proactive.

Marketing intelligence systems: Whereas marketing research is focused, market intelligence is not. A marketing intelligence system is a set of procedures and data sources used by marketing managers to sift information from the environment that they can use in their decision making. This scanning of the economic and business environment can be undertaken in a variety of ways, including the following.

Unfocused scanning: The manager, by virtue of what he/she reads, hears and watches, is exposed to information that may prove useful. Whilst the behaviour is unfocused and the manager has no specific purpose in mind, it is not unintentional.

Semi-focused scanning: Again, the manager is not actively searching for particular pieces of information but does narrow the range of media that is scanned. For instance, the manager may focus more on economic and business publications, broadcasts etc. and pay less attention to political, scientific or technological media.

Informal search: This describes the situation where a fairly limited and unstructured attempt is made to obtain information for a specific purpose. For example, the marketing manager of a firm considering entering the business of importing frozen fish from a neighbouring country may make informal inquiries as to prices and demand levels of frozen and fresh fish. There would be little structure to this search, with the manager making inquiries with traders he/she happens to encounter as well as with other ad hoc contacts in ministries, international aid agencies, trade associations, importers/exporters etc.

Formal search: This is a purposeful search for information conducted in some systematic way. The information will be required to address a specific issue. Whilst this sort of activity may seem to share the characteristics of marketing research, it is carried out by the manager him/herself rather than a professional researcher. Moreover, the search is likely to be narrower in scope and far less intensive than marketing research.

Marketing intelligence is the province of entrepreneurs and senior managers within an agribusiness. It involves them in scanning newspapers, trade magazines, business journals and reports, economic forecasts and other media. In addition it involves management in talking to producers, suppliers and customers, as well as to competitors. Nonetheless, it is a largely informal process of observing and conversing. Some enterprises will approach marketing intelligence gathering in a more deliberate fashion and will train their sales force, after-sales personnel and district/area managers to take cognisance of competitors' actions, customer complaints and requests, and distributor problems. Enterprises with vision will also encourage intermediaries, such as collectors, retailers, traders and other middlemen, to be proactive in conveying market intelligence back to them.

Marketing models: Within the MIS there has to be a means of interpreting information in order to give direction to decisions. These models may or may not be computerised. Typical tools are:

Analysis of Variance (ANOVA) models

These and similar mathematical, statistical, econometric and financial models form the analytical subsystem of the MIS. A relatively modest investment in a desktop computer is enough to allow an enterprise to automate the analysis of its data. Some of the models used are stochastic, i.e. they contain a probabilistic element, whereas others are deterministic models in which chance plays no part. Brand switching models are stochastic, since they express brand choices as probabilities, whereas linear programming is deterministic in that the relationships between variables are expressed in exact mathematical terms.
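As an illustration of the stochastic models mentioned above, a minimal brand-switching (Markov) sketch in Python; the brands, shares and switching probabilities are invented. Each row of the matrix gives the probability that a buyer of one brand chooses each brand on the next purchase.

# Brand-switching (Markov) model: share[i] is the current market share of brand i,
# switch[i][j] is the probability that a buyer of brand i buys brand j next period.
# All numbers are invented for illustration.
brands = ["A", "B", "C"]
share  = [0.50, 0.30, 0.20]
switch = [
    [0.80, 0.15, 0.05],   # buyers of A
    [0.10, 0.75, 0.15],   # buyers of B
    [0.20, 0.20, 0.60],   # buyers of C
]

def next_period(share, switch):
    n = len(share)
    return [sum(share[i] * switch[i][j] for i in range(n)) for j in range(n)]

for period in range(3):
    share = next_period(share, switch)
    print(period + 1, [round(s, 3) for s in share])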

Human Resource Information System (HRIS)
The Human Resource Information System (HRIS) is a software or online solution for the data entry, data tracking and data information needs of the Human Resources, payroll, management and accounting functions within a business. Normally packaged as a database, hundreds of companies sell some form of HRIS and every HRIS has different capabilities, so an HRIS should be chosen carefully based on the capabilities the company needs. Typically, a Human Resource Information System (HRIS) provides overall:

Management of all employee information.
Reporting and analysis of employee information.
Company-related documents such as employee handbooks, emergency evacuation procedures, and safety guidelines.
Benefits administration including enrollment, status changes, and personal information updating.
Complete integration with payroll and other company financial software and accounting systems.
Applicant tracking and resume management.

The HRIS that most effectively serves companies tracks:


attendance and PTO use, pay raises and history, pay grades and positions held, performance development plans, training received, disciplinary action received, personal employee information, and occasionally, management and key employee succession plans, high potential employee identification, and applicant tracking, interviewing, and selection.
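A minimal sketch of how such a tracking record might be structured, assuming Python; the field names are illustrative and will differ between HRIS products.

# Illustrative employee record covering the items an HRIS typically tracks.
from dataclasses import dataclass, field
from typing import List, Dict

@dataclass
class EmployeeRecord:
    employee_id: str
    name: str
    pto_taken_days: float = 0.0
    pay_history: List[Dict] = field(default_factory=list)       # raises, grades, positions held
    performance_plans: List[str] = field(default_factory=list)
    training_received: List[str] = field(default_factory=list)
    disciplinary_actions: List[str] = field(default_factory=list)

emp = EmployeeRecord("E-1001", "Jane Doe")
emp.pay_history.append({"date": "2011-04-01", "grade": "G5", "position": "Analyst"})
emp.training_received.append("Data protection basics")
print(emp)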

An effective HRIS provides information on just about anything the company needs to track and analyze about employees, former employees, and applicants. Your company will need to select a Human Resources Information System and customize it to meet your needs. With an appropriate HRIS, Human Resources staff enables employees to do their own benefits updates and address changes, thus freeing HR staff for more strategic functions. Additionally, data necessary for employee management, knowledge development, career growth and development, and equal treatment is facilitated. Finally, managers can access the information they need to legally, ethically, and effectively support the success of their reporting employees. Purpose

The function of Human Resources departments is generally administrative and common to all organizations. Organizations may have formalized selection, evaluation, and payroll processes. Efficient and effective management of "human capital" has progressed to an increasingly imperative and complex process. The HR function consists of tracking existing employee data, which traditionally includes personal histories, skills, capabilities, accomplishments and salary. To reduce the manual workload of these administrative activities, organizations began to electronically automate many of these processes by introducing specialized Human Resource Management Systems. HR executives rely on internal or external IT professionals to develop and maintain an integrated HRMS. Before the client-server architecture evolved in the late 1980s, many HR automation processes were relegated to mainframe computers that could handle large amounts of data transactions. In consequence of the high capital investment necessary to buy or program proprietary software, these internally developed HRMS were limited to organizations that possessed a large amount of capital. The advent of client-server, Application Service Provider (ASP) and Software as a Service (SaaS) Human Resource Management Systems enabled increasingly higher administrative control of such systems. Currently, Human Resource Management Systems encompass:
1. Payroll
2. Work Time
3. Benefits Administration
4. HR Management Information System
5. Recruiting
6. Training/Learning Management System
7. Performance Record
8. Employee Self-Service

The payroll module automates the pay process by gathering data on employee time and attendance, calculating various deductions and taxes, and generating periodic pay cheques and employee tax reports. Data is generally fed from the human resources and time-keeping modules, and the module supports automatic deposit and manual cheque writing. It can encompass all employee-related transactions as well as integrate with existing financial management systems.

The work time module gathers standardized time and work-related effort data. The most advanced modules provide broad flexibility in data collection methods, labor distribution capabilities and data analysis features. Cost analysis and efficiency metrics are the primary functions.

The benefits administration module provides a system for organizations to administer and track employee participation in benefits programs. These typically encompass insurance, compensation, profit sharing and retirement.

The HR management module is a component covering many other HR aspects from application to retirement. The system records basic demographic and address data, selection, training and development, capabilities and skills management, compensation planning records and other related activities. Leading-edge systems provide the ability to "read" applications and enter relevant data into applicable database fields, notify employers and provide position management and position control.

The human resource management function involves the recruitment, placement, evaluation, compensation and development of the employees of an organization. Initially, businesses used computer-based information systems to:

produce pay checks and payroll reports;

maintain personnel records; pursue Talent Management.
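As an illustration of the payroll processing described above, a minimal Python sketch; the hours, hourly rate and flat deduction percentages are invented.

# Gather time data, apply deductions and taxes, and produce a pay record.
def run_payroll(time_entries, hourly_rate, tax_rate=0.20, pension_rate=0.05):
    hours = sum(entry["hours"] for entry in time_entries)   # from the work time module
    gross = hours * hourly_rate
    tax = gross * tax_rate
    pension = gross * pension_rate
    net = gross - tax - pension
    return {"hours": hours, "gross": round(gross, 2),
            "tax": round(tax, 2), "pension": round(pension, 2),
            "net": round(net, 2)}

entries = [{"day": "Mon", "hours": 8}, {"day": "Tue", "hours": 7.5}]
print(run_payroll(entries, hourly_rate=12.0))
# {'hours': 15.5, 'gross': 186.0, 'tax': 37.2, 'pension': 9.3, 'net': 139.5}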

Online recruiting has become one of the primary methods employed by HR departments to garner potential candidates for available positions within an organization. Talent Management systems typically encompass:

analyzing personnel usage within an organization; identifying potential applicants; recruiting through company-facing listings; recruiting through online recruiting sites or publications that market to both recruiters and applicants.

The significant cost incurred in maintaining an organized recruitment effort, cross-posting within and across general or industry-specific job boards and maintaining a competitive exposure of availabilities has given rise to the development of a dedicated Applicant Tracking System, or 'ATS', module.

The training module provides a system for organizations to administer and track employee training and development efforts. The system, normally called a Learning Management System (LMS) if a stand-alone product, allows HR to track the education, qualifications and skills of employees, as well as outlining what training courses, books, CDs, web-based learning or materials are available to develop which skills. Courses can then be offered in date-specific sessions, with delegates and training resources being mapped and managed within the same system. Sophisticated LMSs allow managers to approve training, budgets and calendars alongside performance management and appraisal metrics.

The Employee Self-Service module allows employees to query HR-related data and perform some HR transactions over the system. Employees may query their attendance record from the system without asking HR personnel for the information. The module also lets supervisors approve overtime requests from their subordinates through the system without overloading the HR department. Many organizations have gone beyond the traditional functions and developed human resource management information systems which support recruitment, selection, hiring, job placement, performance appraisals, employee benefit analysis, health, safety and security, while others integrate an outsourced Applicant Tracking System that encompasses a subset of the above. Other functions include assigning responsibilities and communication between employees.

SUPPLY CHAIN MANAGEMENT
Supply chain management (SCM) is the oversight of materials, information, and finances as they move in a process from supplier to manufacturer to wholesaler to retailer to consumer. Supply chain management involves coordinating and integrating these flows both within and among companies. It is said that the ultimate goal of any effective supply chain management system is to reduce inventory (with the assumption that products are available when needed). As a solution for successful supply chain management, sophisticated software systems with Web interfaces are competing with Web-based application service providers (ASPs) who promise to provide part or all of the SCM service for companies who rent their service. Supply chain management flows can be divided into three main flows:

The product flow
The information flow
The finances flow

The product flow includes the movement of goods from a supplier to a customer, as well as any customer returns or service needs. The information flow involves transmitting orders and updating the status of delivery. The financial flow consists of credit terms, payment schedules, and consignment and title ownership arrangements.

There are two main types of SCM software: planning applications and execution applications. Planning applications use advanced algorithms to determine the best way to fill an order. Execution applications track the physical status of goods, the management of materials, and financial information involving all parties.

Some SCM applications are based on open data models that support the sharing of data both inside and outside the enterprise (this is called the extended enterprise, and includes key suppliers, manufacturers, and end customers of a specific company). This shared data may reside in diverse database systems, or data warehouses, at several different sites and companies. By sharing this data "upstream" (with a company's suppliers) and "downstream" (with a company's clients), SCM applications have the potential to improve the time-to-market of products, reduce costs, and allow all parties in the supply chain to better manage current resources and plan for future needs. Increasing numbers of companies are turning to Web sites and Web-based applications as part of the SCM solution. A number of major Web sites offer e-procurement marketplaces where manufacturers can trade and even make auction bids with suppliers.
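In the simplest possible terms, a planning application decides how an order should be filled. The sketch below, assuming Python and using invented warehouses, stock levels and shipping costs, picks the cheapest source that can cover the ordered quantity.

# Toy "planning application": choose the best way to fill an order.
warehouses = [
    {"name": "North", "stock": 40,  "shipping_cost": 3.0},
    {"name": "South", "stock": 200, "shipping_cost": 5.0},
    {"name": "East",  "stock": 90,  "shipping_cost": 4.0},
]

def plan_order(quantity):
    candidates = [w for w in warehouses if w["stock"] >= quantity]
    if not candidates:
        return None                       # would trigger replenishment planning instead
    return min(candidates, key=lambda w: w["shipping_cost"])

order_qty = 75
choice = plan_order(order_qty)
print(f"Fill {order_qty} units from {choice['name']}")   # Fill 75 units from East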


Customer relationship management (CRM) is a widely implemented strategy for managing a company's interactions with customers, clients and sales prospects. It involves using technology to organize, automate, and synchronize business processes - principally sales activities, but also those for marketing, customer service, and technical support. The overall goals are to find, attract, and win new clients, nurture and retain those the company already has, entice former clients back into the fold, and reduce the costs of marketing and client service.[1] Customer relationship management describes a company-wide business strategy including customer-interface departments as well as other departments. The three phases in which CRM supports the relationship between a business and its customers are to:

Acquire: CRM can help a business acquire new customers through contact management, selling, and fulfillment.[3]
Enhance: web-enabled CRM combined with customer service tools offers customers service from a team of sales and service specialists, which offers customers the convenience of one-stop shopping.[3]
Retain: CRM software and databases enable a business to identify and reward its loyal customers and further develop its targeted marketing and relationship marketing initiatives.[4]

TYPES/VARIATIONS

Sales force automation
Sales force automation (SFA) involves using software to streamline all phases of the sales process, minimizing the time that sales representatives need to spend on each phase. This allows sales representatives to pursue more clients in a shorter amount of time than would otherwise be possible. At the heart of SFA is a contact management system for tracking and recording every stage in the sales process for each prospective client, from initial contact to final disposition.

Many SFA applications also include insights into opportunities, territories, sales forecasts and workflow automation, quote generation, and product knowledge. Modules for Web 2.0 e-commerce and pricing are new, emerging interests in SFA.

Marketing
CRM systems for marketing help the enterprise identify and target potential clients and generate leads for the sales team. A key marketing capability is tracking and measuring multichannel campaigns, including email, search, social media, telephone and direct mail. Metrics monitored include clicks, responses, leads, deals, and revenue. This has been superseded by marketing automation and Prospect Relationship Management (PRM) solutions which track customer behaviour and nurture customers from first contact to sale, often cutting out the active sales process altogether.

Customer service and support
Recognizing that service is an important factor in attracting and retaining customers, organizations are increasingly turning to technology to help them improve their clients' experience while aiming to increase efficiency and minimize costs. Even so, a 2009 study revealed that only 39% of corporate executives believe their employees have the right tools and authority to solve client problems. The core of these applications has been, and still is, comprehensive call center solutions, including such features as intelligent call routing, computer telephony integration (CTI), and escalation capabilities.

Analytics
Relevant analytics capabilities are often interwoven into applications for sales, marketing, and service. These features can be complemented and augmented with links to separate, purpose-built applications for analytics and business intelligence. Sales analytics let companies monitor and understand client actions and preferences, through sales forecasting and data quality. Marketing applications generally come with predictive analytics to improve segmentation and targeting, and features for measuring the effectiveness of online, offline, and search marketing campaigns. Web analytics have evolved significantly from their starting point of merely tracking mouse clicks on Web sites. By evaluating buy signals, marketers can see which prospects are most likely to transact and also identify those who are bogged down in a sales process and need assistance. Marketing and finance personnel also use analytics to assess the value of multi-faceted programs as a whole. These types of analytics are increasing in popularity as companies demand greater visibility into the performance of call centers and other service and support channels,[6] in order to correct problems before they affect satisfaction levels. Support-focused applications typically include dashboards similar to those for sales, plus capabilities to measure and analyze response times, service quality, agent performance, and the frequency of various issues.

Integrated/Collaborative
Departments within enterprises, especially large enterprises, tend to function with little collaboration. More recently, the development and adoption of these tools and services have fostered greater fluidity and cooperation among sales, service, and marketing. This finds expression in the concept of collaborative systems, which use technology to build bridges between departments. For example, feedback from a technical support center can enlighten marketers about specific services and product features clients are asking for.

Reps, in their turn, want to be able to pursue these opportunities without the burden of re-entering records and contact data into a separate SFA system. Owing to these factors, many of the top-rated and most popular products come as integrated suites.

Small business
For small businesses, basic client service can be accomplished by a contact manager system: an integrated solution that lets organizations and individuals efficiently track and record interactions, including emails, documents, jobs, faxes, scheduling, and more. These tools usually focus on accounts rather than on individual contacts. They also generally include opportunity insight for tracking sales pipelines plus added functionality for marketing and service. As with larger enterprises, small businesses are finding value in online solutions, especially for mobile and telecommuting workers.

Social media
Social media sites like Twitter, LinkedIn and Facebook are amplifying the voice of people in the marketplace and are having profound and far-reaching effects on the ways in which people buy. Customers can now research companies online and then ask for recommendations through social media channels, making their buying decision without contacting the company. People also use social media to share opinions and experiences on companies, products and services. As social media is not as widely moderated or censored as mainstream media, individuals can say anything they want about a company or brand, positive or negative. Increasingly, companies are looking to gain access to these conversations and take part in the dialogue. More than a few systems are now integrating with social networking sites. Social media promoters cite a number of business advantages, such as using online communities as a source of high-quality leads and a vehicle for crowdsourcing solutions to client-support problems. Companies can also leverage clients' stated habits and preferences to "hypertarget" their sales and marketing communications.[9] Some analysts take the view that business-to-business marketers should proceed cautiously when weaving social media into their business processes. These observers recommend careful market research to determine if and where the phenomenon can provide measurable benefits for client interactions, sales and support.[10] It is also argued that people feel their interactions are peer-to-peer between them and their contacts, and resent company involvement, sometimes responding with negatives about that company.

Non-profit and membership-based
Systems for non-profit and membership-based organizations help track constituents and their involvement in the organization. Capabilities typically include tracking the following: fund-raising, demographics, membership levels, membership directories, volunteering and communications with individuals. Many include tools for identifying potential donors based on previous donations and participation. In light of the growth of social networking tools, there may be some overlap between social/community-driven tools and non-profit/membership tools.

STRATEGY

For larger-scale enterprises, a complete and detailed plan is required to obtain the funding, resources, and company-wide support that can make the initiative of choosing and implementing a system successful. Benefits must be defined, risks assessed, and cost quantified in three general areas:

Processes: Though these systems have many technological components, business processes lie at their core. CRM can be seen as a more client-centric way of doing business, enabled by technology that consolidates and intelligently distributes pertinent information about clients, sales, marketing effectiveness, responsiveness, and market trends. Therefore, a company must analyze its business workflows and processes before choosing a technology platform; some will likely need reengineering to better serve the overall goal of winning and satisfying clients. Moreover, planners need to determine the types of client information that are most relevant, and how best to employ them.
People: For an initiative to be effective, an organization must convince its staff that the new technology and workflows will benefit employees as well as clients. Senior executives need to be strong and visible advocates who can clearly state and support the case for change. Collaboration, teamwork, and two-way communication should be encouraged across hierarchical boundaries, especially with respect to process improvement.
Technology: In evaluating technology, key factors include alignment with the company's business process strategy and goals, including the ability to deliver the right data to the right employees and sufficient ease of adoption and use. Platform selection is best undertaken by a carefully chosen group of executives who understand the business processes to be automated as well as the software issues. Depending upon the size of the company and the breadth of data, choosing an application can take anywhere from a few weeks to a year or more.

IMPLEMENTATION

Implementation issues
Increases in revenue, higher rates of client satisfaction, and significant savings in operating costs are some of the benefits to an enterprise. Proponents emphasize that technology should be implemented only in the context of careful strategic and operational planning. Implementations almost invariably fall short when one or more facets of this prescription are ignored:

Poor planning: Initiatives can easily fail when efforts are limited to choosing and deploying software, without an accompanying rationale, context, and support for the workforce. In other instances, enterprises simply automate flawed client-facing processes rather than redesign them according to best practices.
Poor integration: For many companies, integrations are piecemeal initiatives that address a glaring need: improving a particular client-facing process or two, or automating a favored sales or client support channel. Such point solutions offer little or no integration or alignment with a company's overall strategy. They offer a less than complete client view and often lead to unsatisfactory user experiences.
Toward a solution: overcoming siloed thinking. Experts advise organizations to recognize the immense value of integrating their client-facing operations.

In this view, internally focused, department-centric views should be discarded in favor of reorienting processes toward information-sharing across marketing, sales, and service. For example, sales representatives need to know about current issues and relevant marketing promotions before attempting to cross-sell to a specific client. Marketing staff should be able to leverage client information from sales and service to better target campaigns and offers. And support agents require quick and complete access to a client's sales and service history. Specialists offer these recommendations for boosting adoption rates and coaxing users to blend these tools into their daily workflow:

Choose a system that is easy to use: not all solutions are created equal; some vendors offer applications that are more user-friendly, a factor that should be as important to the decision as functionality.
Choose appropriate capabilities: employees need to know that the time they invest in learning and in using the new system will not be wasted, and indeed that it will yield personal advantages; otherwise, they will ignore or circumvent the system.
Provide training: changing the way people work is no small task; to be successful, some familiarization training and help-desk support are usually required, even with today's more usable systems.
Lead by example: upper management must use the new application themselves, thereby showing employees that the top leaders fully support it; otherwise the initiative risks a greatly reduced rate of adoption by employees and is skewed toward failure.

PRIVACY AND DATA SECURITY
One of the primary functions of these tools is to collect information about clients, so a company must consider customers' desire for privacy and data security, as well as the relevant legislative and cultural norms. Some clients prefer assurances that their data will not be shared with third parties without their prior consent and that safeguards are in place to prevent illegal access by third parties.

Related trends
Many CRM vendors offer Web-based tools (cloud computing) and software as a service (SaaS), which are accessed via a secure Internet connection and displayed in a Web browser. These applications are sold as subscriptions, with customers not needing to invest in purchasing and maintaining IT hardware, and subscription fees are a fraction of the cost of purchasing software outright. The era of the "social customer" refers to the use of social media (Twitter, Facebook, LinkedIn, Yelp, customer reviews on Amazon etc.) by customers in ways that allow other potential customers to glimpse the real-world experience of current customers with the seller's products and services. This shift increases the power of customers to make purchase decisions that are informed by other parties, sometimes outside of the control of the seller or the seller's network. In response, CRM philosophy and strategy have shifted to encompass social networks and user communities, podcasting, and personalization in addition to internally generated marketing, advertising and webpage design.

With the spread of self-initiated customer reviews, the user experience of a product or service requires increased attention to design and simplicity, as customer expectations have risen. CRM as a philosophy and strategy is growing to encompass these broader components of the customer relationship, so that businesses may anticipate and innovate to better serve customers, referred to as "Social CRM".

Another related development is Vendor Relationship Management, or VRM, which is the customer-side counterpart of CRM: tools and services that equip customers to be both independent of vendors and better able to engage with them. VRM development has grown out of efforts by ProjectVRM at Harvard's Berkman Center for Internet & Society and Identity Commons' Internet Identity Workshops, as well as by a growing number of startups and established companies. VRM was the subject of a cover story in the May 2010 issue of CRM Magazine[21]. In a 2001 research note, META Group (now Gartner) analyst Doug Laney first proposed, defined and coined the term Extended Relationship Management. He defined XRM as the principle and practice of applying CRM disciplines and technologies to other core enterprise constituents, primarily partners, employees and suppliers...as well as other secondary allies including government, press, and industry consortia.

e-CRM
Electronic CRM (eCRM) concerns all forms of managing relationships with customers that make use of Information Technology (IT). eCRM means enterprises using IT to integrate internal organization resources and external marketing strategies in order to understand and fulfill their customers' needs. Compared with traditional CRM, the integrated information used for eCRM and intra-organizational collaboration makes communication with customers more efficient.

FROM RELATIONSHIP MARKETING TO CUSTOMER RELATIONSHIP MARKETING
The concept of relationship marketing was first coined by Leonard Berry in 1983. He considered it to consist of attracting, maintaining and enhancing customer relationships within organizations. In the years that followed, companies were engaging more and more in a meaningful dialogue with individual customers. In doing so, new organizational forms as well as technologies were used, eventually resulting in what we know as Customer Relationship Management (CRM). The main difference between RM and CRM is that the former does not acknowledge the use of technology, whereas the latter uses Information Technology (IT) in implementing RM strategies.

THE ESSENCE OF CRM
The exact meaning of CRM is still the subject of heavy discussion.[4] However, the overall goal can be seen as effectively managing differentiated relationships with all customers and communicating with them on an individual basis.[5] The underlying thought is that companies realize they can supercharge profits by acknowledging that different groups of customers vary widely in their behavior, desires, and responsiveness to marketing.[6]

Loyal customers not only give companies sustained revenue but also help to attract new customers. To reinforce customer reliance and create additional customer sources, firms utilize CRM to maintain relationships in two general categories: B2B (Business-to-Business) and B2C (Business-to-Customer or Business-to-Consumer). Because the needs and behaviors of B2B and B2C customers are different, CRM should be implemented from the respective viewpoints.

DIFFERENCES BETWEEN CRM AND ECRM
Major differences between CRM and eCRM:

Customer contacts

CRM: contact with the customer is made through the retail store, phone, and fax.
eCRM: all of the traditional methods are used in addition to Internet, email, wireless, and PDA technologies.

System Interface

CRM: implements the use of ERP systems; emphasis is on the back end.
eCRM: geared more toward the front end, which interacts with the back end through use of ERP systems, data warehouses, and data marts.

System overhead (client computers)


CRM: the client must download various applications to view the web-enabled applications. They would have to be rewritten for different platforms.
eCRM: does not have these requirements because the client uses the browser.

Customization and Personalization of Information


CRM: views differ based on the audience, and personalized views are not available. Individual personalization requires program changes.
eCRM: personalized individual views based on purchase history and preferences. The individual has the ability to customize the view.

System Focus

CRM: system (created for internal use) designed based on job function and products. Web applications designed for a single department or business unit.
eCRM: system (created for external use) designed based on customer needs. Web application designed for enterprise-wide use.

System Maintenance and Modification


CRM: more time is involved in implementation, and maintenance is more expensive because the system exists at different locations and on various servers.
eCRM: reduction in time and cost. Implementation and maintenance can take place at one location and on one server.

As the internet is becoming more and more important in business life, many companies consider it as an opportunity to reduce customer-service costs, tighten customer relationships and most important, further personalize marketing messages and enable mass customization. ECRM is being adopted by companies because it increases customer loyalty and customer retention by improving customer satisfaction, one of the objectives of eCRM.

E-loyalty results in long-term profits for online retailers because they incur lower costs of recruiting new customers and see an increase in customer retention.[10] Together with the creation of Sales Force Automation (SFA), in which electronic methods were used to gather data and analyze customer information, the rise of the Internet can be seen as the foundation of what we know as eCRM today. Implementing the eCRM process involves a three-step life cycle:
1. Data Collection: information about customer preferences is gathered actively (answers and knowledge customers provide) and passively (surfing records) via the website, email and questionnaires.
2. Data Aggregation: the data is filtered and analyzed for the firm's specific needs in serving its customers.
3. Customer Interaction: according to the customer's needs, the company provides the proper feedback to them.
We can define eCRM as activities to manage customer relationships by using the Internet, web browsers or other electronic touch points. The challenge is to offer communication and information on the right topic, in the right amount, and at the right time to fit the customer's specific needs.

eCRM strategy components
When enterprises integrate their customer information, there are three eCRM strategy components:
1. Operational: because information is shared, business processes should put the customer's needs first and be implemented seamlessly. This avoids bothering customers multiple times and eliminates redundant processes.
2. Analytical: analysis helps the company maintain a long-term relationship with its customers.
3. Collaborative: due to improved communication technology, different departments in the company (intra-organizational) or business partners (inter-organizational) can work together more efficiently by sharing information.
Several CRM software packages exist that can help companies in deploying CRM activities. Besides choosing one of these packages, companies can also choose to design and build their own solutions. In order to implement CRM in an effective way, one needs to consider the following factors:

Create a customer-focused culture in the organization.
Adopt customer-based managers to assess satisfaction.
Develop an end-to-end process to serve customers.
Recommend questions to be asked to help a customer solve a problem.
Track all aspects of selling to customers, as well as prospects.
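To make the three-step eCRM life cycle described above concrete, a minimal Python sketch; the events, customers and the segmentation rule are invented for illustration.

# 1. Data collection: active (survey answers) and passive (browsing) signals per customer.
events = [
    {"customer": "c1", "source": "survey",   "interest": "laptops"},
    {"customer": "c1", "source": "browsing", "interest": "laptops"},
    {"customer": "c2", "source": "browsing", "interest": "printers"},
]

# 2. Data aggregation: filter and summarise the signals per customer.
profiles = {}
for e in events:
    profiles.setdefault(e["customer"], []).append(e["interest"])

# 3. Customer interaction: feed the summary back as a tailored response.
for customer, interests in profiles.items():
    top = max(set(interests), key=interests.count)
    print(f"Send {customer} an offer about {top}")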

Furthermore, CRM solutions are more effective once they are integrated with the other information systems used by the company. An example is a Transaction Processing System (TPS) that processes data in real time, which can then be sent to the sales and finance departments in order to recalculate inventory and the financial position quickly and accurately. Once this information is transferred back to the CRM software and services, it can prevent customers from placing an order in the belief that an item is in stock when it is not.

In contrast with traditional CRM, which is implemented under an ERP (Enterprise Resource Planning) interface for communication within firms and with their customers, eCRM optimizes a customized environment delivered via the web browser. This benefits effective communication not only between the enterprise and external customers but also between internal departments, because each customer's profile is personalized and unified across the entire organization. Through a central repository, a customer may communicate with staff in different departments of the corporation via the Internet (or a phone call), and firms are able to use marketing analysis to offer customers more mature services. As each department works from integrated customer information, it can focus on its individual operational duties more efficiently, so the firm may reduce execution costs.
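A minimal sketch of the integration point described above, in which the stock figures recalculated by a transaction processing system are checked before the CRM front end accepts an order; assuming Python, with invented SKUs and quantities.

# Stock levels as recalculated by the transaction processing system (invented data).
inventory = {"SKU-100": 3, "SKU-200": 0}

def place_order(sku, quantity):
    # The CRM front end consults the latest TPS figures before confirming.
    available = inventory.get(sku, 0)
    if available < quantity:
        return f"Rejected: only {available} of {sku} in stock"
    inventory[sku] = available - quantity
    return f"Confirmed: {quantity} x {sku} reserved"

print(place_order("SKU-100", 2))   # Confirmed: 2 x SKU-100 reserved
print(place_order("SKU-200", 1))   # Rejected: only 0 of SKU-200 in stock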

eCRM in B2B market

Traditional B2B customers are usually seeking ways to decrease the firm's expenses. Customizing specific products for them and reducing repeated routine costs keeps their expenditure to a minimum. As information technology has developed, website information has become an important medium for reducing the cost and time of collecting information, and this eventually develops into a long-term relationship. At the same time, more complex collaboration can be implemented on a networking platform.[7] Once businesses collaborate as supply chain partners, the mutual benefits give both sides an equal position and increase confidence in working together.[16]

eCRM in B2C market

In contrast with B2B, the attitude of individual B2C purchasers is determined by positive experience and online shopping knowledge. Previous research[17][18] found that B2C customers enjoy shopping online because it is quick to transact, convenient for returns, saves the effort of visiting a retailer and is fun to browse. Thus, marketing investigation, customer communication and information gathering are important factors in maintaining customer services.[7]

Cloud solution
Today, more and more enterprise CRM systems are moving to a cloud computing solution, up from 8 percent of the CRM market in 2005 to 20 percent of the market in 2008, according to Gartner. By moving the management system into the cloud, companies can control costs on a pay-per-use basis for managing, maintaining and upgrading the system, and connect with their customers in a streamlined way in the cloud. In a cloud-based CRM system, a transaction can be recorded in the CRM database immediately. Some enterprise cloud CRM systems, such as eSalesTrack and Cloud 9 Analytics, are web-based, so customers do not need to install an additional interface and their activities with businesses can be updated in real time. People may communicate on mobile devices to get efficient service. Furthermore, customer/case experience and interaction feedback are another way of collaborating and integrating CRM information within the corporate organization to improve business services, as with Pega and VeraCentra. There are multifarious cloud CRM services for enterprises to use, and here are some hints for choosing the right CRM system:
1. Assess your company's needs: some enterprise CRM systems are specialized, for example OutStart for community management, SmartLead for leads etc.
2. Take advantage of free trials: compare and familiarize yourself with each of the options.
3. Do the math: estimate the customer strategy against the company budget.
4. Consider mobile options: some systems, like Salesforce.com, can be combined with other mobile device applications.
5. Ask about security: consider whether the cloud CRM provider gives as much protection as your own systems would.

6. Make sure the sales team is on board: as the frontline of the enterprise, the sales team should find the launched CRM system a real help.
7. Know your exit strategy: understand the exit mechanism to keep flexibility.

vCRM
The channels through which companies can communicate with their customers are growing by the day, and as a result, getting customers' time and attention has turned into a major challenge. One of the reasons eCRM is so popular nowadays is that digital channels can create unique and positive experiences, not just transactions, for customers. An extreme, but increasingly popular, example of the creation of experiences in order to establish customer service is the use of virtual worlds, such as Second Life. Through this so-called vCRM, companies are able to create synergies between virtual and physical channels and reach a very wide consumer base. However, given the newness of the technology, most companies are still struggling to identify effective entries into virtual worlds. The highly interactive character of eCRM, which allows companies to respond directly to any customer's requests or problems, is another feature that helps companies establish and sustain long-term customer relationships. Furthermore, Information Technology has helped companies to further differentiate between customers and address a personal message or service. Some examples of tools used in eCRM:

Personalized Web pages where customers are recognized and their preferences are shown.
Customized products or services (Dell).

CRM programs should be directed towards customer value that competitors cannot match.[27] However, in a world where almost every company is connected to the Internet, eCRM has become a requirement for survival, not just a competitive advantage.

DIFFERENT LEVELS OF ECRM
In defining the scope of eCRM, three different levels can be distinguished:

Foundational services:

This includes the minimum necessary services such as web site effectiveness and responsiveness as well as order fulfillment.

Customer-centered services:

These services include order tracking, product configuration and customization as well as security/trust.

Value-added services:

These are extra services such as online auctions and online training and education. Self-services are becoming increasingly important in CRM activities. The rise of the Internet and eCRM has boosted the options for self-service activities. A critical success factor is the integration of such activities into traditional channels. An example was Ford's plan to sell cars directly to customers via its Web site, which provoked an outcry among its dealer network.[30] CRM activities are mainly of two different types.

Reactive service is where the customer has a problem and contacts the company. Proactive service is where the manager has decided not to wait for the customer to contact the firm, but to be aggressive and contact the customer himself in order to establish a dialogue and solve problems.[31]

Steps to eCRM Success
Many factors play a part in ensuring that the implementation of any level of eCRM is successful. One obvious way success can be measured is by the ability of the system to add value to the existing business. There are four suggested implementation steps that affect the viability of such a project:
1. Developing customer-centric strategies
2. Redesigning workflow management systems
3. Re-engineering work processes
4. Supporting with the right technologies

FAILURES
Designing, creating and implementing IT projects has always been risky, not only because of the amount of money that is involved, but also because of the high chances of failure. However, a positive trend can be seen, indicating that CRM failures dropped from a failure rate of 80% in 1998 to about 40% in 2003. Some of the major issues relating to CRM failure are the following:

Difficulty in measuring and valuing intangible benefits.
Failure to identify and focus on specific business problems.
Lack of active senior management sponsorship.
Poor user acceptance.
Trying to automate a poorly defined process.

PRIVACY The effective and efficient employment of CRM activities cannot go without the remarks of safety and privacy. CRM systems depend on databases in which all kinds of customer data is stored. In general, the following rule applies: the more data, the better the service companies can deliver to individual customers. Some known examples of these problems are conducting credit-card transaction online of the phenomenon known as 'cookies' used on the Internet in order to track someones information and behavior. The design and the quality of the website are two very important aspects that influences the level of trust customers experience and their willingness of reluctance to do a transaction or leave personal information.[ Privacy policies can be ineffective in relaying to customers how much of their information is being used. In a recent study by The University of Pennsylvania and University of California, it was revealed that over half the respondents have an incorrect understanding of how their information is being used. They believe that, if a company has a privacy policy, they will not share the customer's information with third party companies without the customer's express consent. Therefore, if marketers want to use consumer information for advertising purposes, they must clearly illustrate the ways in which they will use the customer's information and present the benefits of this in order to acquire the customer's consent.[45] Privacy concerns

Legislation is being proposed that regulates the use of personal data. Also, Internet policy officials are calling for more performance measures of privacy policies.

Knowledge Management System

A Knowledge Management System (KM System) refers to a (generally IT-based) system for managing knowledge in organizations, supporting the creation, capture, storage and dissemination of information. It can comprise a part (neither necessary nor sufficient) of a Knowledge Management initiative. The idea of a KM system is to enable employees to have ready access to the organization's documented base of facts, sources of information, and solutions. For example, a typical claim justifying the creation of a KM system might run something like this: an engineer could know the metallurgical composition of an alloy that reduces sound in gear systems. Sharing this information organization-wide can lead to more effective engine design, and it could also lead to ideas for new or improved equipment.

A KM system could be any of the following:

1. Document based, i.e. any technology that permits creation/management/sharing of formatted documents, such as Lotus Notes, the web, distributed databases, etc.
2. Ontology/taxonomy based: these are similar to document technologies in the sense that a system of terminologies (i.e. an ontology) is used to summarize the document, e.g. Author, Subject, Organization, etc., as in DAML and other XML-based ontologies (a minimal sketch of a document- and taxonomy-based repository appears at the end of this section).
3. Based on AI technologies which use a customized representation scheme to represent the problem domain.
4. Providing network maps of the organization showing the flow of communication between entities and individuals.
5. Increasingly, social computing tools are being deployed to provide a more organic approach to the creation of a KM system.

KM systems deal with information (although Knowledge Management as a discipline may extend beyond the information-centric aspect of any system), so they are a class of information system and may build on, or utilize, other information sources. Distinguishing features of a KMS can include:

1. Purpose: a KMS will have an explicit Knowledge Management objective of some type, such as collaboration, sharing good practice or the like.
2. Context: one perspective on KMS sees knowledge as information that is meaningfully organized, accumulated and embedded in a context of creation and application.
3. Processes: KMS are developed to support and enhance knowledge-intensive processes, tasks or projects, e.g. the creation, construction, identification, capturing, acquisition, selection, valuation, organization, linking, structuring, formalization, visualization, transfer, distribution, retention, maintenance, refinement, revision, evolution, accessing, retrieval and, last but not least, the application of knowledge, also called the knowledge life cycle.
4. Participants: users can play the roles of active, involved participants in knowledge networks and communities fostered by KMS, although this is not necessarily the case. KMS designs are held to reflect that knowledge is developed collectively and that the distribution of knowledge leads to its continuous change, reconstruction and application in different contexts, by different participants with differing backgrounds and experiences.

5. Instruments: KMS support KM instruments, e.g. the capture, creation and sharing of the codifiable aspects of experience, the creation of corporate knowledge directories, taxonomies or ontologies, expertise locators, skill management systems, collaborative filtering and handling of interests used to connect people, and the creation and fostering of communities or knowledge networks.

A KMS offers integrated services to deploy KM instruments for networks of participants, i.e. active knowledge workers, in knowledge-intensive business processes along the entire knowledge life cycle. KMS can be used by a wide range of cooperative, collaborative, adhocracy and hierarchy communities, virtual organizations, societies and other virtual networks to manage media contents, activities, interactions and workflows, projects, works, networks, departments, privileges, roles, participants and other active users, in order to extract and generate new knowledge and to enhance, leverage and transfer knowledge into new outcomes, providing new services using new formats, interfaces and communication channels.

Some of the advantages claimed for KM systems are:

1. Sharing of valuable organizational information throughout the organizational hierarchy.
2. Can avoid re-inventing the wheel, reducing redundant work.
3. May reduce training time for new employees.
4. Retention of intellectual property after the employee leaves, if such knowledge can be codified.
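To make the document- and taxonomy-based flavors of KMS described above concrete, here is a minimal, illustrative Python sketch of an in-memory knowledge repository. The class and field names (KnowledgeBase, Document, author, subject, organization) are hypothetical; a real KMS would sit on top of a document store, a search engine and an access-control layer.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    """A single knowledge item with taxonomy metadata (author, subject, organization)."""
    title: str
    body: str
    author: str
    subject: str
    organization: str
    tags: set = field(default_factory=set)

class KnowledgeBase:
    """Toy document-based KM store: add documents, then find them by taxonomy terms or keywords."""

    def __init__(self):
        self.documents = []

    def add_document(self, doc: Document) -> None:
        self.documents.append(doc)

    def find_by_taxonomy(self, **criteria) -> list:
        """Return documents whose metadata matches every given taxonomy field, e.g. subject='metallurgy'."""
        return [d for d in self.documents
                if all(getattr(d, k, None) == v for k, v in criteria.items())]

    def search(self, keyword: str) -> list:
        """Naive full-text search over title and body."""
        kw = keyword.lower()
        return [d for d in self.documents if kw in d.title.lower() or kw in d.body.lower()]

if __name__ == "__main__":
    kb = KnowledgeBase()
    # Hypothetical entry echoing the gear-alloy example in the text.
    kb.add_document(Document(
        title="Low-noise gear alloy",
        body="Alloy X-12 reduces sound in gear systems.",
        author="A. Engineer", subject="metallurgy", organization="Engine Design",
        tags={"alloy", "gears", "noise"}))
    print([d.title for d in kb.find_by_taxonomy(subject="metallurgy")])
    print([d.title for d in kb.search("gear")])
```

In this sketch the taxonomy search plays the role of the ontology-based summary (Author, Subject, Organization), while the keyword search stands in for full-text retrieval over the document base.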

ERP

Enterprise resource planning software, or ERP, doesn't live up to its acronym. Forget about planning (it doesn't do much of that) and forget about resource, a throwaway term. But remember the enterprise part. This is ERP's true ambition. It attempts to integrate all departments and functions across a company onto a single computer system that can serve all those different departments' particular needs. That is a tall order: building a single software program that serves the needs of people in finance as well as it does the people in human resources and in the warehouse. Each of those departments typically has its own computer system optimized for the particular ways that the department does its work. But ERP combines them all together into a single, integrated software program that runs off a single database so that the various departments can more easily share information and communicate with each other. That integrated approach can have a tremendous payback if companies install the software correctly.

ERP, which is an abbreviation for Enterprise Resource Planning, is principally an integration of business management practices and modern technology. Information Technology (IT) integrates with the core business processes of a corporate house to streamline and accomplish specific business objectives. Consequently, ERP is an amalgamation of the three most important components: Business Management Practices, Information Technology and Specific Business Objectives.

In simpler words, ERP is a massive software architecture that supports the streaming and distribution of geographically scattered enterprise-wide information across all the functional units of a business house. It provides business management executives with a comprehensive overview of the complete business execution, which in turn influences their decisions in a productive way. At the core of ERP is a well-managed centralized data repository which acquires information from, and supplies information to, the fragmented applications operating on a universal computing platform.

Information in large business organizations is accumulated on various servers across many functional units and is sometimes separated by geographical boundaries. Such information islands can possibly service individual organizational units but fail to enhance enterprise-wide performance, speed and competence.

The term ERP originally referred to the way a large organization planned to use its organization-wide resources. Formerly, ERP systems were used in larger and more industrial types of companies. However, the use of ERP has changed radically over a period of a few years. Today the term can be applied to any type of company, operating in any kind of field and of any magnitude. Today's ERP software architecture can possibly envelop a broad range of enterprise-wide functions and integrate them into a single unified database repository. For instance, functions such as Human Resources, Supply Chain Management, Customer Relationship Management, Finance, Manufacturing, Warehouse Management and Logistics were all previously stand-alone software applications, generally housed with their own applications, database and network; today they can all work under a single umbrella: the ERP architecture.

Take a customer order, for example. Typically, when a customer places an order, that order begins a mostly paper-based journey from in-basket to in-basket around the company, often being keyed and rekeyed into different departments' computer systems along the way. All that lounging around in in-baskets causes delays and lost orders, and all the keying into different computer systems invites errors. Meanwhile, no one in the company truly knows what the status of the order is at any given point because there is no way for the finance department, for example, to get into the warehouse's computer system to see whether the item has been shipped. "You'll have to call the warehouse" is the familiar refrain heard by frustrated customers.

ERP vanquishes the old standalone computer systems in finance, HR, manufacturing and the warehouse, and replaces them with a single unified software program divided into software modules that roughly approximate the old standalone systems. Finance, manufacturing and the warehouse all still get their own software, except now the software is linked together so that someone in finance can look into the warehouse software to see if an order has been shipped. Most vendors' ERP software is flexible enough that you can install some modules without buying the whole package. Many companies, for example, will just install an ERP finance or HR module and leave the rest of the functions for another day.

HOW CAN ERP IMPROVE A COMPANY'S BUSINESS PERFORMANCE?

ERP's best hope for demonstrating value is as a sort of battering ram for improving the way your company takes a customer order and processes it into an invoice and revenue, otherwise known as the order fulfillment process. That is why ERP is often referred to as back-office software. It doesn't handle the up-front selling process (although most ERP vendors have developed CRM software or acquired pure-play CRM providers that can do this); rather, ERP takes a customer order and provides a software road map for automating the different steps along the path to fulfilling it. When a customer service representative enters a customer order into an ERP system, he has all the information necessary to complete the order (the customer's credit rating and order history from the finance module, the company's inventory levels from the warehouse module and the shipping dock's trucking schedule from the logistics module, for example). People in these different departments all see the same information and can update it. When one department finishes with the order it is automatically routed via the ERP system to the next department. To find out where the order is at any point, you need only log in to the ERP system and track it down. With luck, the order process moves like a bolt of lightning through the organization, and customers get their orders faster and with fewer errors than before. ERP can apply that same magic to the other major business processes, such as employee benefits or financial reporting.

In order for a software system to be considered ERP, it must provide a business with a wide collection of functionalities supported by features like flexibility, modularity and openness, breadth of coverage, refined business processes and global focus.

Integration is Key to ERP Systems

Integration is an exceptionally significant ingredient of ERP systems. The integration between business processes helps develop communication and information distribution, leading to a remarkable increase in productivity, speed and performance. The key objective of an ERP system is to integrate information and processes from all functional divisions of an organization and merge them for effortless access and structured workflow. The integration is typically accomplished by constructing a single database repository that communicates with multiple software applications, providing different divisions of an organization with various business statistics and information. Although the perfect configuration would be a single ERP system for an entire organization, many larger organizations usually deploy a single functional system first and slowly interface it with other functional divisions. This type of deployment can be time-consuming and expensive.

The Ideal ERP System

An ERP system would qualify as the best model for enterprise-wide solution architecture if it chains all of the organizational processes listed below together with a central database repository and a fused computing platform (a small sketch of such a shared repository follows the module list below).

Manufacturing

Engineering, resource and capacity planning, material planning, workflow management, shop floor management, quality control, bills of material, manufacturing process, etc.

Financials

Accounts payable, accounts receivable, fixed assets, general ledger, cash management, and billing (contract/service)

Human Resources

Recruitment, benefits, compensation, training, payroll, time and attendance, labour rules, people management

Supply Chain Management

Inventory management, supply chain planning, supplier scheduling, claim processing, sales order administration, procurement planning, transportation and distribution

Projects

Costing, billing, activity management, time and expense

Customer Relationship Management

Sales and marketing, service, commissions, customer contact and after-sales support

Data Warehouse

Generally, this is an information storehouse that can be accessed by organizations, customers, suppliers and employees for their learning and orientation

ERP Systems Improve Productivity, Speed and Performance

Prior to the evolution of the ERP model, each department in an enterprise had its own isolated software application which did not interface with any other system. Such an isolated framework could not synchronize inter-department processes and hence hampered the productivity, speed and performance of the overall organization. This led to issues such as incompatible exchange standards, lack of synchronization, incomplete understanding of the enterprise functioning, unproductive decisions and many more. For example, the financials could not coordinate with the procurement team to plan out purchases according to the availability of money. Hence, deploying a comprehensive ERP system across an organization leads to performance increases, workflow synchronization, standardized information exchange formats, a complete overview of the enterprise functioning, global decision optimization, speed enhancement and much more.
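As a concrete illustration of the shared-repository idea and the order-entry scenario described earlier, here is a minimal Python sketch using the standard-library sqlite3 module. The table and column names (customers, inventory, shipping_schedule) are hypothetical and greatly simplified; the point is only that every module reads and writes the same database, so an order clerk can see finance, warehouse and logistics data in one place.

```python
import sqlite3

# One shared repository for all modules (in-memory here, purely for illustration).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Simplified tables "owned" by different functional modules.
cur.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, credit_rating TEXT);  -- finance module
CREATE TABLE inventory (item TEXT PRIMARY KEY, on_hand INTEGER);                 -- warehouse module
CREATE TABLE shipping_schedule (day TEXT, truck TEXT);                           -- logistics module
""")
cur.execute("INSERT INTO customers VALUES (1, 'Acme Ltd', 'A')")
cur.execute("INSERT INTO inventory VALUES ('widget', 120)")
cur.execute("INSERT INTO shipping_schedule VALUES ('Monday', 'Truck 7')")
conn.commit()

def order_entry_view(customer_id: int, item: str) -> dict:
    """Assemble what a customer-service rep needs to take an order,
    pulling from the finance, warehouse and logistics tables of the shared database."""
    credit = cur.execute("SELECT credit_rating FROM customers WHERE id = ?", (customer_id,)).fetchone()
    stock = cur.execute("SELECT on_hand FROM inventory WHERE item = ?", (item,)).fetchone()
    ship = cur.execute("SELECT day, truck FROM shipping_schedule LIMIT 1").fetchone()
    return {"credit_rating": credit[0], "on_hand": stock[0], "next_shipment": ship}

print(order_entry_view(1, "widget"))
```

Because all modules write to the same repository, the "Where is my order?" question in the earlier example can be answered with one query instead of a phone call to the warehouse.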

Implementation of an ERP System

Implementing an ERP system in an organization is an extremely complex process. It takes a lot of systematic planning, expert consultation and a well-structured approach. Due to its extensive scope, it may even take years to implement in a large organization. Implementing an ERP system will eventually necessitate significant changes to staff and work processes. While it may seem practical for an in-house IT administration to head the project, it is commonly advised that special ERP implementation experts be consulted, since they are specially trained in deploying these kinds of systems.

Organizations generally use ERP vendors or consulting companies to implement their customized ERP system. There are three types of professional services that are provided when implementing an ERP system: Consulting, Customization and Support.

Consulting services are responsible for the initial stages of ERP implementation, where they help an organization go live with its new system, with product training, workflow design, improving the ERP's use in the specific organization, and so on.

Customization services work by extending the use of the new ERP system or changing its use by creating customized interfaces and/or underlying application code. While ERP systems are made for many core routines, there are still some needs that have to be built or customized for a particular organization.

Support services include both support and maintenance of ERP systems, for instance troubleshooting and assistance with ERP issues.

The ERP implementation process goes through five major stages: Structured Planning, Process Assessment, Data Compilation & Cleanup, Education & Testing, and Usage & Evaluation.

1. Structured Planning: the foremost and most crucial stage, where a capable project team is selected, present business processes are studied, information flow within and outside the organization is scrutinized, vital objectives are set and a comprehensive implementation plan is formulated.
2. Process Assessment: the next important stage, where the prospective software capabilities are examined, manual business processes are recognized and standard working procedures are constructed.
3. Data Compilation & Cleanup: helps in identifying the data which is to be converted and the new information that would be needed. The compiled data is then analyzed for accuracy and completeness, throwing away the worthless/unwanted information (a small sketch of this step follows the list).
4. Education & Testing: aids in proofing the system and educating the users in the ERP's mechanisms. The complete database is tested and verified by the project team using multiple testing methods and processes. Broad in-house training is held where all the concerned users are oriented in the functioning of the new ERP system.
5. Usage & Evaluation: the final and ongoing stage of the ERP. The newly implemented ERP is deployed live within the organization and is regularly checked by the project team for any flaws or errors.
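Step 3 above (Data Compilation & Cleanup) can be pictured with a small, hypothetical Python sketch: legacy records are checked for completeness, duplicates are dropped, and only clean rows are kept for migration. The field names and rules are invented purely for illustration and are not tied to any particular ERP product.

```python
# Hypothetical legacy customer records pulled from several old systems.
legacy_records = [
    {"id": "C001", "name": "Acme Ltd", "email": "sales@acme.example"},
    {"id": "C001", "name": "Acme Ltd", "email": "sales@acme.example"},  # duplicate
    {"id": "C002", "name": "", "email": "info@nowhere.example"},        # incomplete: no name
    {"id": "C003", "name": "Globex", "email": "contact@globex.example"},
]

REQUIRED_FIELDS = ("id", "name", "email")

def clean(records):
    """Keep only complete, non-duplicate records; everything else is set aside for review."""
    seen, kept, discarded = set(), [], []
    for rec in records:
        complete = all(rec.get(f) for f in REQUIRED_FIELDS)
        duplicate = rec["id"] in seen
        if complete and not duplicate:
            seen.add(rec["id"])
            kept.append(rec)
        else:
            discarded.append(rec)
    return kept, discarded

kept, discarded = clean(legacy_records)
print(f"{len(kept)} records ready for migration, {len(discarded)} discarded")
```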

Advantages of ERP Systems

There are many advantages of implementing an ERP system. A few of them are listed below:

- A perfectly integrated system chaining all the functional areas together
- The capability to streamline different organizational processes and workflows
- The ability to effortlessly communicate information across various departments
- Improved efficiency, performance and productivity levels
- Enhanced tracking and forecasting
- Improved customer service and satisfaction

Disadvantages of ERP Systems

While advantages usually outweigh disadvantages for most organizations implementing an ERP system, here are some of the most common obstacles experienced:

- The scope of customization is limited in several circumstances
- The present business processes have to be rethought to make them synchronize with the ERP
- ERP systems can be extremely expensive to implement
- There could be lack of continuous technical support
- ERP systems may be too rigid for specific organizations that are either new or want to move in a new direction in the near future

Commercial applications

Manufacturing
Engineering, bills of material, work orders, scheduling, capacity, workflow management, quality control, cost management, manufacturing process, manufacturing projects, manufacturing flow

Supply chain management
Order to cash, inventory, order entry, purchasing, product configurator, supply chain planning, supplier scheduling, inspection of goods, claim processing, commission calculation

Financials
General ledger, cash management, accounts payable, accounts receivable, fixed assets

Project management
Costing, billing, time and expense, performance units, activity management

Human resources
Human resources, payroll, training, time and attendance, rostering, benefits

Customer relationship management
Sales and marketing, commissions, service, customer contact, call-center support

Data services
Various "self-service" interfaces for customers, suppliers and/or employees

Access control
Management of user privileges for various processes

Business Process Re-engineering (BPR)

Business Process Re-engineering (BPR) is the strategic analysis of business processes and the planning and implementation of improved business processes. The analysis is often customer-centred and holistic in approach. BPR is a multi-disciplinary subject and should involve more than IT specialists. Nevertheless, IT can support BPR in many ways: software can be used to help with gathering information about the current business organisation, and workflow and process analysis tools, business modelling and simulation, on-line analytic processing (OLAP) and data mining all have a role to play.

Definition

Business process reengineering (BPR) is the analysis and redesign of workflow within and between enterprises. BPR reached its heyday in the early 1990s when Michael Hammer and James Champy published their best-selling book, "Reengineering the Corporation". The authors promoted the idea that sometimes radical redesign and reorganization of an enterprise (wiping the slate clean) was necessary to lower costs and increase quality of service, and that information technology was the key enabler for that radical change.

Hammer and Champy felt that the design of workflow in most large corporations was based on assumptions about technology, people, and organizational goals that were no longer valid. They suggested seven principles of reengineering to streamline the work process and thereby achieve significant levels of improvement in quality, time management, and cost:

1. Organize around outcomes, not tasks.
2. Identify all the processes in an organization and prioritize them in order of redesign urgency.
3. Integrate information processing work into the real work that produces the information.
4. Treat geographically dispersed resources as though they were centralized.
5. Link parallel activities in the workflow instead of just integrating their results.
6. Put the decision point where the work is performed, and build control into the process.
7. Capture information once and at the source.

By the mid-1990s, BPR gained the reputation of being a nice way of saying "downsizing." According to Hammer, lack of sustained management commitment and leadership, unrealistic scope and expectations, and resistance to change prompted management to abandon the concept of BPR and embrace the next new methodology, enterprise resource planning (ERP).

Business process reengineering (BPR) began as a private sector technique to help organizations fundamentally rethink how they do their work in order to dramatically improve customer service, cut operational costs, and become world-class competitors. A key stimulus for reengineering has been the continuing development and deployment of sophisticated information systems and networks. Leading organizations are becoming bolder in using this technology to support innovative business processes, rather than refining current ways of doing work.[1]

[Figure: Reengineering guidance and the relationship of mission and work processes to information technology.]

Business Process Reengineering (BPR) is basically the fundamental rethinking and radical re-design of an organization's existing resources. It is more than just business improvement.

It is an approach for redesigning the way work is done to better support the organization's mission and reduce costs. Reengineering starts with a high-level assessment of the organization's mission, strategic goals, and customer needs. Basic questions are asked, such as "Does our mission need to be redefined? Are our strategic goals aligned with our mission? Who are our customers?" An organization may find that it is operating on questionable assumptions, particularly in terms of the wants and needs of its customers. Only after the organization rethinks what it should be doing does it go on to decide how best to do it.[1]

Within the framework of this basic assessment of mission and goals, reengineering focuses on the organization's business processes, the steps and procedures that govern how resources are used to create products and services that meet the needs of particular customers or markets. As a structured ordering of work steps across time and place, a business process can be decomposed into specific activities, measured, modeled, and improved. It can also be completely redesigned or eliminated altogether. Reengineering identifies, analyzes, and redesigns an organization's core business processes with the aim of achieving dramatic improvements in critical performance measures, such as cost, quality, service, and speed.

Reengineering recognizes that an organization's business processes are usually fragmented into subprocesses and tasks that are carried out by several specialized functional areas within the organization. Often, no one is responsible for the overall performance of the entire process. Reengineering maintains that optimizing the performance of subprocesses can result in some benefits, but cannot yield dramatic improvements if the process itself is fundamentally inefficient and outmoded. For that reason, reengineering focuses on redesigning the process as a whole in order to achieve the greatest possible benefits to the organization and its customers. This drive for realizing dramatic improvements by fundamentally rethinking how the organization's work should be done distinguishes reengineering from process improvement efforts that focus on functional or incremental improvement.

HISTORY

In 1990, Michael Hammer, a former professor of computer science at the Massachusetts Institute of Technology (MIT), published an article in the Harvard Business Review in which he claimed that the major challenge for managers is to obliterate non-value-adding work, rather than using technology for automating it.[2] This statement implicitly accused managers of having focused on the wrong issues, namely that technology in general, and more specifically information technology, has been used primarily for automating existing processes rather than as an enabler for making non-value-adding work obsolete.

Hammer's claim was simple: most of the work being done does not add any value for customers, and this work should be removed, not accelerated through automation. Instead, companies should reconsider their processes in order to maximize customer value, while minimizing the consumption of resources required for delivering their product or service. A similar idea was advocated by Thomas H. Davenport and J. Short, at that time at the Ernst & Young research center, in a paper published in the Sloan Management Review in 1990, the same year Hammer published his paper.[3]

This idea, to review a company's business processes without bias, was rapidly adopted by a huge number of firms, which were striving for renewed competitiveness, which they had lost due to the market entrance of foreign competitors, their inability to satisfy customer needs, and their insufficient cost structure. Even well-established management thinkers, such as Peter Drucker and Tom Peters, accepted and advocated BPR as a new tool for (re-)achieving success in a dynamic world. During the following years, a fast-growing number of publications, books as well as journal articles, were dedicated to BPR, and many consulting firms embarked on this trend and developed BPR methods. However, critics were fast to claim that BPR was a way to dehumanize the work place, increase managerial control, and justify downsizing, i.e. major reductions of the work force,[4] and a rebirth of Taylorism under a different label.

Despite this critique, reengineering was adopted at an accelerating pace, and by 1993 as many as 65% of the Fortune 500 companies claimed either to have initiated reengineering efforts or to have plans to do so. This trend was fueled by the fast adoption of BPR by the consulting industry, but also by the study Made in America, conducted by MIT, which showed how companies in many US industries had lagged behind their foreign counterparts in terms of competitiveness, time-to-market and productivity.

Development after 1995

With the publication of critiques in 1995 and 1996 by some of the early BPR proponents, coupled with abuses and misuses of the concept by others, the reengineering fervor in the U.S. began to wane. Since then, considering business processes as a starting point for business analysis and redesign has become a widely accepted approach and is a standard part of the change methodology portfolio, but it is typically performed in a less radical way than originally proposed. More recently, the concept of Business Process Management (BPM) has gained major attention in the corporate world and can be considered a successor to the BPR wave of the 1990s, as it is equally driven by a striving for process efficiency supported by information technology. Equivalently to the critique brought forward against BPR, BPM is now accused of focusing on technology and disregarding the people aspects of change.

Definitions from notable publications in the field include:

"... the fundamental rethinking and radical redesign of business processes to achieve dramatic improvements in critical contemporary measures of performance, such as cost, quality, service, and speed "encompasses the envisioning of new work strategies, the actual process design activity, and the implementation of the change in all its complex technological, human, and organizational dimensions Additionally, Davenport (ibid.) points out the major difference between BPR and other approaches to organization development (OD), especially the continuous improvement or TQM movement, when he states: "Today firms must seek not fractional, but multiplicative levels of improvement 10x rather than 10%." Finally, Johansson[7] provide a description of BPR relative to other process-oriented views, such as Total Quality Management (TQM) and Just-in-time (JIT), and state: "Business Process Reengineering, although a close relative, seeks radical rather than merely continuous improvement. It escalates the efforts of JIT and TQM to make process orientation a strategic tool and a core competence of the organization. BPR concentrates on core business processes, and uses the specific techniques

In order to achieve the major improvements BPR is seeking, changing structural organizational variables and other ways of managing and performing work is often considered insufficient on its own. To reap the achievable benefits fully, the use of information technology (IT) is conceived as a major contributing factor. While IT traditionally has been used for supporting the existing business functions, i.e. for increasing organizational efficiency, it now plays a role as an enabler of new organizational forms and patterns of collaboration within and between organizations.

BPR derives its existence from different disciplines, and four major areas can be identified as being subjected to change in BPR - organization, technology, strategy, and people - where a process view is used as the common framework for considering these dimensions. The approach can be graphically depicted by a modification of "Leavitt's diamond". Business strategy is the primary driver of BPR initiatives, and the other dimensions are governed by strategy's encompassing role. The organization dimension reflects the structural elements of the company, such as hierarchical levels, the composition of organizational units, and the distribution of work between them. Technology is concerned with the use of computer systems and other forms of communication technology in the business. In BPR, information technology is generally considered as playing a role as an enabler of new forms of organizing and collaborating, rather than supporting existing business functions. The people / human resources dimension deals with aspects such as education, training, motivation and reward systems. The concept of business processes - interrelated activities aiming at creating a value-added output for a customer - is the basic underlying idea of BPR. These processes are characterized by a number of attributes: process ownership, customer focus, value adding, and cross-functionality.

The role of information technology

Information technology (IT) has historically played an important role in the reengineering concept. It is considered by some as a major enabler for new forms of working and collaborating within an organization and across organizational borders. Early BPR literature identified several so-called disruptive technologies that were supposed to challenge traditional wisdom about how work should be performed:

- Shared databases, making information available at many places
- Expert systems, allowing generalists to perform specialist tasks
- Telecommunication networks, allowing organizations to be centralized and decentralized at the same time
- Decision-support tools, allowing decision-making to be a part of everybody's job
- Wireless data communication and portable computers, allowing field personnel to work independently of an office
- Interactive videodisk, to get in immediate contact with potential buyers
- Automatic identification and tracking, allowing things to tell where they are, instead of requiring to be found
- High performance computing, allowing on-the-fly planning and revision

In the mid-1990s, workflow management systems in particular were considered a significant contributor to improved process efficiency. ERP (Enterprise Resource Planning) vendors, such as SAP, JD Edwards, Oracle and PeopleSoft, also positioned their solutions as vehicles for business process redesign and improvement.

Research & Methodology

Although the labels and steps differ slightly, the early methodologies that were rooted in IT-centric BPR solutions share many of the same basic principles and elements. The following outline is one such model, based on the PRLC (Process Reengineering Life Cycle) approach developed by Guha.

Simplified schematic outline of using a business process approach, exemplified for pharmaceutical R&D:

1. Structural organization with functional units
2. Introduction of New Product Development as a cross-functional process
3. Re-structuring and streamlining activities, removal of non-value-adding tasks

Benefiting from the lessons learned from the early adopters, some BPR practitioners advocated a change in emphasis to a customer-centric, as opposed to an IT-centric, methodology. One such methodology, which also incorporated a Risk and Impact Assessment to account for the impact that BPR can have on jobs and operations, was described by Lon Roberts (1994). Roberts also stressed the use of change management tools to proactively address resistance to change, a factor linked to the demise of many reengineering initiatives that looked good on the drawing board. Some items to use on a process analysis checklist are: reduce handoffs, centralize data, reduce delays, free resources faster, combine similar activities. Also within the management consulting industry, a significant number of methodological approaches have been developed.

CRITIQUE

Reengineering has earned a bad reputation because such projects have often resulted in massive layoffs. This reputation is not altogether unwarranted, since companies have often downsized under the banner of reengineering. Further, reengineering has not always lived up to its expectations. The main reasons seem to be that:

- Reengineering assumes that the factor that limits an organization's performance is the ineffectiveness of its processes (which may or may not be true) and offers no means of validating that assumption.
- Reengineering assumes the need to start the process of performance improvement with a "clean slate," i.e. totally disregarding the status quo.
- According to Eliyahu M. Goldratt (and his Theory of Constraints), reengineering does not provide an effective way to focus improvement efforts on the organization's constraint.

There was considerable hype surrounding the introduction of Reengineering the Corporation (partially due to the fact that the authors of the book reportedly bought numbers of copies to promote it to the top of bestseller lists). Abrahamson (1996) showed that fashionable management terms tend to follow a lifecycle, which for reengineering peaked between 1993 and 1996 (Ponzi and Koenig 2002). They argue that reengineering was in fact nothing new (for example, when Henry Ford implemented the assembly line in 1908, he was in fact reengineering, radically changing the way of thinking in an organization). Dubois (2002) highlights the value of signaling terms such as reengineering, giving it a name, and stimulating it. At the same time, there can be a danger in the usage of such fashionable concepts as mere ammunition to implement particular reforms.

The most frequent and harsh critique against BPR concerns the strict focus on efficiency and technology and the disregard of the people in the organization that is subjected to a reengineering initiative. Very often, the label BPR was used for major workforce reductions. Thomas Davenport, an early BPR proponent, stated: "When I wrote about 'business process redesign' in 1990, I explicitly said that using it for cost reduction alone was not a sensible goal. And consultants Michael Hammer and James Champy, the two names most closely associated with reengineering, have insisted all along that layoffs shouldn't be the point. But the fact is, once out of the bottle, the reengineering genie quickly turned ugly." Michael Hammer similarly admitted: "I wasn't smart enough about that. I was reflecting my engineering background and was insufficiently appreciative of the human dimension. I've learned that's critical."

Other criticisms brought forward against the BPR concept include:

- It never changed management thinking, which is actually among the largest causes of failure in an organization.
- Lack of management support for the initiative and thus poor acceptance in the organization.
- Exaggerated expectations regarding the potential benefits from a BPR initiative, and consequently failure to achieve the expected results.
- Underestimation of the resistance to change within the organization.
- Implementation of generic so-called best-practice processes that do not fit specific company needs.

- Overtrust in technology solutions.
- Performing BPR as a one-off project with limited strategy alignment and long-term perspective.
- Poor project management.

UNIT 4: Introduction to E-Business

E-commerce in India

E-commerce companies in India offer some of the most tangible and finest e-commerce solutions. E-commerce development firms in India provide high-end e-commerce solutions, taking utmost care of the privacy and security of the e-commerce website. E-commerce services include shopping carts, database programmers, graphic design services, graphics, e-business, Flash designs, etc.

Electronic commerce (or e-commerce) encompasses all business conducted by means of computer networks. Advances in telecommunications and computer technologies in recent years have made computer networks an integral part of the economic infrastructure. More and more companies are facilitating transactions over the web. There has been tremendous competition to target each and every computer owner who is connected to the Web. Although business-to-business transactions play an important part in the e-commerce market, a share of e-commerce revenues in developed countries is generated from business-to-consumer transactions. E-commerce provides multiple benefits to consumers in the form of availability of goods at lower cost, wider choice and time savings. People can buy goods with a click of a mouse button without moving out of their house or office. Similarly, online services such as banking, ticketing (including airlines, bus, railways), bill payments, hotel booking, etc. have been of tremendous benefit for customers. Most experts believe that overall e-commerce will increase exponentially in the coming years. Business-to-business transactions will represent the largest revenue, but online retailing will also enjoy drastic growth. Online businesses like financial services, travel, entertainment, and groceries are all likely to grow.

For developing countries like India, e-commerce offers considerable opportunity. E-commerce in India is still at a nascent stage, but even the most pessimistic projections indicate a boom. It is believed that the low cost of personal computers, a growing installed base for Internet use, and an increasingly competitive Internet Service Provider (ISP) market will help fuel e-commerce growth in Asia's second most populous nation. Amongst Asian nations, the growth of e-commerce between 1997 and 2003 was highest in India. Credit Lyonnais forecasts that India will have 30 million Internet users by 2004 and that the potential Internet market will reach 47 million households in 2005. According to a McKinsey-Nasscom report, e-commerce transactions in India are expected to reach $100 billion by the year 2008. The Indian middle class of 288 million people is equal to the entire U.S. consumer base. This makes India a really attractive market for e-commerce.

To make a successful e-commerce transaction, both the payment and delivery services must be made efficient. There has been a rise in the number of companies taking up e-commerce in the recent past. Major Indian portal sites have also shifted towards e-commerce instead of depending on advertising revenue. Many sites are now selling a diverse range of products and services, from flowers, greeting cards, and movie tickets to groceries, electronic gadgets, and computers. With stock exchanges coming online, the time for true e-commerce in India has finally arrived. On the negative side, there are many challenges faced by e-commerce sites in India. The relatively small credit card population and lack of uniform credit agencies create a variety of payment challenges unknown in the United States and Western Europe.
Delivery of goods to consumers by couriers and postal services is not very reliable in smaller cities, towns and rural areas.

However, many Indian banks have put Internet banking facilities in place for the upcoming e-commerce market. The speed post and courier systems have also improved tremendously in recent years. Modern computer technology like Secure Sockets Layer (SSL) helps to protect against payment fraud, and to share information with suppliers and business partners. With further improvement in payment and delivery systems it is expected that India will soon become a major player in the e-commerce market. While many companies, organizations, and communities in India are beginning to take advantage of the potential of e-commerce, critical challenges remain to be overcome before e-commerce becomes an asset for common people.

The following is a list of the best e-commerce companies in India:

ASA Systel Communications Pvt Ltd: A leading e-commerce company in India which provides innovative and superb-quality web services encompassing the building of e-commerce related websites and portals. The company also uses the latest payment modes and security. The company has its offices in Chennai and Lucknow and will shortly set up offices in Delhi, Mumbai, Kathmandu and Bhopal.

Candid Info: This Indian e-commerce company is based in New Delhi. It is a renowned offshore outsource web designing, development and e-commerce company. It offers offshore web development, designing and SEO solutions for large corporations and SMEs. The company specializes in web hosting, e-commerce solutions, portfolios, SEO, blogs, etc.

Chenab Information Technologies Private Limited: This e-commerce company in India comprises web-enabled business and web-based services, airline and security systems, using state-of-the-art Internet technologies and tools. The company has three software development centres in Mumbai and an overseas branch office in New York. It is the first software company across the globe to get ISO 9001:2000 certification.

Eurolink Systems Limited: This leading e-commerce company provides consulting and e-business solutions, FlexTCA Systems and Trillium Protocol services to the global community. In order to be compliant with specific customer requirements, the company combines customized and COTS HW/SW. The company has offices in England, the U.S., Switzerland and India, with about 200 employees.

HashPro Technologies: It offers e-business and traditional analysis, development, implementation, design and strategic planning. It is a leading provider of integrated talent management software in India and a key technology consulting provider. It renders services such as e-commerce hosting, Internet marketing and human resources. The e-Workforce initiative of the company will enable it to become a 100 percent e-Corporation.

Compare Info Base: The company is a leading provider of e-commerce portals and IT solutions. The company manages about 1500 websites and portals with 4000 domain names. It has a web presence in maps, software development, GIS, travel, education, media, greetings, etc. The company is a significant name in developing and selling e-business. It specializes in content development services, website development services, PHP programming and development, etc. It has its offices in Mumbai, Kolkata, San Jose and Delhi.

Sanver E-solutions: This company is based in Mumbai. They believe that Information Technology is a means to achieving business objectives. It is an IT consulting and solutions provider which offers personalized business solutions using Information and Communication Technology. It renders other services such as CRM and SFA, SugarCRM hosting, implementation, etc.

Planet Asia: This e-commerce company in India uses its track record and deep experience in externalized applications to produce high-quality B2SPEC (Business to Partner, Supplier, Customer) solutions for global enterprises.

Candid Web Technology: This fast-growing e-commerce company in India is a provider of complete web solutions for the design and development of dynamic websites. The clients of the e-commerce company range from small-scale companies to corporate organizations.

Internet

A global network connecting millions of computers. More than 100 countries are linked into exchanges of data, news and opinions. Unlike online services, which are centrally controlled, the Internet is decentralized by design. Each Internet computer, called a host, is independent. Its operators can choose which Internet services to use and which local services to make available to the global Internet community. Remarkably, this anarchy by design works exceedingly well. There are a variety of ways to access the Internet. Most online services, such as America Online, offer access to some Internet services. It is also possible to gain access through a commercial Internet Service Provider (ISP). The Internet is named after the Internet Protocol, the standard communications protocol used by every computer on the Internet. The Internet can powerfully leverage your ability to find, manage, and share information.

The conceptual foundation for the creation of the Internet was largely laid by three individuals and a research conference, each of which changed the way we thought about technology by accurately predicting its future:

- Vannevar Bush wrote the first visionary description of the potential uses for information technology with his description of the "memex" automated library system.
- Norbert Wiener invented the field of Cybernetics, inspiring future researchers to focus on the use of technology to extend human capabilities.
- The 1956 Dartmouth Artificial Intelligence conference crystallized the concept that technology was improving at an exponential rate, and provided the first serious consideration of the consequences.
- Marshall McLuhan made the idea of a global village interconnected by an electronic nervous system part of our popular culture.

In 1957, the Soviet Union launched the first satellite, Sputnik I, triggering US President Dwight Eisenhower to create the ARPA agency to regain the technological lead in the arms race. ARPA appointed J.C.R. Licklider to head the new IPTO organization with a mandate to further the research of the SAGE program and help protect the US against a space-based nuclear attack. Licklider evangelized within the IPTO about the potential benefits of a country-wide communications network, influencing his successors to hire Lawrence Roberts to implement his vision. Roberts led development of the network, based on the new idea of packet switching invented by Paul Baran at RAND, and a few years later by Donald Davies at the UK National Physical Laboratory. A special computer called an Interface Message Processor was developed to realize the design, and the ARPANET went live in early October 1969. The first communications were between Leonard Kleinrock's research center at the University of California at Los Angeles, and Douglas Engelbart's center at the Stanford Research Institute.

The first networking protocol used on the ARPANET was the Network Control Program. In 1983, it was replaced with the TCP/IP protocol invented by Robert Kahn, Vinton Cerf, and others, which quickly became the most widely used network protocol in the world. In 1990, the ARPANET was retired and transferred to the NSFNET. The NSFNET was soon connected to the CSNET, which linked universities around North America, and then to the EUnet, which connected research facilities in Europe. Thanks in part to the NSF's enlightened management, and fueled by the popularity of the web, the use of the Internet exploded after 1990, causing the US Government to transfer management to independent organizations starting in 1995.

WORLD WIDE WEB

A system of Internet servers that support specially formatted documents. The documents are formatted in a markup language called HTML (HyperText Markup Language) that supports links to other documents, as well as graphics, audio, and video files. This means you can jump from one document to another simply by clicking on hot spots. Not all Internet servers are part of the World Wide Web.

The World Wide Web, abbreviated as WWW and commonly known as the Web, is a system of interlinked hypertext documents accessed via the Internet. With a web browser, one can view web pages that may contain text, images, videos, and other multimedia and navigate between them via hyperlinks. Using concepts from earlier hypertext systems, English engineer and computer scientist Sir Tim Berners-Lee, now the Director of the World Wide Web Consortium, wrote a proposal in March 1989 for what would eventually become the World Wide Web.[1] At CERN in Geneva, Switzerland, Berners-Lee and Belgian computer scientist Robert Cailliau proposed in 1990 to use "HyperText ... to link and access information of various kinds as a web of nodes in which the user can browse at will",[2] and publicly introduced the project in December.[3] "The World-Wide Web (W3) was developed to be a pool of human knowledge, and human culture, which would allow collaborators in remote sites to share their ideas and all aspects of a common project."

The terms Internet and World Wide Web are often used in everyday speech without much distinction. However, the Internet and the World Wide Web are not one and the same. The Internet is a global system of interconnected computer networks. In contrast, the Web is one of the services that runs on the Internet.
It is a collection of interconnected documents and other resources, linked by hyperlinks and URLs. In short, the Web is an application running on the Internet.[22]

Viewing a web page on the World Wide Web normally begins either by typing the URL of the page into a web browser or by following a hyperlink to that page or resource.

The web browser then initiates a series of communication messages, behind the scenes, in order to fetch and display it. First, the server-name portion of the URL is resolved into an IP address using the global, distributed Internet database known as the Domain Name System (DNS). This IP address is necessary to contact the web server. The browser then requests the resource by sending an HTTP request to the web server at that particular address. In the case of a typical web page, the HTML text of the page is requested first and parsed immediately by the web browser, which then makes additional requests for images and any other files that complete the page. Statistics measuring a website's popularity are usually based either on the number of page views or on associated server 'hits' (file requests) that take place. Any images and other resources are incorporated to produce the on-screen web page that the user sees.

Most web pages contain hyperlinks to other related pages and perhaps to downloadable files, source documents, definitions and other web resources. Such a collection of useful, related resources, interconnected via hypertext links, is dubbed a web of information. Publication on the Internet created what Tim Berners-Lee first called the WorldWideWeb.

Many domain names used for the World Wide Web begin with www because of the long-standing practice of naming Internet hosts (servers) according to the services they provide. The hostname for a web server is often www, in the same way that it may be ftp for an FTP server, and news or nntp for a USENET news server. These host names appear as Domain Name System (DNS) subdomain names, as in www.example.com. The use of 'www' as a subdomain name is not required by any technical or policy standard; indeed, the first ever web server was called nxoc01.cern.ch,[25] and many web sites exist without it. Many established websites still use 'www', or they invent other subdomain names such as 'www2', 'secure', etc. Many such web servers are set up so that both the domain root (e.g., example.com) and the www subdomain (e.g., www.example.com) refer to the same site; others require one form or the other, or they may map to different web sites. The use of a subdomain name is useful for load balancing incoming web traffic by creating a CNAME record that points to a cluster of web servers. Since, currently, only a subdomain can be CNAMEd, the same result cannot be achieved by using the bare domain root.

When a user submits an incomplete website address to a web browser in its address bar input field, some web browsers automatically try adding the prefix "www" to the beginning of it and possibly ".com", ".org" and ".net" at the end, depending on what might be missing. For example, entering 'microsoft' may be transformed to http://www.microsoft.com/ and 'openoffice' to http://www.openoffice.org. This feature started appearing in early versions of Mozilla Firefox, when it still had the working title 'Firebird' in early 2003.[26] It is reported that Microsoft was granted a US patent for the same idea in 2008, but only for mobile devices.[27]

The scheme specifier (http:// or https://) in URIs refers to the Hypertext Transfer Protocol and to HTTP Secure respectively, and so defines the communication protocol to be used for the request and response.
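The fetch sequence described above (resolve the server name through DNS, then send an HTTP request and read the response) can be sketched with Python's standard library alone. The host example.com is just a placeholder; a real browser additionally caches, fetches images and scripts in parallel, and renders the result.

```python
import socket
import http.client

host = "example.com"  # placeholder host used purely for illustration

# Step 1: the server-name portion of the URL is resolved to an IP address via DNS.
ip_address = socket.gethostbyname(host)
print(f"{host} resolves to {ip_address}")

# Step 2: an HTTP(S) request is sent to the web server at that address.
conn = http.client.HTTPSConnection(host, timeout=10)
conn.request("GET", "/")
response = conn.getresponse()

# Step 3: the HTML of the page is read; a browser would now parse it and
# request any images or other files referenced by the page.
html = response.read()
print(response.status, response.reason, f"{len(html)} bytes of HTML received")
conn.close()
```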
The HTTP protocol is fundamental to the operation of the World Wide Web, and the encryption involved in HTTPS adds an essential layer if confidential information such as passwords or banking information is to be exchanged over the public Internet. Web browsers usually prepend the scheme to URLs too, if omitted.

Internet architecture

The Internet is by definition a meta-network, a constantly changing collection of thousands of individual networks intercommunicating with a common protocol. The Internet's architecture is described in its name, a short form of the compound word "inter-networking". This architecture is based on the very specification of the standard TCP/IP protocol, designed to connect any two networks which may be very different in internal hardware, software, and technical design.

Once two networks are interconnected, communication with TCP/IP is enabled end-to-end, so that any node on the Internet has the near magical ability to communicate with any other, no matter where they are. This openness of design has enabled the Internet architecture to grow to a global scale.

In practice, the Internet technical architecture looks a bit like a multi-dimensional river system, with small tributaries feeding medium-sized streams feeding large rivers. For example, an individual's access to the Internet is often from home over a modem to a local Internet service provider who connects to a regional network connected to a national network. At the office, a desktop computer might be connected to a local area network with a company connection to a corporate Intranet connected to several national Internet service providers. In general, small local Internet service providers connect to medium-sized regional networks which connect to large national networks, which then connect to very large bandwidth networks on the Internet backbone. Most Internet service providers have several redundant network cross-connections to other providers in order to ensure continuous availability.
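To make end-to-end TCP/IP communication concrete, here is a minimal, self-contained Python sketch in which a tiny echo server and a client exchange a message over a TCP socket on the local machine. The loopback address and port are arbitrary choices for the demo; on the real Internet the same calls work across interconnected networks, with only the address changing.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 54321  # loopback address, used only for this self-contained demo

# Create the listening socket up front so the client cannot connect too early.
server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_sock.bind((HOST, PORT))
server_sock.listen(1)

def echo_once():
    """Accept one TCP connection and echo back whatever it receives."""
    conn, _ = server_sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))

threading.Thread(target=echo_once, daemon=True).start()

# The other end of the conversation: any node on a TCP/IP network could do exactly this.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
    client.connect((HOST, PORT))
    client.sendall(b"hello over TCP/IP")
    print(client.recv(1024).decode())  # prints: hello over TCP/IP

server_sock.close()
```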

INTRANET A network based on TCP/IP protocols (an internet) belonging to an organization, usually a corporation, accessible only by the organization's members, employees, or others with authorization. An intranet's Web sites look and act just like any other Web sites, but the firewall surrounding an intranet fends off unauthorized access. Like the Internet itself, intranets are used to share information. Secure intranets are now the fastest-growing segment of the Internet because they are much less expensive to build and manage than private networks based on proprietary protocols. An intranet is a private computer network that uses Internet Protocol technologies to securely share any part of an organization's information or operational systems within that organization. The term is used in contrast to internet, a network between organizations, and instead refers to a network within an organization. Sometimes the term refers only to the organization's internal website, but it may be a more extensive part of the organization's information technology infrastructure. It may host multiple private websites and constitute an important component and focal point of internal communication and collaboration. Characteristics An intranet is built from the same concepts and technologies used for the Internet, such as client-server computing and the Internet Protocol Suite (TCP/IP). Any of the well known Internet protocols may be found in an intranet, such as HTTP (web services), SMTP (e-mail), and FTP (file transfer). Internet technologies are often deployed to provide modern interfaces to legacy information systems hosting corporate data. An intranet can be understood as a private version of the Internet, or as a private extension of the Internet confined to an organization. The first intranet websites and home pages began to appear in organizations in 1990-1991. Although not officially noted, the term intranet first became commonplace among early adopters, such as universities and technology corporations, in 1992.
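To make the point concrete that an intranet reuses ordinary Internet technology, the hedged Python sketch below starts a plain HTTP server on a hypothetical private address; only the private addressing and the surrounding firewall make the service "internal".

# Minimal sketch: an "intranet" web service is just an ordinary HTTP
# server reachable only on a private network. The address 10.0.0.5 is
# a hypothetical internal address, invented for this example.
from http.server import HTTPServer, SimpleHTTPRequestHandler

INTERNAL_ADDRESS = ("10.0.0.5", 8080)  # private RFC 1918 address and port

server = HTTPServer(INTERNAL_ADDRESS, SimpleHTTPRequestHandler)
print("Serving internal pages on http://%s:%d/" % INTERNAL_ADDRESS)
server.serve_forever()  # a firewall/gateway keeps outside users away

Exactly the same server code could serve a public website; the distinction is one of network boundary and organization, not of technology.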

Intranets are also contrasted with extranets. While intranets are generally restricted to employees of the organization, extranets may also be accessed by customers, suppliers, or other approved parties. Extranets extend a private network onto the Internet with special provisions for access, authorization, and authentication (AAA protocol). An organization's intranet does not necessarily have to provide access to the Internet. When such access is provided it is usually through a network gateway with a firewall, shielding the intranet from unauthorized external access. The gateway often also implements user authentication, encryption of messages, and virtual private network (VPN) connectivity for off-site employees to access company information, computing resources and internal communications. Uses Increasingly, intranets are being used to deliver tools and applications, e.g., collaboration (to facilitate working in groups and teleconferencing) or sophisticated corporate directories, sales and customer relationship management tools, project management etc., to advance productivity. Intranets are also being used as corporate culture-change platforms. For example, large numbers of employees discussing key issues in an intranet forum application could lead to new ideas in management, productivity, quality, and other corporate issues. In large intranets, website traffic is often similar to public website traffic and can be better understood by using web metrics software to track overall activity. User surveys also improve intranet website effectiveness. Larger businesses allow users within their intranet to access the public internet through firewall servers. They have the ability to screen messages coming and going, keeping security intact. When part of an intranet is made accessible to customers and others outside the business, that part becomes part of an extranet. Businesses can send private messages through the public network, using special encryption/decryption and other security safeguards to connect one part of their intranet to another. Intranet user-experience, editorial, and technology teams work together to produce in-house sites. Most commonly, intranets are managed by the communications, HR or CIO departments of large organizations, or some combination of these. Because of the scope and variety of content and the number of system interfaces, intranets of many organizations are much more complex than their respective public websites. Intranets and their use are growing rapidly. According to the Intranet Design Annual 2007 from Nielsen Norman Group, the number of pages on participants' intranets averaged 200,000 over the years 2001 to 2003 and has grown to an average of 6 million pages over 2005-2007. BENEFITS

Workforce productivity: Intranets can also help users to locate and view information faster and use applications relevant to their roles and responsibilities. With the help of a web browser interface, users can access data held in any database the organization wants to make available, anytime and, subject to security provisions, from anywhere within the company's workstations, increasing employees' ability to perform their jobs faster, more accurately, and with confidence that they have the right information. It also helps to improve the services provided to the users. Time: With intranets, organizations can make more information available to employees on a "pull" basis (i.e., employees can link to relevant information at a time which suits them) rather than being deluged indiscriminately by emails. Communication: Intranets can serve as powerful tools for communication within an organization, vertically and horizontally. From a communications standpoint, intranets are useful to communicate strategic initiatives that have a global reach throughout the organization. The type of information that can easily be conveyed is the purpose
of the initiative and what the initiative is aiming to achieve, who is driving the initiative, results achieved to date, and who to speak to for more information. By providing this information on the intranet, staff have the opportunity to keep up to date with the strategic focus of the organization. Some examples of communication would be chat, email, and/or blogs. A great real-world example of where an intranet helped a company communicate is when Nestle had a number of food processing plants in Scandinavia. Their central support system had to deal with a number of queries every day (McGovern, Gerry). When Nestle decided to invest in an intranet, they quickly realized the savings. McGovern says the savings from the reduction in query calls was substantially greater than the investment in the intranet. Web publishing allows cumbersome corporate knowledge to be maintained and easily accessed throughout the company using hypermedia and Web technologies. Examples include employee manuals, benefits documents, company policies, business standards, newsfeeds, and even training materials, all of which can be accessed using common Internet standards (Acrobat files, Flash files, CGI applications). Because each business unit can update the online copy of a document, the most recent version is always available to employees using the intranet. Business operations and management: Intranets are also being used as a platform for developing and deploying applications to support business operations and decisions across the internetworked enterprise. Cost-effective: Users can view information and data via web browser rather than maintaining physical documents such as procedure manuals, internal phone lists and requisition forms. This can potentially save the business money on printing and duplicating documents and on document maintenance overhead, and it benefits the environment as well. "PeopleSoft, a large software company, has derived significant cost savings by shifting HR processes to the intranet". Promote common corporate culture: Every user is viewing the same information within the Intranet. Enhance Collaboration: With information easily accessible by all authorised users, teamwork is enabled. Cross-platform Capability: Standards-compliant web browsers are available for Windows, Mac, and UNIX. Built for One Audience: Many companies dictate computer specifications, which, in turn, may allow Intranet developers to write applications that only have to work on one browser (no cross-browser compatibility issues). Knowledge of your Audience: Being able to specifically address your "viewer" is a great advantage. Since Intranets are user specific (requiring database/network authentication prior to access), you know exactly who you are interfacing with. So, you can personalize your Intranet based on role (job title, department) or individual ("Congratulations Jane, on your 3rd year with our company!"). Immediate Updates: When dealing with the public in any capacity, laws/specifications/parameters can change. With an Intranet providing your audience with "live" changes, they are never out of date, which can limit a company's liability. Supports a distributed computing architecture: The intranet can also be linked to a company's management information system, for example a time keeping system. Planning and creation: Most organizations devote considerable resources to the planning and implementation of their intranet as it is of strategic importance to the organization's success.
Some of the planning would include topics such as:
- The purpose and goals of the intranet
- Persons or departments responsible for implementation and management
- Functional plans, information architecture, page layouts, and design
- Implementation schedules and phase-out of existing systems
- Defining and implementing security of the intranet
- How to ensure it is within legal boundaries and other constraints
- The level of interactivity (e.g. wikis, on-line forms) desired
- Whether the input of new data and updating of existing data is to be centrally controlled or devolved
These are in addition to the hardware and software decisions (like content management systems), participation issues (like good taste, harassment, confidentiality), and features to be supported. The actual implementation would include steps such as:

- Securing senior management support and funding
- Business requirements analysis
- User involvement to identify users' information needs
- Installation of web server and user access network
- Installing required user applications on computers
- Creation of a document framework for the content to be hosted
- User involvement in testing and promoting use of the intranet
- Ongoing measurement and evaluation, including through benchmarking against other intranets

Useful components of an intranet structure might include:


- Key personnel committed to maintaining the Intranet and keeping content current
- Social networking, which is useful as a feedback forum for users to indicate what they want and what they do not like

Extranet An extranet is a private network that uses Internet protocols, network connectivity, and possibly the public telecommunication system to securely share part of an organization's information or operations with suppliers, vendors, partners, customers or other businesses. An extranet can be viewed as part of a company's intranet that is extended to users outside the company, usually via the Internet. It has also been described as a "state of mind" in which the Internet is perceived as a way to do business with a selected set of other companies (business-to-business, B2B), in isolation from all other Internet users. In contrast, business-to-consumer (B2C) models involve known servers of one or more companies, communicating with previously unknown consumer users. EXTRA-/INTRA-NET An extranet can be understood as an intranet mapped onto the public Internet or some other transmission system not accessible to the general public, but managed by more than one company's administrator(s). For example, military networks of different security levels may map onto a common military radio transmission system that never connects to the Internet. Any private network mapped onto a public one is a virtual private network (VPN), often using special security protocols. For decades, institutions have been interconnecting with each other to create private networks for sharing information. One of the differences that characterizes an extranet, however, is that its interconnections are over a shared network rather than through dedicated physical lines. Similarly, for smaller, geographically united organizations, "extranet" is a useful term to describe selective access to intranet systems granted to suppliers, customers, or other companies. In this sense, an "extranet" designates the "private part" of a website, where "registered users" can navigate, enabled by authentication mechanisms on a "login page". An extranet requires network security. This can include firewalls, server management, the issuance and use of digital certificates or similar means of user authentication, encryption of messages, and the use of virtual private networks (VPNs) that tunnel through the public network.

INDUSTRY USES During the late 1990s and early 2000s, several industries started to use the term "extranet" to describe central repositories of shared data made accessible via the web only to authorized members of particular work groups. Such repositories have been adopted in a number of countries, including Scandinavia, Germany and Belgium, among others. Some applications are offered on a Software as a Service (SaaS) basis by vendors functioning as Application Service Providers (ASPs). Specially secured extranets are used to provide virtual data room services to companies in several sectors (including law and accountancy). For example, in the construction industry, project teams could log in to and access a 'project extranet' to share drawings and documents, make comments, issue requests for information, etc. ADVANTAGES

- Exchange large volumes of data using Electronic Data Interchange (EDI)
- Share product catalogs exclusively with trade partners
- Collaborate with other companies on joint development efforts
- Jointly develop and use training programs with other companies
- Provide or access services provided by one company to a group of other companies, such as an online banking application managed by one company on behalf of affiliated banks
- Share news of common interest exclusively

DISADVANTAGES

Extranets can be expensive to implement and maintain within an organization (e.g., hardware, software, employee training costs) if hosted internally instead of via an application service provider. Security of extranets can be a concern when hosting valuable or proprietary information. System access must be carefully controlled to secure sensitive data.

UNIT 5: Data and Systems interface Systems analysis is the study of sets of interacting entities, including computer systems analysis. This field is closely related to operations research. It is also "an explicit formal inquiry carried out to help someone (referred to as the decision maker) identify a better course of action and make a better decision than he might otherwise have made." The terms analysis and synthesis come from Greek, where they mean respectively "to take apart" and "to put together". These terms are used in scientific disciplines, from mathematics and logic to economics and psychology, to denote similar investigative procedures. Analysis is defined as the procedure by which we break down an intellectual or substantial whole into parts or components. Synthesis is defined as the opposite procedure: to combine separate elements or components in order to form a coherent whole. Systems analysis researchers apply methodology to the analysis of the systems involved to form an overall picture. Analysis is used in every field where there is work to develop something. The development of a computer-based information system includes a systems analysis phase which produces or enhances the data model, which itself is a precursor to creating or enhancing a database (see Christopher J. Date, "An Introduction to Database Systems"). There are a number of different approaches to system analysis. When a computer-based information system is developed, systems analysis (according to the Waterfall model) would constitute the following steps:

- The development of a feasibility study, involving determining whether a project is economically, socially, technologically and organizationally feasible.
- Conducting fact-finding measures, designed to ascertain the requirements of the system's end-users. These typically span interviews, questionnaires, or visual observations of work on the existing system.
- Gauging how the end-users would operate the system (in terms of general experience in using computer hardware or software), what the system would be used for, etc.

Another view outlines a phased approach to the process. This approach breaks systems analysis into 5 phases:

- Scope definition
- Problem analysis
- Requirements analysis
- Logical design
- Decision analysis

Use cases are a widely-used systems analysis modeling tool for identifying and expressing the functional requirements of a system. Each use case is a business scenario or event for which the system must provide a defined response. Use cases evolved out of object-oriented analysis; however, their use as a modeling tool has become common in many other methodologies for system analysis and design. Practitioners of systems analysis are often called upon to dissect systems that have grown haphazardly to determine the current components of the system. This was shown during the year 2000 re-engineering effort as business and manufacturing processes were examined as part of the Y2K automation upgrades. Jobs utilizing systems analysis include systems analyst, business analyst, manufacturing engineer, and enterprise architect, among others. While practitioners of systems analysis can be called upon to create new systems, they often modify, expand or document existing systems (processes, procedures and methods). A set

of components that interact with each other to accomplish some specific purpose is a system. Systems are all around us. Our body is itself a system. A business is also a system. Men, money, machines, markets and materials are the components of a business system that work together to achieve the common goal of the organization. The development of any information system can be put into two major phases: analysis and design. During the analysis phase the complete functioning of the system is understood and requirements are defined, which lead to the designing of a new system. Hence the development process of a system is also known as the systems analysis and design process. Role of System Analyst The system analyst is the person (or persons) who guides the development of an information system. In performing these tasks the analyst must always match the information system objectives with the goals of the organization. The role of the system analyst differs from organization to organization. The most common responsibilities of a system analyst are the following: 1) System analysis: This includes the study of the system in order to get facts about its business activity. It is about getting information and determining requirements. Here the responsibility includes only requirement determination, not the design of the system. 2) System analysis and design: Here, apart from the analysis work, the analyst is also responsible for the design of the new system/application. 3) Systems analysis, design, and programming: Here the analyst is also required to perform as a programmer, where he actually writes the code to implement the design of the proposed application. Due to the various responsibilities that a system analyst is required to handle, he has to be a multifaceted person with the varied skills required at various stages of the life cycle. In addition to the technical know-how of information system development, a system analyst should also have the following knowledge.

Business knowledge: As the analyst might have to develop any kind of business system, he should be familiar with the general functioning of all kinds of businesses. Interpersonal skills: Such skills are required at various stages of the development process for interacting with the users and extracting the requirements out of them. Problem solving skills: A system analyst should have enough problem solving skills for defining alternative solutions to the system and also for the problems occurring at the various stages of the development process. The end users of the system are the people who use computers to perform their jobs, like desktop operators. Further, end users can be divided into various categories.

The very first users are the hands-on users. They actually interact with the system. They are the people who feed in the input data and get output data, like the person at the booking counter of a gas authority. This person actually sees the records and registers requests from various customers for gas cylinders. Other users are the indirect end users who do not interact with the system's hardware and software. However, these users benefit from the results of these systems. These types of users can be managers of the organization using that system. A third type of users are those who have management responsibilities for application systems. They oversee investment in the development or use of the system. A fourth type of users are senior managers. They are responsible for evaluating the organization's exposure to risk from the system's failure. Systems Development Life Cycle (SDLC) The Systems Development Life Cycle (SDLC), or Software Development Life Cycle in systems engineering, information systems and software engineering, is the process of creating or altering systems, and the models and methodologies that people use to develop these systems. The concept generally refers to computer or information systems. In software engineering the SDLC concept underpins many kinds of software development methodologies. These methodologies form the framework for planning and controlling the creation of an information system: the software development process. A Systems Development Life Cycle (SDLC) adheres to important phases that are essential for developers, such as planning, analysis, design, and implementation, which are explained in the section below. A number of system development life cycle (SDLC) models have been created: waterfall, fountain, spiral, build and fix, rapid prototyping, incremental, and synchronize and stabilize. The oldest of these, and the best known, is the waterfall model: a sequence of stages in which the output of each stage becomes the input for the next. These stages can be characterized and divided up in different ways, including the following[6]:

- Project planning, feasibility study: Establishes a high-level view of the intended project and determines its goals.
- Systems analysis, requirements definition: Refines project goals into defined functions and operation of the intended application. Analyzes end-user information needs.
- Systems design: Describes desired features and operations in detail, including screen layouts, business rules, process diagrams, pseudocode and other documentation.
- Implementation: The real code is written here.
- Integration and testing: Brings all the pieces together into a special testing environment, then checks for errors, bugs and interoperability.
- Acceptance, installation, deployment: The final stage of initial development, where the software is put into production and runs actual business.

- Maintenance: What happens during the rest of the software's life: changes, corrections, additions, moves to a different computing platform and more. This, the least glamorous and perhaps most important step of all, goes on seemingly forever.

In the following example (see picture), these stages of the Systems Development Life Cycle are divided into ten steps, from definition to creation and modification of IT work products:

System analysis The goal of system analysis is to determine where the problem is in an attempt to fix the system. This step involves breaking down the system into different pieces to analyze the situation, analyzing project goals, breaking down what needs to be created and attempting to engage users so that definite requirements can be defined. Requirements analysis sometimes requires individuals/teams from the client as well as the service provider side to get detailed and accurate requirements; often there has to be a lot of communication back and forth to understand these requirements. Requirement gathering is the most crucial aspect, as many times communication gaps arise in this phase and this leads to validation errors and bugs in the software program. Design In systems design the design functions and operations are described in detail, including screen layouts, business rules, process diagrams and other documentation. The output of this stage will describe the new system as a collection of modules or subsystems. The design stage takes as its initial input the requirements identified in the approved

requirements document. For each requirement, a set of one or more design elements will be produced as a result of interviews, workshops, and/or prototype efforts. Design elements describe the desired software features in detail, and generally include functional hierarchy diagrams, screen layout diagrams, tables of business rules, business process diagrams, pseudocode, and a complete entity-relationship diagram with a full data dictionary. These design elements are intended to describe the software in sufficient detail that skilled programmers may develop the software with minimal additional input. Development Modular and subsystem programming code will be written during this stage. Testing The code is tested at various levels in software testing. Unit, system and user acceptance testing are often performed. This is a grey area, as many different opinions exist as to what the stages of testing are and how much, if any, iteration occurs. Iteration is not generally part of the waterfall model, but some usually occurs at this stage. In testing, the system is tested piece by piece. Following are the types of testing:

- Defect testing
- Path testing
- Data set testing
- Unit testing
- System testing
- Integration testing
- Black box testing
- White box testing
- Regression testing
- Automation testing
- User acceptance testing
- Performance testing
Testing is a process that ensures that the program performs the intended task.
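As a concrete illustration of unit testing, one level in the list above, the following sketch uses Python's built-in unittest module to test a single function in isolation; the discount function and its threshold are invented purely for the example.

# Unit-testing sketch using Python's standard unittest module.
# The discount rule below is a hypothetical business rule, invented
# only to show how one unit of code is tested in isolation.
import unittest

def order_discount(order_total):
    """Return the discount for an order: 10% on orders of 1000 or more."""
    return order_total * 0.10 if order_total >= 1000 else 0.0

class OrderDiscountTest(unittest.TestCase):
    def test_no_discount_below_threshold(self):
        self.assertEqual(order_discount(999), 0.0)

    def test_discount_at_threshold(self):
        self.assertEqual(order_discount(1000), 100.0)

if __name__ == "__main__":
    unittest.main()

System and integration testing would exercise many such units together rather than one function at a time.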

Implementation The system is implemented using one of several methods: direct, parallel, pilot, or phased. Operations and maintenance The deployment of the system includes changes and enhancements before the decommissioning or sunset of the system. Maintaining the system is an important aspect of SDLC. As key personnel change positions in the organization, new changes will be implemented, which will require system updates. SYSTEMS ANALYSIS AND DESIGN Systems Analysis and Design (SAD) is the process of developing Information Systems (IS) that effectively use hardware, software, data, processes, and people to support the company's business objectives.

Management and control

The Systems Development Life Cycle (SDLC) phases serve as a programmatic guide to project activity and provide a flexible but consistent way to conduct projects to a depth matching the scope of the project. Each of the SDLC phase objectives is described in this section with key deliverables, a description of recommended tasks, and a summary of related control objectives for effective management. It is critical for the project manager to establish and monitor control objectives during each SDLC phase while executing projects. Control objectives help to provide a clear statement of the desired result or purpose and should be used throughout the entire SDLC process. Control objectives can be grouped into major categories (Domains), and relate to the SDLC phases as shown in the figure. To manage and control any SDLC initiative, each project will be required to establish some degree of a Work Breakdown Structure (WBS) to capture and schedule the work necessary to complete the project. The WBS and all programmatic material should be kept in the Project Description section of the project notebook. The WBS format is mostly left to the project manager to establish in a way that best describes the project work. There are some key areas that must be defined in the WBS as part of the SDLC policy. The following diagram describes three key areas that will be addressed in the WBS in a manner established by the project manager. Work breakdown structure organization

Work Breakdown Structure The upper section of the Work Breakdown Structure (WBS) should identify the major phases and milestones of the project in a summary fashion. In addition, the upper section should provide an overview of the full scope and timeline of the project and will be part of the initial project description effort leading to project approval. The middle section of the WBS is based on the seven Systems Development Life Cycle (SDLC) phases as a guide for WBS task development. The WBS elements should consist of milestones and tasks as opposed to activities and have a definitive period (usually two weeks or more). Each task must have a measurable output (e.g. document, decision, or analysis). A WBS task may rely on one or more activities (e.g. software engineering, systems engineering) and may require close coordination with other tasks, either internal or external to the project. Any part of the project needing support from contractors should have a Statement of Work (SOW) written to include the appropriate tasks from the SDLC phases. Prototyping Approach Software prototyping is a development approach in which prototypes, i.e., incomplete versions of the software program being developed, are created during software development. The basic principles of the prototyping approach are:

- It is not a standalone, complete development methodology, but rather an approach to handling selected portions of a larger, more traditional development methodology (i.e. Incremental, Spiral, or Rapid Application Development (RAD)).
- It attempts to reduce inherent project risk by breaking a project into smaller segments and providing more ease-of-change during the development process.
- The user is involved throughout the development process, which increases the likelihood of user acceptance of the final implementation.
- Small-scale mock-ups of the system are developed following an iterative modification process until the prototype evolves to meet the user's requirements.
- While most prototypes are developed with the expectation that they will be discarded, it is possible in some cases to evolve from prototype to working system.
- A basic understanding of the fundamental business problem is necessary to avoid solving the wrong problem.

UNIT 6 : DECISION SUPPORT SYSTEM DECISION SUPPORT SYSTEM A decision support system (DSS) is a computer-based information system that supports business or organizational decision-making activities. DSSs serve the management, operations, and planning levels of an organization and help to make decisions, which may be rapidly changing and not easily specified in advance. DSSs include knowledge-based systems. A properly designed DSS is an interactive software-based system intended to help decision makers compile useful information from a combination of raw data, documents, personal knowledge, or business models to identify and solve problems and make decisions. Typical information that a decision support application might gather and present are:

inventories of information assets (including legacy and relational data sources, cubes, data warehouses, and data marts), comparative sales figures between one period and the next, projected revenue figures based on product sales assumptions.
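As a toy illustration of the last two items, the hedged Python sketch below compares sales between two periods and projects revenue under a stated growth assumption; all figures and product names are invented.

# Toy sketch of information a DSS might present: comparative sales
# between two periods and projected revenue under a stated assumption.
# All figures below are invented for illustration.
sales_q1 = {"Product A": 120000, "Product B": 95000}
sales_q2 = {"Product A": 138000, "Product B": 91000}
assumed_growth = 0.05  # assumption: 5% growth applied to Q2 figures

for product in sales_q1:
    change = sales_q2[product] - sales_q1[product]
    projected = sales_q2[product] * (1 + assumed_growth)
    print("%s: Q1 %d, Q2 %d, change %+d, projected next quarter %.0f"
          % (product, sales_q1[product], sales_q2[product], change, projected))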

HISTORY According to Keen (1978) the concept of decision support has evolved from two main areas of research: the theoretical studies of organizational decision making done at the Carnegie Institute of Technology during the late 1950s and early 1960s, and the technical work on interactive computer systems, mainly carried out at the Massachusetts Institute of Technology in the 1960s. It is considered that the concept of DSS became an area of research of its own in the middle of the 1970s, before gaining in intensity during the 1980s. In the middle and late 1980s, executive information systems (EIS), group decision support systems (GDSS), and organizational decision support systems (ODSS) evolved from the single user and model-oriented DSS. In the 1970s DSS was described as "a computer based system to aid decision making". In the late 1970s the DSS movement started focusing on "interactive computer-based systems which help decision-makers utilize data bases and models to solve ill-structured problems". In the 1980s DSS were expected to provide systems "using suitable and available technology to improve effectiveness of managerial and professional activities", and by the end of the 1980s DSS faced a new challenge towards the design of intelligent workstations. In 1987 Texas Instruments completed development of the Gate Assignment Display System (GADS) for United Airlines. This decision support system is credited with significantly reducing travel delays by aiding the management of ground operations at various airports, beginning with O'Hare International Airport in Chicago and Stapleton Airport in Denver, Colorado. Beginning in about 1990, data warehousing and on-line analytical processing (OLAP) began broadening the realm of DSS. As the turn of the millennium approached, new Web-based analytical applications were introduced. The advent of better and better reporting technologies has seen DSS start to emerge as a critical component of management design. Examples of this can be seen in the intense amount of discussion of DSS in the education environment.

Benefits

1. Improves personal efficiency
2. Speeds up the process of decision making
3. Increases organizational control
4. Encourages exploration and discovery on the part of the decision maker
5. Speeds up problem solving in an organization
6. Facilitates interpersonal communication
7. Promotes learning or training
8. Generates new evidence in support of a decision
9. Creates a competitive advantage over competition
10. Reveals new approaches to thinking about the problem space
11. Helps automate managerial processes
DSS Classification Haettenschwiler differentiates passive, active, and cooperative DSS. A passive DSS is a system that aids the process of decision making, but that cannot bring out explicit decision suggestions or solutions. An active DSS can bring out such decision suggestions or solutions. A cooperative DSS allows the decision maker (or its advisor) to modify, complete, or refine the decision suggestions provided by the system, before sending them back to the system for validation. The system again improves, completes, and refines the suggestions of the decision maker and sends them back to her for validation. The whole process then starts again, until a consolidated solution is generated. Daniel Power classified DSS as communication-driven DSS, data-driven DSS, document-driven DSS, knowledge-driven DSS, and model-driven DSS.

- A communication-driven DSS supports more than one person working on a shared task; examples include integrated tools like Microsoft's NetMeeting.
- A data-driven DSS or data-oriented DSS emphasizes access to and manipulation of a time series of internal company data and, sometimes, external data.
- A document-driven DSS manages, retrieves, and manipulates unstructured information in a variety of electronic formats.
- A knowledge-driven DSS provides specialized problem-solving expertise stored as facts, rules, procedures, or in similar structures.
- A model-driven DSS emphasizes access to and manipulation of a statistical, financial, optimization, or simulation model. Model-driven DSS use data and parameters provided by users to assist decision makers in analyzing a situation; they are not necessarily data-intensive.
Using scope as the criterion, Power[9] differentiates enterprise-wide DSS and desktop DSS. An enterprise-wide DSS is linked to large data warehouses and serves many managers in the company. A desktop, single-user DSS is a small system that runs on an individual manager's PC.

Holsapple and Whinston classify DSS into the following six frameworks: Text-oriented DSS, Database-oriented DSS, Spreadsheet-oriented DSS, Solver-oriented DSS, Rule-oriented DSS, and Compound DSS. A compound DSS is the most popular classification for a DSS. It is a hybrid system that includes two or more of the five basic structures described by Holsapple and Whinston. COMPONENTS OF DSS The fundamental components of a DSS architecture are:
1. the database management system
2. the model base management system
3. the knowledge base management system
4. the user interface management system
5. the user
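A skeletal, hedged Python sketch of how these components might fit together is given below; every class name, method, and figure is hypothetical and merely stands in for a real database, model base, knowledge base, and user interface.

# Skeletal sketch of the classic DSS architecture. All names and
# figures are hypothetical; a real DSS would sit on top of an actual
# DBMS, model library, and knowledge base.
class DatabaseManager:            # 1. database management system
    def query(self, question):
        return {"sales_last_quarter": 91000}   # canned data for the sketch

class ModelBaseManager:           # 2. model base management system
    def project(self, data, growth=0.05):
        return data["sales_last_quarter"] * (1 + growth)

class KnowledgeBaseManager:       # 3. knowledge base management system
    def advise(self, projection):
        return "expand inventory" if projection > 90000 else "hold inventory"

class UserInterface:              # 4. user interface management system
    def run(self, db, models, knowledge):   # 5. the user drives the dialogue
        data = db.query("sales")
        projection = models.project(data)
        print("Projection:", projection, "->", knowledge.advise(projection))

UserInterface().run(DatabaseManager(), ModelBaseManager(), KnowledgeBaseManager())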

A DBMS is software used to manage databases. Larger organisations utilise data warehouses. Development Frameworks DSS systems are not entirely different from other systems and require a structured approach. Such a framework includes people, technology, and the development approach. DSS technology levels (of hardware and software) may include: 1. The actual application that will be used by the user. This is the part of the application that allows the decision maker to make decisions in a particular problem area. The user can act upon that particular problem. 2. The generator: a hardware/software environment that allows people to easily develop specific DSS applications. This level makes use of case tools or systems such as Crystal, AIMMS, and iThink. 3. Tools: lower level hardware/software for building DSS generators, including special languages, function libraries and linking modules. An iterative developmental approach allows for the DSS to be changed and redesigned at various intervals. Once the system is designed, it will need to be tested and revised for the desired outcome. The nascent field of Decision engineering treats the decision itself as an engineered object, and applies engineering principles such as Design and Quality assurance to an explicit representation of the elements that make up a decision.

A specific example concerns the Canadian National Railway system, which tests its equipment on a regular basis using a decision support system. A problem faced by any railroad is worn-out or defective rails, which can result in hundreds of derailments per year. Decision support systems (DSS) are a diverse group of interactive computer toolsprimarily customizable software designed to assist managerial decision making. They fall into a broader class known as management support systems (MSSs). The goal of a DSS is to make management more efficient and effective, particularly with ad hoc and discretionary decisions (versus routine or programmatic ones that require little judgment). EVOLUTION OF DSS DSS were introduced in the 1970s and gained mainstream attention in the 1980s. Originally run largely on mainframes, they were seen as an evolutionary step from management information systems, which at the time were relatively inflexible storehouses of corporate data. In that environment, DSS were high-end applications reserved for occasional, nonrecurring strategic decisions by senior management. Since then, the rapid advances in personal computers ushered in a new breed of simple and widely used DSS. Indeed, some experts consider the built-in

Figure: Decision Support Systems Versus Other Management Tools

analytic functions in popular spreadsheet programs, such as Microsoft Excel and Lotus 1-2-3, to be mini-DSS. As a result, many DSS today are simple, informal PC software tools that users create themselves with the help of templates, macros, user-programmed modules, and other customizable features. While a simple DSS for an individual may cost a couple of hundred dollars and some programming time, sophisticated ones continue to be significant business investments. At their inception they were exceptionally expensive to develop, and thus only large companies could afford them. Although relative prices have come down, they still tend to cost anywhere from $30,000 to $500,000 or more to implement company-wide. Premium systems are offered by such firms as IBM, SAS Institute, SPSS, and a host of more specialized vendors. STRUCTURED DECISIONS. A structured decision is one in which all three components can be fairly well specified, i.e., the data, process, and evaluation are determined. Usually structured decisions are made regularly and therefore it makes sense to place a comparatively rigid framework around the decision and the people making it. An example of this type of decision may be the routine credit-granting decision made by many businesses. It is probably the case that most firms collect rather similar sets of data for credit granting decision makers to use. In addition the way in which the data is combined is likely to be consistent (for instance, household debt must be less than 25 percent of gross income). Finally, this decision can also be evaluated in a very structured way (specifically when the marginal cost of relaxing credit requirements equals the marginal revenue obtained from additional sales). For structured decisions it is possible and desirable to develop computer programs that collect and combine the data, thus giving the process a high degree of consistency. However, because these tend to be routine and predictable choices, a DSS is typically not needed for highly structured decisions. Instead, there are any number of automated tools that can make the decision based on the predefined criteria. UNSTRUCTURED DECISIONS. At the other end of the continuum are unstructured decisions. These decisions have the same components as structured ones; however, there is little agreement on their nature. For
instance, with these types of decisions, each decision maker may use different data and processes to reach a conclusion. In addition, because of the nature of the decision there may also be few people that are even qualified to evaluate the decision. These types of decisions are generally the domain of experts in a given field. This is why firms hire consulting engineers to assist their decision-making activities in these areas. To support unstructured decisions requires an appreciation of individual approaches, and it may not be terribly beneficial to expend a great deal of effort to support them. Generally, unstructured decisions are not made regularly or are made in situations in which the environment is not well understood. New product decisions may fit into this category for either of these reasons. To support a decision like this requires a system that begins by focusing on the individual or team that will make the decision. These decision makers are usually entrusted with decisions that are unstructured because of their experience or expertise, and therefore it is their individual ability that is of value. One approach to support systems in this area is to construct a program that simulates the process used by a particular individual. These have been called "expert systems." It is probably not the case that an expert decision maker would be replaced by such a system, although it may offer support in terms of providing another perspective of the decision. Another approach is to monitor and document the process that was used so that the decision maker(s) can readily review what has already been examined and concluded. An even more novel approach used to support these decisions is to provide environments that are specially designed to give these decision makers an atmosphere that is conducive to their particular tastes, a task well suited for a DSS. The key to support of unstructured decisions is to understand the role that individual experience or expertise plays in the decision and to allow for individual approaches. SEMI-STRUCTURED DECISIONS. In the middle of the continuum are semi-structured decisions, and this is where most of what are considered to be true decision support systems are focused. Decisions of this type are characterized as having some agreement on the data, process, and/or evaluation to be used, but there is still a desire not to place too much structure on the decision and to let some human judgment be used. An initial step in analyzing which support system is required is to understand where the limitations of the decision maker may be manifested, i.e., will it be in the data acquisition portion, or in the process component, or possibly in the evaluation of outcomes. For instance, suppose an insurance executive is trying to decide whether to offer a new type of product to existing policyholders that will focus on families with two or more children that will be ready to attend college in six to nine years. The support required for this decision is essentially data oriented. The information required can be expressed in terms of the following query on the insurance company's database: "Give me a list of all of our policyholders that have a college education and have more than two children between ages 10 and 12."
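Expressed against a hypothetical policyholder table, the executive's query might look like the following SQL, run here through Python's built-in sqlite3 module; the schema, column names, and sample row are assumptions made only for this sketch.

# The data-oriented query from the text, expressed against a
# hypothetical policyholder table using Python's built-in sqlite3.
# The schema, column names and sample data are assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE policyholders (
                    name TEXT,
                    college_educated INTEGER,
                    children_age_10_to_12 INTEGER)""")
conn.execute("INSERT INTO policyholders VALUES ('Jane Doe', 1, 3)")

rows = conn.execute("""SELECT name FROM policyholders
                       WHERE college_educated = 1
                         AND children_age_10_to_12 > 2""").fetchall()
print(rows)   # policyholders matching the executive's criteria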

Group decision making is a type of participatory process in which multiple individuals, acting collectively, analyze problems or situations, consider and evaluate alternative courses of action, and select from among the alternatives a solution or solutions. The number of people involved in group decision-making varies greatly, but often ranges from two to seven. The individuals in a group may be demographically similar or quite diverse. Decision-making groups may be relatively informal in nature, or formally designated and charged with a specific goal. The process used to arrive at decisions may be unstructured or structured. The nature and composition of groups, their size, demographic makeup, structure, and purpose, all affect their functioning to some degree. The external contingencies faced by groups (time pressure and conflicting goals) impact the development and effectiveness of decision-making groups as well. In organizations many decisions of consequence are made after some form of group decision-making process is undertaken. However, groups are not the only form of collective work arrangement. Group decision-making should be distinguished from the concepts of teams, teamwork, and self-managed teams. Although the words teams and groups are often used interchangeably, scholars increasingly differentiate between the two. The basis for the distinction seems to be that teams act more collectively and achieve greater synergy of effort. Katzenbach and Smith spell out specific differences between decision making groups and teams:

- The group has a definite leader, but the team has shared leadership roles.
- Members of a group have individual accountability; the team has both individual and collective accountability.
- The group measures effectiveness indirectly, but the team measures performance directly through their collective work product.
- The group discusses, decides, and delegates, but the team discusses, decides, and does real work.

GROUP DECISION MAKING METHODS There are many methods or procedures that can be used by groups. Each is designed to improve the decision-making process in some way. Some of the more common group decision-making methods are brainstorming, dialectical inquiry, the nominal group technique, and the Delphi technique.

BRAINSTORMING. Brainstorming involves group members verbally suggesting ideas or alternative courses of action. The "brainstorming session" is usually relatively unstructured. The situation at hand is described in as much detail as necessary so that group members have a complete understanding of the issue or problem. The group leader or facilitator then solicits ideas from all members of the group. Usually, the group leader or facilitator will record the ideas presented on a flip chart or marker board. The "generation of alternatives" stage is clearly differentiated from the "alternative evaluation" stage, as group members are not allowed to evaluate suggestions until all ideas have been presented. Once the ideas of the group members have been exhausted, the group members then begin the process of evaluating the utility of the different suggestions presented. Brainstorming is a useful means by which to generate alternatives, but does not offer much in the way of process for the evaluation of alternatives or the selection of a proposed course of action. One of the difficulties with brainstorming is that despite the prohibition against judging ideas until all group members have had their say, some individuals are hesitant to propose ideas because they fear the judgment or ridicule of other group members. In recent years, some decision-making groups have utilized electronic brainstorming, which allows group members to propose alternatives by means of e-mail or another electronic means, such as an online posting board or discussion room. Members could conceivably offer their ideas anonymously, which should increase the likelihood that individuals will offer unique and creative ideas without fear of the harsh judgment of others. DIALECTICAL INQUIRY. Dialectical inquiry is a group decision-making technique that focuses on ensuring full consideration of alternatives. Essentially, it involves dividing the group into opposing sides, which debate the advantages and disadvantages of proposed solutions or decisions. A similar group decision-making method, devil's advocacy, requires that one member of the group highlight the potential problems with a proposed decision. Both of these techniques are designed to try to make sure that the group considers all possible ramifications of its decision. NOMINAL GROUP TECHNIQUE.

The nominal group technique is a structured decision making process in which group members are required to compose a comprehensive list of their ideas or proposed alternatives in writing. The group members usually record their ideas privately. Once finished, each group member is asked, in turn, to provide one item from their list until all ideas or alternatives have been publicly recorded on a flip chart or marker board. Usually, at this stage of the process verbal exchanges are limited to requests for clarification; no evaluation or criticism of listed ideas is permitted. Once all proposals are listed publicly, the group engages in a discussion of the listed alternatives, which ends in some form of ranking or rating in order of preference. As with brainstorming, the prohibition against criticizing proposals as they are presented is designed to overcome individuals' reluctance to share their ideas. Empirical research conducted on group decision making offers some evidence that the nominal group technique succeeds in generating a greater number of decision alternatives that are of relatively high quality. DELPHI TECHNIQUE. The Delphi technique is a group decision-making process that can be used by decision-making groups when the individual members are in different physical locations. The technique was developed at the Rand Corporation. The individuals in the Delphi "group" are usually selected because of the specific knowledge or expertise they possess regarding the problem. In the Delphi technique, each group member is asked to independently provide ideas, input, and/or alternative solutions to the decision problem in successive stages. These inputs may be provided in a variety of ways, such as e-mail, fax, or online in a discussion room or electronic bulletin board. After each stage in the process, other group members ask questions and alternatives are ranked or rated in some fashion. After an indefinite number of rounds, the group eventually arrives at a consensus decision on the best course of action. ADVANTAGES AND DISADVANTAGES OF GROUP DECISION MAKING The effectiveness of decision-making groups can be affected by a variety of factors. Thus, it is not possible to suggest that "group decision making is always better" or "group decision making is always worse" than individual decision-making. For example, due to the increased demographic diversity in the workforce, a considerable amount of research has focused on diversity's effect on the effectiveness of group functioning. In general, this research suggests that demographic diversity can sometimes have positive or negative effects, depending on the specific situation. Demographically diverse groups may have to overcome social barriers
and difficulties in the early stages of group formation and this may slow down the group. However, some research indicates that diverse groups, if effectively managed, tend to generate a wider variety and higher quality of decision alternatives than demographically homogeneous groups. Despite the fact that there are many situational factors that affect the functioning of groups, research through the years does offer some general guidance about the relative strengths and weaknesses inherent in group decision making. The following section summarizes the major pros and cons of decision making in groups.

ADVANTAGES. Group decision-making, ideally, takes advantage of the diverse strengths and expertise of its members. By tapping the unique qualities of group members, it is possible that the group can generate a greater number of alternatives that are of higher quality than the individual. If a greater number of higher quality alternatives are generated, then it is likely that the group will eventually reach a superior problem solution to that of the individual. Group decision-making may also lead to a greater collective understanding of the eventual course of action chosen, since it is possible that many affected by the decision implementation actually had input into the decision. This may promote a sense of "ownership" of the decision, which is likely to contribute to a greater acceptance of the course of action selected and greater commitment on the part of the affected individuals to make the course of action successful. DISADVANTAGES. There are many potential disadvantages to group decision-making. Groups are generally slower to arrive at decisions than individuals, so sometimes it is difficult to utilize them in situations where decisions must be made very quickly. One of the most often cited problems is groupthink. Irving Janis, in his 1972 book Victims of Groupthink, defined the phenomenon as the "deterioration of mental efficiency, reality testing, and moral judgment resulting from in-group pressure." Groupthink occurs when individuals in a group feel pressure to conform to what seems to be the dominant view in the group. Dissenting views from the majority opinion are suppressed and alternative courses of action are not fully explored.

Research suggests that certain characteristics of groups contribute to groupthink. In the first place, if the group does not have an agreed upon process for developing and evaluating alternatives, it is possible that an incomplete set of alternatives will be considered and that different courses of action will not be fully explored. Many of the formal decision-making processes (e.g., nominal group technique and brainstorming) are designed, in part, to reduce the potential for groupthink by ensuring that group members offer and consider a large number of decision alternatives. Secondly, if a powerful leader dominates the group, other group members may quickly conform to the dominant view. Additionally, if the group is under stress and/or time pressure, groupthink may occur. Finally, studies suggest that highly cohesive groups are more susceptible to groupthink. Group polarization is another potential disadvantage of group decision-making. This is the tendency of the group to converge on more extreme solutions to a problem. The "risky shift" phenomenon is an example of polarization; it occurs when the group decision is a riskier one than any of the group members would have made individually. This may result because individuals in a group sometimes do not feel as much responsibility and accountability for the actions of the group as they would if they were making the decision alone. Decision-making in groups is a fact of organizational life for many individuals. Because so many individuals spend at least some of their work time in decision-making groups, groups are the subjects of hundreds of research studies each year. Despite this, there is still much to learn about the development and functioning of groups. Research is likely to continue to focus on identifying processes that will make group decision-making more efficient and effective. It is also likely to examine how the internal characteristics of groups (demographic and cognitive diversity) and the external contingencies faced by groups affect their functioning. Group decision making is a situation faced when people are brought together to solve problems in the anticipation that they are more effective than individuals, under the idea of synergy. But cohesive groups can display risky behavior in decision making situations, which has led to the devotion of much research effort, especially in the area of the applied social sciences and other relevant fields of specialization. There are several aspects of group cohesion which have a negative effect on group decision making and hence on group effectiveness. The risky-shift phenomenon, group polarisation, and group-think are negative aspects of group decision making which have drawn attention. Group-think is one of the most dangerous traps in our decision making, particularly because it taps into our deep social identification mechanisms - everyone likes to feel part of a group - and our avoidance of social challenges. But consensus without conflict almost

always means that other viewpoints are being ignored, and the consequences of group-think can be disastrous Issues facing any work group concerning decision making are: how should decisions be made? Consensus? Voting? One-person rule? Secret ballot?] Consideration of the various opinions of the different individuals and deciding what action a group should take might be of help. DECISION MAKING IN BUSINESS AND MANAGEMENT In general, business and management systems should be set up to allow decision making at the lowest possible level. Several decision making models or practices for business include:

SWOT Analysis - evaluation by the decision-making individual or organization of Strengths, Weaknesses, Opportunities and Threats with respect to the desired end state or objective.
Analytic Hierarchy Process - a widely used procedure for group decision-making.
Buyer decision processes - the transaction before, during, and after a purchase.
Complex systems - common behavioural and structural features that can be modelled.
Corporate finance:
o The investment decision
o The financing decision
o The dividend decision
o Working capital management decisions
o Cost-benefit analysis - the process of weighing total expected costs against total expected benefits.
Control-Ethics - a decision-making framework that balances the tensions of accountability and 'best' outcome.
Decision trees:
o Decision analysis - the discipline devoted to prescriptive modelling for decision-making under conditions of uncertainty.
o Program Evaluation and Review Technique (PERT)
o Critical path analysis
o Critical chain analysis
Force field analysis - analyzing the forces that either drive or hinder movement toward a goal.
Game theory - the branch of mathematics that models decision strategies for rational agents under conditions of competition, conflict and cooperation.
Grid Analysis - comparing the weighted averages of ranked criteria across options; a way of comparing both objective and subjective data (see the sketch after this list).
Hope and fear (or, colloquially, greed and fear) - emotions that motivate business and financial players and often carry more weight than rational analysis of fundamentals, as discovered by neuroeconomics research.
Linear programming - optimization problems in which the objective function and the constraints are all linear.
Min-max criterion
Model (economics) - a theoretical construct of economic processes, of variables and their relationships.
Monte Carlo method - a class of computational algorithms for simulating systems.
Morphological analysis - generating all possible solutions to a multi-dimensional problem complex.
Optimization:
o Constrained optimization
Paired Comparison Analysis - paired choice analysis.
Pareto Analysis - selection of a limited number of tasks that produce a significant overall effect.
Robust decision - making the best possible choice when information is incomplete, uncertain, evolving and inconsistent.
Satisficing - in decision-making, satisficing explains the tendency to select the first option that meets a given need, or the option that seems to address most needs, rather than seeking the optimal solution.
Scenario analysis - the process of analyzing possible future events.
Six Thinking Hats - a symbolic process for parallel thinking.
Strategic planning process - applying the objectives, SWOTs, strategies, programs process.
Trend following - and other imitation of what other business decision-makers do, or of the current fashions among consultants.
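Several of the techniques listed above, such as Grid Analysis, come down to simple arithmetic. The short sketch below (in Python) is a minimal illustration of a weighted-criteria grid; the options, criteria, weights and scores are invented purely for the example and are not drawn from the text.

# Minimal Grid Analysis (weighted decision matrix) sketch.
# Options, criteria, weights and scores are illustrative assumptions only.

criteria_weights = {"cost": 5, "reliability": 4, "ease_of_use": 2}

# Each option is scored 0-5 against every criterion.
option_scores = {
    "Vendor A": {"cost": 3, "reliability": 4, "ease_of_use": 5},
    "Vendor B": {"cost": 5, "reliability": 2, "ease_of_use": 3},
}

def weighted_total(scores, weights):
    """Sum of score x weight over all criteria."""
    return sum(scores[c] * w for c, w in weights.items())

for option, scores in option_scores.items():
    print(option, weighted_total(scores, criteria_weights))
# The option with the highest weighted total is preferred under this grid.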

EXPERT SYSTEM

An expert system is software that attempts to provide an answer to a problem, or clarify uncertainties where normally one or more human experts would need to be consulted. Expert systems are most common in a specific problem domain, and are a traditional application and/or subfield of artificial intelligence (AI).

Features
1. The sequence of steps taken to reach a conclusion is dynamically synthesized with each new case. The sequence is not explicitly programmed at the time that the system is built.
2. Expert systems can process multiple values for any problem parameter. This permits more than one line of reasoning to be pursued and the results of incomplete (not fully determined) reasoning to be presented.
3. Problem solving is accomplished by applying specific knowledge rather than specific technique. This is a key idea in expert systems technology. It reflects the belief that human experts do not process their knowledge differently from others, but they do possess different knowledge. With this philosophy, when one finds that their expert system does not produce the desired results, work begins to expand the knowledge base, not to re-program the procedures.

A wide variety of methods can be used to simulate the performance of the expert; however, common to most or all are: 1) the creation of a knowledge base which uses some knowledge representation structure to capture the knowledge of the Subject Matter Expert (SME); 2) a process of gathering that knowledge from the SME and codifying it according to the structure, which is called knowledge engineering; and 3) once the system is developed, it is placed in the same real world problem solving situation as the human SME, typically as an aid to human workers or as a supplement to some information system. Expert systems may or may not have learning components.

Chaining

Two methods of reasoning when using inference rules are forward chaining and backward chaining.

Forward chaining starts with the data available and uses the inference rules to extract more data until a desired goal is reached. An inference engine using forward chaining searches the inference rules until it finds one in which the if clause is known to be true. It then concludes the then clause and adds this information to its data. It continues to do this until a goal is reached. Because the data available determines which inference rules are used, this method is also classified as data driven.

Backward chaining starts with a list of goals and works backwards to see if there is data which will allow it to conclude any of these goals. An inference engine using backward chaining would search the inference rules until it finds one which has a then clause that matches a desired goal. If the if clause of that inference rule is not known to be true, then it is added to the list of goals.

For example, suppose a rule base contains:
(1) IF X is green THEN X is a frog. (Confidence Factor: +1%)
(2) IF X is NOT green THEN X is NOT a frog. (Confidence Factor: +99%)
(3) IF X is a frog THEN X hops. (Confidence Factor: +50%)
(4) IF X is NOT a frog THEN X does NOT hop. (Confidence Factor: +50%)
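The four rules above can be stored directly as data and searched by a small goal-driven routine. The sketch below (in Python) is only a minimal illustration of backward chaining, with confidence factors combined by simple multiplication as discussed in the walkthrough that follows; a real inference engine would be far more elaborate, and the observation that "X is green" is assumed here as given data.

# Minimal backward-chaining sketch over the frog rule base above.
# Rules are data: (if_fact, then_fact, confidence_factor).
RULES = [
    ("X is green",      "X is a frog",       0.01),
    ("X is NOT green",  "X is NOT a frog",   0.99),
    ("X is a frog",     "X hops",            0.50),
    ("X is NOT a frog", "X does NOT hop",    0.50),
]

KNOWN_FACTS = {"X is green"}           # e.g. we have observed that Fritz is green

def prove(goal):
    """Return a combined confidence that 'goal' holds, or None if unprovable."""
    if goal in KNOWN_FACTS:
        return 1.0                     # facts are taken as certain in this toy example
    for if_fact, then_fact, cf in RULES:
        if then_fact == goal:          # rule whose THEN clause matches the goal
            sub = prove(if_fact)       # the IF clause becomes a new goal
            if sub is not None:
                return sub * cf        # naive combination of confidence factors
    return None

print(prove("X hops"))                 # 0.005, i.e. one-half of one percent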

Suppose a goal is to conclude that Fritz hops. Let X = "Fritz". The rule base would be searched and rule (3) would be selected because its conclusion (the then clause) matches the goal. It is not known that Fritz is a frog, so this "if" statement is added to the goal list. The rule base is again searched and this time rule (1) is selected because its then clause matches the new goal just added to the list. This time, the if clause (Fritz is green) is known to be true and the goal that Fritz hops is concluded. Because the list of goals determines which rules are selected and used, this method is called goal driven.

However, note that if we use confidence factors in even a simplistic fashion - for example, by multiplying them together as if they were like soft probabilities - we get a result that is known with a confidence factor of only one-half of one percent (0.5 x 0.01 = 0.005). This is useful, because without confidence factors we might erroneously conclude with certainty that a sea turtle named Fritz hops just by virtue of being green. In real world applications, few facts are known with absolute certainty, and the opposite of a given statement may be more likely to be true ("Green things in the pet store are not frogs, with a probability or confidence factor of 99% in my pet store survey"). Thus it is often useful when building such systems to try to prove both the goal and the opposite of a given goal, to see which is more likely.

ES architecture/components

The components of an expert system include the inference engine, the explanation module, the editor and the user interface, together with the end user.

There are various expert systems in which a rule base and an inference engine cooperate to simulate the reasoning process that a human expert pursues in analyzing a problem and arriving at a conclusion. In these systems, in order to simulate the human reasoning process, a vast amount of knowledge needs to be stored in the knowledge base. Generally, the knowledge base of such an expert system consists of a relatively large number of "if/then" type statements that are interrelated in a manner that, in theory at least, resembles the sequence of mental steps involved in the human reasoning process. Because of the need for large storage capacities and related programs to store the rulebase, most expert systems have, in the past, been run only on large information handling systems.

Recently, the storage capacity of personal computers has increased to a point at which it is becoming possible to consider running some types of simple expert systems on personal computers.

In some applications of expert systems, the nature of the application is such that the amount of stored information necessary to simulate the human reasoning process is too vast to store in the active memory of a computer. In other applications, the nature of the application is such that not all of the information is always needed in the reasoning process. An example of this latter type of application would be the use of an expert system to diagnose a data processing system comprising many separate components, some of which are optional. When that type of expert system employs a single integrated rulebase to diagnose the minimum system configuration of the data processing system, much of the rulebase is not required, since many of the optional components will not be present in the system. Nevertheless, early expert systems required the entire rulebase to be stored, since all the rules were, in effect, chained or linked together by the structure of the rulebase.

When the rulebase is segmented, preferably into contextual segments or units, it becomes possible to eliminate the portions of the rulebase containing data or knowledge that is not needed in a particular application. The segmentation of the rulebase also allows the expert system to be run on systems with much smaller memory capacities than was possible with earlier arrangements, since each segment of the rulebase can be paged into and out of the system as needed. Segmentation into contextual units requires that the expert system manage various intersegment relationships as segments are paged into and out of memory during the execution of the program. Since the system permits a rulebase segment to be called and executed at any time during the processing of the first rulebase, provisions must be made to store the data that has accumulated up to that point, so that later in the process, when the system returns to the first segment, it can proceed from the last point or rule node that was processed. Also, provisions must be made so that data collected by the system up to that point can be passed on to the second segment of the rulebase after it has been paged into the system, and data collected during the processing of the second segment can be passed back to the first segment when the system returns to complete processing of that segment. The user interface and the procedure interface are two important functions in the information collection process.

End user

There are two styles of user-interface design followed by expert systems. In the original style of user interaction, the software takes the end-user through an interactive dialog. Questions are posed to users. The system must function in the presence of partial information, since the user may choose not to respond to every question. There is no fixed control structure: dialogues are dynamically synthesized from the "goal" of the system, the contents of the knowledge base, and the user's responses. Commercially viable systems will try to optimize the user experience by presenting options for commonly requested information based on a history of previous queries of the system, using technology such as forms augmented by keyword-based search. The gathered information may be verified by a confirmation step (e.g., to recover from spelling mistakes) and then acts as input into a forward-chaining engine. If confirmatory questions are asked in a subsequent phase, based on the rules activated by the obtained information, they are more likely to be specific and relevant.

Explanation system/module

The system has the ability to give an explanation for every conclusion it reaches. It is very difficult to implement a general explanation system (answering questions like "Why" and "How") in a traditional computer program. An expert system can generate an explanation by retracing the steps of its reasoning. The response of the expert system to the question "Why" exposes the underlying knowledge structure. It is a rule: a set of conditions which, if true, allow the assertion of a consequent. The rule references values, and tests them against various constraints or asserts constraints onto them. This, in fact, is a significant part of the knowledge structure. There are values, which may be associated with some organizing entity.

The principal distinction between expert systems and traditional problem solving programs is the way in which the problem-related expertise is coded. In traditional applications, problem-related expertise is encoded in both program and data structures. In the expert system approach, all of the problem-related expertise is encoded in data structures. An example, related to tax advice, contrasts the traditional problem solving program with the expert system approach. In the traditional approach, data structures describe the taxpayer and tax tables, while a program contains rules (encoding expert knowledge) that relate information about the taxpayer to tax table choices. In the expert system approach, the latter information is also encoded in data structures. The collective data structures are called the knowledge base. The program (inference engine) of an expert system is relatively independent of the problem domain (taxes) and processes the rules without regard to the problem area they describe. This organization has several benefits:

New rules can be added to the knowledge base, or existing rules altered, without needing to rebuild the program. This allows changes to be made rapidly to a system (e.g., after it has been shipped to its customers, to accommodate very recent changes in state or federal tax codes).
Rules are arguably easier for (non-programmer) domain experts to create and modify than writing code. Commercial rule engines typically come with editors that allow rule creation/modification through a graphical user interface, which also performs actions such as consistency and redundancy checks.
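The benefit just described, that rules live in the knowledge base as data while the inference engine stays unchanged, can be made concrete with a short sketch. The rule format and the tax-style facts below are assumptions invented for illustration, loosely echoing the tax-advice example earlier in this section.

# Sketch: the knowledge base is pure data; the engine code never changes.
# Facts and rule contents are illustrative assumptions only.

knowledge_base = [
    # (condition on the facts, conclusion to assert)
    (lambda f: f["income"] > 50000,   "use tax table B"),
    (lambda f: f["dependents"] >= 2,  "claim family deduction"),
]

def inference_engine(facts, rules):
    """Forward-chain once over the rules, collecting every conclusion that fires."""
    return [conclusion for condition, conclusion in rules if condition(facts)]

taxpayer = {"income": 62000, "dependents": 3}
print(inference_engine(taxpayer, knowledge_base))

# Adding a rule is a data change, not a program change:
knowledge_base.append((lambda f: f["income"] <= 20000, "use tax table A"))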

Modern rule engines allow a hybrid approach: some allow rules to be "compiled" into a form that is more efficiently machine-executable. Also, for efficiency concerns, rule engines allow rules to be defined more expressively and concisely by allowing software developers to create functions in a traditional programming language such as Java, which can then be invoked from either the condition or the action of a rule. Such functions may incorporate domain-specific (but reusable) logic.

Participants

There are generally three individuals having an interaction with an expert system. Primary among these is the end-user, the individual who uses the system for its problem solving assistance. In the construction and maintenance of the system there are two other roles: the problem domain expert, who builds the system and supplies the knowledge base, and the knowledge engineer, who assists the experts in determining the representation of their knowledge, enters this knowledge into the knowledge base, and defines the inference technique required to solve the problem. Usually the knowledge engineer will represent the problem solving activity in the form of rules. When these rules are created from domain expertise, the knowledge base stores the rules of the expert system.

Inference rule

An understanding of the "inference rule" concept is important for understanding expert systems. An inference rule is a conditional statement with two parts: an if clause and a then clause. This rule is what gives expert systems the ability to find solutions to diagnostic and prescriptive problems. An expert system's rulebase is made up of many such inference rules. They are entered as separate rules, and it is the inference engine that uses them together to draw conclusions. Because each rule is a unit, rules may be deleted or added without affecting other rules, though doing so will naturally affect which conclusions are reached. One advantage of inference rules over traditional programming is that inference rules use reasoning which more closely resembles human reasoning. Thus, when a conclusion is drawn, it is possible to understand how this conclusion was reached. Furthermore, because the expert system uses knowledge in a form similar to that of the expert, it may be easier to retrieve this information directly from the expert.

Procedure node interface

The function of the procedure node interface is to receive information from the procedures coordinator and create the appropriate procedure call. The ability to call a procedure and receive information from that procedure can be viewed as simply a generalization of input from the external world. In some earlier expert systems, external information could only be obtained in a predetermined manner, which only allowed certain information to be acquired. Through the knowledge base, the expert system disclosed in the cross-referenced application can invoke any procedure allowed on its host system. This makes the expert system useful in a much wider class of knowledge domains than if it had no external access or only limited external access.

In the area of machine diagnostics using expert systems, particularly self-diagnostic applications, it is not possible to conclude the current state of "health" of a machine without some information. The best source of information is the machine itself, for it contains much detailed information that could not reasonably be provided by the operator. The type of information solicited from the user by means of questions should be tailored to the level of knowledge of the user. In many applications, the group of prospective users is well defined and the knowledge level can be estimated, so that questions can be presented at a level which corresponds generally to the average user. However, in other applications, knowledge of the specific domain of the expert system might vary considerably among the group of prospective users. One application where this is particularly true involves the use of an expert system, operating in a self-diagnostic mode on a personal computer, to assist the operator of the personal computer in diagnosing the cause of a fault or error in either the hardware or software. In general, asking the operator for information is the most straightforward way for the expert system to gather information, assuming that the information is or should be within the operator's understanding. For example, in diagnosing a personal computer, the expert system must know the major functional components of the system. It could ask the operator, for instance, if the display is a monochrome or color display. The operator should, in all probability, be able to provide the correct answer.
The expert system could, on the other hand, cause a test unit to be run to determine the type of display. The accuracy of the data collected by either approach would probably not differ much in this instance, so the knowledge engineer could employ either approach without affecting the accuracy of the diagnosis. However, in many instances, because of the nature of the information being solicited, it is better to obtain the information from the system rather than by asking the operator, because the accuracy of data supplied by the operator may be so low that the system could not effectively process it to a meaningful conclusion. In many situations the information is already in the system, in a form that permits the correct answer to a question to be obtained through a process of inductive or deductive reasoning. The data previously collected by the system could include answers provided by the user to less complex questions asked earlier for a different reason, or results returned from test units that were previously run.

Real-time expert systems

Industrial processes, data networks, and many other systems change their state and even their structure over time. Real-time expert systems are designed to reason over time and to change their conclusions as the monitored system changes. Most of these systems must respond to constantly changing input data, arriving automatically from other systems such as process control systems or network management systems. Representation includes features for defining changes in the belief of data or conclusions over time. This is necessary because data becomes stale. Approaches to this include decaying belief functions, or the simpler validity interval, which lets data and conclusions expire after a specified time period, falling to "unknown" until refreshed. An often-cited example (attributed to real-time expert system pioneer Robert L. Moore) is a hypothetical expert system that might be used to drive a car. Based on video input, there might be an intermediate conclusion that a stop light is green and a final conclusion that it is OK to drive through the intersection. But that data and the subsequent conclusions have a very limited lifetime; you would not want to be a passenger in a car driven on data and conclusions that were, say, an hour old. The inference engine must track the times of each data input and each conclusion, and propagate new information as it arrives. It must ensure that all conclusions are still current. Facilities for periodically scanning data, acquiring data on demand, and filtering noise become essential parts of the overall system. Facilities to reason within a fixed deadline are important in many of these applications.

APPLICATION OF ES

Expert systems are designed to facilitate tasks in the fields of accounting, medicine, process control, financial services, production and human resources, among others. Typically, the problem area is complex enough that a simpler, traditional algorithm cannot provide a proper solution. The foundation of a successful expert system depends on a series of technical procedures and development that may be designed by technicians and related experts. As such, expert systems do not typically provide a definitive answer, but provide probabilistic recommendations.

An example of the application of expert systems in the financial field is expert systems for mortgages. Loan departments are interested in expert systems for mortgages because of the growing cost of labour, which makes the handling and acceptance of relatively small loans less profitable. They also see a possibility for standardised, efficient handling of mortgage loans by applying expert systems, appreciating that for the acceptance of mortgages there are hard and fast rules which do not always exist with other types of loans.
Another common application of expert systems in the financial area is in trading recommendations in various marketplaces. These markets involve numerous variables and human emotions which may be impossible to characterize deterministically, so expert systems based on the rules of thumb of experts and on simulation data are used. Expert systems of this type can range from those providing regional retail recommendations, like Wishabi, to those used to assist monetary decisions by financial institutions and governments.

While expert systems have distinguished themselves within AI research by finding practical application, their application has been limited. Expert systems are notoriously narrow in their domain of knowledge (as an amusing example, a researcher used a "skin disease" expert system to diagnose his rustbucket car as likely to have developed measles), and the systems are thus prone to making errors that humans would easily spot. Additionally, once some of the mystique had worn off, most programmers realized that simple expert systems were essentially just slightly more elaborate versions of the decision logic they had already been using. Therefore, some of the techniques of expert systems can now be found in most complex programs without drawing much recognition.

An example, and a good demonstration of the limitations of an expert system, is the Windows operating system troubleshooting software located in the "help" section of the taskbar menu. Obtaining technical operating system support is often difficult for individuals not closely involved with the development of the operating system. Microsoft has designed its expert system to provide solutions, advice, and suggestions for common errors encountered while using its operating systems.

Another 1970s and 1980s application of expert systems, which today we would simply call AI, was in computer games. For example, the computer baseball games Earl Weaver Baseball and Tony La Russa Baseball each had highly detailed simulations of the game strategies of those two baseball managers. When a human played the game against the computer, the computer queried the Earl Weaver or Tony La Russa expert system for a decision on what strategy to follow.

Advantages of ES

Compared to traditional programming techniques, expert-system approaches provide added flexibility (and hence easier modifiability) through the ability to model rules as data rather than as code. In situations where an organization's IT department is overwhelmed by a software-development backlog, rule engines, by facilitating turnaround, provide a means by which organizations can adapt more readily to changing needs. In practice, modern expert-system technology is employed as an adjunct to traditional programming techniques, and this hybrid approach allows the strengths of both approaches to be combined. Thus, rule engines allow control through programs (and user interfaces) written in a traditional language, and also incorporate necessary functionality such as interoperability with existing database technology.

Disadvantages

The Garbage In, Garbage Out (GIGO) phenomenon: a system that uses expert-system technology provides no guarantee about the quality of the rules on which it operates. Not all self-designated "experts" are necessarily so, and one notable challenge in expert system design is getting a system to recognize the limits of its knowledge. An expert system or rule-based approach is not optimal for all problems, and considerable knowledge is required so as not to misapply the systems.

Ease of rule creation and rule modification can be double-edged. A system can be sabotaged by a non-knowledgeable user who can easily add worthless rules or rules that conflict with existing ones. Reasons for the failure of many systems include the absence of (or neglect to employ diligently) facilities for system audit, detection of possible conflict, and rule lifecycle management (e.g. version control, or thorough testing before deployment). The problems to be addressed here are as much technological as organizational.

Types of problems solved

Expert systems are most valuable to organizations that have a high level of know-how, experience and expertise that cannot be easily transferred to other members. They are designed to carry the intelligence and information found in the intellect of experts and provide this knowledge to other members of the organization for problem-solving purposes. Typically, the problems to be solved are of the sort that would normally be tackled by a professional, such as a medical professional in the case of clinical decision support systems. Real experts in the problem domain (which will typically be very narrow, for instance "diagnosing skin conditions in teenagers") are asked to provide "rules of thumb" on how they evaluate the problem, either explicitly with the aid of experienced systems developers, or sometimes implicitly, by getting such experts to evaluate test cases and using computer programs to examine the test data and derive rules from it (in a strictly limited manner). Generally, expert systems are used for problems for which there is no single "correct" solution that can be encoded in a conventional algorithm; one would not write an expert system to find the shortest paths through graphs, or to sort data, as there are simpler ways to do these tasks.

Simple systems use simple true/false logic to evaluate data. More sophisticated systems are capable of performing at least some evaluation taking into account real-world uncertainties, using methods such as fuzzy logic. Such sophistication is difficult to develop and still highly imperfect.
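The contrast drawn above between simple true/false logic and fuzzy logic can be sketched briefly. The membership function and the temperature break-points below are assumptions chosen only for illustration.

# Sketch: crisp (true/false) versus fuzzy evaluation of "the temperature is high".
# The 30 and 40 degree break-points are illustrative assumptions.

def crisp_high(temp_c):
    return temp_c >= 40                      # simple true/false test

def fuzzy_high(temp_c):
    """Degree of membership in 'high', rising linearly from 30 C to 40 C."""
    if temp_c <= 30:
        return 0.0
    if temp_c >= 40:
        return 1.0
    return (temp_c - 30) / 10.0

print(crisp_high(37), fuzzy_high(37))        # False 0.7 -> partially 'high'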

ENTERPRISE WIDE COMPUTING

Technological changes are now occurring that may expand computational power just as the invention of desktop calculators and personal computers did. In the near future, computationally demanding applications will no longer be executed primarily on supercomputers and single workstations dependent on local data sources. Instead, enterprise-wide systems, and someday nationwide systems, will be used that consist of workstations, vector supercomputers, and parallel supercomputers connected by local and wide-area networks. Users will be presented with the illusion of a single, very powerful computer, rather than a collection of disparate machines. The system will schedule application components on processors, manage data transfer, and provide communication and synchronization so as to dramatically improve application performance. Further, boundaries between computers will be invisible, as will the location of data and the failure of processors.

To illustrate the concept of an enterprise-wide system, first consider the workstation or personal computer on your desk. By itself it can execute applications at a rate that is loosely a function of its cost, manipulate local data stored on local disks, and make printouts on local printers. Sharing of resources with other users is minimal and difficult. If your workstation is attached to a department-wide local area network (LAN), not only are the resources of your workstation available to you, but so are the network file system and network printers. This allows expensive hardware such as disks and printers to be shared, and allows data to be shared among users on the LAN. With department-wide systems, processor resources can be shared in a primitive fashion by remote login to other machines.

To realize an enterprise-wide system, many department-wide systems within a larger organization, such as a university, company, or national lab, are connected, as are more powerful resources such as vector supercomputers and parallel machines. However, connection alone does not make an enterprise-wide system. If it did, then we would have enterprise-wide systems today. To convert a collection of machines into an enterprise-wide system requires software that makes sharing resources such as databases and processor cycles as easy as sharing printers and files on a LAN; it is just that software that is now being developed. The benefits of such systems include: more effective collaboration by putting coworkers in the same virtual workplace; higher application performance due to parallel execution and exploitation of off-site resources; improved access to data; improved productivity resulting from more effective collaboration; and a considerably simpler programming environment for applications programmers.

Three key technological changes make enterprise-wide computing possible. The first is the much heralded information superhighway or national information infrastructure (NII) and the gigabit (10^9 bits/second) networks which are its backbone. These networks can carry orders of magnitude more data than current systems. The effect is to "shrink the distance" between computers connected by the network. This, in turn, lowers the cost of computer-to-computer communication, enabling computers to more easily exchange both information and work to be performed.

The second technological change is the development and maturation of parallelizing compiler technology for distributed-memory parallel computers. Distributed-memory parallel computers are computers consisting of many processors, each with its own memory and capable of running a different program, connected together by a network. Parallelizing compilers are programs that take source programs in a language such as High Performance Fortran and generate programs that execute in parallel across multiple processors, reducing the time required to perform the computation. Depending on the application and the equipment used, the performance improvement can range from a modest factor of two or three to as much as two orders of magnitude. Most distributed-memory parallel computers to date have been tightly coupled, where all of the processors are in one cabinet, connected by a special purpose, high-performance network. Loosely coupled systems, constructed of high-performance workstations and local area networks, are now competitive with tightly coupled distributed-memory parallel computers on some applications. These workstation farms [1,2] have become increasingly popular as cost-effective alternatives to expensive parallel computers.

The third technological change is the maturation of heterogeneous distributed systems technology [3]. A heterogeneous distributed system consists of multiple computers, called hosts, connected by a network. The distinguishing feature is that the hosts have different processors (80486 versus 68040), different operating systems (Unix versus VMS), and different available resources (memory or disk). These differences, and the distributed nature of the system, introduce complications not present in traditional, single-processor mainframe systems. After twenty years of research, solutions have been found to many of the difficulties that arise in heterogeneous distributed systems.

The combination of mature parallelizing compiler technology and gigabit networks means that it is possible for applications to run in parallel on an enterprise-wide system. The gigabit networks also permit applications to more readily manipulate data regardless of its location, because they provide sufficient bandwidth either to move the data to the application or to move the application to the data. The addition of heterogeneous distributed system technology to the mix means that issues such as data representation and alignment, processor faults, and operating system differences can be managed.

Today enterprise-wide computing is just beginning. As yet these technologies have not been fully integrated. However, projects are underway at the University of Virginia and elsewhere [4] that, if successful, will lead to operational enterprise-wide systems. For the moment, systems available for use with networked workstations fall into three non-mutually-exclusive categories: (i) heterogeneous distributed systems, (ii) throughput-oriented parallel systems, and (iii) response-time-oriented parallel systems.

Heterogeneous distributed systems are ubiquitous in research labs today. Such systems allow different computers to interoperate and exchange data. The most significant feature is the shared file system, which permits users to see the same file system, and thus share files, regardless of which machine they are using or its type. The single file-naming environment significantly reduces the barriers to collaboration and increases productivity.

Throughput-oriented systems focus on exploiting available resources in order to service the largest number of jobs, where a job is a single program that does not communicate with other jobs. The benefit of these systems is that available, otherwise idle, processor resources within an organization can be exploited. While no single job runs any faster than it would on the owner's workstation, the total number of jobs executed in the organization can be significantly increased. For example, in such a system a user could submit five jobs at the same time, in a manner reminiscent of old-style batch systems. The system would then select five idle processors on which to execute the jobs. If insufficient resources were available, some of the jobs would be queued for execution at a later time.

Response-time-oriented systems are concerned with minimizing the execution time of a single application, that is, with harnessing the available workstations to act as a virtual parallel machine. The purpose is to solve larger problems more quickly than would otherwise be possible on a single workstation. Unfortunately, to achieve the performance benefits an application must be rewritten to use the parallel environment. The difficulty of parallelizing applications has limited the acceptance of parallel systems.
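The response-time-oriented idea, harnessing several processors to shorten a single computation, can be sketched on one machine using Python's standard multiprocessing module. This is only a local, single-host illustration with an assumed placeholder work function; an enterprise-wide system would schedule such work transparently across networked hosts.

# Sketch: running independent pieces of one computation in parallel locally.
# The work function is an illustrative placeholder.
from multiprocessing import Pool

def simulate(parameter):
    """Stand-in for a compute-heavy task (e.g. one run of a model)."""
    return sum(i * i for i in range(parameter))

if __name__ == "__main__":
    parameters = [1_000_000, 2_000_000, 3_000_000, 4_000_000]
    with Pool() as pool:                      # one worker per available processor
        results = pool.map(simulate, parameters)
    print(results)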

OBJECT ORIENTED ANALYSIS AND DESIGN Object-oriented analysis and design (OOAD) is a software engineering approach that models a system as a group of interacting objects. Each object represents some entity of interest in the system being modeled, and is characterised by its class, its state (data elements), and its behavior. Various models can be created to show the static structure, dynamic behavior, and run-time deployment of these collaborating objects. There are a number of different notations for representing these models, such as the Unified Modeling Language (UML). Object-oriented analysis (OOA) applies object-modeling techniques to analyze the functional requirements for a system. Object-oriented design (OOD) elaborates the analysis models to produce implementation specifications. OOA focuses on what the system does, OOD on how the system does it.

OBJECT-ORIENTED SYSTEMS

An object-oriented system is composed of objects. The behavior of the system results from the collaboration of those objects. Collaboration between objects involves them sending messages to each other. Sending a message differs from calling a function in that when a target object receives a message, it itself decides what function to carry out to service that message. The same message may be implemented by many different functions, the one selected depending on the state of the target object. The implementation of "message sending" varies depending on the architecture of the system being modeled, and the location of the objects being communicated with.

OBJECT-ORIENTED ANALYSIS

Object-oriented analysis (OOA) looks at the problem domain, with the aim of producing a conceptual model of the information that exists in the area being analyzed. Analysis models do not consider any implementation constraints that might exist, such as concurrency, distribution, persistence, or how the system is to be built. Implementation constraints are dealt with during object-oriented design (OOD). Analysis is done before design. The sources for the analysis can be a written requirements statement, a formal vision document, or interviews with stakeholders or other interested parties. A system may be divided into multiple domains, representing different business, technological, or other areas of interest, each of which is analyzed separately. The result of object-oriented analysis is a description of what the system is functionally required to do, in the form of a conceptual model. That will typically be presented as a set of use cases, one or more UML class diagrams, and a number of interaction diagrams. It may also include some kind of user interface mock-up. The purpose of object-oriented analysis is to develop a model that describes computer software as it works to satisfy a set of customer-defined requirements.

OBJECT-ORIENTED DESIGN

Object-oriented design (OOD) transforms the conceptual model produced in object-oriented analysis to take account of the constraints imposed by the chosen architecture and any non-functional technological or environmental constraints, such as transaction throughput, response time, run-time platform, development environment, or programming language. The concepts in the analysis model are mapped onto implementation classes and interfaces. The result is a model of the solution domain, a detailed description of how the system is to be built.
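The statement above that the same message may be implemented by many different functions, with the choice depending on the receiving object's class and state, can be sketched briefly. The account classes below are hypothetical and are not part of any particular analysis model.

# Sketch: two objects receive the same message ("describe") but respond
# according to their own class and state. Class names are illustrative.

class SavingsAccount:
    def __init__(self, balance):
        self.balance = balance

    def describe(self):
        # behaviour depends on the object's state
        if self.balance < 0:
            return "overdrawn savings account"
        return "savings account in credit"

class CurrentAccount:
    def __init__(self, balance):
        self.balance = balance

    def describe(self):
        return f"current account, balance {self.balance}"

for obj in (SavingsAccount(-50), CurrentAccount(200)):
    print(obj.describe())      # same message, different implementations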

UNIT 7 : Ethical and Security Issues in IT

IS Security : Security includes the plans, procedures and technical measures used to prevent unauthorised access, theft and physical damage to information systems. A number of tools and techniques are available to protect data, information, hardware and software. Earlier, under traditional methods, most tasks were paper based and there was less chance of insecurity creeping in. Nowadays, information systems are present everywhere in the organisation. People carry their laptops home, carrying vital information with them, so that information is vulnerable to security risks. Information is carried from, to and within the organisation. Threats come in the form of destruction, deletion, theft, corruption and bugs.

Security goals include :
Prevent unauthorised access to data
Prevent unintentional damage
Prevent intentional damage
Protect the integrity and confidentiality of data

A. Intentional threats - theft of data or equipment, deliberate manipulation and transfer of data.
B. Unintentional threats:
Environmental - earthquakes, floods, power failure
Human errors - program testing, design of hardware, data entry
Computer system failure - poor manufacturing, defective parts

There are also computer crimes, in which the computer can be used as a target (stolen or destroyed), as a medium (the environment for the crime, e.g. false data being entered), or as a tool (to plan a crime).

PROTECT IS

Protecting IS includes:
a. Correction : Correct the system to prevent future problems.
b. Detection : Specially designed diagnostic software is available to detect errors.
c. Damage control : When a malfunction occurs, control measures are applied to minimise losses, e.g. the use of fault tolerant systems.
d. Recovery : Quickly fix the damage.
e. Controls for prevention : Properly designed controls.

Fault tolerant systems have additional hardware, software and power supply which act as backup even if the system fails.

IS Controls : Controls are the methods, policies and procedures that ensure protection of organizational assets and the accuracy and reliability of data.

IS controls are of two types : General controls and Application Controls

(a) General Controls : Control the design and secure use of data files, programs and the complete IT infrastructure. General controls include :
(i) Data security controls - protect data from accidental or intentional disclosure to unauthorised persons, and from unauthorised modification or destruction. Secure all business files. Controls need to address confidentiality of data, access control, critical data and integrity.
(ii) Hardware control - physically secure the hardware and resources from human and natural hazards, for example with human guards or locked doors.
(iii) Software control - ensure the reliability and security of software; monitor the system software, computers and software programs.
(iv) Implementation control - audit the software development process, with formal review points at various stages. Examines user involvement and cost benefit. Focuses on quality assurance.
(v) Computer operation control - correct processes for storing and processing data, with backup and recovery.
(vi) Administrative control - formal rules, procedures and standards to ensure that the organisation's general and application controls are properly executed.
(vii) Biometric controls - automated methods of verifying the identity of a person based on physiological or behavioural characteristics. Common biometric techniques are:
- Hand geometry: a verifier uses a TV-like camera to take a picture of a person's hand; around 90 characteristics are compared with those stored in the computer.
- Retina scanning: the blood vessel pattern of the retina of the eye is compared with pre-stored images of authorised persons.
- Voice and facial recognition facilities are also available.
(viii) Access control - restrict unauthorized user access to portions of the system or the entire system. To gain access a user must prove their identity (authentication) and hold the necessary authorisation.
(ix) Firewall - a system or group of systems that enforces an access control policy between two networks.

(b) Application Controls
i. Input controls - prevent data alteration or loss; check accuracy, completeness and consistency (a minimal sketch follows this list).
ii. Process controls - check that all processes are complete, valid and accurate.
iii. Output controls - check that output is accurate, valid, consistent and complete.
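The input control mentioned under application controls can be as simple as a routine that checks completeness and consistency before data enters processing, as in the minimal sketch below; the field names and validation rules are hypothetical.

# Sketch of an application input control: reject incomplete or inconsistent records.
# Field names and validation rules are illustrative assumptions.

REQUIRED_FIELDS = ("invoice_no", "amount", "date")

def validate_record(record):
    """Return a list of problems; an empty list means the record passes the input control."""
    problems = []
    for field in REQUIRED_FIELDS:                    # completeness check
        if field not in record or record[field] in ("", None):
            problems.append(f"missing {field}")
    if isinstance(record.get("amount"), (int, float)):
        if record["amount"] <= 0:                    # consistency / reasonableness check
            problems.append("amount must be positive")
    return problems

print(validate_record({"invoice_no": "A-101", "amount": -5, "date": "2024-01-31"}))
# ['amount must be positive']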

Protect a digital Firm : Use of the following :

(i) FTS (fault tolerant systems) : Provide continuous availability and elimination of recovery time.
(ii) Highly available computing : Helps to prevent a crash. For maximum performance we use tools and techniques such as:
a) Mirroring - keep a duplicate of all processes and transactions of the primary server; if the server fails, the backup server can take its place without interruption. It is expensive.
b) Load balancing - distribute access requests across several servers so that no single device is overwhelmed (see the sketch after this list).
c) Clustering - link two computers so that the second acts as a backup to the primary computer.
(iii) Firewall : A system or group of systems to prevent unauthorised users from accessing private networks. Placed between LANs and external networks such as WANs.
(iv) Intrusion detection systems : Monitoring tools placed at vulnerable points.
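Load balancing, item (ii) b) above, can be illustrated with the simplest possible strategy: hand each incoming request to the next server in turn. The server names below are placeholders, and a real load balancer would also track server health and load.

# Sketch of round-robin load balancing. Server names are placeholders.
from itertools import cycle

servers = cycle(["server-1", "server-2", "server-3"])   # rotate through the pool

def assign(request_id):
    """Give the request to the next server in the rotation."""
    return request_id, next(servers)

for r in range(1, 7):
    print(assign(r))    # requests 1..6 spread evenly across the three servers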

RISKS IN IS

Major risks involved are :
1. Organisational risks - The value of IT infrastructure to the performance of the enterprise depends upon a host of environmental factors. The availability of skills for implementing IS projects and exploiting the IT infrastructure is a major constraint on the success of IS. Organisations find resistance to the use of IT infrastructure within the organization even when the necessary skills are available or not difficult to develop. Such resistance is caused by fears that may arise from a communication gap regarding the implications of using IT infrastructure for a given application. Resistance to the use of IT infrastructure causes non-utilisation or under-utilisation of the infrastructure, resulting in failure to deliver the benefits of IT.
2. Infrastructure risks - Sometimes the architecture of the existing IT infrastructure and the strategies of on-going IS are not in tune with the proposed IS project. Some projects have a greater degree of dependence on the existing IT infrastructure; for these the degree of infrastructure risk is greater. If the proposed project fits into the overall plan of the existing IT infrastructure, the probability of success is higher.
3. Definitional risks - Objectives have to be defined properly and communicated to, and received by, the IS designers. Any ambiguity in the objectives and related details may cause projects not to deliver what was evaluated at the time the proposal was accepted.
4. Technical risks - Advancements in hardware, software and data organisation make new technologies very attractive in terms of stated return-to-cost ratios. There is a temptation to move to new technologies, but they are more risky than those that are established and commercially tried.

In addition there are risks in the form of:
a. False input data - A major threat to the infrastructure. These can be in the form of changes to keyed-in data, misinterpretation of input type, unauthorised addition, deletion or modification of data elements, and unreasonable or inconsistent data.
b. Misuse of infrastructure - Selling information or programs, listing of files or modification of data.
c. Unauthorised access.
d. Ineffective security measures - Poor definition of access permissions, inadequate or incomplete follow-up on security violations.
e. Operational lapses.
f. System development process - Wrong testing, inadequate controls.
g. Trojan horse - A program that inserts instructions into the software of the IS to perform unauthorised acts or functions.
There are also risks in the form of viruses and worms.

CYBERTERRORISM

Cyberterrorism is the leveraging of a target's computers and information, particularly via the Internet, to cause physical, real-world harm or severe disruption of infrastructure. Cyberterrorism is defined as "the premeditated use of disruptive activities, or the threat thereof, against computers and/or networks, with the intention to cause harm or further social, ideological, religious, political or similar objectives, or to intimidate any person in furtherance of such objectives." Some say that cyberterrorism does not exist and that it is really a matter of hacking or information warfare. They disagree with labelling it terrorism because of the unlikelihood of the creation of fear, significant physical harm, or death in a population using electronic means, considering current attack and protective technologies.

The National Conference of State Legislatures (NCSL), a bipartisan organization of legislators and their staff created to help policymakers of all 50 states address vital issues such as those affecting the economy or homeland security by providing them with a forum for exchanging ideas, sharing research and obtaining technical assistance, defines cyberterrorism as follows: the use of information technology by terrorist groups and individuals to further their agenda. This can include use of information technology to organize and execute attacks against networks, computer systems and telecommunications infrastructures, or for exchanging information or making threats electronically. Examples are hacking into computer systems, introducing viruses to vulnerable networks, web site defacing, denial-of-service attacks, or terroristic threats made via electronic communication. [2]

BACKGROUND INFORMATION

Public interest in cyberterrorism began in the late 1980s. As the year 2000 approached, the fear and uncertainty about the millennium bug heightened and interest in potential cyberterrorist attacks also increased. However, although the millennium bug was by no means a terrorist attack or plot against the world or the United States, it did act as a catalyst in sparking fears of a possibly large-scale devastating cyber-attack. Commentators noted that many of the facts of such incidents seemed to change, often with exaggerated media reports.

The high-profile terrorist attacks in the United States on September 11, 2001 led to further media coverage of the potential threats of cyberterrorism in the following years. Mainstream media coverage often discusses the possibility of a large attack making use of computer networks to sabotage critical infrastructures with the aim of putting human lives in jeopardy or causing disruption on a national scale, either directly or by disruption of the national economy. Authors such as Winn Schwartau and John Arquilla are reported to have had considerable financial success selling books which described what were purported to be plausible scenarios of mayhem caused by cyberterrorism. Many critics claim that these books were unrealistic in their assessments of whether the attacks described (such as nuclear meltdowns and chemical plant explosions) were possible. A common thread throughout what critics perceive as cyberterror hype is that of non-falsifiability; that is, when the predicted disasters fail to occur, it only goes to show how lucky we have been so far, rather than impugning the theory.

EFFECTS

Cyberterrorism can have a serious large-scale influence on significant numbers of people. It can greatly weaken a country's economy, thereby stripping it of its resources and making it more vulnerable to military attack. Cyberterror can also affect internet-based businesses. Like brick-and-mortar retailers and service providers, most websites that produce income (whether by advertising, monetary exchange for goods, or paid services) could stand to lose money in the event of downtime created by cyber criminals. As internet businesses become of increasing economic importance to countries, what is normally cybercrime becomes more political and therefore "terror" related.

EXAMPLES

One example of cyberterrorists at work was when terrorists in Romania illegally gained access to the computers controlling the life support systems at an Antarctic research station, endangering the 58 scientists involved. However, the culprits were stopped before damage actually occurred. Mostly non-political acts of sabotage have caused financial and other damage, as in a case where a disgruntled employee caused the release of untreated sewage into water in Maroochy Shire, Australia. Computer viruses have degraded or shut down some non-essential systems in nuclear power plants, but this is not believed to have been a deliberate attack.

More recently, in May 2007 Estonia was subjected to a mass cyber-attack in the wake of the removal of a Russian World War II war memorial from downtown Tallinn. The attack was a distributed denial-of-service attack in which selected sites were bombarded with traffic in order to force them offline; nearly all Estonian government ministry networks as well as two major Estonian bank networks were knocked offline. In addition, the political party website of Estonia's then Prime Minister Andrus Ansip featured a counterfeit letter of apology from Ansip for removing the memorial statue. Despite speculation that the attack had been coordinated by the Russian government, Estonia's defense minister admitted he had no evidence linking the cyber attacks to Russian authorities. Russia called accusations of its involvement "unfounded," and neither NATO nor European Commission experts were able to find any proof of official Russian government participation.

In January 2008 a man from Estonia was convicted for launching the attacks against the Estonian Reform Party website, and fined. Even more recently, in October 2007, the website of Ukrainian president Viktor Yushchenko was attacked by hackers. A radical Russian nationalist youth group, the Eurasian Youth Movement, claimed responsibility.

Since the world of computers is ever-growing and still largely unexplored, countries new to the cyber-world produce young computer scientists usually interested in "having fun". Countries like China, Greece, India, Israel, and South Korea have all been in the spotlight of the U.S. media for attacks on information systems related to the CIA and NSA. Though these attacks are usually the result of curious young computer programmers, the United States has more than legitimate concerns about national security when such critical information systems fall under attack. In the past five years, the United States has taken a larger interest in protecting its critical information systems. It has issued contracts for high-level research in electronic security to nations such as Greece and Israel, to help protect against more serious and dangerous attacks.

In 1999 hackers attacked NATO computers. They flooded the computers with email and hit them with a denial-of-service (DoS) attack. The hackers were protesting against the NATO bombings in Kosovo. Businesses, public organizations and academic institutions were bombarded with highly politicized emails containing viruses from other European countries.

ENCRYPTION AND DECRYPTION

Encryption is the conversion of data into a form, called a ciphertext, that cannot be easily understood by unauthorized people. Decryption is the process of converting encrypted data back into its original form, so it can be understood. The use of encryption/decryption is as old as the art of communication. In wartime, a cipher, often incorrectly called a code, can be employed to keep the enemy from obtaining the contents of transmissions. (Technically, a code is a means of representing a signal without the intent of keeping it secret; examples are Morse code and ASCII.) Simple ciphers include the substitution of letters for numbers, the rotation of letters in the alphabet, and the "scrambling" of voice signals by inverting the sideband frequencies. More complex ciphers work according to sophisticated computer algorithms that rearrange the data bits in digital signals.

In order to easily recover the contents of an encrypted signal, the correct decryption key is required. The key is an algorithm that undoes the work of the encryption algorithm. Alternatively, a computer can be used in an attempt to break the cipher. The more complex the encryption algorithm, the more difficult it becomes to eavesdrop on the communications without access to the key.

Encryption/decryption is especially important in wireless communications, because wireless circuits are easier to tap than their hard-wired counterparts. Nevertheless, encryption/decryption is a good idea when carrying out any kind of sensitive transaction, such as a credit-card purchase online, or the discussion of a company secret between different departments in the organization. The stronger the cipher -- that is, the harder it is for unauthorized people to break it -- the better, in general. However, as the strength of encryption/decryption increases, so does the cost. In recent years, a controversy has arisen over so-called strong encryption.
This refers to ciphers that are essentially unbreakable without the decryption keys. While most companies and their customers view strong encryption as a means of keeping secrets and minimizing fraud, some governments view it as a potential vehicle by which terrorists might evade authorities. These governments, including that of the United States, want to set up a key-escrow arrangement. This means that everyone who uses a cipher would be required to provide the government with a copy of the key. Decryption keys would be stored in a supposedly secure place, used only by authorities, and used only if backed up by a court order. Opponents of this scheme argue that criminals could hack into the key-escrow database and illegally obtain, steal, or alter the keys. Supporters claim that while this is a possibility, implementing the key-escrow scheme would be better than doing nothing to prevent criminals from freely using encryption/decryption.

Why Have Cryptography?

Encryption is the science of changing data so that it is unrecognisable and useless to an unauthorised person. Decryption is changing it back to its original form. The most secure techniques use a mathematical algorithm and a variable value known as a 'key'. The selected key (often any random character string) is input on encryption and is integral to the changing of the data. The exact same key must be input to enable decryption of the data. This is the basis of the protection: if the key (sometimes called a password) is known only by the authorized individual(s), the data cannot be exposed to other parties. Only those who know the key can decrypt it. This is known as 'private key' cryptography, which is the most well known form.
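To make the idea of private key cryptography concrete, the following is a minimal sketch in Python. It assumes the third-party "cryptography" package (an assumption for illustration only; the text does not prescribe any particular library), whose Fernet class implements symmetric, key-based encryption.

    # A minimal sketch of private-key (symmetric) encryption and decryption.
    # Assumes the third-party "cryptography" package: pip install cryptography
    from cryptography.fernet import Fernet, InvalidToken

    key = Fernet.generate_key()              # the shared secret; keep it private
    cipher = Fernet(key)

    plaintext = b"Quarterly sales figures - confidential"
    ciphertext = cipher.encrypt(plaintext)   # unreadable without the key
    assert cipher.decrypt(ciphertext) == plaintext   # the exact same key recovers the data

    # Decrypting with a different key fails, which is the basis of the protection.
    try:
        Fernet(Fernet.generate_key()).decrypt(ciphertext)
    except InvalidToken:
        print("Decryption failed: wrong key")

Fernet also authenticates the ciphertext, so a message that has been tampered with is rejected at decryption time; this anticipates the message authentication codes discussed in the next section.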

OTHER USES OF CRYPTOGRAPHY

Many techniques also provide for detection of any tampering with the encrypted data. A 'message authentication code' (MAC) is created, which is checked when the data is decrypted. If the code fails to match, the data has been altered since it was encrypted. This facility has many practical applications.

CYBERTERRORISM

Cyberterrorism is a phrase used to describe the use of Internet-based attacks in terrorist activities, including acts of deliberate, large-scale disruption of computer networks, especially of personal computers attached to the Internet, by means of tools such as computer viruses.

Cyberterrorism is a controversial term. Some authors choose a very narrow definition, relating to deployments, by known terrorist organizations, of disruption attacks against information systems for the primary purpose of creating alarm and panic. By this narrow definition, it is difficult to identify any instances of cyberterrorism. Cyberterrorism can also be defined much more generally as any computer crime targeting computer networks without necessarily affecting real-world infrastructure, property, or lives. There is much concern from government and media sources about potential damage that could be caused by cyberterrorism, and this has prompted official responses from government agencies.

Cyberterrorism is defined by the Technolytics Institute as "The premeditated use of disruptive activities, or the threat thereof, against computers and/or networks, with the intention to cause harm or further social, ideological, religious, political or similar objectives. Or to intimidate any person in furtherance of such objectives."[2] The term was coined by Barry C. Collin.[3]

The National Conference of State Legislatures, an organization of legislators created to help policymakers with issues such as the economy and homeland security, defines cyberterrorism as: "[T]he use of information technology by terrorist groups and individuals to further their agenda. This can include use of information technology to organize and execute attacks against networks, computer systems and telecommunications infrastructures, or for exchanging information or making threats electronically. Examples are hacking into computer systems, introducing viruses to vulnerable networks, web site defacing, denial-of-service attacks, or terroristic threats made via electronic communication."[4] For the use of the Internet by terrorist groups for organization, see Internet and terrorism.

Cyberterrorism can also include attacks on Internet businesses, but when this is done for economic rather than ideological motivations, it is typically regarded as cybercrime. As shown above, there are multiple definitions of cyberterrorism and most are overly broad. There is controversy concerning overuse of the term and hyperbole in the media and by security vendors trying to sell "solutions".

CONCERNS

As the Internet becomes more pervasive in all areas of human endeavor, individuals or groups can use the anonymity afforded by cyberspace to threaten citizens, specific groups (i.e. with membership based on ethnicity or belief), communities and entire countries, without the inherent threat of capture, injury, or death to the attacker that being physically present would bring. As the Internet continues to expand, and as computer systems are assigned more responsibility while becoming more complex and interdependent, sabotage or terrorism via cyberspace may become a more serious threat, and it has even been listed as possibly one of the top 10 events that could "end the human race".

History

Public interest in cyberterrorism began in the late 1980s. As 2000 approached, fear and uncertainty about the millennium bug heightened, and interest in potential cyberterrorist attacks also increased. However, although the millennium bug was by no means a terrorist attack or plot against the world or the United States, it did act as a catalyst in sparking fears of a possibly large-scale devastating cyber-attack. Commentators noted that many of the facts of such incidents seemed to change, often with exaggerated media reports. The high-profile terrorist attacks in the United States on September 11, 2001 and the ensuing War on Terror led to further media coverage of the potential threats of cyberterrorism in the years that followed. Mainstream media coverage often discusses the possibility of a large attack making use of computer networks to sabotage critical infrastructures with the aim of putting human lives in jeopardy or causing disruption on a national scale, either directly or by disrupting the national economy. Authors such as Winn Schwartau and John Arquilla are reported to have had considerable financial success selling books which described what were purported to be plausible scenarios of mayhem caused by cyberterrorism. Many critics claim that these books were unrealistic in their assessments of whether the attacks described (such as nuclear meltdowns and chemical plant explosions) were possible. A common thread throughout what critics perceive as cyberterror hype is non-falsifiability: when the predicted disasters fail to occur, this is taken only to show how lucky we have been so far, rather than as evidence against the theory.

US MILITARY RESPONSE

The US Department of Defense (DoD) charged the United States Strategic Command with the duty of combating cyberterrorism. This is accomplished through the Joint Task Force-Global Network Operations (JTF-GNO), which is the operational component supporting USSTRATCOM in defense of the DoD's Global Information Grid. This is done by integrating GNO capabilities into the operations of all DoD computers, networks, and systems used by DoD combatant commands, services and agencies.

On November 2, 2006, the Secretary of the Air Force announced the creation of the Air Force's newest MAJCOM, the Air Force Cyber Command, which would be tasked to monitor and defend American interests in cyberspace. However, the plan was replaced by the creation of Twenty-Fourth Air Force, which became active in August 2009 and would be a component of the planned United States Cyber Command. On December 22, 2009, the White House named Howard Schmidt as its head of Cyber Security; he will coordinate U.S. Government, military and intelligence efforts to repel hackers.

EXAMPLES

Sabotage

Mostly non-political acts of sabotage have caused financial and other damage, as in a case where a disgruntled employee caused the release of untreated sewage into waterways in Maroochy Shire, Australia.

More recently, in May 2007 Estonia was subjected to a mass cyber-attack in the wake of the removal of a Russian World War II war memorial from downtown Tallinn. The attack was a distributed denial-of-service attack in which selected sites were bombarded with traffic to force them offline; nearly all Estonian government ministry networks as well as two major Estonian bank networks were knocked offline. In addition, the political party website of Estonia's Prime Minister, Andrus Ansip, featured a counterfeit letter of apology from Ansip for removing the memorial statue. Despite speculation that the attack had been coordinated by the Russian government, Estonia's defense minister admitted he had no conclusive evidence linking the cyber attacks to Russian authorities. Russia called accusations of its involvement "unfounded," and neither NATO nor European Commission experts were able to find any conclusive proof of official Russian government participation.[8] In January 2008 a man from Estonia was convicted for launching the attacks against the Estonian Reform Party website and was fined.

Website defacement and denial of service

The website of Air Botswana, defaced by a group calling itself the "Pakistan Cyber Army".

Even more recently, in October 2007, the website of Ukrainian president Viktor Yushchenko was attacked by hackers. A radical Russian nationalist youth group, the Eurasian Youth Movement, claimed responsibility.

In 1999 hackers attacked NATO computers, flooding them with email and hitting them with denial-of-service (DoS) attacks. The hackers were protesting against the NATO bombings in Kosovo. Businesses, public organizations and academic institutions were bombarded with highly politicized emails containing viruses from other European countries.

Other

Since the world of computers is ever-growing and still largely unexplored, countries with young Internet cultures produce young computer scientists who are usually interested in "having fun". Countries like China, Pakistan, Greece, India, Israel, and South Korea have all been in the spotlight of the U.S. media for attacks on information systems related to the CIA and NSA. Though these attacks are usually the result of curious young computer programmers, the United States has concerns about national security when such critical information systems fall under attack. In the past five years, the United States has taken a larger interest in protecting its critical information systems. It has issued contracts for high-level research in electronic security to nations such as Greece and Israel, to help protect against more serious and dangerous attacks.

FIREWALL

An illustration of how a firewall works.

A firewall is a part of a computer system or network that is designed to block unauthorized access while permitting authorized communications. It is a device or set of devices configured to permit, deny, encrypt, decrypt, or proxy all (in and out) computer traffic between different security domains based upon a set of rules and other criteria. Firewalls can be implemented in either hardware or software, or a combination of both. Firewalls are frequently used to prevent unauthorized Internet users from accessing private networks connected to the Internet, especially intranets. All messages entering or leaving the intranet pass through the firewall, which examines each message and blocks those that do not meet the specified security criteria. There are several types of firewall techniques:

1. Packet filter: Packet filtering inspects each packet passing through the network and accepts or rejects it based on user-defined rules. Although difficult to configure, it is fairly effective and mostly transparent to its users. However, it is susceptible to IP spoofing.

2. Application gateway: Applies security mechanisms to specific applications, such as FTP and Telnet servers. This is very effective, but can impose a performance degradation.

3. Circuit-level gateway: Applies security mechanisms when a TCP or UDP connection is established. Once the connection has been made, packets can flow between the hosts without further checking.

4. Proxy server: Intercepts all messages entering and leaving the network.

A firewall is a dedicated appliance, or software running on a computer, which inspects network traffic passing through it and denies or permits passage based on a set of rules. It is software or hardware that is normally placed between a protected network and an unprotected network and acts like a gate, ensuring that nothing private goes out and nothing malicious comes in. A firewall's basic task is to regulate some of the flow of traffic between computer networks of different trust levels. Typical examples are the Internet, which is a zone with no trust, and an internal network, which is a zone of higher trust. A zone with an intermediate trust level, situated between the Internet and a trusted internal network, is often referred to as a "perimeter network" or demilitarized zone (DMZ). A firewall's function within a network is similar to that of physical firewalls with fire doors in building construction. In the former case, it is used to prevent network intrusion to the private network. In the latter case, it is intended to contain and delay a structural fire, preventing it from spreading to adjacent structures.

Without proper configuration, a firewall can often become worthless. Standard security practices dictate a "default-deny" firewall ruleset, in which the only network connections allowed are the ones that have been explicitly permitted. Unfortunately, such a configuration requires a detailed understanding of the network applications and endpoints required for the organization's day-to-day operation. Many businesses lack such understanding, and therefore implement a "default-allow" ruleset, in which all traffic is allowed unless it has been specifically blocked. This configuration makes inadvertent network connections and system compromise much more likely.

HISTORY

The term "firewall" originally meant a wall to confine a fire or potential fire within a building (cf. firewall in construction). Later uses refer to similar structures, such as the metal sheet separating the engine compartment of a vehicle or aircraft from the passenger compartment. Firewall technology emerged in the late 1980s, when the Internet was a fairly new technology in terms of its global use and connectivity. The predecessors to firewalls for network security were the routers used in the late 1980s to separate networks from one another. The view of the Internet as a relatively small community of compatible users who valued openness for sharing and collaboration was ended by a number of major Internet security breaches which occurred in the late 1980s:

Clifford Stoll's discovery of German spies tampering with his system

Bill Cheswick's 1992 "Evening with Berferd", in which he set up a simple electronic jail to observe an attacker

The Morris Worm, which spread itself through multiple vulnerabilities in the machines of the time. Although it was not malicious in intent, the Morris Worm was the first large-scale attack on Internet security; the online community was neither expecting an attack nor prepared to deal with one.

First generation - Packet filters

The first paper published on firewall technology appeared in 1988, when engineers from Digital Equipment Corporation (DEC) developed filter systems known as packet filter firewalls. This fairly basic system was the first generation of what became a highly evolved and technical Internet security feature. At AT&T Bell Labs, Bill Cheswick and Steve Bellovin were continuing their research in packet filtering and developed a working model for their own company based upon their original first-generation architecture.

Packet filters act by inspecting the "packets" which represent the basic unit of data transfer between computers on the Internet. If a packet matches the packet filter's set of rules, the packet filter will drop (silently discard) the packet, or reject it (discard it, and send "error responses" to the source). This type of packet filtering pays no attention to whether a packet is part of an existing stream of traffic (it stores no information on connection "state"). Instead, it filters each packet based only on information contained in the packet itself (most commonly using a combination of the packet's source and destination address, its protocol, and, for TCP and UDP traffic, the port number). Because TCP and UDP protocols comprise most communication over the Internet, and because TCP and UDP traffic by convention uses well-known ports for particular types of traffic, a "stateless" packet filter can distinguish between, and thus control, those types of traffic (such as web browsing, remote printing, email transmission, file transfer), unless the machines on each side of the packet filter are both using the same non-standard ports.

Second generation - Application layer

The key benefit of application-layer filtering is that it can "understand" certain applications and protocols (such as File Transfer Protocol, DNS, or web browsing), and it can detect whether an unwanted protocol is being sneaked through on a non-standard port or whether a protocol is being abused in any harmful way.

Third generation - "stateful" filters

From 1989 to 1990, three colleagues from AT&T Bell Laboratories -- Dave Presetto, Janardan Sharma, and Kshitij Nigam -- developed the third generation of firewalls, calling them circuit-level firewalls. Third-generation firewalls additionally consider the placement of each individual packet within the packet series. This technology is generally referred to as stateful packet inspection, as it maintains records of all connections passing through the firewall and is able to determine whether a packet is the start of a new connection, a part of an existing connection, or an invalid packet. Though there is still a set of static rules in such a firewall, the state of a connection can in itself be one of the criteria which trigger specific rules. This type of firewall can help prevent attacks which exploit existing connections, as well as certain denial-of-service attacks.

Subsequent developments

In 1992, Bob Braden and Annette DeSchon at the University of Southern California (USC) were refining the concept of a firewall. The product known as "Visas" was the first system to have a visual integration interface with colours and icons, which could be easily implemented on and accessed from a computer operating system such as Microsoft's Windows or Apple's MacOS. In 1994 an Israeli company called Check Point Software Technologies built this into readily available software known as FireWall-1. The existing deep packet inspection functionality of modern firewalls can be shared by intrusion-prevention systems (IPS). Currently, the Middlebox Communication Working Group of the Internet Engineering Task Force (IETF) is working on standardizing protocols for managing firewalls and other middleboxes. Another axis of development concerns integrating the identity of users into firewall rules. Many firewalls provide such features by binding user identities to IP or MAC addresses, which is very approximate and can easily be circumvented. The NuFW firewall provides real identity-based firewalling, by requesting the user's signature for each connection.

TYPES

There are several classifications of firewalls depending on where the communication is taking place, where the communication is intercepted, and the state that is being traced.

Network layer and packet filters

Network layer firewalls, also called packet filters, operate at a relatively low level of the TCP/IP protocol stack, not allowing packets to pass through the firewall unless they match the established rule set. The firewall administrator may define the rules, or default rules may apply. The term "packet filter" originated in the context of BSD operating systems. Network layer firewalls generally fall into two sub-categories, stateful and stateless.

Stateful firewalls maintain context about active sessions, and use that "state information" to speed packet processing. Any existing network connection can be described by several properties, including source and destination IP address, UDP or TCP ports, and the current stage of the connection's lifetime (including session initiation, handshaking, data transfer, or connection completion). If a packet does not match an existing connection, it will be evaluated according to the ruleset for new connections. If a packet matches an existing connection based on comparison with the firewall's state table, it will be allowed to pass without further processing. Stateless firewalls require less memory, and can be faster for simple filters that require less time to filter than to look up a session. They may also be necessary for filtering stateless network protocols that have no concept of a session. However, they cannot make more complex decisions based on what stage communications between hosts have reached. Modern firewalls can filter traffic based on many packet attributes, such as source IP address, source port, destination IP address or port, and destination service such as WWW or FTP. They can filter based on protocols, TTL values, netblock of the originator, and many other attributes.

Application-layer

Application-layer firewalls work on the application level of the TCP/IP stack (i.e., all browser traffic, or all telnet or ftp traffic), and may intercept all packets traveling to or from an application. They block other packets (usually dropping them without acknowledgment to the sender). In principle, application firewalls can prevent all unwanted outside traffic from reaching protected machines. By inspecting all packets for improper content, firewalls can restrict or even prevent outright the spread of networked computer worms and trojans. The additional inspection criteria can add extra latency to the forwarding of packets to their destination.

Proxies

A proxy device (running either on dedicated hardware or as software on a general-purpose machine) may act as a firewall by responding to input packets (connection requests, for example) in the manner of an application, whilst blocking other packets. Proxies make tampering with an internal system from the external network more difficult, and misuse of one internal system would not necessarily cause a security breach exploitable from outside the firewall (as long as the application proxy remains intact and properly configured). Conversely, intruders may hijack a publicly reachable system and use it as a proxy for their own purposes; the proxy then masquerades as that system to other internal machines. While use of internal address spaces enhances security, crackers may still employ methods such as IP spoofing to attempt to pass packets to a target network.
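To make the filtering ideas above concrete, the following Python sketch combines a default-deny, stateless rule set matched on addresses, protocol and port with a very small connection "state table" of the kind stateful inspection maintains. It is purely illustrative and not modeled on any real firewall's API; the Packet and Rule structures, field names, and example rule values are assumptions made for the example.

    # Illustrative sketch only: a default-deny stateless rule set plus a tiny state table.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass(frozen=True)
    class Packet:
        src_ip: str
        dst_ip: str
        protocol: str          # "tcp" or "udp"
        dst_port: int
        src_port: int

    @dataclass(frozen=True)
    class Rule:
        action: str                     # "allow" or "deny"
        protocol: Optional[str] = None  # None means "any"
        dst_port: Optional[int] = None
        src_ip_prefix: Optional[str] = None

    RULES = [
        Rule("allow", protocol="tcp", dst_port=80),               # web browsing
        Rule("allow", protocol="tcp", dst_port=25,
             src_ip_prefix="192.168."),                           # mail from the LAN only
        # anything unmatched falls through to the default deny below
    ]

    state_table = set()    # established connections, keyed by a 5-tuple

    def permitted(pkt: Packet) -> bool:
        # Stateful shortcut: packets on an already-established connection pass.
        key = (pkt.src_ip, pkt.src_port, pkt.dst_ip, pkt.dst_port, pkt.protocol)
        if key in state_table:
            return True
        # Stateless matching against the ordered rule set.
        for rule in RULES:
            if rule.protocol not in (None, pkt.protocol):
                continue
            if rule.dst_port not in (None, pkt.dst_port):
                continue
            if rule.src_ip_prefix and not pkt.src_ip.startswith(rule.src_ip_prefix):
                continue
            if rule.action == "allow":
                state_table.add(key)    # remember the new connection
                return True
            return False
        return False    # default-deny: nothing matched, so drop the packet

    # Example: an outbound web request is allowed; an outbound telnet attempt is not.
    print(permitted(Packet("192.168.1.10", "203.0.113.5", "tcp", 80, 40001)))   # True
    print(permitted(Packet("192.168.1.10", "203.0.113.5", "tcp", 23, 40002)))   # False

A production firewall would, among other things, track both directions of a connection, expire idle state entries, and order its rules carefully; the sketch only shows how an ordered rule list with an implicit final deny interacts with a state-table shortcut.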
Network address translation

Firewalls often have network address translation (NAT) functionality, and the hosts protected behind a firewall commonly have addresses in the "private address range". Firewalls use this functionality to hide the true addresses of protected hosts. Originally, the NAT function was developed to address the limited number of IPv4 routable addresses that could be used or assigned to companies or individuals, as well as to reduce the amount, and therefore the cost, of obtaining enough public addresses for every computer in an organization. Hiding the addresses of protected devices has become an increasingly important defense against network reconnaissance.
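The translation just described can be pictured as a small table kept by the firewall. The sketch below is illustrative only; the addresses, port pool, and function names are invented for the example rather than drawn from any real NAT implementation.

    # Illustrative sketch of NAT: outbound traffic from private addresses is rewritten
    # to a single public address, and the translation table maps replies back.
    import itertools
    from typing import Optional, Tuple

    PUBLIC_IP = "203.0.113.1"
    _port_pool = itertools.count(50000)   # public-side ports handed out in order
    nat_table = {}                        # public port -> (private ip, private port)

    def translate_outbound(private_ip: str, private_port: int) -> Tuple[str, int]:
        """Rewrite the source of an outgoing packet to the firewall's public address."""
        public_port = next(_port_pool)
        nat_table[public_port] = (private_ip, private_port)
        return PUBLIC_IP, public_port

    def translate_inbound(public_port: int) -> Optional[Tuple[str, int]]:
        """Map a reply arriving at the public address back to the hidden private host."""
        return nat_table.get(public_port)

    # A host in the private range appears externally as the firewall's public address.
    mapped = translate_outbound("192.168.1.23", 40311)
    print(mapped)                          # ('203.0.113.1', 50000)
    print(translate_inbound(mapped[1]))    # ('192.168.1.23', 40311)

Because only the firewall's public address is visible externally, the true addresses of protected hosts stay hidden, which is the reconnaissance defense mentioned above.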
