
ANSWER BANK INTRODUCTION TO COMPUTERS

QUESTIONS
1. Write short note on Generation of Computers.
2. Write Short Notes on RAM, ROM, Cache Memory.
3. Describe a standard fully featured desktop configuration. Mention a single line description against each item in the configuration.
4. Enumerate the key purposes and name one example of an operating system?
5. Distinguish Micro computers, Mini computers, Mainframe and Super computer.
6. Mention the key differences between 3rd generation language v/s 4th generation language.
7. Write short notes on the various options available for a company to establish network / connectivity across its offices / branches?
8. Networks can be classified based on various criteria such as 1. Geographical spread / distance 2. Type of switching 3. Topologies 4. Medium of data communication. Give two examples of the types of networks for each of the classifications mentioned above?
9. Explain Interface.
10. Explain Graphical User Interface (GUI).
11. Differentiate between Graphical User Interface and Character User Interface.
12. Differentiate between Main Memory and Secondary Memory.
13. Differentiate between Compilers and Interpreters.
14. What is a database? What are the advantages of database management system over conventional file management system?
15. How is DBMS different from Conventional File System?
16. Enumerate the key purpose of a modem.
17. Explain the history of development of different generations of languages.
18. Office Automation?
19. Write Short Notes on Fibre Optics?
20. What are benefits of networking your business?
21. What is a router?
22. Why is a router required if there are several computers?
23. How is email, intranet and internet beneficial to business organisations?
24. Explain various file systems.
25. Explain Assemblers, Compilers and Interpreters.
26. List the advantages and disadvantages of Distributed Processing.
27. Describe in brief each stage of the System Development Life Cycle with emphasis on the deliverables / output of each stage?


28. Mrs. Shenaz says office automation packages like spreadsheets, word processors and presentation packages have helped in increasing the productivity of the secretaries and line function managers. Comment on her statement giving relevant examples.
29. Word Processing Packages (Short Notes)
30. Write short note on Centralised Data Processing.
31. Compare Online and batch processing.
32. Differentiate between System Software and Application Software.
33. Single User and Multi-user systems.


Write short note on Generation of Computers [2000] [5mks]

ANS. Computer generations basically refer to the development of computer hardware and operating systems. With advances in the field of electronics, computers developed alongside, becoming smaller, faster and more capable every year. Operating systems and computer architecture have a great deal of influence on each other. Operating systems were developed to facilitate the use of the hardware, but as operating systems were designed and used, it became obvious that changes in the design of the hardware could simplify the operating system. Similarly, operating systems had to be continuously upgraded to keep pace with hardware development initiated by advances in electronics. Though the development was a seamless, continuous process, there were certain landmarks which signified paradigm shifts in the architecture and capabilities of computers. These landmarks marked the beginning of new generations of computers. Computer development can be broadly divided into five generations:
a. Zeroth Generation - from the mid-1800s to 1950
b. First Generation - 1951-1956
c. Second Generation - 1956-1964
d. Third Generation - 1964-1979
e. Fourth Generation - 1979 to present

The Zeroth Generation
The term zeroth generation refers to the period of development of computing which predated the commercial production and sale of computer equipment, roughly from the mid-1800s to 1951. The hardware component technology of the later part of this period was the electronic vacuum tube. These early computers operated without the benefit of an operating system. Early programs were written in machine language. This approach was clearly inefficient and depended on the varying competencies of individual programmers acting as operators.

The First Generation, 1951-1956
The first generation marked the beginning of commercial computing. It was characterized by the high-speed vacuum tube as the active component technology. As a result, computers were very large and occupied a whole room. They were faster than zeroth-generation computers but far slower than even the palmtops of today. Operation continued without the benefit of an operating system for some time. Programs began to be written in higher-level, procedure-oriented languages. However, there was no provision for moving a program to a different location in storage for any reason, and a program bound to specific devices could not be run at all if any of those devices were busy or broken. Data was stored on spools of tape, with physical punching used to mark the various bytes.

At the same time, the development of programming languages was moving away from basic machine language: first to assembly language, and later to procedure-oriented languages, the most significant being the development of FORTRAN for advanced mathematical computing.

The Second Generation, 1956-1964
The second generation of computer hardware was most notably characterized by transistors replacing vacuum tubes as the hardware component technology. The replacement of bulky vacuum tubes by small transistors almost miniaturized the computer. In addition, operating systems were installed for controlling the hardware. The second most significant innovation addressed the speed mismatch between the CPU, built from fast electronic components, and I/O devices such as card readers and tape drives, which involved significantly slower mechanical parts; random-access disks began to be used instead of sequential-access tape spools. System libraries (pre-written short modules for routine tasks) became more widely available, making programming simpler and faster.

The Third Generation, 1964-1979
The third generation officially began in April 1964 with IBM's announcement of its System/360 family of computers. The landmark technological shift marking this new generation was the use of integrated circuits (ICs), which yielded significant advantages in both speed and economy. Operating system development continued with the introduction and widespread adoption of multiprogramming. In addition, memory management became more sophisticated in that the program code, or at least the part of the code being executed, was resident in main storage (RAM). The third generation was an exciting time indeed for the development of both computer hardware and the accompanying operating systems. During this period, the topic of operating systems became, in reality, a major element of the discipline of computing.
The Fourth Generation, 1979 - Present
The landmark events marking the birth of the fourth generation of computers were the launch of the PC (Personal Computer) and the workstation. Miniaturization of electronic circuits and components continued, and large-scale integration (LSI), the component technology of the third generation, was replaced by very large-scale integration (VLSI), which characterizes the fourth generation. Improvements in hardware miniaturization and technology have evolved so fast that we now have inexpensive workstation-class computers capable of supporting multiprogramming and time-sharing. Another development of this generation is networking. Many desktop computers are now connected as networked or distributed systems. Each computer in a networked system has its operating system augmented with communication capabilities that enable users to log in remotely to any system on the network and transfer information among the connected machines. The machines that make up a distributed system operate, from the user's point of view, as a single virtual processor system.

Write Short Notes on RAM, ROM, Cache Memory.[1999] [7mks]

ANS. RAM is an abbreviation for Random Access Memory. As the name suggests, any part of this memory can be accessed directly. This kind of memory is located on one or more microchips physically close to the microprocessor in the computer. It stores the parts of the program and data currently in use, so that time is not wasted in accessing the hard disk or other storage media again and again. The higher the RAM capacity, the less frequently the computer has to fetch instructions and data from hard-disk storage. Most desktop and notebook computers sold today include at least 64 MB of RAM (which is really the minimum needed to be able to install an operating system). RAM is upgradeable, so more memory can be added, if needed, to speed up processing. RAM is temporary memory and is erased when the computer is turned off.

Memory should be distinguished from storage, the physical medium that holds the much larger amounts of data that won't fit into RAM and may not be immediately needed there. Storage devices include hard disks, floppy disks, CD-ROMs and tape backup systems. The terms auxiliary storage, auxiliary memory and secondary memory have also been used for this kind of data repository.

Additional kinds of integrated and quickly accessible memory are Read Only Memory (ROM), programmable ROM (PROM) and erasable programmable ROM (EPROM). These are used to keep special programs and data, such as the BIOS, that need to be in the computer all the time. ROM is "built-in" computer memory containing data that normally can only be read, not written to (hence the name Read Only). ROM contains the programming that allows the computer to be "booted up" each time it is turned on. Unlike the data in a computer's Random Access Memory (RAM), the data in ROM is not lost when the computer power is turned off. (The ROM chip itself is non-volatile; the related BIOS setup settings are held in CMOS memory sustained by a small long-life battery, the CMOS battery.)
If you ever run the hardware setup procedure on your computer, you are effectively writing to this battery-backed setup memory.

RAM
Memory that can be instantly changed is called read-write memory or Random Access Memory (RAM). The purpose of RAM is to hold programs and data while they are in use. A computer does not have to search its entire memory each time it needs to find data, because the CPU uses a memory address to store and retrieve each piece of data. A memory address is a number that indicates a location on the memory chips. This type of memory is referred to as random access memory because of its ability to access each byte of data directly.
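The idea that every piece of data lives at a numeric address can be glimpsed even from a high-level language. The sketch below relies on a CPython implementation detail - `id()` happens to return an object's memory address in that interpreter - and is offered purely as an illustration of addressing, not as something portable code should depend on:

```python
# In CPython, id() returns the object's address in memory.
# This is an implementation detail, used here only to illustrate
# that every stored item has a numeric location.
x = 42
y = "hello"

addr_x = id(x)
addr_y = id(y)

# Addresses are just numbers; printing them in hex mirrors how
# memory locations are conventionally written.
print(hex(addr_x))
print(hex(addr_y))

# Two distinct objects occupy distinct locations.
assert addr_x != addr_y
```

The actual numbers printed will differ from run to run; only the fact that each object has its own address is the point.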

There are two types of RAM: dynamic and static.
Dynamic RAM (DRAM) must be refreshed frequently - DRAM chips must be recharged many times each second, or they will lose their contents.
Static RAM (SRAM) does not need to be refreshed as often and can hold its contents longer than DRAM. SRAM is also considerably faster than DRAM.

Cache Memory
Moving data between RAM and CPU registers is one of the most time-consuming operations a CPU must perform, because RAM is much slower than the CPU. A partial solution to this problem is to include cache memory in the CPU. Cache memory is similar to RAM, except that it is extremely fast compared to normal memory. When a program is running and the CPU needs to read data or program instructions from RAM, it first checks whether the data is in cache memory. If the data is not there, the CPU reads it from RAM into its registers, but also loads a copy into cache memory. The next time the CPU needs that same data, it finds it in the cache and saves the time needed to load it from RAM. There are several levels of cache memory; most often, references to cache memory mean the secondary or L2 cache. Cache memory is used in many parts of the modern PC to enhance system performance by acting as a buffer for the most recently used information. The system cache sits between the CPU and the RAM and is responsible for a great deal of the system performance improvement of today's PCs: its presence allows the processor to do its work while waiting for memory far less often than it otherwise would.
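The check-the-cache-first logic described above can be mimicked in software. The sketch below is a hypothetical model, not how hardware cache is actually built: "RAM" is a deliberately slow lookup, and the cache is a small dictionary consulted before it:

```python
import time

RAM = {addr: addr * 2 for addr in range(100)}  # pretend backing store
cache = {}                                     # small, fast "cache memory"

def read(addr):
    """Return the data at addr, consulting the cache before 'slow' RAM."""
    if addr in cache:            # cache hit: no slow access needed
        return cache[addr]
    time.sleep(0.001)            # simulate the slower RAM access
    value = RAM[addr]
    cache[addr] = value          # keep a copy for next time
    return value

read(7)   # miss: fetched from "RAM", copied into the cache
read(7)   # hit: served from the cache, "RAM" not touched
```

Repeated reads of the same address skip the simulated delay, which is exactly the saving the real L2 cache provides on a much smaller timescale.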

Describe a standard fully featured desktop configuration. Mention a single line description against each item in the configuration. [2002] [10mks] OR Write Short Notes on storage devices. [2001] [5mks]

ANS. A standard fully featured desktop configuration basically has five types of devices:
1. Motherboard
2. Input Devices
3. Output Devices
4. Storage Devices
5. Memory

Motherboard
A motherboard, or main circuit board, is the physical arrangement in a computer that contains the computer's basic circuitry and components.

Input Devices
There are several ways to get new information, or input, into a computer. The two most common are the keyboard and the mouse. The keyboard has keys for characters (letters, numbers and punctuation marks) and for special commands; pressing the keys tells the computer what to do or what to write. The mouse may be two-button, three-button or track-roller type, mechanical or optical; it facilitates easy movement of the cursor on the video screen and selection. A touchpad (generally on laptops) lets you drag a finger across a pressure-sensitive pad to make the pointer on the screen imitate the movement, and press to click. A scanner copies a picture or document into the computer. Another input device is the graphics tablet: a pressure-sensitive pad copies the movement of a special pen on the pad onto the screen, and the tablet and pen can also be used like a mouse to move the cursor and click. The latest advances in technology also allow input through a microphone.

Output Devices
Output devices display information in a way that can be understood. The most common output device is the monitor. It looks a lot like a TV and houses the computer screen, allowing the user to 'see' what the user and the computer are doing together. Speakers are output devices that allow the sound from the computer to be heard. A printer is another common part of a computer system. There are various types of printers: wheel printers, drum printers, dot-matrix printers, inkjet printers and laser printers. Laser printers run much faster but are expensive to operate. Ports are the places on the computer case where various peripheral devices can be plugged in. The keyboard, mouse, monitor and printer all plug into ports, and there are extra ports for additional hardware such as joysticks, gamepads, scanners and digital cameras.

Storage Devices
Storage devices are meant to store programs and data for retrieval and use when required by the CPU. Computers use disks for storage: hard disks located inside the computer, and floppy disks, compact disks and pen drives used externally. Magnetic tapes are also used in some places but are slowly becoming obsolete because they are too slow; in addition, they are limited to sequential access. Tapes are thus better suited to storing files, such as video recordings, which are rarely accessed except in sequential fashion.

Hard Disks
A computer uses two types of memory: primary memory, which is part of the CPU, and secondary memory, which is stored outside the CPU. Primary memory holds all of the essential information that tells the computer how to be a computer; secondary memory holds the information that the user stores in the computer. Inside the hard disk drive case are rigid, magnetically coated circular platters arranged in a stack, one over the other. Each platter carries concentric tracks; a column of tracks (say, the 5th track on each platter in the stack) is called a cylinder, and tracks are further subdivided into sectors (this arrangement is for convenience of address location when retrieving data). Within the drive, electronic read/write devices called heads pass back and forth over the cylinders, reading information from the disk or writing information to it.
Hard drives spin at 3600 or more rpm (revolutions per minute). Today's hard drives can hold a great deal of information - 80 GB hard disks have become the norm for PCs!

Floppy Disks
Floppy disks are the smallest-capacity storage medium, holding only 1.44 MB of information. A floppy disk is a thin plastic disk coated with microscopic iron particles on one surface. It is encased in a 3.5-inch square semi-hard plastic jacket for ease of handling and storing. The disk spins at 300 rpm, and information is retrieved from or transferred to it by a head, in similar fashion to a hard disk.

Compact Disks

Instead of electromagnetism, CDs use pits (microscopic indentations) and lands (flat surfaces) to signify 0s and 1s, much the same way floppies and hard disks use magnetic and non-magnetic spots. Inside the CD-ROM drive is a device that emits a laser beam onto the CD and reads the beam reflected off the surface of the disk. The reflections off a pit and a land are different, and the pattern of reflected light creates a code that represents data. CDs usually store about 700 MB - quite a bit more than the 1.44 MB a floppy disk stores. A DVD, or Digital Video Disk, holds even more information than a CD, because a DVD can store information on two levels, in smaller pits, or sometimes on both sides.

Pen Drives
Pen drives are small (2.5-3 inches long) storage devices with no moving parts that can be kept in a pocket like a pen, from which they derive their name. They have redefined data portability: they can carry as much as a DVD, yet are very convenient to carry. The data can be stored and erased hundreds of times over, and no special device is required for reading or writing - they simply plug into an external port.

Memory
1. RAM and ROM (explained above)
2. PROM (Programmable Read Only Memory)

A variation of the ROM chip is programmable read only memory. A PROM can be programmed to record information using a device known as a PROM programmer. However, once the chip has been programmed, the recorded information cannot be changed, i.e. the PROM becomes a ROM and the information can only be read.
3. EPROM (Erasable Programmable Read Only Memory)

As the name suggests, the data on an Erasable Programmable Read Only Memory chip can be erased and the chip reprogrammed to record different information using a special PROM programmer. When an EPROM is in use, information can only be read; the information remains on the chip until it is erased.

What are the main purposes of an Operating System? Explain the functions of an Operating System. [1999, 2000] [10mks] OR Enumerate the key purposes and name one example of an operating system? [2002] [10mks]

ANS. An Operating System is in fact the computer's internal management program, which micromanages the functions of the computer system. Externally, it works as an interface between the hardware, the software and the user. Complete control of the hardware, and partial control of the software, is exercised through the Operating System. In layman's language, it is like an administrator. The functions of the operating system include:
a. Process Management
b. Storage Allotment
c. Memory Management
d. Input/Output System
e. File Management
f. Command Interpreter System
g. Networking
h. Multitasking Management

Process Management
The operating system is responsible for the following activities in connection with process management:
i. Starting and stopping of processes, user-initiated as well as system-initiated
ii. The suspension and resumption of processes
iii. The provision of mechanisms for process synchronization
iv. The provision of mechanisms for deadlock handling
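Starting, waiting on and stopping processes - points (i) and (ii) above - are exposed to programs through operating-system calls. A minimal sketch using Python's standard `subprocess` module (the child programs here are made-up one-liners chosen only for illustration):

```python
import subprocess
import sys

# Starting a process: the OS creates the child and schedules it.
child = subprocess.Popen(
    [sys.executable, "-c", "print('child running')"],
    stdout=subprocess.PIPE,
    text=True,
)

# Waiting for it to finish; the OS then reclaims its resources.
out, _ = child.communicate()
print(out.strip())          # -> child running
print(child.returncode)     # -> 0 on normal termination

# Stopping a process: launch a long-running child and terminate it.
sleeper = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(60)"]
)
sleeper.terminate()         # ask the OS to stop the process
sleeper.wait()              # collect its (non-zero) exit status
```

Suspension, resumption and synchronization have analogous OS interfaces (signals, semaphores and the like) not shown here.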

Memory Management
Main Memory - Memory management is central to the operation of a modern computer system. In order to improve CPU utilization and the speed of the computer's response to its users, as much useful information as possible should be kept in memory. The Operating System is responsible for the following activities in connection with memory management:
i. Keeping track of which parts of memory are currently being used, and by whom
ii. Deciding which processes are to be loaded into memory when memory space becomes available
iii. Allocating and de-allocating memory space as needed

Secondary Memory - The operating system is responsible for the following activities in connection with disk management:
i. Free-space management
ii. Storage allocation
iii. Disk scheduling

Input/Output System
The input/output system consists of:
i. A buffer-caching system
ii. General device-driver code
iii. Drivers for specific hardware devices
Only the device driver knows the peculiarities of a specific device.

File Management
File management is one of the most visible services of an operating system. The operating system is responsible for the following activities in connection with file management:
i. The creation and deletion of files
ii. The creation and deletion of directories
iii. Access control (who can access which files)
iv. The support of primitives for manipulating files and directories
v. The mapping of files onto disk storage
vi. Backup of files on stable (non-volatile) storage

Networking
An operating system must provide for networking of the computer with other computers. The communication network design must consider routing and connection strategies, and the problems of contention and security.

Command Interpreter System
One of the most important components of an operating system is its command interpreter, the primary interface between the user and the rest of the system. Commands are given to the operating system by control statements, the click of a mouse or a touch on the screen; these commands have to be interpreted to the computer to initiate a process.
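The file-management activities listed above - creating and deleting files and directories - are requested from the operating system via system calls, which Python wraps in its standard `os` module. A small sketch (the file name is made up for illustration):

```python
import os
import tempfile

# Directory creation: ask the OS for a fresh scratch directory.
base = tempfile.mkdtemp()
path = os.path.join(base, "notes.txt")

# File creation: open-for-write asks the OS to create the file.
with open(path, "w") as f:
    f.write("hello")

print(os.path.exists(path))   # -> True

os.remove(path)               # file deletion
os.rmdir(base)                # directory deletion
print(os.path.exists(base))   # -> False
```

Access control and backup, the other activities in the list, are exposed through further calls (e.g. `os.chmod`) and separate utilities.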

Different Operating Systems
DOS - DOS is a single-user OS that supports only 640 KB of memory. It features a command-line interface.
Unix - Unix was the first multi-user, multiprocessor, multitasking OS available for use on PCs. Unix served as a model for other PC operating systems.
The Macintosh OS - The Mac OS supports the graphical nature of the Mac computers; it brought the first truly graphical user interface to consumers.
Windows 3.x - Windows 3.0, 3.1 and 3.11 brought a GUI and multitasking capabilities to PCs that ran DOS. Windows 3.x is an operating environment rather than a full OS.
OS/2 Warp - IBM's OS/2 Warp was the first true GUI-based OS for Intel-based PCs. OS/2 is a multitasking OS that provides support for networking and multiple users.
Windows NT - Microsoft Windows NT was originally meant as a replacement for DOS, but it was too resource-intensive to work on most PCs at the time of its release. Microsoft issued two versions: Windows NT Workstation and Windows NT Server.
Windows 95 - Windows 95 was Microsoft's first true GUI-based 32-bit OS for Intel PCs. It supports multitasking and can run older DOS and Windows 3.x programs.
Windows 98 - The features of Windows 98 include advanced internet capabilities, an improved user interface and enhanced file-system performance.
Linux - Linux is a version of UNIX available free or at low cost from various sources. It is a powerful 32-bit OS that supports multitasking, multiple users, networking and almost any application.
Windows 2000 - Windows 2000 combines the interface and features of Windows 98 with the file system, networking, power and stability of Windows NT. Several versions of Windows 2000 are available, each targeting a specific user or computing environment.
Windows XP - Windows XP is an improved version of Windows 2000 with plug-and-play capability. It includes drivers for most commonly used peripherals, including cameras.

Distinguish Micro computers, Mini computers, Mainframe and Super computer. [1999][10Mks]

ANS. There are four classifications of digital computer systems: super-computer, mainframe computer, minicomputer, and microcomputer.

Super-computers are ultra-fast and powerful machines (in a relative sense). Today's mainframe computers are almost as fast as the first supercomputers. The latest supercomputers operate at speeds of over 100 trillion operations per second, which is about one lakh (100,000) times faster than an average desktop. Super-computers are very expensive, and for this reason are generally used for military, space and atomic applications, though of late they have also begun to be used in commercial applications. Examples of super-computers are the Cray and the CDC Cyber 205. Because of their application in the military and nuclear fields, these computers are not available commercially and require government permission of the respective countries for export and import.

Mainframe computers are built for general computing, directly serving the needs of business and engineering. Although these computing systems are a step below supercomputers, they are still very fast and will process information at about one-tenth the speed of supercomputers. Mainframe computing systems are located in a centralized computing center with 20 to 1,000+ workstations. This type of computer is still very expensive and is not readily found in architectural/interior design offices.

Minicomputers were developed in the 1960s as a result of advances in microchip technology. Smaller and less expensive than mainframe computers, minicomputers run at several MIPS and can support 5-100 users. They are commonly used as servers in LANs that handle the data-sharing needs of people in an organisation, and are ideal for organisations that cannot afford, or do not need, a mainframe.

Microcomputers - the terms microcomputer and personal computer are interchangeable - were invented in the 1970s and are generally used for home computing and dedicated data-processing workstations. Advances in technology have improved microcomputer capabilities, resulting in the explosive growth of personal computers in industry. In the 1980s many medium and small design firms were finally introduced to CAD as a direct result of the low cost and availability of microcomputers. Examples are IBM, Compaq, Dell, Gateway and Apple Macintosh. The average computer user today uses a microcomputer. These types of computers include PCs, laptops, notebooks, and hand-held computers such as palmtops.

Larger computers fall into the mini or mainframe category. A minicomputer is 3-25 times faster than a micro; it is physically larger and has a greater storage capacity. A mainframe is a larger type of computer, typically 10-100 times faster than the micro. These computers require a controlled environment for both temperature and humidity. Both mini and mainframe computers will support more workstations than a micro, and they also cost a great deal more than microcomputers.

Mention the key differences between 3rd generation language V/s 4th generation language [2002] [10mks]

ANS. The period from the early 1960s to 1980 saw the emergence of the third generation programming languages. Languages like ALGOL 58, 60 and 68, COBOL, FORTRAN IV, ADA and C are examples, and were considered High-Level Languages. Most of these languages had compilers, whose advantage was speed. Independence was another factor: these languages were machine-independent, and programs written in them could run on different machines. The comparative ease of use and learning, improved portability, and simplified debugging, modification and maintenance led to reliability and lower software costs.

Third generation languages often followed procedural code, meaning the language performs functions defined in specific procedures describing how something is done. In comparison, most fourth generation languages are non-procedural. A disadvantage of fourth generation languages was that they were slow compared to compiled languages, and they also lacked fine control. Programmers whose primary interest was programming and computing continued to use third generation languages even after the development of fourth generation languages, because of the precise control possible with them. Fourth generation languages were used by non-programmers for writing short applications for their own use.

Sl  3rd Generation Languages                      4th Generation Languages
1.  Meant for use by professional programmers     Can be used by non-programmers also
2.  Procedural language                           Non-procedural language
3.  Requires specification of how to do a task    Does not require specification of how to do a task
4.  All alternatives must be specified            Default alternatives are built in
5.  Errors are difficult to debug                 Errors are easy to debug
6.  Typically file-oriented                       Typically database-oriented
7.  Code may be difficult to read, understand     Code is easy to read, understand and maintain
    and maintain
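The procedural versus non-procedural distinction can be made concrete. Below, the same question - "names of employees earning over 30,000" - is answered first in step-by-step 3GL style and then as a declarative SQL statement, SQL being a commonly cited 4GL example. The employee table and figures are made up purely for illustration:

```python
import sqlite3

employees = [("Asha", 45000), ("Ravi", 28000), ("Meera", 52000)]

# 3GL style: spell out HOW - loop, test each record, collect results.
high_earners = []
for name, salary in employees:
    if salary > 30000:
        high_earners.append(name)
print(high_earners)                      # -> ['Asha', 'Meera']

# 4GL style (SQL): state WHAT is wanted; the engine decides how.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE emp (name TEXT, salary INTEGER)")
db.executemany("INSERT INTO emp VALUES (?, ?)", employees)
rows = db.execute("SELECT name FROM emp WHERE salary > 30000").fetchall()
print([r[0] for r in rows])              # -> ['Asha', 'Meera']
```

The SQL version never specifies a loop or a comparison order - the database engine supplies those defaults, which is exactly points 3 and 4 of the comparison table.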

Write short notes on the various options available for a company to establish network / connectivity across its offices/ branches? [2001][10mks]

ANS. Corporates in India first started setting up LANs to connect their computers within the office. But as these companies expanded, they began looking at options for connecting their various offices spread across different locations. This gave rise to the deployment of WANs. Then, as enterprises started applying technology to solve business problems by deploying enterprise applications such as ERP, supply chain management (SCM) and customer relationship management (CRM), the need for deploying cost-effective connectivity solutions in different environments or industry segments became imperative. While there are different technology options for connectivity, each technology has its own advantages and disadvantages. Here's a look at the different options.

Leased lines
Leased lines are one of the best options. A leased line offers fixed bandwidth, and one only needs to pay a pre-determined price for it irrespective of the amount of bandwidth one uses. This is well suited to organizations in cities, for internet connectivity and inter-branch connectivity for large organizations, and for data-sensitive environments such as banking and insurance. A leased line is the best option for connecting long-distance offices because of the existing infrastructure of telephone companies. This connectivity option also suits offices where there is a need for multi-connectivity within an office. One of the most widely-used applications of leased lines is a secure, dedicated data circuit between two locations via a private line, which can be used to transmit data at a constant speed.
Disadvantages: One of the major disadvantages is that it can take weeks to get a connection allocated. Additionally, since a leased line depends on the existing infrastructure of telephone companies, getting a leased line in remote locations is difficult.

VSATs
India's huge geographical spread, low tele-density and strong demand for reliable communication infrastructure have made VSATs a good choice. Business houses which have a distributed environment, with a number of offices in difficult-to-access areas where telecommunication services have not yet penetrated, normally look at this connectivity option. VSAT technology represents a cost-effective solution for users seeking an independent communication network connecting a large number of geographically dispersed sites. Additionally, VSATs give an organization the power to roll out a network quickly, and there are numerous examples of organizations using VSATs to roll out their business plans speedily. A case in point is the National Stock Exchange (NSE), which decided to use VSATs to reach the length and breadth of the country; today NSE's network has grown to a massive 3,000 sites, making it Asia's largest VSAT network. L&T's engineering and construction division is another organization which uses VSATs in an innovative way: for projects in far-off places with no telephone connections, it takes the help of VSATs to establish connectivity with its various business partners. Industry sectors such as stock exchanges and ATM networks, which can't sacrifice uptime, are prime candidates for the usage of VSATs. Currently, VSATs support a whole set of applications

such as ATMs, distance learning, online lottery, rural connectivity, corporate training, bandwidth monitoring, bandwidth-on-demand and Internet access. Conclusion: Ideally, VSATs should be used where information must be exchanged between locations that are too far apart to be reliably linked using other communication networks.

VPNs

Virtual Private Networks (VPNs) have attracted the attention of many organizations looking to provide remote connectivity at reduced costs. The key feature of a VPN is its ability to use public networks like the Internet rather than private leased lines, and to provide restricted-access networks without compromising on security. A VPN can provide remote-access client connections, LAN-to-LAN networking, and controlled access within an intranet. VPNs are the cornerstone of new-world networking services. They manifest a technological breakthrough that transforms the industry and revolutionizes services. They deliver enterprise-scale connectivity from a remote location, over the public Internet, with the same policies enjoyed in a private network. Organizations that are looking to provide connectivity between different locations can look at VPNs as a significantly cost-effective option. With VPNs, an organization only needs a dedicated connection to the local service provider. Another way VPNs reduce costs is by reducing the need for long-distance telephone calls for remote access, because VPN clients need only call into the nearest service provider's access point. Leased lines are being replaced by VPN solutions provided by ISPs such as Sify. ISPs provide security and privacy for individual customers who run on their larger data pipes through the use of tunnelling and encryption technologies. ISPs also commit to service level agreements which cover packet loss, latency and uptime. VPNs are also engineered and provided as a managed service by ISPs, thereby removing the need for the customer to retain trained professionals for network set-up and management. This adds up to a compelling solution for most Corporates: the best of price effectiveness and managed network solutions. A VPN has another advantage when compared to private networks: it uses the public Internet infrastructure. This gives it the ability to offer far wider coverage than private networks.
A VPN can support the same intranet/extranet services as a traditional WAN. But the real reason why VPNs have grown in popularity is their ability to support remote access services. Security is another big plus of VPNs. In traditional private networks, data security relies solely on the telecommunications service provider. But if a corporate installs a VPN it does not have to rely on the physical security practices of the service provider. Data that goes through a VPN is fully secure. Most VPN technologies implement strong encryption, so data cannot be directly viewed using network sniffers. Limitations: There are some concerns which need to be taken care of by service providers. Four concerns regarding VPN solutions are often raised:
a. VPNs require an in-depth understanding of public network security issues and proper deployment of precautions.

b. The availability and performance of an organisation's wide-area VPN (over the Internet in particular) depends on factors largely outside its control, so the Quality of Service needs to be carefully examined.
c. VPN technologies from different vendors may not work well together due to immature standards.
d. VPNs need to accommodate protocols other than IP and existing (legacy) internal network technology.
These four factors comprise the hidden costs of a VPN solution. While VPN advocates cite cost savings as the primary advantage of this technology, detractors cite hidden costs as the primary disadvantage of VPNs.


Conclusion: VPN solutions are widely used to provide cost-effective long-distance connectivity with excellent security. This is a good solution for companies that have offices in India and want to connect their offices abroad. Most financial institutions and banks use VPN solutions as they want to maintain confidentiality of information. This mode of connectivity is also popular among manufacturing companies.

Wireless LANs

Lately, wireless LANs (WLANs) have attracted a lot of attention due to the various advantages the technology provides with respect to mobility. A WLAN can be defined as a type of LAN that uses high-frequency radio waves rather than wires to communicate and transmit data among nodes. A look at the benefits shows the immense potential of the technology. First, it enables users to move about unhindered and allows networks to reach places where wires are difficult to deploy. While the initial investment required for WLAN hardware might be higher than the cost of wired LAN hardware, overall installation expenses and life-cycle costs will be significantly lower. Further, there is no problem of interoperability. WLANs can be configured in a variety of topologies to meet the needs of specific applications and installations. Due to these various benefits, today WLANs can be seen in almost every industry where organizations want to add extensions to their wired networks or provide wireless connectivity. The future is undoubtedly bright for this technology, as devices like notebooks, palmtops, Tablet PCs and similar newer devices will all be used without wired power connections. This makes it necessary for the network to be available anywhere, without the restriction of wires, so devices working without wired power connections will demand wireless Internet services. Due to this, even mobile operators are providing Internet connectivity through new-generation handsets. While the initial investment is high, WLAN offers good bandwidth for enterprises at low recurring costs. The most quantifiable benefit of WLAN usage is time savings. Wireless applications will also be in high demand in sectors like retail, where real-time information availability and immediate validation of data matter. Disadvantages: The main disadvantage of this technology is the high cost of equipment, which can come down if deployments of WLANs increase.
Additionally, it has certain limitations and is not advised for streaming audio/video content or viewing extremely graphics-intensive websites.

Free Space Optics

Free Space Optics (FSO) is a line-of-sight technology which uses lasers or light pulses to provide high-bandwidth connections. It has the ability to link huge buildings within a couple of days. Since it uses air and not fibre as the transport medium, the cost savings are considerable. An added benefit is that since FSO is a non-radio-frequency technology, there are no licences to be obtained for deploying it. For FSO to work, all one needs is an optical transceiver with a laser transmitter and a receiver to provide bi-directional capability. Each FSO system uses a high-power optical source, plus a lens that transmits light through the atmosphere to another lens receiving the data. In addition, FSO allows service providers to deliver flexible bandwidth as needed by a customer. For instance, a company may require a higher amount of bandwidth for a few days during a conference; FSO can make this happen. Currently, FSO is seen as a complementing technology rather than one that will replace others, and is heralded as the best technology to cover niche areas. Disadvantages: FSO can only provide connectivity up to a maximum of four kilometres, so if a company wants to link offices spread over huge distances, a VSAT would be more relevant. Also, being a line-of-sight technology, interference of any kind can pose problems; factors like rain and fog can disrupt the signal. But for a company looking at providing connectivity between different locations within a few kilometres of each other, this technology is an ideal option.

Connectivity options today

Power Users: Without data crunching they become irrelevant. Examples: investment analysts, finance professionals, business analysts, stock brokers, top executives. They need more robust connectivity.
Action Users: Medium data needs; field action is more than data crunching. Examples: sales, operations staff, field offices. They require reliable but not robust connectivity.
Occasional Users: Low data needs. Examples: dealers, distributors, customers, suppliers, prospects. They require a channel which is generally available.

How various connectivity options compare:

Connectivity option | Relevance to category of users | Reliability/Security
Leased line | Action users / WAN | No guaranteed uptime. Secure connectivity.
Dedicated VPNs | Power/Action users / WAN | Guaranteed uptime. Secure connectivity.
Dial-up Internet | Occasional users; backup to action users' main connectivity / WAN | No guaranteed uptime. Not secure on its own.
PAMA VSAT | Power users / WAN | Guaranteed uptime of 99.5 percent. Secure.
TDMA broadband VSAT | Action users, limited power users / WAN | Guaranteed uptime of 99.5 percent. Secure.
Microwave | Power users / MAN, not WAN | Frequency interference. Secure.
2.5G/3G | Occasional users | No guaranteed uptime. Not secure on its own.
Fibre | Power users | Secure.
WLAN | Power/Action users / LAN applications | Not secure on its own.

Networks can be classified based on various criteria such as 1. Geographical spread / distance 2. Type of switching 3. Topologies 4. Medium of data communication. Give two examples of the types of networks for each of the classifications mentioned above? [2002][20mks] OR Describe the similarity between Bus and Ring topology. [1999][10mks]

ANS. A computer network can be defined as a network of data processing nodes that are interconnected for the purpose of data sharing, or alternatively as a communications network in which the end instruments are computers. A network design is generally established in one of the following ways:

Geography: LAN, WAN, MAN
Topology: Bus, Star, Ring, Mesh
Strategies: Server-based, Client/Server, Peer-to-Peer

Network Topologies

A network configuration is also called a network topology. Each configuration has its own set of strengths and weaknesses. Choosing the best-fit topology for an intranet is crucial, as a changeover from one arrangement to another is difficult and expensive. The network designer has three major goals when establishing the topology of a network:
a. Provide adequate reliability: the higher the reliability, the higher the cost. Therefore, depending upon the requirement, a proper trade-off between cost and reliability is required.
b. Minimise data transfer cost: the cost of data transfer varies between different topologies. Depending upon the data traffic, the least-cost channel option for a particular application is to be identified.
c. Give the end users the best possible response time and throughput within the constraints of fixed and operating costs.

The topology of the network can be viewed in two ways:
a. Physical topology: the topology as seen from the layout of the cable.
b. Logical topology: the connections between nodes as seen by data travelling from one node to another. It reflects the network's function, use, or implementation without regard to the physical interconnection of network elements.

Common patterns for connecting computers include the star and bus topologies.

Bus topology

The bus topology is the simplest network configuration. It uses a single central transmission medium called a bus to connect computers together. Coaxial cable is often used to connect computers in a bus topology, and it often serves as the backbone bus for a network. The cable, in most cases, is not one length, but many short ones joined by T-connectors. T-connectors allow the cable to branch off in a third direction to enable a new computer to be connected to the network. Special terminating hardware has to be fitted at both ends of the coaxial cable so that a signal travelling to the end of the bus does not bounce back and appear as a repeat data transmission. Since a bus topology network uses a minimum amount of wire and hardware, it is inexpensive and relatively easy to install. It is the kind of arrangement used by cable TV networks. Disadvantages - The number of computers that can be attached to the bus is limited, due to signal strength loss on a long cable. If more computers have to be added to the network, a repeater/booster must be used to strengthen the signal at fixed locations along the bus. The biggest problem with bus topology is that the network fails if the cable breaks at any point. Further, a bad network card may cause noisy signals on the bus, which can cause the entire network to function improperly. Heavy network traffic can slow a bus considerably.

Ring topology

In a ring topology, the network has no end connection. It forms a continuous ring through which data travels from one node to another. Ring topology allows more computers to be connected to the network than the other two topologies do. Each node in the network is able to purify and amplify the data signal before sending it to the next node. Therefore, ring topology introduces less signal loss as data travels along the path. A ring-topology network is often used to cover a larger geographic location where implementation of star topology is difficult. Disadvantages - The problem with ring topology is that a break anywhere in the ring will cause network communications to stop. A backup signal path may be implemented in this case to prevent the network from going down. Another drawback of ring topology is that users may access the data circulating around the ring when it passes through their computer.

Star Network

A star network is a LAN in which all nodes are directly connected to a common central computer. Every workstation is indirectly connected to every other through the central computer. In some star networks, the central computer can also operate as a workstation. The star network topology works well when the workstations are at scattered locations. It is easy to add or remove workstations. If workstations are located reasonably close and the system requirements are modest, the ring network topology may serve the intended purpose at lower cost than the star network topology. If the workstations lie nearly along a straight line, the bus network topology may be best. In a star network, a cable failure will isolate only the workstation that it links to the central computer, while all other workstations on the network will remain unaffected. The complete network will be affected only if the central computer fails. The star topology can have a number of different transmission mechanisms, depending on the nature of the central hub.
Broadcast Star Network: The hub receives and resends the signal to all of the nodes on the network.
Switched Star Network: The hub sends the message to only the destination node.
Active Hub (Multi-port Repeater): Regenerates the electric signal and sends it to all the nodes connected to the hub.
Passive Hub: Does not regenerate the signal; simply passes it along to all the nodes connected to the hub. Hybrid Star Network: Placing another star hub where a client node might otherwise go.
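The difference between a broadcast hub and a switched hub can be sketched in a few lines of code. This is an illustrative sketch only; the function names and frame fields are invented for the example and are not part of any real networking API:

```python
# Contrast a broadcast hub (repeats every frame to all ports) with a
# switched hub (forwards a frame only to its destination port).

def broadcast_hub(frame, ports):
    """Repeat the frame to every port except the one it arrived on."""
    return {p: frame for p in ports if p != frame["src"]}

def switched_hub(frame, ports):
    """Forward the frame only to its destination port."""
    return {frame["dst"]: frame} if frame["dst"] in ports else {}

ports = {"A", "B", "C", "D"}
frame = {"src": "A", "dst": "C", "payload": "hello"}

print(sorted(broadcast_hub(frame, ports)))  # every node but the sender receives it
print(sorted(switched_hub(frame, ports)))   # only the destination receives it
```

In the broadcast case every attached node receives the frame and must discard it if it is not the addressee; the switched hub forwards selectively, which is what reduces traffic on star networks built around switching hubs.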

Star networks are easy to modify and one can add new nodes without disturbing the rest of the network. Intelligent hubs provide for central monitoring and management. Often there are facilities to use several different cable types with hubs.

Mesh Topology

The mesh topology has been used more frequently in recent years. Its primary attraction is its relative immunity to bottlenecks and channel/node failures. Due to the

multiplicity of paths between nodes, traffic can easily be routed around failed or busy nodes. A mesh topology is reliable and offers redundancy. However, this reliability comes at a high price. A comparative statement of the properties of the different topologies is given below:

Topology | Reliability | Cost | Response
Bus | Very low. A cable break can segment the network, or prevent transmission | Lowest cost. Single cable | Shared medium limits performance
Ring | Very low. One failure destroys the network | Low. Single cable, with repeaters at each station | Single medium limits performance
Star | High. Easy to troubleshoot; loss of one node does not affect others | High. Cost of wiring and central hub required | Sharing or switching possible
Mesh | Very high. Quite immune to individual cable breaks | Very high. Wiring expensive | Alternative routes available
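The wiring-cost column of the table can be made concrete by counting the links each topology needs for n nodes. This is an illustrative sketch (the function name is invented for the example):

```python
# Number of links needed to connect n nodes in each topology. A bus
# shares one cable; ring and star each need n links (a star needs one
# spoke per node to the hub); a full mesh needs n*(n-1)/2 links, which
# is why its wiring cost grows so quickly with network size.

def links(topology, n):
    return {
        "bus": 1,                   # one shared cable segment
        "ring": n,                  # each node linked to the next, closing the loop
        "star": n,                  # one spoke per node to the central hub
        "mesh": n * (n - 1) // 2,   # every node linked to every other node
    }[topology]

for t in ("bus", "ring", "star", "mesh"):
    print(t, links(t, 10))  # for 10 nodes: bus 1, ring 10, star 10, mesh 45
```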

Network Size / Geography

Local Area Networks (LAN)

A Local Area Network (LAN) is a group of computers within a small geographic area that share a common communication line and may share the resources of a single processor or server. The actual communications path is the cable (twisted pair, coaxial, optical fiber) that interconnects each network adapter. Specifically, a LAN has the following properties:
- Limited distance (typically under a few kilometres)
- High speed (typically 4 to 100 Mbit/s)
- Can support a reasonably large number of computers (typically two to thousands)
- Generally owned by a single organization
- Most commonly uses ring or bus topologies

Wide Area Networks (WAN)

As the name suggests, a Wide Area Network (WAN) is a network that covers a wide geographic area, such as a state or a country. Contrast this to a LAN (local area network), which is contained within a building or complex, and a MAN (metropolitan area network), which generally covers a city or suburb. A WAN can span any distance and is usually provided by a public carrier. It is two or more LANs connected together and covering a wide geographical area. For example, an individual office will have a LAN, but it will be connected to offices in other cities through a WAN. The LANs are connected using devices such as bridges, routers or gateways. A wide area network may be privately owned or rented.

Metropolitan Area Network (MAN)

A MAN is a network that interconnects users with computer resources in a geographic area or region larger than that covered by even a large LAN but smaller than the area covered by a WAN. The term is applied to the interconnection of networks in a city into a single larger network. It is also used to mean the interconnection of several LANs by bridging them with backbone lines. A MAN typically covers an area of between 5 and 50 km in diameter.

Strategy in Network Design

Every network has a strategy, or way of coordinating the information exchange and resource sharing. Networks are often broadly classified in terms of the typical communication patterns one may find on them. The most common are:
(a) Server-based (client/server) - there is a main server and other computers connected to it called clients.
(b) Peer-to-peer - there is no server, and all computers use the network to share resources among individual peers.
(c) Client/server networks that also contain peers sharing resources.

Server-Based Network: One large-capacity computer works as a server which stores programs and data, while other small-capacity computers, called clients, access the server for their data and program requirements. The prime advantage of a server-based network is the integrity and singularity of data. There is no duplication of effort in maintaining the same database at two or more locations, nor is there a mismatch between data at different centres. Often the cost of software is also low compared to distributed loading at individual locations. Servers may be classified as:
- File Servers - Provide central file management services. Allow network users to share files.
- Print Servers - Manage and control printing on a network, allowing users to share printers.
- Application Servers - Allow client machines to access and use extra computing power and expensive software applications that reside on the server.
- Message Servers - Data can pass between users on a network in the form of graphics, digital video or audio, as well as text and binary data (for example: email).
- Database Servers - Provide a network with powerful database capabilities that are available for use on relatively weaker client machines.

Peer-To-Peer: Nodes can act as both servers and clients. A typical configuration is a bus network. Peer networks are defined by a lack of central control over the network. Users share resources, disk space, and equipment. The users control resource sharing, and so there may be lower security levels, and no trained administrator. Since there is no reliance on other computers (a server) for their operation, such networks are often more tolerant of single points of failure. The advantage is that they are easy to install and inexpensive. Peer networks place additional load on individual PCs because of resource sharing, and the lack of central organization may make data hard to find, back up or archive.

Client/Server Networks: These can combine the advantages and disadvantages of both of the above types. These network architectures can be compared with the pre-network host-based model, where terminals could connect only to the mainframe, and never to each other. In a client/server environment, the clients can do some processing on their own as well, without taxing the server. In a peer-to-peer environment, clients can be connected to one another. Client/server networks offer a single strong central security point, with central file storage, which provides multi-user capability, easy backup and supports a large network easily. They also give the ability to pool the available hardware and software, lowering overall costs. Optimized dedicated servers can make networks run faster. However, dedicated server hardware is usually expensive, the server must run often-expensive network operating system software, and a dedicated network administrator is usually required.

Medium of Data Communication: Cables

Cable is what physically connects network devices together, serving as the conduit for information travelling from one computing device to another. The type of cable you choose for your network will be dictated in part by the network's topology, size and media access method.
Small networks may employ only a single cable type, whereas large networks tend to use a combination.

Coaxial Cable

(Figure: coaxial cable with BNC end connector and T-piece.)

Co-axial cable is similar to the cabling used in cable television systems. There are two conductors in a co-axial cable. One is a single thick wire at the centre of the cable; the other is a wire mesh that surrounds the central wire. Between these two is an insulator, and all of this is enclosed in a protective casing. The wire mesh shields the centre wire from external interference. In other words, coaxial cable includes a copper wire surrounded by insulation, a secondary conductor that acts as a ground, and a plastic outer covering. Because of coaxial cable's two layers of shielding, it is relatively immune to electronic noise, such as that from motors, and can thus transmit data packets long distances. Coaxial cable is a good choice for running the lengths of buildings (in a bus topology) as a network backbone. Co-axial cable supports data transmission speeds up to 10 Mbps. As the need for transmission speeds increased, this type of cabling became expensive and also cumbersome to maintain. There are two types of co-axial cable: thick and thin. Thick co-axial cable is generally used to set up the backbone and thin co-axial is used in the LAN environment. Local area networks (LANs) primarily use these two sizes. Thick coaxial cable can extend longer distances than thin and was a popular backbone (bus) cable in the 1970s and 1980s. However, thick is more expensive than thin and difficult to install. Today, thin (which looks similar to a cable television connection) is used more frequently than thick.

Twisted-Pair Cable

Twisted-pair cable normally consists of two wires individually insulated in plastic and twisted around each other. They are then bound together in another layer of plastic insulation. Except for the plastic shielding, nothing shields this type of cable from outside interference, so it is called unshielded twisted pair (UTP) cable. In case shielding is used, it is called STP.
Due to its easy availability and low cost, this is the most commonly used cable in networks today.

Twisted-pair cable consists of two insulated wires that are twisted together and covered with a plastic casing. It is available in two varieties, unshielded and shielded. Unshielded twisted-pair (UTP) is similar in appearance to the wire used for telephone systems. UTP cabling wire is grouped into categories, numbered 1-5. The higher the category rating, the more tightly the wires are twisted, allowing faster data transmission without crosstalk. Since many buildings are pre-wired (or have been retrofitted) with extra UTP cables, and because UTP is inexpensive and easy to install, it has become a very popular network media over the last few years. Shielded twisted-pair cable (STP) adds a layer of shielding to UTP. Although STP is less affected by noise interference than UTP and can transmit data further, it is more expensive and more difficult to install.

Fiber-Optic Cable

Fiber optic cable is a thin strand of glass that transmits pulsating beams of light rather than electric signals. When one end of the strand is exposed to light, the light is carried all the way to the other end. Because light travels at a much higher speed than an electric signal, fiber optic cable can carry data at billions of bits per second. Due to improvements, fiber optic cables can now carry up to 100 Gbps. Fiber optic cables are also immune to electromagnetic interference, which is a problem in copper wire. Due to all these factors, fiber optic cable is not only the fastest medium, it also provides maximum bandwidth. In addition, it is a very secure transmission medium. The disadvantages of fiber optic cable are its cost relative to co-axial and twisted pair, and the difficulty associated with installation. Special equipment is required to cut the cable and install connectors, and great care must be taken when bending the cable. Over a period of time the cost of fiber optic cable has come down. Due to this it is being increasingly used in setting up networks; even cable operators have started using fiber optic cable so as to deliver a large number of channels and also deliver high-speed internet. Fiber-optic cable is constructed of flexible glass and plastic. It transmits information via photons, or light. It is significantly smaller than other cables; a strand can be half the diameter of a human hair. Being more resistant to electronic interference than the other media types, fiber-optic is ideal for environments with a considerable amount of noise (electrical interference). Furthermore, since fiber-optic cable can transmit signals further than coaxial and twisted-pair, more and more educational institutions are installing it as a backbone in large facilities and between buildings. It is the most efficient communication cable invented so far, but the cost of installing and maintaining fiber-optic cable remains too high to replace the other cable forms completely.

Microwave

Microwaves are high-frequency radio waves that travel in straight lines through the air. Because the waves cannot bend with the curvature of the earth, they can be transmitted only over short distances.

Satellite

Satellite communication uses satellites orbiting above the earth as microwave relay stations. The satellites revolve at a precise speed above the earth, synchronous with the earth's rotational speed; such satellites are called geo-stationary satellites. This makes them appear stationary, so they can amplify and relay microwave signals from a transmitter on the ground to a receiver. Thus, they can be used to send large volumes of data. The major drawback is that bad weather can interrupt the flow of data.

Signal Transmission Mechanisms

The way data is delivered through networks requires solutions to several problems:
- Methods for carrying multiple data streams on common media.
- Methods for switching data through paths on the network.
- Methods for determining the path to be used.

Multiplexing

LANs generally operate in baseband mode, which means that a given cable carries a single data signal at any one time. The various devices on a LAN, therefore, must await their turn for data transmission. To enable many data streams to share the same medium, a technique called multiplexing is employed. Multiplexing, loosely explained, is a single source of service being used by/for different users on a formally specified arrangement. Thus, a movie theatre which runs different movies in different shows as per a declared timetable is called a multiplex. A person who distributes newspapers in the morning, sells vegetables during the day and runs a vada pav stall in the evening is multiplexing.

Thus, in a signal-carrying medium, fixed time slots are assigned to each signal for data transmission. This technique is called Time-Division Multiplexing (TDM). Because the sending and receiving devices are synchronized to recognize the same time slots, the receiver can identify each data stream and re-create the original signals. The sending device, which places data into the time slots, is called a multiplexer or mux. The receiving device is called a demultiplexer or demux. TDM can at times be inefficient: if a user has no data to send, its time slot is not reallocated to another user and remains unutilized. Since the transmission speed is fairly high, this arrangement gives satisfactory performance for LANs. However, it slows down performance when networks are large and the quantum of data exchange is also high, as is the case in a WAN. WANs, therefore, tend to use broadband media, which can support two or more data streams simultaneously. With the falling cost of broadband media, it is now being used for LANs as well. A more advanced technique is statistical time-division multiplexing. Time slots are still used, but some data streams are allocated more time than others, and an idle channel is allocated no time slots at all. A device that performs statistical TDM is often called a stat-mux.
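The slot mechanism of TDM can be sketched in a few lines of code. This is an illustrative sketch (the function names are invented for the example); it shows both the round-robin interleaving and the inefficiency of idle slots:

```python
# Time-division multiplexing sketch: fixed slots are assigned round-robin
# to each stream. The receiver, knowing the slot order, splits the
# combined signal back into the original streams. An empty string marks
# an idle slot, illustrating the wasted capacity the text describes.

def tdm_mux(streams):
    """Interleave the streams slot by slot ('' pads an idle slot)."""
    length = max(len(s) for s in streams)
    frames = []
    for i in range(length):
        for s in streams:
            frames.append(s[i] if i < len(s) else "")
    return frames

def tdm_demux(frames, n_streams):
    """Recover each stream by taking every n-th slot."""
    return ["".join(frames[i::n_streams]) for i in range(n_streams)]

streams = ["AAAA", "BB", "CCC"]
muxed = tdm_mux(streams)
print(tdm_demux(muxed, 3))  # the original streams come back out intact
```

Note how the shorter streams leave empty slots in the multiplexed frame sequence; statistical TDM exists precisely to reclaim that idle capacity.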

Switching Data

On a long-distance network, direct transmission of data from one terminal to another is not feasible. Therefore, data is routed through various intermediate points which redirect it to the next point in the required direction. This process of receiving data and redirecting it to the next point is called switching. Two contrasting methods of switching data are commonly used: circuit switching and packet switching.

Circuit Switching

When the first instalment of data is exchanged between two devices, it establishes a path through the network, called a circuit. Once the circuit is established, all further data exchange between those two devices flows through that path/circuit only. The chief disadvantage of circuit switching is that when communication takes place at less than the assigned circuit capacity, bandwidth is wasted because other devices cannot use this circuit. Also, communicating devices are not able to take advantage of free capacity released after establishing the circuit unless the circuit is reconfigured. Circuit switching does not necessarily mean that a continuous, physical pathway exists for the sole use of the circuit. The message stream may be multiplexed with other message streams in a broadband circuit. In fact, sharing of media is the more likely case with modern telecommunications. The appearance to the end devices, however, is that the network has configured a circuit dedicated to their use. End devices benefit greatly from circuit switching. Since the path is pre-established, data travels through the network with little processing in transit. And, because multipart messages travel sequentially through the same path, message segments arrive in order and little effort is required to reconstruct the original message.

Packet Switching

Packet switching takes a more efficient approach to switching data through networks. Each message is broken into a number of small segments called packets (the larger the message, the higher the number of packets), which are routed individually through different paths in the network (see above figure). The packets are reassembled to reconstruct the complete message before delivery to the final destination. Messages are divided into packets to ensure that large messages neither get bogged down by the limited capacity of a circuit nor monopolize the network. Packets from several messages can be multiplexed through the same communication channel. Thus, packet switching ensures optimum utilization of the available bandwidth.
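The split, route and reassemble cycle described above can be sketched in a few lines of Python. This is a toy illustration only: the function names and packet size are assumptions, the shuffle stands in for packets taking "random paths", and real networks add headers, checksums, routing and retransmission.

```python
# Toy sketch of packet switching: a message is split into numbered
# packets, which may arrive out of order, and is reassembled at the
# destination using the sequence numbers.
import random

PACKET_SIZE = 8  # bytes of payload per packet (assumed for the demo)

def packetize(message: bytes) -> list[tuple[int, bytes]]:
    """Split a message into (sequence_number, payload) packets."""
    return [(seq, message[i:i + PACKET_SIZE])
            for seq, i in enumerate(range(0, len(message), PACKET_SIZE))]

def reassemble(packets: list[tuple[int, bytes]]) -> bytes:
    """Rebuild the original message by sorting on sequence number."""
    return b"".join(payload for _, payload in sorted(packets))

message = b"Packets may take different routes through the network."
packets = packetize(message)
random.shuffle(packets)            # simulate out-of-order arrival
assert reassemble(packets) == message
```

The sequence numbers are what let the destination reconstruct the message even though individual packets arrived in no particular order, which is exactly why circuit switching's in-order delivery is not needed here.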

Explain Interface.

ANS. Interface is the computer jargon used to indicate how instructions are given to a computer. In the early days, all instructions were given to computers by typing commands. It was difficult for everyone to remember all the commands, and typing consumed time and effort. Present-day computers accept instructions by clicking on icons with a mouse, or even through touch screens. Now even voice commands have begun to be recognised by operating systems like Windows XP. When a computer accepts instructions through typed commands, it is called a command line or character user interface. In the 1980s, the most popular command-line interfaces were Microsoft MS-DOS, its near twin PC-DOS from IBM, and UNIX. A command-line interface is still available in Windows 95 and 98 for those who want to run DOS programs or work with DOS keyboard commands. When a computer is configured to accept instructions through the click of a mouse on an icon on the screen, it is called a Graphical User Interface (GUI). GUIs have become the standard because most users prefer them, in part because they are easier to learn. These interfaces are built into the operating systems.

Explain Graphical User Interface (GUI).

ANS. A GUI allows users to select files, programs or commands by pointing at pictorial representations on the screen rather than by typing long, complex commands at the command prompt. It finds its application in Windows in pull-down menus, dialog boxes and other graphical elements, such as scroll bars and icons. It is more user friendly, easy to operate, and of great benefit to the user because, as soon as the user knows how to use the interface in one program, he can use it in all programs running in the same environment. GUIs have emerged for most computing environments, including Windows, Windows NT, UNIX, OS/2 Desktop 8, NetWare 6, etc.

Advantages of using a GUI. It offers an environment for application developers that takes care of the interaction of users with the computer. This helps the developer concentrate on the application without getting bogged down in the details of screen display or mouse and keyboard input. It also saves programmers the effort of writing programs (modules) for frequently performed tasks, such as saving a data file. Another benefit is that an application written for a GUI is device independent: as the interface changes to support new input and output devices, such as a large-screen monitor or an optical storage device, the application can use those devices without modification. Lastly, a user does not have to remember different commands; they are mostly visible on screen and easy to use.

Differentiate between Graphical User Interface and Character User Interface. [1999] [5 mks]

Character User Interface
1. Generally used in programming languages.
2. Its use lies in character control features such as textual elements or characters.
3. Used to create words and sentences.
4. Gives users the ability to specify desired options through function keys.
5. Can create pop-up / pull-down menus; scrolling of text is possible. E.g. Unix, COBOL, FoxPro.
6. Less prone to virus attack.

Graphical User Interface
1. Generally used in multimedia.
2. Its use lies in control of graphical features such as toolbars, buttons or icons.
3. Used to create/operate/run animations or pictures.
4. A variety of input devices are used to manipulate text and images as visually displayed.
5. Employs graphical elements, e.g. web pages and image maps, which help the user navigate to any site. E.g. Windows, Mac programming.
6. Prone to virus attack.

Differentiate between Main Memory and Secondary Memory. [1999] [5 mks]

Main / Primary Memory
1. Used to store information required for immediate processing by the CPU.
2. E.g. the two types of memory in the immediate access store of the computer, RAM and ROM.
3. Made up of a number of memory locations or cells.
4. Measured in terms of capacity and speed.
5. Magnetic core and semiconductor technologies are used to make the main memory of a computer system.
6. Memory size is limited.
7. High cost.
8. Stores program instructions and data in binary machine code.
9. Temporary (volatile) storage: data is lost when power is switched off.

Secondary Memory
1. Essential to any computer system to provide backup storage.
2. E.g. the two main ways of storing data are serial access and direct access, offering economical storage of large volumes of data on magnetic media such as floppy disk and magnetic disk.
3. Locations are divided into sectors and tracks for ease of addressing.
4. Measured in terms of storage space.
5. Various methods are used to manufacture the different media available for storage.
6. Memory size is very large.
7. Comparatively low cost.
8. Stores data and programs in the form of bytes made up of bits.
9. Offers permanent (non-volatile) storage of data.

Differentiate between Compilers and Interpreters. [1999] [5 mks]

Compilers
1. A compiler is a translation program that translates the instructions in a high-level language into machine language.
2. A compiler translates the entire source program into a .exe file or object code. The .exe file is a self-executing permanent file, so the compiler is not involved during execution of the file; the compiler is required only once for a program.
3. Compilers are complex programs.
4. Require large memory space.
5. The compiled program runs faster, as no translation is required at execution time.
6. Changes in a compiled program are very difficult to make: the source code is needed to make any change, and the program must be recompiled to replace the old compiled version.
7. Slow for debugging and testing.

Interpreters
1. An interpreter is another type of translator used for translating instructions written in a high-level language into machine code.
2. An interpreter translates and executes the program line by line, so there is no consolidated object code to be saved for the future. The interpreter is therefore required alongside the program every time it is executed.
3. Interpreters are easy to write.
4. Do not require large memory space.
5. Slow in operation, as the code is translated every time it is executed.
6. The source code is readily available, so it is easier to effect any changes.
7. Good for faster debugging.
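The translate-once versus translate-every-time contrast can be illustrated with Python's built-in compile() and exec(). Treat this as an analogy only (Python is itself an interpreted, byte-compiled language, not a C-style compiler), but it shows why a saved translation avoids repeated work:

```python
# "Compiler" style vs "interpreter" style, illustrated with Python's
# own compile()/exec() built-ins.
source = "result = sum(range(10))"

# Compiler style: translate the whole source ONCE into a code object,
# then execute the translated form many times without re-translating.
code_object = compile(source, "<demo>", "exec")
for _ in range(3):
    ns = {}
    exec(code_object, ns)          # no re-translation here
    assert ns["result"] == 45

# Interpreter style: the source text is handed over each time, so it
# is parsed and translated on every execution.
for _ in range(3):
    ns = {}
    exec(source, ns)               # re-translated on every call
    assert ns["result"] == 45
```

Both styles produce the same result; the difference is where the translation cost is paid, once up front (compiler) or on every run (interpreter), which is exactly points 2 and 5 of the comparison above.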

What is a database? What are the advantages of database management system over conventional file management system?

ANS. A database is a repository for a collection of related data or facts, arranged in a specific structure. The most common example of a non-computerised database is a telephone directory. Database programs are designed to help users manipulate the data: a user can search for items with specified characteristics (for example, in a database of addresses, pull out all addresses belonging to Mumbai) and can sort lists of data, arranging them in alphabetical, numeric, chronological or any other desired order. Common databases used daily include the address book in Outlook or other mailing software.

Some common terminology used in databases is as follows:

Fields. Taking the example of an address book, each piece of information, like name, address or city, is a field. Each unique type of information is stored in its own field.

Record. One full set of fields is called a record. Therefore, all the information regarding a person, like first name, last name, address, street, city, pin code, telephone number and email ID, is a record. It is not necessary that all the fields in a record be filled; some fields can be left blank.

Table. A collection of records is called a table; for example, a complete address book is a table.

Types of Database

Hierarchical Database is an older style of database, originally developed for mainframe computers. Tables are organised into a fixed tree-like structure, each table storing one type of data.

The trunk table, also called the main table, stores general information. Any field in this table may refer to another related table that contains subdivisions of the data, such as vendor details or personnel details. Each of these tables may in turn refer to other related tables with finer subdivisions, like vendor credit or employee data. A table cannot access data from a table not related to it (one on a different branch of the tree structure). The relationship between tables is said to be a parent-child relationship: each child table is related to only one parent, but each parent can refer to many child tables. This is known as a one-to-many relationship. Because the relationships are fixed, locating data is fast and easy, but these databases require some duplication of data. They also allow only limited flexibility, so some reports may be difficult or impossible to generate.

Network Database is formed when any one table can relate to any number of other tables. Note that the tree structure is absent.

The tables of a network database are said to have many-to-many relationships. As a result, no tier or hierarchy exists within the database.

Object-Oriented Database groups data into complex items called objects. These objects, which parallel the object structures used in object-oriented programming, can represent anything: a product, an event, a customer complaint or a purchase. An object is defined by its characteristics, attributes and procedures.

Many early database applications, and some current low-end applications, could access and manipulate only one table at a time. Each table was stored in its own file along with any related documents, so in such cases database and table mean one and the same thing. Since the database file consists of a single table, it is called a flat-file database. Flat-file databases are mainly used in single-user or small-group situations. Although easy to create and use, they can be difficult to maintain and are limited in power. When numerous files exist, there is bound to be a lot of duplication of data among the tables. Adding, deleting or editing any data then requires that the changes be made in every file that contains the same data. This increases the chance of errors, wastes time and uses excess storage space.

A database management system (DBMS), sometimes just called a database manager, is a program that lets one or more computer users create and access data in a database. The DBMS manages user requests (and requests from other programs) so that users and other programs are freed from having to understand where the data is physically located on storage media and, in a multi-user system, who else may also be accessing the data. In handling user requests, the DBMS ensures the integrity of the data (making sure it continues to be accessible and is consistently organized as intended) and its security (making sure only those with access privileges can access the data).
The most typical DBMS is a relational database management system (RDBMS).

In a relational database, the database is made up of a set of tables. Any common field that exists between any two tables creates the relationship between the tables. This is shown in the example below:

Customer table
Customer ID Customer name Address City Pin code Contact person Telephone no

Order table
Order ID Customer ID Order date Required date Ship to address Bill to address Shipment date Product ID

Product table
Product name Product ID Units in stock Unit price

The relational database structure is easily the most prevalent today. Multiple tables of this kind make it possible to handle many data management tasks. The repetition can be reduced. Changes can be done only in the relevant table as there will be only one table for any particular type of record. The standard user and application program interface in a relational database is the Structured Query Language (SQL). SQL statements are used both for interactive queries for information from a relational database and for gathering data for reports. In addition to being relatively easy to create and access, a relational database has the important advantage of being easy to extend. After the original database creation, a new data category (Field) can be added without having to modify the existing applications.
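The Customer/Order relationship shown above can be demonstrated with Python's built-in sqlite3 module and a SQL join. The table layout follows the example tables; the exact column names and the data values are made up for this sketch:

```python
# Minimal, runnable sketch of the Customer/Order relationship using
# Python's built-in sqlite3 module and an in-memory database.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE customer (
                   customer_id INTEGER PRIMARY KEY,
                   customer_name TEXT, city TEXT)""")
cur.execute("""CREATE TABLE orders (
                   order_id INTEGER PRIMARY KEY,
                   customer_id INTEGER REFERENCES customer(customer_id),
                   order_date TEXT)""")
cur.execute("INSERT INTO customer VALUES (1, 'Asha Traders', 'Mumbai')")
cur.execute("INSERT INTO orders VALUES (101, 1, '2004-03-15')")

# The common field customer_id creates the relationship between the
# two tables; the SQL JOIN uses it to combine them.
cur.execute("""SELECT c.customer_name, o.order_id
               FROM customer c JOIN orders o
               ON c.customer_id = o.customer_id""")
print(cur.fetchall())   # -> [('Asha Traders', 101)]
```

Note that the customer's name and city are stored only once, in the customer table; every order refers back to them through customer_id, which is the redundancy reduction the text describes.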

How is DBMS different from Conventional File System? [2000] [20 mks]
OR What is a database management system? [1999] [5 mks]
OR What is Conventional File Management System? What are benefits of DBMS? [1998]
OR Discuss the importance of the RDBMS model to users and designers? [1999] [5 mks]

ANS. A database management system (DBMS) is a data management tool for a multi-user environment. Broadly, it simplifies access control, data integrity checks, data redundancy checks, etc. There are three main features of a database management system that make it attractive to use in preference to more conventional software: (a) centralized data management, (b) data independence, and (c) systems integration. Some of the advantages of data independence, integration and centralized control are:

1. Redundancies and Inconsistencies are Eliminated. Since the data is stored in a single location, the possibility of inconsistency of data (different values of the same data in different copies at various locations) is completely eliminated. Any change made to the data by anyone will always be reflected in the DBMS. Similarly, redundant data will not exist.

2. Improved System Flexibility. Changes are often necessary to the database structure itself. These changes are more easily made in a DBMS than in a conventional system due to its basic design architecture. Since the database is independent of application programs in a DBMS, such changes rarely have any impact on the application programs. Thus, it is easy to add new fields of information in a DBMS to respond more quickly to the expanding needs of the business.

3. Easy to Maintain. Since there is a single data source, any change to the data, or even to the data structure, needs to be made only once, unlike in other systems where the same change must be repeated at different locations. The job is quicker and system performance is better.

4. Low Development, Implementation and Maintenance Cost. Although the initial cost of setting up a DBMS can be large, development of data processing programs becomes very easy and fast. Any future change in data or updates needs to be carried out at one place only, saving costs.

5. Improved Data Security. Data security in a DBMS incorporates security definition of data and access control for various groups of users. Each person is given an access code, and each access code can be programmed to permit access to limited data. However, if the access code at the highest level (CMDs or Database Managers) is broken or compromised in any way, the complete data can be lost or compromised.

6. Integrity Can Be Improved. It is easier to enforce data integrity controls in a DBMS than in other data systems. Integrity may be compromised in many ways: if a number of users are allowed to access and work upon the same data at the same time, there is a possibility that the result of the updates is not quite what was intended. Controls must therefore be introduced to prevent such errors from occurring because of concurrent updating activities. Since all data is stored only once, it is often easier to maintain integrity in a DBMS than in conventional systems.

7. Data Model Development. Perhaps the most important advantage of setting up a DBMS is the requirement that an overall data model for the enterprise be built. In conventional systems, it is more likely that files will be designed as the needs of particular applications demand, and the overall view is often not considered. Building an overall view of the enterprise data, although often an expensive exercise, is usually very cost-effective in the long term.

Enumerate the key purpose of a modem.

ANS. A modem is a device that enables computers to transmit electronic data over regular telephone lines. In standard telephone service, a telephone converts sound into an electric signal that flows through the telephone lines; the telephone at the other end converts this electric signal back into sound so that the person at the other end can hear it. Both the sound wave and the electrical signal are analog signals: they vary continuously with the volume and pitch of the speaker's voice. Computer data, however, is in digital form, 0s and 1s represented by on/off pulses, which cannot be transmitted over a normal telephone line. Thus, if two computers want to exchange data over a normal telephone line, the digital signal must be converted into an analog signal while transmitting, and back from analog to digital when receiving. The device used for this purpose is called a modem (short for Modulator-Demodulator). In the modulation phase, the modem turns the computer's digital signals into analog signals; in the demodulation phase, the reverse takes place and analog signals are converted back into digital signals.

Dial-up modems can be external, internal or PC Card type. Modems can be configured either as data/fax modems or as data/fax/voice modems. Data/fax modems provide only those two facilities, while the voice capability in a modem lets it act as an answering machine.
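The modulation/demodulation round trip can be sketched as a toy frequency-shift keying scheme, where each bit is represented by one of two tones. The two frequencies below follow the historical Bell 103 convention, but treat the whole sketch as illustrative rather than a description of any real modem's signal processing:

```python
# Toy modulator/demodulator: each bit is mapped to one of two audio
# tone frequencies (frequency-shift keying), the simplest scheme early
# dial-up modems used.
FREQ_0 = 1070  # Hz tone representing a 0 bit
FREQ_1 = 1270  # Hz tone representing a 1 bit

def modulate(bits: str) -> list[int]:
    """Digital -> analog: turn each bit into a tone frequency."""
    return [FREQ_1 if b == "1" else FREQ_0 for b in bits]

def demodulate(tones: list[int]) -> str:
    """Analog -> digital: recover the bit stream from the tones."""
    return "".join("1" if f == FREQ_1 else "0" for f in tones)

data = "1011001"
assert demodulate(modulate(data)) == data   # lossless round trip
```

The sending modem performs modulate() before the telephone line; the receiving modem performs demodulate() after it, which is the Modulator-Demodulator pairing the name abbreviates.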

Explain the history of development of different generations of languages.

ANS. Programming languages have been under continuous evolution since the early 1950s, and this evolution has resulted in tens of different languages being designed and used in the industry. The evolution of languages was necessitated firstly by the need to simplify the languages for use, and secondly by the need to take advantage of the ever-growing speed and capability of processors. The languages in use today have begun to resemble everyday language, rather than being heavy on syntax (like FORTRAN, only two generations ago), which required that every "i" be dotted and every "t" be crossed.

The first- and second-generation languages, during the period 1950-60, were machine and assembly languages. The earliest languages were for automated calculation of mathematical functions. Micro-code, residing in the CPU and written for doing multiplication or division, is an example of a first-generation language. Further developments in the early 1950s brought machine language, without interpreters or compilers to translate it. Computers were then programmed in binary notation, which was very prone to errors, and a simple algorithm resulted in lengthy code. This was then improved with mnemonic codes to represent operations.

Symbolic assembly codes came next, in the mid 1950s: the second generation of programming languages, like AUTOCODER, SAP and SPS. Symbolic addresses allowed programmers to represent memory locations, variables and instructions with names. Programmers now had the flexibility of not changing the addresses whenever variables were moved to new locations. However, programs were machine dependent and could not run on a machine other than the one they were written for (lack of portability); simply stated, assembly or machine code could not run on different machines.

The early 1960s through 1980 saw the emergence of the third-generation programming languages.
Languages like ALGOL 58, 60 and 68, COBOL, FORTRAN IV, ADA and C are examples of these and were considered high-level languages. Most of these languages had compilers, and the advantage of this was speed. Independence was another factor: these languages were machine independent, and programs written in them could run on different machines. The comparative ease of use and learning, improved portability, and simplified debugging, modification and maintenance led to greater reliability and lower software costs.

The primary features of fourth-generation languages were their user friendliness, portability and independence of operating systems. They were simple enough to be used by non-programmers, had intelligent default options about what the user wants, and allowed the user to obtain results fast by writing database-assisted code, which was not possible using COBOL or PL/I. Access is the easiest example of such languages.

The 1990s saw the development of fifth-generation languages like PROLOG, referring to systems used in the fields of artificial intelligence, fuzzy logic and neural networks. This means computers may in the future have the ability to think for themselves and draw their own inferences using programmed information in large databases. Speech recognition is another area of their application.

The current trend of the Internet and the World Wide Web could cultivate a whole new breed of radical programmers for the future, now exploring new boundaries with languages like HTML and Java.

Office Automation? [1998]

ANS. The term Office Automation is generally used to describe the use of computer systems to perform office operations, covering desktop application suites, groupware systems and workflow.

(a) Desktop Application Suites. Desktop application suites generally include:
(i) Word Processors to create, display, format, store, and print documents.
(ii) Spreadsheets to create and manipulate multidimensional data tables.
(iii) Presentation Designers to create highly stylized images for slide shows and reports.
(iv) Desktop Publishers to create professional quality printed documents.
(v) Desktop Database Support to collect limited amounts of information and organize it by fields, records, and files.
(vi) Web Browsers to locate and display World Wide Web content.

(b) Groupware Systems. Groupware refers to any computer-related software that assists person-to-person interaction. Simply put, it is software that helps people work together. Groupware systems generally include:
(i) Email to transmit messages and files.
(ii) Calendaring to record events and appointments in a fashion that allows groups of users to coordinate their schedules.
(iii) Fax for the exchange of documents and pictures.
(iv) Instant Messaging to allow immediate, text-based conversations.
(v) Desktop Audio/Video Conferencing to allow dynamic, on-demand sharing of information through a virtual face-to-face meeting.
(vi) Chat Services to provide a lightweight method of real-time communication between two or more people interested in a specific topic.
(vii) Presence Detection to enable one computer user to see whether another user is currently logged on.
(viii) White-boarding to allow multiple users to write or draw on a shared virtual tablet.
(ix) Application Sharing to enable the user of one computer to take control of an application running on another user's computer.
(x) Collaborative Applications to integrate business logic with groupware technologies in order to capture, categorize, search, and share employee resources in a way that makes sense for the organization.

(c) Workflow. Workflow is defined as a series of tasks within an organization to produce a final outcome. Sophisticated applications allow workflows to be defined for different types of jobs. Information, once processed by one individual, is automatically routed to the next individual or group as predefined. The system ensures that the individuals responsible for the next task are notified and receive the data they need to execute their stage of the process. This continues until the final outcome is achieved. Although workflow applications are considered part of Office Automation, workflow itself is part of a larger document management initiative; the Document Management Domain Team will therefore take responsibility for it.

Advanced Features. While the office automation procedure is itself a large task, often even peripheral issues take more attention, time and money in execution. These issues are important because without them the whole procedure would get derailed in no time at all. Some of these issues are:
(i) Anti-Virus Protection.
(ii) Anti-Spam Protection.
(iii) Open-Relay Protection to ensure that email servers within the environment are not used by outside parties to route spam.
(iv) Enterprise Directories to provide a single repository of user accounts for authentication, access control, directory lookups, and distribution lists.
(v) Digital Signatures and Encryption to allow for authentic and secure transfers of data.
(vi) Data Integrity, Security and Backup against corruption, theft and loss.

Any office automation procedure is built on the following three principles:
(i) Principle A - Minimize System Complexity.
(ii) Principle B - Maximize Interoperability. Compatibility of systems across the entire length and breadth of the organization, which means standardization of software and procedures at different offices/centers.
(iii) Principle C - Responsive Training. Training all users to take full advantage of all the available features.

What is the difference between Optical Fibre and Copper Wire? [2002]
OR Write Short Notes on Fibre Optics? [1999]

ANS. Optical fibre cables are slowly replacing copper cables because of the multitude of advantages they offer. (We have all seen those multi-coloured synthetic pipes being laid underground in our cities.) A fibre optic transmission system consists of a fibre optic transmitter and receiver connected by fibre optic cable. This cable offers a wide range of benefits not offered by traditional copper wire or coaxial cable. These include:

(a) The ability to carry much more information and deliver it with greater accuracy than either copper wire or coaxial cable.
(b) Fibre optic cable can support much higher data rates, and at greater distances, than coaxial cable, making it ideal for transmission of serial digital data.
(c) The fibre is totally immune to virtually all kinds of interference, including lightning, and will not conduct electricity. It can therefore come in direct contact with high voltage electrical equipment and power lines. It will also not create ground loops of any kind.
(d) As the basic fibre is made of glass, it will not corrode and is unaffected by most chemicals. It can be buried directly in most kinds of soil or exposed to most corrosive atmospheres in chemical plants without significant concern.
(e) Since the only carrier in the fibre is light, there is no possibility of a spark from a broken fibre. This reduces fire hazard, and there is no danger of electrical shock to personnel.
(f) Fibre optic cables are virtually unaffected by outdoor atmospheric conditions, allowing them to be lashed directly to telephone poles or existing electrical cables without concern for extraneous signal pickup.
(g) A fibre optic cable, even one that contains many fibres, is usually much smaller and lighter in weight than a wire or coaxial cable with similar information-carrying capacity. It is easier to handle and install, and uses less duct space. (It can frequently be installed without ducts.)
(h) Fibres do not leak light and are quite difficult to tap. Fibre optic cable is ideal for secure communications systems because it is very difficult to tap but very easy to monitor. In addition, there is absolutely no electrical radiation from a fibre.
(i) It can handle much higher bandwidths than copper wire. This alone would make it ideal for use in high-end networks.
(j) It is not affected by power surges, electromagnetic interference, or power failures. Nor is it affected by corrosive chemicals in the air, making it ideal for harsh factory environments.

Disadvantages of Fibre Cable over Copper Wire

(a) Fibre is an unfamiliar technology requiring skills not commonly available.
(b) Since optical transmission is inherently unidirectional, two-way communication requires either two fibres or two frequency bands on one fibre.
(c) Fibre interfaces are costlier than electrical interfaces.

What are benefits of networking your business? [1998]

ANS. The main benefit of networking lies in speeding up business processes and squeezing out inefficiencies, because information is easily available and its exchange is fast and easy. Communication between employees, customers and suppliers can be improved. All kinds of documents, from emails to spreadsheets and product specifications, can be exchanged rapidly between networked users. An intra-office, network-enabled file sharing system keeps the cost of support staff down despite rapid expansion. Shared resources mean that a team can work on a single client much more efficiently. Well-built, well-used networks deliver measurable benefits in productivity and efficiency. Computer resources, such as printers, storage, modems and other communications devices, can also be shared. Staff can remain in touch almost as effectively in the field as when working from a desk.

The advantages of wireless networking

Stay connected: Wireless LANs allow users to stay connected to the network continuously, irrespective of their movement within the area. A user with a laptop and a wireless connection can roam the office building without losing his connection, or having to log in again on a new machine in a different location.

Spaghetti free: Perhaps the most obvious advantage comes from removing the need for extensive cabling and patching.

What is a router?

ANS. When a network of computers wishes to have access to the Internet, an additional protocol needs to be "bound" to each computer requiring Internet access. The protocol used on the Internet is called TCP/IP, and routers are limited to that protocol. Unlike the protocols used for file and printer sharing, TCP/IP adds an Internet Protocol (IP) address to every packet that is sent on the Internet. The purpose of the router is to examine every packet on a network and route it to the correct place. If one computer is communicating with another computer on the hub, the router ignores those packets, whether they are TCP/IP or not. But TCP/IP packets that are involved with Internet traffic are passed through to the cable modem connection.

Firewall. When a computer is configured to have access to the Internet, it can also be accessed by any other computer on the Internet, and is thus vulnerable to virus and spyware attack from outside. Computers can be safeguarded against such malicious software to a great extent by simply turning off all networking features like file sharing, printer sharing, and the "Microsoft Client". But these are the very features required for the local network as well, so this option is unfeasible for networked computers. This is where the firewall comes into play: it provides the first line of defence, as any external computer is only able to see the router, because the router allows several computers to share one IP address.

Why is a router required if there are several computers?

ANS. Routers contain several components that are fine-tuned to solve the following problems:
(a) They allow several computers to share one IP address.
(b) They prevent packets to and from computers in the home network from being passed on to the cable modem. Thus, on one hand data is secured, and on the other hand system congestion is avoided.
(c) They keep "broadcast" packets from passing to the cable modem. Many LAN protocols generate frequent messages to enable other computers on the same network to be aware of their address.
(d) They provide a firewall to protect your computer's information from unwanted and unauthorized access.
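The routing decision described above, keep local traffic on the hub and pass Internet-bound packets to the cable modem, can be sketched with Python's standard ipaddress module. The private subnet below is an assumed home network, not anything mandated by the text:

```python
# Sketch of a home router's forwarding decision: destinations on the
# local subnet stay on the hub/switch; everything else is forwarded
# toward the cable modem (and hence the Internet).
import ipaddress

LOCAL_NET = ipaddress.ip_network("192.168.1.0/24")  # assumed home subnet

def route(destination: str) -> str:
    """Return where the router sends a packet for this destination."""
    if ipaddress.ip_address(destination) in LOCAL_NET:
        return "local"        # stays on the hub; router ignores it
    return "cable modem"      # passed through toward the Internet

assert route("192.168.1.42") == "local"
assert route("93.184.216.34") == "cable modem"
```

This is also why broadcast chatter between local machines never reaches the modem: every such destination falls inside LOCAL_NET and is handled on the local side of the router.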

How is email, intranet and internet beneficial to business organisations?

ANS. Email. E-mail is a system for exchanging digital (electronic) messages (typed text, tables, sketches, photographs, etc.) through a computer network. Email has now become the most commonly used medium of data communication. In addition to the main body of an email, there is provision to attach data files, analogous to enclosures and annexures to a covering letter. With this, data files can be shared among a select group of people for whom the information is relevant and useful, which is particularly helpful when peripheral devices are not shared. If connected to the internet, mails can be sent and files shared with virtually anyone who has access to the internet. Email is both efficient and inexpensive: users can send email without worrying whether the recipient is online or not, in corporate networks email is delivered almost instantly, and the cost of sending messages is negligible. Email has provided the modern world with an entirely new and immensely valuable form of communication.

Internet. The Internet has begun to percolate our lives in an irreversible fashion. Many modern comforts, like easy availability of information, whether it is seat availability on a particular train, booking tickets, or exchanging audio and video messages with friends, family and clients across the world at affordable cost, are gifts of the internet. This has changed the way work and business are conducted, and has enabled us to access nearly any kind of information from a PC. The Internet is a huge co-operative community with no central ownership. The lack of ownership is important, as no single person owns the internet; any person who can access it can use it to carry out a variety of transactions and also create his own set of resources for others to use. As a business tool, the internet has many uses. Electronic mail is an efficient and inexpensive way to send and receive messages and documents around the world in seconds.
The internet is becoming an important medium for advertising and for distributing software and information services. It is a space wherein people with similar interests can share data on various topics. When business organisations connect parts of their network to the internet, they allow users to work on the network from virtually anywhere in the world. Once connected, the internet can be used to sell goods, track inventory, order products, send invoices and receive payments in a very cost-effective manner. This means the world is the marketplace for today's organisations: they can buy and sell anywhere in the world.

Intranets & Extranets. Before the advent of the World Wide Web, most corporate networks were bland environments for email and file sharing only. Now, however, corporate networks are being reconfigured to resemble and give the feel of the internet. This setup enables users to work in a web-like environment using web browsers as the front end to corporate data. Two common spin-offs of the web are called intranets and extranets. They are used for data sharing, scheduling and workgroup activities.

Intranets. An intranet is a LAN or WAN that uses the TCP/IP protocol but belongs exclusively to a corporation. The intranet is accessible exclusively to an organisation and its employees. It holds the data that can be shared and used for the day-to-day functioning of the organisation, such as the attendance muster, leave records, leave and other application forms, daily production records, policies, and company brochures that can be used by sales teams. It can also be connected to the internet through proper security features like firewalls.

Extranets. An extranet is an intranet that can be accessed by outside users over the internet. It is typically used by telecommuters or business associates like suppliers and distributors to access relevant data and also to log data onto the corporate systems on a day-to-day basis. It helps the organisation get vital data for proper planning of procurement and distribution. With such state-of-the-art tools, overheads in terms of inventory and excess stocks can be reduced to a great extent.

Explain various file systems.

ANS. Sequential files. A sequential file is one in which the records are stored in some sequence. The address of the next record, called a pointer, is available in the preceding record, so unless the preceding record is accessed, the location of the next record is not known. Therefore, whenever a record is to be accessed, every search has to start from the beginning of the file and continue until the record is found. The process is slow and tedious, and therefore not suitable for applications where records may need to be accessed in a random fashion, such as banks or online queries. It is not necessary that the records of a sequential file be in physically adjacent positions. On a magnetic tape the records are written one after the other along the length of the tape, but on disks the records of a sequential file may not be in contiguous (continuous/unbroken) locations; the sequential order may be given with the help of pointers on each record. Sequential organisation is best suited for master files on which batch processing is done, because the address of the next record is available at the end of the current record; the time otherwise spent accessing an index to find the address of the next record is saved. Transaction files are sorted in the same sequence as the master file, which can then be periodically updated.
Drawbacks: (a) Updating requires that all transaction records be sorted in the record key sequence. (b) Information on the file is not always current. (c) Addition and deletion of records is not simple.
Advantages: (a) File design is simple. (b) Locating a record requires only the record key. (c) When the activity rate is high, the simplicity of the accessing method makes processing efficient and fast. (d) Low-cost file media such as tapes can be used.
Direct or Random Access Files. Even though a sequential file is excellent for batch processing, it is unsuitable for on-line enquiry.
Customers arrive at random, and each time, accessing a new customer's record would involve a search of all the records from the beginning. It is like looking up a customer's record in a ledger that has no index page and whose records are in no particular sequence: you would need to flip through every page from the beginning each time you search for a customer's page. In such situations, random access file organisation provides a means of accessing records directly. In this method of file organisation, each record has its own address by which it can be directly accessed for reading or writing. A record of these addresses is maintained separately, from where the operating system fetches it. The records need not be in adjacent locations on the storage medium. Such a file cannot be created on a magnetic tape medium; random (or direct) files are created only on disks. Since every record can be independently accessed, every transaction can be processed individually. A random access file is best suited for on-line

processing systems where information is required on a random basis. It is not necessary for the user to know where the record is kept on the disk; the user identifies the record only by its key (say, the account number of the customer at the bank). The operating system finds the address of the record from the key using an address-generating function.
Advantages: (a) Immediate access to records is possible. (b) Up-to-date information is always available on the file. (c) Several files can be updated simultaneously. (d) Addition and deletion of records is simple. (e) No new master file is created when updating a random access file.
Disadvantages: (a) Less efficient in the use of storage space. (b) The storage medium is relatively expensive. (c) Lags behind sequential files in speed for batch processing.
Indexed sequential files. This method combines the advantages of sequential and direct file organisation, but sacrifices a little of the advantage of each at the same time. The file is divided into a number of blocks and the highest key in each block is indexed. The index allows direct access to the block; the records within each block are in sequence and are therefore searched sequentially. Thus, though access is not as fast as with the random access method, there is still a considerable improvement in speed of access. Similarly, in batch processing there is a slight compromise due to the occasional need to fetch the address of the next block from the index. This organisation can support many data processing operations. Many files may be required to support both batch and online activities; for example, a stock file may be updated periodically by batch processing while also having to provide current information about the availability of items. Such files can be organised as indexed sequential files.
Advantages: (a) Suitable for both batch (sequential) and online processing.
Disadvantages: (a) Less efficient in the use of storage space.
(b) Addition and deletion of records are more complex, as they affect both the index and the file.
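The indexed sequential lookup described above can be sketched in a few lines: an index of the highest key in each block gives direct access to the right block, and the record is then found by a short sequential scan within that block. This is only an illustrative sketch in Python, not tied to any real file system; the block size and the records are invented for the example.

```python
import bisect

# Records sorted by key, grouped into fixed-size blocks (as on disk).
records = [(10, "a"), (20, "b"), (35, "c"), (40, "d"), (55, "e"), (60, "f")]
BLOCK = 2
blocks = [records[i:i + BLOCK] for i in range(0, len(records), BLOCK)]

# The index holds the highest key in each block: [20, 40, 60].
index = [blk[-1][0] for blk in blocks]

def lookup(key):
    # Direct step: binary-search the index to pick the block.
    b = bisect.bisect_left(index, key)
    if b == len(blocks):
        return None          # key is larger than every indexed key
    # Sequential step: scan the records inside that one block.
    for k, value in blocks[b]:
        if k == key:
            return value
    return None
```

Here `lookup(35)` touches only the index and one block instead of scanning the whole file, which is the speed gain over a purely sequential search.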

Explain Assemblers, Compilers and Interpreters.

ANS. Assemblers. A program which converts an assembly language program into machine code is called an assembler. An assembler which runs on the computer for which it produces object code (machine code) is called a self assembler (or resident assembler). A less powerful and cheaper computer may not have enough software and hardware facilities for convenient program development and assembly. In such a situation, a faster and more powerful computer can be used for program development, and the programs so developed are run on the smaller computer. For such development a cross assembler is required: an assembler that runs on a computer other than the one for which it produces machine code.
Interpreters. An interpreter is a program which translates one instruction of a high-level language program into machine code and lets the machine execute it before translating the next instruction. It proceeds in this way until all the instructions of the program have been translated and executed. An interpreter is a smaller program than a compiler and therefore occupies less memory space, so it can be used on smaller systems with limited memory. On the other hand, the object code of each statement produced by an interpreter is not saved, so the interpreter is required every time the program is run. It is very handy during debugging, as the result of each step is known and the source of any error is easily spotted.
Compilers. A program which translates a high-level language program into a machine language program is called a compiler. A compiler goes through the entire high-level language program once or twice and translates the entire program into machine code at once. A compiled program typically runs some 5 to 25 times faster than the same program run under an interpreter. The object program produced by the compiler is saved for future use, so the compiler is required only once for a program unless changes are made.
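The difference between statement-at-a-time interpretation and whole-program compilation can be mimicked with Python's built-in `compile` and `exec`. This is only an illustration of the two translation styles, not of a real assembler or compiler; the toy three-line program is invented for the example.

```python
# A toy "high-level" program, one statement per line.
program = ["x = 2", "y = x * 10", "z = y + 2"]

# Interpreter style: translate ONE statement, execute it, then move on.
env_interp = {}
for stmt in program:
    code = compile(stmt, "<toy>", "exec")  # translate this statement only
    exec(code, env_interp)                 # execute before the next one
    # `code` is then discarded: retranslation is needed on every run

# Compiler style: translate the WHOLE program once, keep the object code.
object_code = compile("\n".join(program), "<toy>", "exec")
env_comp = {}
exec(object_code, env_comp)  # the saved object code can be rerun at will
```

Both approaches end with the same result, but only the compiler-style path leaves behind a reusable translated program.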

List the advantages and disadvantages of Distributed Processing

ANS. A distributed processing system is one in which the processing of data takes place at nodes other than the server. Usually every node has processing capability and processes its own requirements after fetching data and programs, which are centrally located on the server. Distributed processing systems afford reliability against major system failure, but are expensive and not as efficient in processing-capacity utilisation.
Advantages: (a) Flexibility. Greater flexibility in placing true computer power at the locations where it is needed. (b) Better availability of resources. Computer resources are easily available to the end users with the least disruption.

(c) Lower cost of communication. Telecommunication costs can be lower when much of the local processing is handled by on-site mini and micro computers rather than by distant central mainframe computers.
Disadvantages: (a) Security. Lack of proper security controls for protecting the confidentiality and integrity of the user programs and data that are stored on-line and transmitted over network channels (it is easy to tap a data communication line). (b) Linking of different systems. Due to the lack of adequate computing/communication standards, it may not be possible to link equipment produced by different vendors, so several good resources may not be available to users of a network. (c) Maintenance difficulty. With resources decentralised to remote sites, management from a central control point becomes very difficult, which normally results in increased complexity and poor documentation. The non-availability of skilled computer/communication specialists at the various sites for proper maintenance of the system poses another problem.

Describe in brief each stage of the System Development Life Cycle with emphasis on the deliverables/ output of each stage?

ANS. Programs are the building blocks of information systems, so programmers use a development process or life cycle that is similar to the life cycle of entire information systems. The systems development life cycle (SDLC) is composed of 5 phases:
(1) Needs analysis
(2) System design
(3) Development
(4) Implementation
(5) Maintenance
Phase I: Needs Analysis. During the needs analysis phase, the development team focuses on completing three tasks: (a) defining the problem and deciding whether to proceed; (b) analysing the current system in depth and developing possible solutions to the problem; (c) selecting the best solution and defining its function. Phase I begins when the need is felt for a new or modified information system. The systems analysts then begin a preliminary investigation, talking with end users and managers of the departments that will be affected. The first challenge is to define the problem accurately. With the problem defined accurately, the concerned department can decide whether to undertake the project. When the decision to proceed is made, systems analysts undertake a thorough investigation of the current system and its limitations. The knowledge gathered about the current system is documented in several different ways. Some analysts use data flow diagrams, which show the flow of data through a system. Analysts may also use structured English, a method that uses plain English terms and phrases to describe events and alternative actions that can occur within the system. Another option is to represent the actions taken under different conditions in a decision tree, which graphically illustrates the events and actions that can occur in the system. At the end of Phase I, the team recommends a possible solution. Throughout the needs analysis phase, the team remains focused only on WHAT the system must do, not on HOW it will do it.
Phase II: System Design. During the systems design phase, the project team tackles the HOW of the possible solution.
The analysts and programmers involved at this point often use a combination of top-down and bottom-up design to answer these questions.

In top-down design, team members begin with the larger picture and then descend gradually to finer details. They look at the major functions that the system must provide and break these down into smaller and smaller activities. Each of these activities will be programmed in the next phase of the SDLC. In bottom-up design, the team starts with the details (for example, the reports to be produced by the system) and then moves to the big picture (the major functions or processes). At the end of the design phase, a major review is conducted with top management and the department that will be affected. If the design passes inspection, development begins. In some instances the review highlights problems with the overall solution, and the team must return to analysis or terminate the project.
Phase III: Development. During the development phase, programmers play the key role, creating or customizing the software for the various parts of the system. There are two alternative paths through Phase III: the acquisition path or the local development path. Typically, the programmers on the team are assigned to specific components of the overall system. If a component is built in-house, the programmers write the necessary code or use CASE tools (if possible) to speed the development process. For purchased components, the programmers must customize the code as necessary to make the components fit into the new system. As early as Phase I, during the needs analysis, the team may determine that some or all of the necessary system components should be purchased rather than developed.
Phase IV: Implementation. In the implementation phase, the project team finishes buying any necessary hardware for the system and then installs the hardware and software in the user environment. The process of moving from the old system to the new system is called conversion.
There are several ways to convert a department or an organisation to the new system: Direct conversion. All users stop using the old system at the same time and then begin using the new one. This option is fast but can be disruptive, as the system may throw up unexpected errors at a late stage; in addition, some teething troubles are bound to occur, so pressure on support personnel is excessive. Parallel conversion. Users continue to use the old system while an increasing amount of data is processed through the new system. The outputs from the two systems are compared; if they agree, the switch is made at the appropriate time. Phased conversion. Users start using the new system component by component. This option works only for systems that can be compartmentalized.

Pilot conversion. Personnel at a single pilot site use the new system, and then the entire organisation makes the switch. Although this approach may take more time than the other three, it gives support personnel the opportunity to test user response to the system thoroughly, and they will be better prepared when many people make the conversion.
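Parallel conversion, in particular, amounts to running both systems on the same input and comparing their outputs before making the switch. A minimal sketch of that comparison, with invented legacy and replacement functions standing in for the old and new systems:

```python
def legacy_total(items):
    # Old system: invoice total as price * quantity.
    return sum(price * qty for price, qty in items)

def replacement_total(items):
    # New system under test; it must reproduce the old results before cut-over.
    total = 0
    for price, qty in items:
        total += price * qty
    return total

def safe_to_switch(sample_batches):
    # Compare the two systems' outputs on every sample batch of live data.
    return all(legacy_total(b) == replacement_total(b) for b in sample_batches)

samples = [[(10, 3), (2, 4)], [(5, 1)]]
ok = safe_to_switch(samples)   # switch only when the outputs agree
```

If any batch disagrees, the team keeps the old system running and investigates, which is exactly the safety net parallel conversion provides.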

Phase V: Maintenance. After the information system is implemented, systems professionals continue to provide support during the maintenance phase. They monitor various indices of system performance, such as response time, to ensure that the system is performing as intended, and they respond to changes in user requirements. Errors in the system are also corrected during Phase V, and changes or upgrades are made regularly during the remaining life of the system. At some point, however, patch repairs no longer meet user requirements, which may have changed radically since the system was installed. Information systems professionals or managers in a user department then start calling for a major modification or a new system. At this point the SDLC has come full circle and the analysis phase begins again.

Mrs. Shenaz says office automation packages like spreadsheets, word processors and presentation packages have helped in increasing the productivity of the secretaries and line function managers. Comment on her statement giving relevant examples.

ANS. Because resources are so limited for small businesses, application suites and a new breed of financial software are making it easy to run a small office/home office (SOHO). Instead of relying on outside accountants, marketers, designers and other consultants, many managers can do many non-traditional chores, as well as their normal work, by using sophisticated software packages. These applications help managers solve various problems without making a large educational or monetary investment. Application suites such as MS Office, Corel's WordPerfect Office and Lotus SmartSuite include the following programs.
Word processor. Most word processors include professional templates to give documents a clean look and help the user with spelling, grammar and word choice. Word processors greatly simplify mass mailings and can print envelopes, brochures and other complex documents. For managers who want to design their own web pages, a word processor may be the only tool they need.
Spreadsheet. Spreadsheets help managers tackle crucial financial tasks. They provide inter-cell relationships and mathematical function calculation, which is of great help: once the basic data is amended, the remaining calculations are done automatically by the computer in a flash. The resulting files can be imported into many financial or accounting programs and can be useful to an accountant or consultant.
Presentations. These programs help the user quickly create impressive presentations for use in slide shows, on overheads and on the computer screen. Colour graphics, animations and concise text can help persuade clients and close the sale.
Database. These packages enable the line manager to track products, orders, shipments, customers and much more when used as part of an application suite. The database program can provide much of the data required for invoices, receipts, form letters and other mission-critical documents.
Email, contact & schedule management. Even in a small office, time is valuable and people cannot afford to confuse schedules. Programs like MS Outlook and Lotus Organizer help people (individually and in groups) manage and co-ordinate their schedules, set appointments and manage contacts. These programs offer email software, making it even easier to send a message to someone from the contact list. The speciality software market is growing rapidly. Here are examples of the types of special business-oriented programs targeted at small businesses.

Financial. These impressive and powerful packages can track inventories, billings, expenses and much more. They can also help the user categorize income and expenses and do tax planning.
Business planning. New business planning programs provide templates to help the user create business plans and customise documents by industry, product type or market type. These programs can help the aspiring business owner find investors.
Tax planning and preparation. Tax software enables business owners to prepare their own tax returns without using an accountant or consultant. The user punches in the basic data and the software does the rest.

Word Processing Packages (Short Notes)

ANS. Word processing software (also called a word processor) is an application that provides extensive tools for creating all kinds of text-based documents. Word processors are not limited to working with text: they enable you to add images to documents and to design documents that look like the products of a professional print shop. A word processor can enhance documents in other ways as well. You can embed sounds, video clips and animations into them, and you can link different documents together (for example, link a chart from a spreadsheet into a word processing report) to create complex documents that update themselves automatically. Word processors can even create documents for publishing on the World Wide Web, complete with hyperlinked text and graphics. Leading word processing programs include Microsoft Word, Corel WordPerfect and Lotus Word Pro.
Features of word processing software:
- Entering and editing text
- Formatting text
- Adding graphics and sound files to documents
- Spell checker, grammar checker and thesaurus to improve the language of a document
- Mail merge: combining a form letter with the contents of an address database to create a separate copy of the letter for each person in the database
- Templates: pre-designed documents that simplify document design, enabling you to create professional-looking documents simply by typing your text
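The mail merge feature described above is essentially a template filled in once per record of the address database. A rough sketch of the idea in Python; the template and address records are invented for the example.

```python
# A form letter with placeholders for the fields in the address database.
template = "Dear {name},\nYour order #{order} will be delivered to {city}."

# Each record in the address database yields one personalised letter.
contacts = [
    {"name": "Asha", "order": 101, "city": "Pune"},
    {"name": "Ravi", "order": 102, "city": "Delhi"},
]

# Merge: fill the template once per record.
letters = [template.format(**contact) for contact in contacts]
```

A real word processor adds formatting, envelope printing and data-source dialogs on top, but the core merge operation is just this substitution.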

Write short note on Centralised Data Processing

ANS. Historically, mainframe computers were widely used in business data processing. In this kind of system, several dumb terminals are attached to the central mainframe computer. Dumb terminals are machines through which users can input data and see the results of processed data; however, no processing takes place at the terminals themselves. In earlier days, organisations processed large amounts of data centrally, usually at the head office. The main advantages of such systems were that the design was straightforward and the organisation could keep tighter control over the main database. In such systems, one or more processors handle the workload of several distant terminals: the central processor switches from one terminal to another, doing a part of each job in a time-phased mode, and this switching continues until all tasks are completed. Hence such systems are also called time-sharing systems. The biggest disadvantage of such a system is that if the main computer fails, the whole system fails and all remote terminals stop functioning. All end users also have to format data according to the central office's format, and the cost of communicating data to the central server is high, as even the smallest of processes has to be done centrally.

Compare Online and batch processing

ANS. Batch processing is a technique in which a number of similar transactions to be processed are grouped (batched) together and then processed. Processing happens sequentially, in one go, without any interruptions. Many financial systems use this kind of processing, mainly because it uses technological resources very effectively, especially where large numbers of transactions are involved. Many calculations can be done centrally and outside business hours, such as interest calculation and cheque clearing.

How does batch processing work? Batch processing systems work in the following steps: (1) Forms/data/documents needed for processing are collected during business hours. (2) At the end of the day, this data is collected from all users, sorted and arranged in batches. (3) All the data is submitted to the computer for processing in sequential order. (4) Once the processing is complete, the data is again sorted in sequential order with all calculations updated. (5) Finally, the master database is updated with the results of processing. The new file will be available to all users for the next day's operations.
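The steps above can be sketched as a miniature batch run; the master file and the day's transactions are invented account data for the example.

```python
# Master file, keyed by account number.
master = {"A100": 1000, "A102": 2000}

# Transactions collected during business hours: (account, amount).
collected = [("A102", 500), ("A100", -200), ("A102", -100), ("A100", 300)]

# End of day: sort the batch into the master file's key sequence.
batch = sorted(collected, key=lambda t: t[0])

# Process the whole batch sequentially, in one uninterrupted run.
for account, amount in batch:
    master[account] += amount

# The updated master is now available for the next day's operations.
```

Sorting the transactions into the master's key order is what lets a real batch system sweep through a tape or sequential file in a single pass.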

Online processing. Batch processing is advantageous and economical when large volumes of data are to be processed; it is not suitable for small volumes of data, nor for cases where an immediate response is required. Online processing was developed to overcome these difficulties of batch processing. In many systems it is necessary that transactions immediately update the master data file, as in railway reservation systems, ATMs, and prepaid utilities like mobile phones. When a ticket is booked at one counter, the database must be updated immediately so that no other counter accepts another booking against the same seat.

How online processing works: Data is fed to the central database, and all required processing happens instantaneously. When a person updates a record, it is locked for other users until that person completes the transaction; this is called record locking.

Batch processing versus online processing:
(a) Batch: applicable for high-volume transactions, e.g. payroll and invoicing. Online: suitable for business control applications, e.g. railway reservation.
(b) Batch: data is collected over time periods and processed in batches. Online: data is input at random, as events occur.
(c) Batch: no direct access to the system for the user. Online: all users have direct access to the system.
(d) Batch: files are online only when processing takes place. Online: files are always online.
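Record locking, as in the ticket-booking case, can be sketched with one lock per record: while one user's transaction holds the lock, any other update to that record must wait. A simplified sketch using Python's threading primitives; the seat data and the `book` function are invented for the example.

```python
import threading

# One lock per record (here, per seat in a reservation system).
seats = {"S1": None}                      # seat -> passenger, None = free
locks = {"S1": threading.Lock()}

def book(seat, passenger):
    # The record is locked until this transaction completes, so two
    # counters cannot each book the same seat at the same moment.
    with locks[seat]:
        if seats[seat] is None:
            seats[seat] = passenger
            return True
        return False                      # seat already taken
```

Real database systems manage such locks internally per row or record, but the effect is the same: the second booking attempt sees the seat as taken rather than overwriting the first.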

Differentiate between System Software and Application Software.

ANS. Software which contributes to the control and performance of the computer system is called system software, e.g. operating systems and assemblers. One major type of system software is the operating system, which tells the computer how to use its own resources. Application software does specific jobs for the user, such as solving equations or processing payroll; in other words, application software tells the computer how to accomplish specific tasks for the user. Application programs are provided by computer manufacturers or software suppliers.
System software is a set of instructions that initializes the machine so as to make it operational. System software may or may not be portable across various types of systems. Some system software is proprietary in nature and can be used only on specified hardware; system software for mainframes cannot be interchanged across systems, but system software for lower-end systems is often developed in such a way that it can be easily ported across various platforms. System software also includes utility programs such as language translators, interpreters, compilers, system loaders, link editors, system libraries and security software. These are written by specialized programmers called system programmers.
Application software. A computer running only system software is not useful by itself, as system software merely makes the computer operational. Application software is required to make the computer useful to people, and it is written to do almost every imaginable task. A particular piece of application software is a set of instructions that helps the user solve a specific problem; application software is a generalised set of instructions that can be used to do certain specific tasks, as in word processors, payroll software and accounting packages. Application software is mostly written so that it can be put to use by a vast number of people. Some of the advantages of application software are:

Single User and Multi-user systems

ANS. A stand-alone or single-user system is generally the initial stage of any computerization. As the name implies, only one person can use it at a time. In the past, personal/desktop computers had limited capacity for both storage and data processing; hence PCs are generally called single-user systems. They were deployed at situations or points where data was to be collected, and are used whenever data is to be processed and analysed for individual needs.
Multi-user systems, as the name suggests, are computers on which several people can work simultaneously. Mainframes, supercomputers and the more powerful minicomputers fall under this category. Though not necessarily, such systems are often based on the centralised processing concept: all data and information is stored on the central computer, and various terminals are connected to it for inputting data. All users can work on the system simultaneously, so the central computer has to be powerful enough to handle the transactions of all the terminals connected to it. The terminals connected to the mainframe can themselves be single-user systems (PCs/desktops).
