
1. INTRODUCTION
1.1 ORGANISATION PROFILE
DAFFODILLS INDIA, established in 2006, is located near Edayarpalayam, Coimbatore. The organization's development is built on customer satisfaction and on service to clients both in India and abroad. It is committed to providing innovative software solutions to its clients and recognizes the importance of technology. DAFFODILLS INDIA's seasoned software professionals have expertise in a wide range of technologies including, but not limited to:

- Accounts programming
- Server-side programming
- Development of system software
- Low-level web technology
- Client/server technology
- Database design, development and administration
- Customized package software implementation

DAFFODILLS INDIA undertakes short-term and long-term projects, on both a contract and a regular basis, with reputed clients. Its goal is to clearly understand each client's needs and to provide real and lasting solutions that meet and exceed expectations.

1.2 ABOUT THE PROJECT


Disk storage systems can often be the most expensive components of a database solution. For large warehouses or databases with huge volumes of data, the cost of the storage subsystem can easily exceed the combined cost of the hardware server and the data server software. Therefore, even a small reduction in the storage subsystem can result in substantial cost savings for the entire database solution. Data compression has an undeserved reputation for being difficult to master, hard to implement, and tough to maintain. In fact, the techniques involved are relatively simple and can be implemented with standard utilities taking only a few lines of code. This project uses a good all-purpose data compression technique: Lempel-Ziv-Welch, or LZW compression. The routines shown here belong in any programmer's toolbox. For example, a program that has a few dozen help screens could easily chop 50K bytes off by compressing the screens, or 500K bytes of software could be distributed to end users on a single 360K byte floppy disk. Highly redundant database files can be compressed down to 10% of their original size. Once the tools are available, applications for compression show up on a regular basis.

LZW compression replaces strings of characters with single codes. It does not do any analysis of the incoming text; instead, it simply adds every new string of characters it sees to a table of strings. Compression occurs when a single code is output instead of a string of characters. Because every code in the dictionary refers back to a string that has already appeared in the data, the dictionary never needs to store copies of those strings, and it is not necessary to transmit or preserve the dictionary in order to decode the LZW data stream: the decoder rebuilds it as it goes. This can save quite a bit of space when storing LZW-encoded data. The code that the LZW algorithm outputs can be of any arbitrary length, but it must have more bits in it than a single character. The first 256 codes (when using eight-bit characters) are, by default, assigned to the standard character set. The remaining codes are assigned to strings as the algorithm proceeds. The sample program runs as shown with 12-bit to 31-bit codes: codes 0-255 refer to individual bytes, while codes 256 to 2^n - 1 refer to substrings, where n is the number of bits per code.

2. SYSTEM CONFIGURATION
2.1 HARDWARE CONFIGURATION
1. Processor       : Pentium IV, 3.0 GHz
2. RAM             : 512 MB
3. HDD             : 60 GB
4. Monitor         : 15" color monitor with 16 million colors
5. Pointing device : 3-button mouse
6. Keyboard        : 104 keys

2.2 SOFTWARE CONFIGURATION


1. Front end : VB.NET
2. Platform  : Microsoft Windows Server

3. SYSTEM STUDY AND ANALYSIS


System analysis is conducted with the following objectives in mind: to satisfy the customers according to their needs, to evaluate the system concept for feasibility, and to allocate functions to hardware, software, people, the database and other system elements, creating a system definition that forms the foundation for all subsequent engineering work.

3.1 FACT FINDING


Fact finding is the stage in which data about the system are collected, in terms of both technical and functional requirements. In this project, data collection was completed using the data carriers already existing in the tables.

3.2 FEASIBILITY ANALYSIS


When developing a system, it is necessary to evaluate the feasibility of the project at the earliest possible time. Unexpected technical and timing problems can occur when the problem is poorly defined. It is advisable to conduct discussions regarding the analysis and design of the project before starting it.

Economic Feasibility
The proposed system raises the following cost-related issues:

- Cost of resources needed for development
- Cost-benefit analysis
- Potential market growth

On the server side, money must be invested in the high-capacity storage media, high-speed processor and large amount of memory needed by the system; on the client side only a minimal cost is involved. Overall, setting up and configuring the server is costly at installation time. Considering the cost-benefit analysis, the large investment is made only at the time of first installation.

Technical Feasibility
Technical feasibility involves the analysis of all possible conditions for obtaining the system. It is a study of the function, performance and constraints that may affect the ability to achieve an acceptable system. The considerations normally associated with technical feasibility include the following:

- Development risk
- Resource availability
- Technology

Development risk concerns the probability that all elements will function, and perform identically, on every platform and in the system being developed. This system is developed according to web standards, and the development software tools were selected so as to avoid the problems cited above. Resource availability asks whether skilled staff are available to develop the system elements, and whether the necessary hardware and software are available. The hardware is provided by the organization and satisfies all the requirements.

3.3 EXISTING SYSTEM


Because real-world files usually are quite redundant, compression can often reduce file sizes considerably. This in turn reduces the needed storage size and transfer channel capacity. Especially in systems where memory is at a premium, compression can make the difference between impossible and implementable. The Commodore 64 and its relatives are good examples of this kind of system. The most used 5.25-inch disk drive for the Commodore 64 holds only 170 kB of data, which is only about 2.5 times the total random access memory of the machine. With compression, many more programs can fit on a disk. This is especially true for programs containing flashy graphics or sampled sound. Compression also reduces loading times from the notoriously slow 1541 drive, whether the original slow serial bus routines or some kind of disk turbo loader routine is used. Dozens of compression programs are available for the Commodore 64; this study leaves the work of chronicling their history to others and concentrates on a general overview of the different compression algorithms.

3.4 PROPOSED SYSTEM


A column-oriented approach has two advantages:

- Columns of data from the same attribute compress better than rows of tuples in a table, and
- A well-architected database engine using appropriate compression techniques can operate directly on the compressed data, without decompressing it.

Achieving a high degree of compression is highly desirable. Depending on the dataset, column databases use up to 90% less storage space than the raw data loaded into the database. This means column databases require less storage hardware and/or allow users to analyze much more data on the hardware they already have. There are several common approaches to database compression. One involves compressing tables using a block-based dictionary compression algorithm like Lempel-Ziv. A variant of these block-based dictionary schemes are value-based approaches, which replace individual values in records with shorter values from a dictionary - for example, every time the string "John Smith" occurs, it might be replaced with the code "*A". Other codes would be used for other common strings.
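The value-based scheme just described can be sketched as follows (an illustrative Python sketch; the code table and helper names are invented for this example, and the project's own implementation would be in VB.NET):

```python
from collections import Counter

# Sketch of value-based dictionary compression: repeated field values are
# replaced by short codes such as "*A", as in the "John Smith" example above.
def build_dictionary(values, min_count=2):
    """Assign a short code to every value repeating at least min_count times."""
    codes = {}
    for value, count in Counter(values).items():
        if count >= min_count:
            codes[value] = "*" + chr(ord("A") + len(codes))  # "*A", "*B", ...
    return codes

def encode_column(values, codes):
    return [codes.get(v, v) for v in values]

def decode_column(encoded, codes):
    reverse = {code: value for value, code in codes.items()}
    return [reverse.get(v, v) for v in encoded]

column = ["John Smith", "Mary Jones", "John Smith", "John Smith", "Bob Lee"]
codes = build_dictionary(column)
encoded = encode_column(column, codes)   # every "John Smith" becomes "*A"
```

Only values that actually repeat earn a dictionary code, so rare values pass through unchanged and the dictionary stays small.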

4. SYSTEM DESIGN
System design consists of the following:
1. Input Design
2. Output Design

4.1 INPUT DESIGN

Input design is the link between the information system and its users, and covers the steps necessary to put transaction data into a usable form for processing. Data entry can be activated by instructing the computer to read data from a written or printed document, or it can occur by keying data directly into the system. The design of input focuses on controlling the amount of input required, controlling errors, avoiding delay and extra steps, and keeping the process simple.

The compressor algorithm builds a string translation table from the text being compressed. The string translation table maps fixed-length codes to strings. The string table is initialized with all single-character strings (256 entries in the case of 8-bit characters). As the compressor character-serially examines the text, it stores every unique two-character string into the table as a code/character concatenation, with the code mapping to the corresponding first character. As each two-character string is stored, the first character is sent to the output. Whenever a previously-encountered string is read from the input, the longest such previously-encountered string is determined, and the code for this string concatenated with the extension character is stored in the table. The code for this longest previously-encountered string is output, and the extension character is used as the beginning of the next word.

The decompressor algorithm only requires the compressed text as input, since it can build an identical string table from the compressed text as it recreates the original text. However, an abnormal case shows up whenever the sequence character/string/character/string/character is encountered in the input and character/string is already stored in the string table. When the decompressor reads the code for character/string/character in the input, it cannot resolve it because it has not yet stored this code in its table. This special case can be dealt with because the decompressor knows that the extension character is the previously-encountered character.
Compressor algorithm:

w = NIL
add all possible single-character codes to the dictionary
for every character c in the uncompressed data:
    if (w + c) exists in the dictionary:
        w = w + c
    else:
        add (w + c) to the dictionary
        output the dictionary code for w
        w = c
output the dictionary code for w
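The compressor pseudocode above translates directly into a runnable sketch (illustrative Python; the project's own implementation is in VB.NET):

```python
# Minimal LZW compressor following the pseudocode above.
def lzw_compress(data):
    """Return the list of dictionary codes for the input string."""
    dictionary = {chr(i): i for i in range(256)}  # all single-character codes
    next_code = 256
    w = ""
    output = []
    for c in data:
        if w + c in dictionary:
            w = w + c                      # keep extending the current match
        else:
            output.append(dictionary[w])   # emit the code for the longest match
            dictionary[w + c] = next_code  # remember the new string
            next_code += 1
            w = c
    if w:
        output.append(dictionary[w])       # flush the final match
    return output

codes = lzw_compress("ABABABA")            # -> [65, 66, 256, 258]
```

The repeated substring "AB" is emitted once as code 256 and "ABA" as code 258, so repeated text shrinks to single codes exactly as described.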

4.2 OUTPUT DESIGN


Designing computer output should proceed in a well thought-out manner. The term output means any information produced by the information system, whether printed or displayed. When analysts design computer output, they identify the specific output needed to meet the requirements. Output is the most important source of information for the users. Output design is the process of designing the outputs that have to be used by various users according to their requirements. Efficient, intelligent output design improves the system's relationship with the user and helps in decision making.

Decompressor algorithm:

add all possible single-character codes to the dictionary
read a code k
output the dictionary entry for k
w = that entry
while there is a code k to read:
    if k exists in the dictionary:
        entry = dictionary entry for k
    else if k == next free dictionary code:
        entry = w + w[0]
    else:
        signal invalid code
    output entry
    add (w + entry[0]) to the dictionary
    w = entry
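A matching decompressor sketch (again illustrative Python, not the project's VB.NET code) rebuilds the same table as it goes and handles the abnormal case by constructing w + w[0]:

```python
# Minimal LZW decompressor following the pseudocode above.
def lzw_decompress(codes):
    """Rebuild the original string from a list of LZW codes."""
    dictionary = {i: chr(i) for i in range(256)}  # all single-character codes
    next_code = 256
    w = dictionary[codes[0]]
    output = [w]
    for k in codes[1:]:
        if k in dictionary:
            entry = dictionary[k]
        elif k == next_code:
            entry = w + w[0]                  # the abnormal case described above
        else:
            raise ValueError("invalid LZW code: %d" % k)
        output.append(entry)
        dictionary[next_code] = w + entry[0]  # complete the pending entry
        next_code += 1
        w = entry
    return "".join(output)

text = lzw_decompress([65, 66, 256, 258])     # -> "ABABABA"
```

Note that the decoder needs nothing but the code stream itself: the dictionary it builds is, step for step, the one the compressor built.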

Encoding

If we weren't using LZW and just sent the message as it stands (25 symbols at 5 bits each), it would require 125 bits. We will be able to compare this figure to the LZW output later. We are now in a position to apply LZW to the message. The 27 initial symbols (A-Z plus #) occupy codes 0-26, and this example numbers the first new dictionary entry 28, since TO is the 28th string the dictionary contains. Each code is output using the smallest width that can hold every code assigned so far, i.e. {floor(lg2(init_dict_size + num_chars_output)) + 1} bits. For example, for the fifth output, O: floor(lg2(27 + 4)) + 1 = 5 bits -> 01111; for the sixth output, R: floor(lg2(27 + 5)) + 1 = 6 bits -> 010010.

Current Sequence   Next Char   Output Value (# of bits)    New Dictionary Entry
NULL               T
T                  O           20 = 10100   (5 bits)       28: TO
O                  B           15 = 01111   (5 bits)       29: OB
B                  E            2 = 00010   (5 bits)       30: BE
E                  O            5 = 00101   (5 bits)       31: EO
O                  R           15 = 01111   (5 bits)       32: OR
R                  N           18 = 010010  (6 bits)       33: RN   <-- 6-bit codes start here
N                  O           14 = 001110  (6 bits)       34: NO
O                  T           15 = 001111  (6 bits)       35: OT
T                  T           20 = 010100  (6 bits)       36: TT
TO                 B           28 = 011100  (6 bits)       37: TOB
BE                 O           30 = 011110  (6 bits)       38: BEO
OR                 T           32 = 100000  (6 bits)       39: ORT
TOB                E           37 = 100101  (6 bits)       40: TOBE
EO                 R           31 = 011111  (6 bits)       41: EOR
RN                 O           33 = 100001  (6 bits)       42: RNO
OT                 #           35 = 100011  (6 bits)       43: OT#
#                  (end)        0 = 000000  (6 bits)

Total length = 5*5 + 12*6 = 97 bits. In using LZW we have made a saving of 28 bits out of 125 -- we have reduced the message by almost 22%. If the message were longer, the dictionary words would begin to represent longer and longer sections of text, allowing repeated words to be sent very compactly.

Decoding

Imagine now that we have received the message produced above, and wish to decode it. We need to know in advance the initial dictionary used, but we can reconstruct the additional entries as we go, since they are always simply concatenations of previous entries. On reading each code, the decoder outputs the corresponding string, uses that string's first character to complete the previous partial entry (for entry 36, for example, only the first element, T, of the next word TO is added), and starts a new partial entry of its own:

Bits            Output   New Entry (Full)   New Entry (Partial)
10100  = 20     T                           28: T?
01111  = 15     O        28: TO             29: O?
00010  =  2     B        29: OB             30: B?
00101  =  5     E        30: BE             31: E?
01111  = 15     O        31: EO             32: O?
010010 = 18     R        32: OR             33: R?   <-- 6-bit codes start here
001110 = 14     N        33: RN             34: N?
001111 = 15     O        34: NO             35: O?
010100 = 20     T        35: OT             36: T?
011100 = 28     TO       36: TT             37: TO?
011110 = 30     BE       37: TOB            38: BE?
100000 = 32     OR       38: BEO            39: OR?
100101 = 37     TOB      39: ORT            40: TOB?
011111 = 31     EO       40: TOBE           41: EO?
100001 = 33     RN       41: EOR            42: RN?
100011 = 35     OT       42: RNO            43: OT?
000000 =  0     #        43: OT#

Reading the Output column from top to bottom recovers the original message, TOBEORNOTTOBEORTOBEORNOT#.
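The bit accounting of this worked example can be checked mechanically. The sketch below (illustrative Python; the width formula is the one quoted in the encoding example) recomputes the width of each of the 17 output codes and the 97-bit total:

```python
# Recompute the bit widths of the worked example's 17 output codes.
# With 27 initial symbols, the n-th code (n = 0, 1, 2, ...) is written in
# floor(log2(27 + n)) + 1 bits, as quoted in the encoding example above.
import math

def code_widths(initial_symbols, num_codes):
    return [int(math.log2(initial_symbols + n)) + 1 for n in range(num_codes)]

widths = code_widths(27, 17)   # the example emits 17 codes
total_bits = sum(widths)       # 5 codes at 5 bits + 12 codes at 6 bits
uncompressed_bits = 25 * 5     # 25 symbols sent plainly at 5 bits each
saving = uncompressed_bits - total_bits
```

Running the sketch gives total_bits = 97 and a saving of 28 bits, matching the figures above.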

The only slight complication comes if the newly-created dictionary word is sent immediately. In the decoding example above, when the decoder receives the first symbol, T, it knows that symbol 28 begins with a T, but what does it end with? The problem is illustrated below. We are decoding part of a message that reads ABABA:

Bits              Output   New Entry (Full)   New Entry (Partial)
. . .
011101 = 29       AB       46: (word)         47: AB?
101111 = 47       AB?                         <--- what do we do here?

At first glance, this may appear to be asking the impossible of the decoder. We know ahead of time that entry 47 should be ABA, but how can the decoder work this out? The critical step is to note that 47 is built out of 29 plus whatever comes next. 47, therefore, ends with "whatever comes next". But, since it was sent immediately, it must also start with "whatever comes next", and so must end with the same symbol it starts with, namely A. This trick allows the decoder to see that 47 must be ABA.
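To see the trick in action, here is a small decoder sketch (illustrative Python with an invented function name, using a byte-oriented initial table rather than the 27-symbol one) that records every code it has to resolve this way:

```python
# Decode an LZW stream, recording every code that arrives before its
# dictionary definition is complete (the special case described above).
def lzw_decode_with_trace(codes):
    dictionary = {i: chr(i) for i in range(256)}  # byte-oriented initial table
    next_code = 256
    w = dictionary[codes[0]]
    output = [w]
    early_codes = []                  # codes used before being fully defined
    for k in codes[1:]:
        if k in dictionary:
            entry = dictionary[k]
        elif k == next_code:          # sent immediately after being started:
            entry = w + w[0]          # it must begin and end with w's first char
            early_codes.append(k)
        else:
            raise ValueError("invalid LZW code: %d" % k)
        output.append(entry)
        dictionary[next_code] = w + entry[0]
        next_code += 1
        w = entry
    return "".join(output), early_codes

decoded, early = lzw_decode_with_trace([65, 66, 256, 258])  # -> "ABABABA"
```

Here code 258 ("ABA") is used on the very step that defines it, so the decoder reconstructs it as w + w[0] = "AB" + "A". The shortest stream that triggers the case is [65, 256], which decodes to "AAA".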

5. SYSTEM DEVELOPMENT
FEATURES OF VB.NET

The Architect
The project architect is responsible for the overall design of the application, ensuring that the planned development will meet the business and technical goals that have been specified by the business sponsor. The architect is ideally placed to assess the security needs of the application and to formulate the security policy that will be implemented by the programmers.

The Programmer
The programmer is responsible for implementing the application design produced by the architect and for meeting the software security goals specified in the design. The programmer must have a firm understanding of the security features provided by the development platform in use, and must be trusted to implement the security policy completely and without modification.

The Security Tester


The security tester does not perform the same role as an ordinary application tester. A normal tester creates test scenarios that ensure that the planned functionality works as expected, by simulating the actions of a user. By contrast, the security tester simulates the actions of a hacker, in order to uncover behaviors that would circumvent the software security measures. Security testing is an underrated and underemployed activity, but is vital in validating the security measures designed by the architect and implemented by the programmer.

The System Administrator

The system administrator is responsible for installing, configuring, and managing the application; these tasks require a good understanding of general security issues, and an appreciation of the security features provided by the development platform and the application itself. One of the most important aspects of system administration is application monitoring. Well-designed applications provide system administrators with information about potential security breaches, and it is the responsibility of the system administrator to monitor for such information and to formulate a response plan in the event that the security of an application is subverted.

The User
The user is the final consumer of the functionality provided by the application, and is often required to interact with its software security measures - for example, by entering a username and password to gain access to its functionality. The users of an application create their own tensions against the security policy; their expectations that the software system will protect them are high, but their willingness to be constrained by intrusive security measures is limited. For example, retail customers expect that a software system will conceal their credit card numbers from unauthorized third parties and protect their accounts from unauthorized changes. However, the same users will resist taking any responsibility for their own security - for example, by remembering and specifying a PIN code when they purchase goods. Successful security policies take into account the users' attitudes, and do not force them to accept security demands that they cannot or will not adhere to. Unsuccessful security policies do not take into account the needs of the user - for example, requiring users to remember long and difficult passwords that are frequently changed. In such circumstances, users will simply write the password down on a piece of paper and thereby negate all of the effort made during the development process.

The Hacker/Cracker
The final role is the cracker, more popularly known as a hacker. The hacker attempts to circumvent or subvert software security, for financial gain or perhaps to rise to a perceived intellectual challenge. The hacker is the person whom security measures are meant to foil, but the label does not accurately describe the range of people who will attack software security systems. Throughout this document, we detail a number of specific security systems and explain the type of attack against which each is intended to protect.

The .NET Framework is a flexible general-purpose computing platform designed to address the needs of commercial organizations and individuals alike, and supports many different application models. However, .NET places an emphasis on supporting the trend towards highly distributed systems, component-based applications, and web-based server solutions, including XML web services. The .NET Framework is designed to run on multiple operating system platforms. Although Windows is currently the platform of choice for .NET deployment, there are implementations available for FreeBSD, Linux, and Mac OS X that are compliant with ECMA Standard 335 (which defines the core functionality of the .NET Framework). It is likely that in the future, .NET applications compliant with ECMA Standard 335 will run and integrate seamlessly across a variety of other operating systems.

The .NET Framework's strong support for distributed application models and its cross-platform capabilities mean that software is more mobile, accessible, and integrated than before; although this provides functionality and productivity benefits, it also means that software consumers, producers, and service providers need to place a greater emphasis on software and system security. The increase in accessible web-based server solutions and mobile components also produces an increase in the number of people who try to hack those systems or who distribute malicious code in the form of viruses, worms, and Trojan horses for the purpose of gathering information or causing damage. Cross-platform solutions cannot always rely on the underlying operating system to provide the same security services on all platforms. The .NET runtime implements the following three features that provide platform-independent security services to improve the security of both client- and server-side software solutions:

- Role-based security
- Code-access security
- Isolated storage

These features surround and permeate the .NET runtime to ensure that managed code does not perform actions and access resources it should not. They provide a baseline set of security measures, enforced by the .NET runtime, that managed code must comply with and can depend on regardless of the underlying operating system. These features do not replace existing operating system security measures; they augment or complement existing platform security, and mean that the .NET Framework provides one of the most secure and flexible environments in which to run distributed software solutions.

Running Unverifiable and Native Code

Runtime security works because the CLR is a managed execution environment that goes to great lengths to ensure that the code it loads is type-safe, meaning that it does not perform illegal operations, access memory directly, or try to access type members incorrectly. During execution, the .NET runtime, class libraries, and all required application assemblies are loaded into the same operating system process, albeit across different application domains. If type safety were not enforced, malicious code could manipulate the state of the runtime's security system, enabling it to access resources and carry out operations to which it would not normally have permission.

Another common risk to runtime security is the use of native code. Clearly, for the purpose of backward compatibility and to enable access to functionality not yet provided by the .NET class library, managed code must be able to call native code, such as Win32 DLLs and COM objects. However, once native code is running, the .NET runtime has no control over the actions the code performs; malicious native code can delete important files or even change the .NET runtime's security configuration files, which are stored as XML. Only the underlying operating system security can control the actions of unmanaged code, and this is not always sufficient, because it bases security decisions on the identity of the current user. Despite the risks, the .NET runtime designers understood that there are situations when it is necessary to use both native and unverifiable code. However, the runtime restricts access to these capabilities using its code-access security mechanism. Permission to load unverifiable code and permission to call native code are two of the highest-trust permissions you can grant to code; once you grant them, you are placing the security of your system at risk. You should have the utmost trust in the source and integrity of code before granting it these capabilities, and must be confident that your operating system security is sufficient to protect your machine against attack.

6. TESTING AND IMPLEMENTATION


6.1 SYSTEM TESTING

Testing Methodologies
Testing is generally done at two levels - testing of individual modules and testing of the entire system (system testing). During system testing, the system is used experimentally to ensure that the software does not fail, i.e., that it will run according to its specifications and in the way users expect. Special test data are input for processing, and the results examined. A limited number of users may be allowed to use the system so analysts can see whether they use it in unforeseen ways. It is preferable to discover any surprises before the organization implements the system and depends on it. Testing is done throughout systems development at various stages, not just at the end. It is always good practice to test the system at many different levels at various intervals: subsystems, program modules as work progresses, and finally the system as a whole. If this is not done, a poorly tested system can fail after installation. Testing is a tedious and time-consuming job. For a test to be successful, the tester should try to make the program fail. The tester may be an analyst, programmer, or specialist trained in software testing, and should try to find areas in which the program can fail. Each test case is designed with the intent of finding errors in the way the system will process it. Thorough testing of programs does not guarantee the reliability of systems; it only gives assurance that the system ran error free on the cases tested.

Unit Testing
This involves the tests carried out on the modules and programs that make up the system; it is also called program testing. The units in a system are the modules and routines that are assembled and integrated to perform a specific function. In a large system, many modules at different levels are needed. Unit testing focuses on the modules, independently of one another, to locate errors. The programs should be tested for correctness of the logic applied, and the tests should detect errors in coding. For example, in the OBSE system, all calculations should be tested by feeding the system with all combinations of data. Valid and invalid data should be created, and the programs made to process the data, to catch errors. In the OBSE system the employee number consists of three digits, so during testing one should ensure that the programs do not accept anything other than a three-digit code for the employee number. Another example of a valid/invalid data check: if a three-digit number is entered during the entry of a transaction and that number does not exist in the master file, or if the number entered is an exit case, then the programs should not allow the entry. All dates that are entered should be validated, and no program should accept invalid dates. The checks that need to be incorporated are: in the month of February the date cannot be more than 29, and for the months having 30 days one should not be allowed to enter 31. All conditions present in the program should be tested. Before proceeding, one must make sure that all the programs work independently.
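The checks described above can be expressed as small automated tests. The sketch below (illustrative Python with hypothetical helper names; the actual project would implement such checks in VB.NET) validates the three-digit employee number and the day-of-month rules, ignoring leap years just as the text does:

```python
import re

def valid_employee_no(code):
    """Accept exactly three digits, nothing else."""
    return re.fullmatch(r"\d{3}", code) is not None

# Maximum day per month; February capped at 29, as the text requires
# (leap years are not considered here).
DAYS_IN_MONTH = {1: 31, 2: 29, 3: 31, 4: 30, 5: 31, 6: 30,
                 7: 31, 8: 31, 9: 30, 10: 31, 11: 30, 12: 31}

def valid_date(day, month):
    """Reject Feb > 29 and day 31 in 30-day months."""
    return month in DAYS_IN_MONTH and 1 <= day <= DAYS_IN_MONTH[month]

assert valid_employee_no("123")
assert not valid_employee_no("12")      # too short
assert not valid_employee_no("12a")     # non-digit
assert not valid_date(30, 2)            # February cannot exceed 29
assert not valid_date(31, 4)            # April has only 30 days
```

Each assertion corresponds to one of the unit-test cases listed in the text, so a regression in the validation logic fails loudly.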

System Testing

When unit tests are satisfactorily concluded, the system as a complete entity must be tested. At this stage, end users and operators become actively involved in testing. Testing should also look for discrepancies between the system and its original objective, current specifications and system documentation. For example, one module may expect the data item for employee number to be a numeric field, while other modules expect it to be a character data item. The system itself may not report this error, but the output may show unexpected results. A record may be created and stored in one module using the employee number as a numeric field; if it is later sought on retrieval with the expectation that it will be a character field, the field will not be recognized and the message "requested record not found" will be displayed. System testing must also verify that file sizes are adequate and that indexes have been built properly. Sorting and reindexing procedures assumed to be present in lower-level modules must be tested at the system level to see that they in fact exist and achieve the results higher-level modules expect.

Output Testing
After performing validation testing, the next step is output testing of the proposed system, since no system can be useful if it does not produce the required output in the specified format. The outputs generated or displayed by the system under consideration are tested by asking the users about the format they require. The output format is considered in two ways: on screen, and in printed form.

Validation Checking
Validation checks are performed on the following fields.

Text Field
A text field can contain only a number of characters less than or equal to its size. The text fields are alphanumeric in some tables and alphabetic in others. An incorrect entry always flashes an error message.

Numeric Field
A numeric field can contain only the numbers 0 to 9. An entry of any other character flashes an error message.

The individual modules are checked for accuracy and for what they have to perform. Each module is subjected to a test run with sample data, and the individually tested modules are then integrated into a single system. Testing involves executing the program with real data; the existence of any program defect is inferred from the output. The testing should be planned so that all the requirements are individually tested. A successful test is one that brings out the defects for inappropriate data and produces output revealing the errors in the system.

6.2 IMPLEMENTATION
Implementation Procedures

After proper testing and validation, the question arises whether the system can be implemented or not. Implementation includes all the activities that take place to convert from the old system to the new one. The new system may be totally new, replacing an existing manual or automated system, or it may be a major modification to an existing system. In either case, proper implementation is essential to provide a reliable system that meets the organization's requirements.

User Training

A well-designed system, if not operated and used properly, can fail. Training the users is important; if it is not done well, it can prevent the successful implementation of an information system. Throughout the systems development life cycle the users have been involved, so by this stage the analyst should possess an accurate idea of the users who need to be trained. They must know what their roles will be, how they can use the system, and what the system will and will not do. Both system operators and users need training. During their training they should be given a trouble-shooting list that identifies possible problems and remedies for them, and they should be advised of the common malfunctions that may arise and how to solve them.

Operational Documentation

Once the implementation plan is decided, it is essential that the users of the system are made familiar and comfortable with the environment. Education involves creating the right atmosphere and motivating the user. Documentation covering the whole operation of the system has been developed. The system is built so that the user can work with it in a consistent way; it is user friendly, with useful tips and guidance given inside the application itself to help the user. Users have to be made aware of what can be achieved with the new system and how it improves performance, and should be given a general idea of the system before using it.

System Maintenance

A system should be created whose design is comprehensive and farsighted enough to serve current and projected users for several years to come. Part of the analyst's expertise lies in projecting what those needs might be and building flexibility and adaptability into the system. The better the system design, the easier it will be to maintain; maintenance costs are a major concern, since software maintenance can prove very expensive. It is important to detect software design errors early on, as this is less costly than if errors remain unnoticed until maintenance is necessary. Maintenance is performed most often to improve the existing software rather than to respond to a crisis or system failure. As user requirements change, software and documentation should be changed as part of the maintenance work. Maintenance is also done to update software in response to changes made in an organization. This work is not as substantial as enhancing the software, but it must be done; the system can fail if it is not maintained properly.

7. CONCLUSION
The new computerized system was found to be much faster, more reliable and more user friendly than the existing system. The system has been designed and developed step by step and tested successfully. It eliminates the human errors that are likely to creep in when a bulk quantity of data and communications has to be processed.

Data compression is a viable way to aid optimal use of frequency spectra: less data transmitted translates into less bandwidth needed to transmit it. Further work is necessary. Making use of the structure of the data sets can enable prediction techniques, which can provide more compression. Analysis of how the compressed data performs in the channel will also determine its ultimate usefulness. This work will continue by simulating the data compression in aeronautical channels.

8. SCOPE FOR FURTHER DEVELOPMENT

The application is designed in such a way that any further enhancements can be done with ease. The system has the capability for easy integration with other systems, and new modules can be added to the existing system with little effort.

9. BIBLIOGRAPHY
[1] Irving, Chuck, "Advanced Range Telemetry (ARTM) Concept Exploration", Air Force Flight Test Center, Edwards AFB, CA, October 6, 1997.
[2] Satellite News, Vol. 22, Issue 14, April 5, 1999.
[3] Sayood, K., Data Compression, Morgan Kaufmann Publishers, 1997.
[4] Huffman, D. A., "A Method for the Construction of Minimum-Redundancy Codes", Proceedings of the IRE, 40, 1952, pages 1098-1101.
[5] Ziv, J. and Lempel, A., "A Universal Algorithm for Sequential Data Compression", IEEE Transactions on Information Theory, IT-23(3), May 1977, pages 337-343.
[6] Nelson, M., The Data Compression Book, M&T Books, California, 1991.

VB.NET Unleashed, Second Edition, By Stephen Walther

http://www.developer.com/net/asp/article.php/3096831 http://www.aspnetemail.com/Examples.aspx
