
PROJECT REPORT ON Proxy Server

(IN LINUX)

INDEX

LINUX OPERATING SYSTEM

History of Linux and UNIX


In order to understand the popularity of Linux, we need to travel back in time, about 30 years ago. Imagine computers as big as houses, even stadiums. While the sizes of those computers posed substantial problems, there was one thing that made this even worse: every computer had a different operating system. Software was always customized to serve a specific purpose, and software for one given system didn't run on another system. Being able to work with one system didn't automatically mean that you could work with another. It was difficult, both for the users and the system administrators. Computers were extremely expensive then, and sacrifices had to be made even after the original purchase just to get the users to understand how they worked. The total cost per unit of computing power was enormous. Technologically the world was not quite that advanced, so they had to live with the size for another decade. In 1969, a team of developers in the Bell Labs laboratories started working on a solution for the software problem, to address these compatibility issues. They developed a new operating system, which was 1. Simple and elegant. 2. Written in the C programming language instead of in assembly code.

3. Able to recycle code. The Bell Labs developers named their project "UNIX." The code recycling features were very important. Until then, all commercially available computer systems were written in a code specifically developed for one system. UNIX on the other hand needed only a small piece of that special code, which is now commonly named the kernel. This kernel is the only piece of code that needs to be adapted for every specific system and forms the base of the UNIX system. The operating system and all other functions were built around this kernel and written in a higher programming language, C.

Introduction to Linux
This language was especially developed for creating the UNIX system. Using this new technique, it was much easier to develop an operating system that could run on many different types of hardware. Hardware and software vendors were quick to adapt, since they could sell ten times more software almost effortlessly. Weird new situations came into existence: imagine for instance computers from different vendors communicating in the same network, or users working on different systems without the need for extra education to use another computer. UNIX did a great deal to help users become compatible with different systems. Throughout the next couple of decades the development of UNIX continued. More things became possible to do and more hardware and software vendors added support for UNIX to their products. UNIX was initially found only in very large environments with mainframes and minicomputers (note that a PC is a "micro" computer). You had to work at a university, for the government

or for large financial corporations in order to get your hands on a UNIX system. But smaller computers were being developed, and by the end of the 80's, many people had home computers. By that time, there were several versions of UNIX available for the PC architecture, but none of them were truly free and, more importantly, they were all terribly slow, so most people ran MS DOS or Windows 3.1 on their home PCs.

Linus and Linux


By the beginning of the 90s home PCs were finally powerful enough to run a full-blown UNIX. Linus Torvalds, a young man studying computer science at the University of Helsinki, thought it would be a good idea to have some sort of freely available academic version of UNIX, and promptly started to code. It was Linus' goal to have a free system that was completely compliant with the original UNIX. That is why he asked for the POSIX standards, POSIX still being the standard for UNIX. In those days plug-and-play hadn't been invented yet, but so many people were interested in having a UNIX system of their own that this was only a small obstacle. New drivers became available for all kinds of new hardware, at a continuously rising speed. Almost as soon as a new piece of hardware became available, someone bought it and submitted it to the

Linux test, as the system was gradually being called, releasing more free code for an ever wider range of hardware. These coders didn't stop at their PCs; every piece of hardware they could find was useful for Linux. Back then, those people were called "nerds" or "freaks", but it didn't matter to them, as long as the supported hardware list grew longer and longer. Thanks to these people, Linux is now not only ideal to run on new PCs, but is also the system of choice for old and exotic hardware that would be useless if Linux didn't exist. Two years after Linus' post, there were 12,000 Linux users. The project, popular with hobbyists, grew steadily, all the while staying within the bounds of the POSIX standard. All the features of UNIX were added over the next couple of years, resulting in the mature operating system Linux has become today. Linux is a full UNIX clone, fit for use on workstations as well as on middle-range and high-end servers. Today, a lot of the important players in the hardware and software market each have their own team of Linux developers; at your local dealer's you can even buy pre-installed Linux systems with official support, even though there is still a lot of hardware and software that is not supported.

Current applications of Linux systems


Today Linux has joined the desktop market. Linux developers concentrated on networking and services in the beginning, and office applications have been the last barrier to be taken down. We

don't like to admit that Microsoft is ruling this market, so plenty of alternatives have been started over the last couple of years to make Linux an acceptable choice as a workstation, providing an easy user interface and MS-compatible office applications like word processors, spreadsheets, presentations and the like. On the server side, Linux is well-known as a stable and reliable platform, providing database and trading services for companies like Amazon, the well-known online bookshop, the US Post Office, the German army and many others. Internet providers and Internet service providers especially have grown fond of Linux as firewall, proxy and web server, and you will find a Linux box within reach of every UNIX system administrator who appreciates a comfortable management station. Clusters of Linux machines are used in the creation of movies such as "Titanic", "Shrek" and others. In post offices, they are the nerve centers that route mail, and in large search engines, clusters are used to perform Internet searches. These are only a few of the thousands of heavy-duty jobs that Linux is performing day-to-day across the world. It is also worth noting that modern Linux not only runs on workstations and mid- and high-end servers, but also on "gadgets" like PDAs, mobiles, a shipload of embedded applications and even on experimental wristwatches. This makes Linux the only operating system in the world covering such a wide range of hardware.

Is Linux difficult or not?


Whether Linux is difficult to learn depends on the person you're asking. Experienced UNIX users will say no, because Linux is an ideal operating system for power-users and programmers, because it has been and is being developed by such people. Everything a good programmer can wish for is

available: compilers, libraries, development and debugging tools. These packages come with every standard Linux distribution. The C-compiler is included for free as opposed to many UNIX distributions demanding licensing fees for this tool. All the documentation and manuals are there, and examples are often included to help you get started in no time. It feels like UNIX and switching between UNIX and Linux is a natural thing. In the early days of Linux, being an expert was kind of required to start using the system. Those who mastered Linux felt better than the rest of the "lusers" who hadn't seen the light yet. It was common practice to tell a beginning user to "RTFM" (read the manuals). While the manuals were on every system, it was difficult to find the documentation, and even if someone did, explanations were in such technical terms that the new user became easily discouraged from learning the system. The Linux-using community started to realize that if Linux was ever to be an important player on the operating system market, there had to be some serious changes in the accessibility of the system.

Linux for non-experienced users


Companies such as RedHat, SuSE and Mandriva have sprung up, providing packaged Linux distributions suitable for mass consumption. They integrated a great deal of graphical user interfaces (GUIs), developed by the community, in order to ease management of programs and services. As a Linux user today you have all the means of getting to know your system inside out, but it is no longer necessary to have that knowledge in order to make the system comply with your requests.

Nowadays you can log in graphically and start all required applications without even having to type a single character, while you still have the ability to access the core of the system if needed. Because of its structure, Linux allows a user to grow into the system: it equally fits new and experienced users. New users are not forced to do difficult things, while experienced users are not forced to work in the same way they did when they first started learning Linux. While development in the service area continues, great things are being done for desktop users, generally considered as the group least likely to know how a system works. Developers of desktop applications are making incredible efforts to make the most beautiful desktops you've ever seen, or to make your Linux machine look just like your former MS Windows or Apple workstation. The latest developments also include 3D acceleration support and support for USB devices, single-click updates of system and packages, and so on. Linux has these, and tries to present all available services in a logical form that ordinary people can understand.

Structures of UNIX, LINUX


The highlighting features of this operating system are:
(i) Portable: written in the C language.
(ii) Source code is available in a high-level language (C).
(iii) Multi-user system: implements time-sharing.
(iv) Powerful: defines the concept of power as not the programs, but the interaction of programs.
The structure of UNIX is as follows.

The most prominent Linux distributions are:
1. Red Hat LINUX - educational and commercial
2. Caldera LINUX - business oriented
3. SuSE LINUX - retailers and applications
4. Turbo LINUX - multilanguage
5. Debian LINUX - truly free, no commercial version
6. Slackware LINUX - most UNIX-like

Kernel Version

The first official version was 0.0.2 and the first stable version was 1.0. As changes are made, the kernel goes through two phases:
(i) Development phase, where a lot of changes and modifications are added and developed. It is denoted by odd numbers in the versions, e.g., 1.1, 2.1 etc.
(ii) Stabilization phase, where small modifications are made to the kernel. It is denoted by even numbers in the versions, e.g., 1.2, 2.2. The current stable kernel (at the time of writing) is 2.2.10.

Hence, the Linux architecture, simplified, can be seen as: a seed-like core, the kernel, which performs rapid switching between tasks under execution, attaches drivers to the hardware, handles inter-process communication, prevents processes from interfering with one another, and protects itself from corrupt program routines. Around this kernel, the Linux shell provides a powerful user interface while still protecting the system from unauthorized user-mode access. It gives the user an environment to work in and to take ownership of by customizing it.

FEATURES OF LINUX
MULTI-TASKING: In Linux it is possible to have many programs running at the same time, which means that not only can you have many programs going at once, but the Linux operating system can itself have programs running in the background (this is the main reason why Linux is used as a server), so that background processes can run side by side. These background processes are called daemons.
MULTI-USER: Not only can you have many user accounts available on a Linux system, you can also have multiple users logged in and working on the system at the same time. Users can have their own environment arranged the way they want, and their own desktop interfaces with icons, menus and applications. User accounts can be password protected.
GRAPHICAL USER INTERFACE: The framework for working with graphical applications in Linux is known as the X Window System, or simply X. The X-based desktop environment provides the look and feel of a GUI: icons, menus, themes etc.

HARDWARE SUPPORT: Linux can configure support for almost every kind of hardware that can be connected to a computer.
NETWORKING CONNECTIVITY: To connect a Linux system to a network, Linux offers support for a variety of local area network cards, modems and serial devices.
NETWORK SERVER: Linux is well suited to providing networking services to client computers on a LAN or over the entire Internet. Linux can therefore act as a print server, file server, mail server, web server etc.

Installing Red Hat Enterprise Linux 5


This appendix is straightforward. It illustrates the steps required to install Red Hat Enterprise Linux 5 Server on your computer, using graphical and text based installation methods. Both are governed by the Red Hat program known as Anaconda.

Graphical Installation
To install Red Hat Enterprise Linux 5 on your computer, take the following steps: 1. Select and insert the media that you'll use to boot the Red Hat Enterprise Linux 5 installation program. It can be the first Red Hat Enterprise Linux 5 CD, a boot CD created from the boot.iso file from the /images directory of the first installation CD, or a boot USB key created from the diskboot.img file from the same directory.

2. Power on your system. Press the appropriate key, typically ESC, F12, or DEL, to access the boot menu shown here. If a boot menu isn't available, you'll need to adjust the boot sequence in the computer BIOS, which you can then use to boot directly from your selected media. Set your computer's BIOS to boot from the first installation CD or USB drive. Details vary by PC. Make sure your BIOS saves your changes before you reboot.

3. Type linux askmethod when you see the boot prompt.

4. Choose a language and select OK.

5. Select a keyboard and select OK.

6. Set up an NFS installation method and select OK.

7. Start configuring your network, as shown. If you have a DHCP server on your network, that is simplest, unless, of course, you're told to enable static addressing during your exam. For this installation, disable DHCP for both IPv4 and IPv6 addressing. If you don't have an IPv6-capable DHCP server or router, disable IPv6 addressing completely. Select OK. If you enable DHCP, skip the next step.

8. Add static address information for your network. Use the IP Address, Gateway, and Name Server (DNS) addresses associated with your existing network. Select OK.

9. Add connection information to the installation server. If you don't have a DNS server for the local network, you can substitute the IP address for the NFS server name. Select OK.

10. Assuming your connection works, you'll see the first installation screen. Click Next.

11. You've already selected a language and keyboard, so the next step, if you're actually running Red Hat Enterprise Linux 5 (as opposed to one of the rebuild distributions), is to enter an installation number associated with your purchased or trial subscription, as shown next. The subscription configures a custom set of package groups and repositories. If you don't have an installation number, select the Skip Entering Installation Number radio button. Click OK. If you did not enter an installation number, you'll be given a warning. Click Skip to continue.

12. The installation program searches for existing installations. If found, you're prompted to either upgrade or install over the existing installation. If not found, you won't see the screen shown here; if the drive is unformatted, you'll get a warning. Otherwise, skip to the next step. Select Install Red Hat Enterprise Server and click Next to continue.

13. You can allow Anaconda to configure an optimized partition configuration based on your memory and available hard disk space (based on free space after removing partitions), or choose

to customize the configuration, as shown next. For the purpose of this installation, select Review And Modify Partitioning Layout. If you have network attached storage (NAS) that is configured to communicate via iSCSI (a TCP/IP protocol), click Advanced Storage Configuration. You can also configure communication with that storage here. If that's what you want to do, select Add iSCSI target, and click Add Drive. This opens a Configure iSCSI Parameters window, where you'd enter the IP address and iSCSI Initiator Name. But that's beyond the scope of the current Red Hat exams. Click Cancel to return to the previous screen, and then click Next.

14. If there are existing partitions on the installed hard drives, you'll get the chance to confirm that you want to remove said partitions (this step isn't final). If you're configuring a dual-boot with another operating system, don't delete the partitions! Instead, click Back and select Create A Custom Layout. However, the Red Hat exams are Linux exams, so I believe a dual-boot configuration, especially with Microsoft Windows, is unlikely during the exam. These options are shown next. Choose Yes or No, as appropriate.

15. Now inspect and change partitions in Disk Druid, as shown. You can also create and then configure RAID and LVM partitions. Make any desired changes, and then click Next.

16. Once you've configured your partitions, set up a boot loader. If you select No Boot Loader Will Be Installed, you'll need to use a third-party boot loader such as Partition Magic or Microsoft's NTLDR. Unless you want to set up a Boot Loader Password or Configure Advanced Boot Loader Options, click Next.

17. Configure your connection to the network. It should reflect the settings you input in steps 8 and possibly 9. If you don't want the DHCP server to assign a hostname (or you don't have a DHCP server), you can assign it manually, as shown here. Click Next to move on.

18. Set the time zone for your system. If you don't have another operating system on this computer, keep the System Clock Uses UTC option active. Then click Next.

19. Type in and confirm the root password for your system. Follow any instructions on your exam carefully for this; you want to make it easy for the person grading your exam to see what

you've done. Yes, lost root passwords can be reset, but what will your score be if the person grading your exam finds that you ignored his or her instructions? Click Next.

20. There are two package customization screens available. Everyone sees the screen shown next. (The choices are slightly different for Red Hat Enterprise Linux 5 Client.) You can accept

the defaults, select available options, and/or select Customize Now. It's usually best to customize modestly (I've selected the Customize Now option), based on the requirements of your particular Installation and Configuration exam. Click Next. (If you don't select Customize Now, skip the next step.)

21. Select the package groups of your choice. This should conform to the requirements of the Installation and Configuration section of your particular exam. Click Next.

22. Once you've selected the package groups of your choice, you get one last chance to go back before starting the installation process. Click Next if you're happy with your choices, or click Back to make changes.

23. Some configuration options associated with previous versions of RHEL are no longer part of the installation process. Some of these options are part of the First Boot process described in Chapter 2. Normally, you'll see the First Boot screens only the first time you reboot a system. Click Next when you're ready to start the actual installation process. Depending on hardware and network connections, it may take a few minutes, or more.

24. The next screen congratulates you for completing the installation. The next step is to reboot your computer into RHEL. Click Reboot. If you had to modify your computer's BIOS menu, change it back so it boots from the hard drive in the future. Make sure your BIOS saves your changes before you reboot.

Directory Structure of Linux

These are the basic directories that you should have after installing any Linux distribution. Root (/) is the first component: it is created first when Linux is installed and is the first filesystem mounted when Linux starts. The Linux directory structure contains the following directories:

/bin/  /dev/  /etc/  /home/  /lib/  /mnt/  /proc/  /root/  /sbin/  /tmp/  /usr/  /var/

/bin/ - This is where all your programs that are accessible to all users will be stored once installed.
/dev/ - This is a virtual directory where your devices are 'stored.' Devfs allows Linux to list devices (hard drives, input devices, modems, sound cards, etc.) as 'files.'
/etc/ - This is where you'll find all your global settings. Daemons such as ssh, telnet, and smtp/pop3 mail servers find their configuration files here. Also in /etc/ is the system's password file, group lists, user skeletons, and cron jobs.
/home/ - This is the default directory where non-root users' homes are created. When you add a user, the default home directory is created as /home/username. You can change this default in the proper file in /etc/.
/lib/ - This is where shared libraries (perl, python, C, etc.) are stored. Also in /lib/ are your kernel modules.
/mnt/ - This is the default location for mounting cdroms, floppy disk drives, USB memory sticks, etc. You can mount anything anywhere, but by default there is a /mnt/floppy (if you have a floppy drive) and /mnt/cdrom.
/proc/ - This virtual folder contains information about your system. You can view processor statistics/specifications, PCI bus information, ISA bus information, and pretty much anything else you want to know about the hardware on your system.
/root/ - This is the default home directory for the user root.
/sbin/ - This is where system programs are installed. These include fdisk, tools to make partitions, certain network tools, and other things that normal users shouldn't have a need for.
/tmp/ - This is the default location to place files for temporary use. When you install a program, it uses /tmp/ to put files during installation that won't be needed once the program is installed.
/usr/ - This contains various programs, non-daemon program settings and program resources.

Linux Shell Commands


Command shell: A program that interprets commands. It allows a user to execute commands by typing them manually at a terminal, or automatically in programs called shell scripts. A shell is not an operating system. It is a way to interface with the operating system and run commands.

What is BASH? BASH = Bourne Again Shell. Bash is a shell written as a free replacement for the standard Bourne Shell (/bin/sh), originally written by Steve Bourne for UNIX systems. It has all of the features of the original Bourne Shell, plus additions that make it easier to program with and use from the command line. Since it is Free Software, it has been adopted as the default shell on most Linux systems.

Linux Basic Commands

MKDIR: This command is used for creating a directory in Linux. SYNTAX: $ mkdir <dir name>
CD: This command is used for changing directory.

SYNTAX: $ cd <dir name>
PWD: This command is used for displaying the present working directory. SYNTAX: $ pwd
TOUCH: This command creates an empty file. SYNTAX: $ touch <file name>
CAT: This command is used for creating, reading, writing and appending to a file. SYNTAX: $ cat <file name>
CP: This command is used for copying a file from source to destination. SYNTAX: $ cp <source path> <destination path>
MV: This command is used for moving a particular file from one location to another. It can also be used for renaming a file, and it will overwrite an existing destination file. SYNTAX: $ mv <source file> <directory name>
RM: This command is used for removing a file, directory or sub-directories. SYNTAX: $ rm <file name> <directory name>
RM -i: This command is used for removing files interactively, i.e. before deleting files it asks the user for confirmation. SYNTAX: $ rm -i <file name>
RMDIR: This command is used for removing a directory, but before removing the directory you must make sure it is empty, i.e. there is no sub-directory or file in it. SYNTAX: $ rmdir <dir name>
LS: This command is used for displaying all the files and sub-directories of the present working directory. SYNTAX: $ ls

ls -a: displays all files, including hidden files, of the present working directory. ls -l: displays a long list of files and sub-directories of the present working directory.

MAN: This command is used for displaying the full description of any command. SYNTAX: $ man <command name>

UMASK: This command is used to display the default permission mask applied to new files. SYNTAX: $ umask
CHMOD: This command is used for changing the mode (permissions) of a file. SYNTAX: $ chmod <mode> <filename>
LOGNAME: This command is used for displaying the login name. SYNTAX: $ logname
UNAME: This command is used for displaying information about the Linux system, such as the kernel version. SYNTAX: $ uname
PASSWD: This command is used for changing the current password of the current user. SYNTAX: $ passwd
WHO: This command is used for displaying the numbers and names of the connected terminals. SYNTAX: $ who
WHO AM I: This command is used for displaying the details of your own terminal. SYNTAX: $ who am i
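A short sample session ties these commands together. This is only an illustrative sketch; the directory and file names (mydir, notes.txt, copy.txt) are made up for the example.

$ mkdir mydir             (create a directory)
$ cd mydir                (change into it)
$ pwd                     (confirm the present working directory)
$ touch notes.txt         (create an empty file)
$ cat > notes.txt         (type some text, then press Ctrl+D to save it)
$ cp notes.txt copy.txt   (copy the file)
$ mv copy.txt old.txt     (rename the copy)
$ ls -l                   (list the files with details)
$ rm -i old.txt           (remove the renamed copy, with confirmation)
$ rm notes.txt            (remove the remaining file)
$ cd ..
$ rmdir mydir             (rmdir works only because the directory is now empty)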

Partition of Hard Disk in Linux using Fdisk

In Linux we can partition the hard disk using the fdisk command. Linux supports the ext2 and ext3 file systems by default, but we can also use the FAT file system in Linux.

Fdisk usage
First, to see all the partitions already made, type the following command: # fdisk -l. It will show all partitions made in Linux. fdisk is started by typing (as root) fdisk <device> at the command prompt; <device> might be something like /dev/hda or /dev/sda.
# fdisk /dev/sda   (for a SATA hard disk)
# fdisk /dev/hda   (for an IDE hard disk)

After the above command, the basic fdisk commands you need are:
p - print the partition table
n - create a new partition
d - delete a partition
q - quit without saving changes
w - write the new partition table and exit

Changes you make to the partition table do not take effect until you issue the write (w) command. Here is a sample partition table:
Disk /dev/hdb: 64 heads, 63 sectors, 621 cylinders
Units = cylinders of 4032 * 512 bytes

Device Boot    Start   End   Blocks    Id  System
/dev/hdb1  *       1   184   370912+   83  Linux
/dev/hdb2        185   368   370944    83  Linux
/dev/hdb3        369   552   370944    83  Linux
/dev/hdb4        553   621   139104    82  Linux swap

Creating Three primary partitions with one Swap


The overview: decide on the size of your swap space and where it ought to go, then divide up the remaining space for the three other partitions. Example: I start fdisk from the shell prompt: # fdisk /dev/hdb, which indicates that I am using the second drive on my IDE controller. When I print the partition table, I just get configuration information.

Command (m for help): p Disk /dev/hdb: 64 heads, 63 sectors, 621 cylinders Units = cylinders of 4032 * 512 bytes I knew that I had a 1.2Gb drive, but now I really know: 64 * 63 * 512 * 621 = 1281982464 bytes. I decide to reserve 128Mb of that space for swap, leaving 1153982464. If I use one of my primary partitions for swap, that means I have three left for ext2 partitions. Divided equally, that makes for 384Mb per partition. Now I get to work.

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-621, default 1): <RETURN>
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-621, default 621): +384M

Next, I set up the partition I want to use for swap:

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (197-621, default 197): <RETURN>
Using default value 197
Last cylinder or +size or +sizeM or +sizeK (197-621, default 621): +128M

Now the partition table looks like this:

Device Boot    Start   End   Blocks   Id  System
/dev/hdb1          1   196   395104   83  Linux
/dev/hdb2        197   262   133056   83  Linux

I set up the remaining two partitions the same way I did the first. Finally, I make the first partition bootable:

Command (m for help): a
Partition number (1-4): 1

And I make the second partition of type swap:

Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): 82
Changed system type of partition 2 to 82 (Linux swap)
Command (m for help): p

The end result:

Disk /dev/hdb: 64 heads, 63 sectors, 621 cylinders
Units = cylinders of 4032 * 512 bytes

Device Boot    Start   End   Blocks    Id  System
/dev/hdb1  *       1   196   395104+   83  Linux
/dev/hdb2        197   262   133056    82  Linux swap
/dev/hdb3        263   458   395136    83  Linux
/dev/hdb4        459   621   328608    83  Linux

Finally, I issue the write command (w) to write the table to the disk. After doing the partitioning, reboot the computer, or use the following command to make the kernel re-read the partition table of the hard disk: # partprobe /dev/hdb

Formatting the partition in Linux


After creating a partition you need to format it before using the space. Linux defaults to using the ext2 or ext3 file system, but we can also use the FAT file system in Linux.

To format the partition with the ext3 file system, use the following command: # mke2fs -j /dev/hda4
To format the partition with the ext2 file system, use the following command: # mke2fs /dev/hda4
To format the partition with the FAT file system, use the following command: # mkfs.vfat /dev/hda4
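After formatting, it is useful to confirm that the filesystem was created as expected. A minimal sketch, assuming the same example partition /dev/hda4 used above:

# blkid /dev/hda4            (shows the filesystem type and UUID of the partition)
# e2label /dev/hda4 Songs    (optionally sets a label on an ext2/ext3 filesystem)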

Mounting the Partition


mount attaches the filesystem specified by source (which is often a device name, but can also be a directory name or a dummy) to the directory specified by target. umount and umount2 remove the attachment of the (topmost) filesystem mounted on target. Only the super-user may mount and unmount filesystems. Since Linux 2.4 a single filesystem can be visible at multiple mount points, and multiple mounts can be stacked on the same mount point. In Linux we cannot use the space in a partition directly: first we mount the partition on a directory, and then we can use the space through that directory. To mount a partition, use the following command:

Mount the partition:
# mount /dev/hda4 /Songs
Here mount is the command, /dev/hda4 is the partition and /Songs is the directory on which it is mounted.

Mounting the CD-ROM in Linux:
# mount /dev/cdrom /media
Here /dev/cdrom is the device address and /media is the directory on which it is mounted.

The above commands mount the partition temporarily; to mount a partition permanently, we enter the details of the partition in the fstab file.

Umount
umount means un-mount the partition from a directory. This command is used to un-mount a partition. Use it as follows.

# umount /Songs
Here umount is the command and /Songs is the directory on which the partition (/dev/hda4) is mounted; you can also un-mount by giving the device name instead: # umount /dev/hda4

After mounting or un-mounting any partition, run the following command to mount everything listed in the fstab file: # mount -a

Fstab file

Open the fstab file using the vi editor: # vi /etc/fstab (command used to open the fstab file)

The fstab file contains the records of the mounted partitions. The fstab file is placed in the /etc/ directory, and if you mount a partition on a directory and want to mount that partition permanently, add a new line to fstab with the following syntax.

<partition address>   <directory address>   <partition file system>   <options>   0 0

As shown in the figure above, the fstab file is open. For example, if you want to mount a partition in Linux permanently, first mount the partition using the mount command above. After that, open the fstab file and add a new line to it as follows:
/dev/sda8   /Songs   vfat   defaults   0 0

After entering the new line, save the file and run the command # mount -a to mount everything listed in the fstab file. The partition is now permanently mounted.

Run levels
A runlevel is a preset operating state on a Unix-like operating system. A system can be booted into (i.e., started up into) any of several runlevels, each of which is represented by a single digit integer. Each runlevel designates a different system configuration and allows access to a

different combination of processes (i.e., instances of executing programs). There are differences in the runlevels according to the operating system. Seven runlevels are supported in the standard Linux kernel (i.e., core of the operating system). They are:
0 - System halt; no activity, the system can be safely powered down. #init 0
1 - Single user; rarely used. #init 1
2 - Multiple users, no NFS (network filesystem); also used rarely. #init 2
3 - Multiple users, command line (i.e., all-text mode) interface; the standard runlevel for most Linux-based server hardware. #init 3
4 - Not used / user definable. #init 4
5 - Multiple users, GUI (graphical user interface); the standard runlevel for most Linux-based desktop systems. #init 5
6 - Reboot; used when restarting the system. #init 6

Red Hat Linux/Fedora runlevels:
Runlevel   Description
0          Halt
1          Single user
2          Not used/User definable
3          Full multi-user, console logins only
4          Not used/User definable
5          Full multi-user, with display manager as well as console logins
6          Reboot

Run Levels in Red Hat Linux and Fedora


Red Hat as well as most of its derivatives uses runlevels like this: by default, Linux boots either to runlevel 3 or to runlevel 5.

The former permits the system to run all services except for a GUI. The latter allows all services including a GUI. In addition to the standard runlevels, users can modify the preset runlevels or even create new ones if desired. Runlevels 2 and 4 are usually used for user-defined runlevels. Booting into a different runlevel can help solve certain problems. For example, if a machine will not boot due to a damaged configuration file, or will not allow logging in because of a corrupted /etc/passwd file (which stores user names and other data about users) or because of a forgotten password, the problem can be solved by first booting into single-user mode (i.e., runlevel 1).
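A brief sketch of how runlevels are typically inspected and changed on a Red Hat style system such as the RHEL 5 release described above; the runlevel digit shown is only an example:

# runlevel            (prints the previous and current runlevel, e.g. "N 3")
# init 5              (switches the running system to runlevel 5)
# vi /etc/inittab     (the default runlevel at boot is set by the line below)
id:5:initdefault:     (change the digit to make another runlevel the default)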

How to get access to root, or breaking the root password


To break the password of root, first start the computer in single-user and troubleshooting mode (runlevel 1). To start the computer in runlevel 1, do the following:
Power on the computer and press the Esc key to stop Linux at the boot screen.
Press e to open the GRUB configuration.
Select the kernel line and press e to edit the kernel line.
Add a space at the end of the line and write 1 after it, as follows: rhgb quiet 1
Press Enter to save (this 1 is entered only temporarily and is not saved permanently).
Press b to boot the computer again.
After rebooting, the computer will start in troubleshooting mode, and the following type of command prompt is shown:
sh-3.0#
There are two ways of breaking passwords:
1. Assign a new password
2. Password stage break

1. Assign a new password: using this method we give a new password to the user, using the following commands:
sh-3.0# passwd
Enter the new password. After giving the new password, run the computer in the default mode (runlevel 3) by using the following command:
sh-3.0# init 3
2. Password stage break: in this method, after breaking the password you do not need to enter any password. To do this, do the following:
Start the computer in troubleshooting mode as above.
After that, open the following file in the vi editor:
sh-3.0# vi /etc/passwd

The user account entries are stored in this file; the x in the second field indicates that the encrypted password is stored separately. In this file, remove the x from the first line (the root user's entry).
Save the file and exit. Then start the computer in the default mode (init 3).

sh-3.0# init 3
After the reboot, enter the user name root but do not enter a password; simply press Enter and the computer will log you in.
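For illustration, this is roughly what the root entry in /etc/passwd looks like before and after the edit; the field values shown are the usual defaults and may differ on your system:

root:x:0:0:root:/root:/bin/bash    (before: the x indicates a stored password)
root::0:0:root:/root:/bin/bash     (after: the empty field means no password is asked)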

RPM Packages installation in Linux

Introduction

RPM is the RPM Package Manager. It is an open packaging system available for anyone to use. It allows users to take source code for new software and package it into source and binary form such that binaries can be easily installed and tracked and source can be rebuilt easily. It also maintains a database of all packages and their files that can be used for verifying packages and querying for information about files and/or packages. Red Hat, Inc. encourages other distribution vendors to take the time to look at RPM and use it for their own distributions. RPM is quite flexible and easy to use, though it provides the base for a very extensive system. It is also completely open and available, though we would appreciate bug reports and fixes. Permission is granted to use and distribute RPM royalty free under the GPL. RPM: it originally stood for Red Hat Package Manager. The rpm command is used to manage packages; it is used to install, update, remove and query packages in Linux.

Using RPM

In its simplest form, RPM can be used to install packages:


rpm -i foobar-1.0-1.i386.rpm

The next simplest command is to uninstall a package:


rpm -e foobar

One of the more complex but highly useful commands allows you to install packages via FTP. If you are connected to the net and want to install a new package, all you need to do is specify the file with a valid URL, like so:
rpm -i ftp://ftp.redhat.com/pub/redhat/rh-2.0-beta/RPMS/foobar-1.0-1.i386.rpm

Please note, that RPM will now query and/or install via FTP. While these are simple commands, rpm can be used in a multitude of ways. To see which options are available in your version of RPM, type:
rpm --help

Installing the packages using RPM commands


Example: if you want to install the DHCP packages on the computer using the rpm command, first open the directory in which the packages are placed and run the following command:

# rpm -ivh dhcp* --aid --force
-ivh: install in verbose mode with hash-mark progress
--aid: add and check install dependencies
--force: ignore errors

Updating the packages using RPM commands


If the package is already installed and you want to update the package to a higher version, then use the following command. We update dhcpd with the updated packages as follows:
#rpm -Uvh dhcpd* --aid --force
-Uvh: upgrade in verbose mode

Query the installed packages


To check whether a package is installed or not, use the following command:
#rpm -qa dhcpd*
-qa: query all installed packages

Removing the packages from linux


#rpm -e dhcpd --nodeps
--nodeps: do not check dependencies while removing
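A few more query options are handy when managing packages; a brief sketch using the same dhcpd package as an example (the package must already be installed for these to return output):

# rpm -qi dhcpd              (show summary information about the installed package)
# rpm -ql dhcpd              (list all files installed by the package)
# rpm -qf /etc/dhcpd.conf    (find which package owns a given file)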

Backup and the Compression

In Linux the extension of a backup file is *.tar. The colour of the backup file in ls listings is always red. To create a backup of a directory, use the following command: #tar cvf backup.tar /data

tar: the command to create a backup of data
cvf: create the backup (c = create, v = verbose, f = archive file)
backup.tar: the backup file
/data: the source file or directory
To list the files which are in the backup archive, run the following command: # tar tvf backup.tar
tvf means list (t) the files in the backup.tar archive

To restore the backup files: #tar xvf backup.tar -C /data
Here /data is the directory into which the data is restored and backup.tar is the backup file.
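The section title also mentions compression; tar is commonly combined with gzip for that. A minimal sketch, assuming the same /data directory used above:

# tar czvf backup.tar.gz /data    (create a gzip-compressed backup; z = compress with gzip)
# tar tzvf backup.tar.gz          (list the contents of the compressed backup)
# tar xzvf backup.tar.gz          (restore the compressed backup into the current directory)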

DHCP Server in Linux

What is a DHCP server? DHCP stands for Dynamic Host Configuration Protocol. There may be one or more DHCP servers per network. A DHCP forwarding agent allows clients to receive addresses from a DHCP server which is not part of their own network. The server can be configured to accept requests from specified MAC addresses only. DHCP tells the client its IP address, netmask, default gateway, domain name, DNS server and the location of kickstart files. If the boot protocol at the client is BOOTP, clients retain their configuration information indefinitely; there is no lease time in BOOTP.

To check the working of the DHCP server: boot a machine through the DHCP server, then check /var/log/messages on the DHCP server.
DHCP Service Profile:
Type: System V-managed service
Packages: dhcp
Daemons: dhcpd
Script: dhcpd
Ports: 67, 68
Configs: /etc/dhcpd.conf, /var/lib/dhcp/dhcpd.leases
DHCP is used for: Dynamic Host Configuration Protocol; providing IP address, gateway and DNS; working on port numbers 67 and 68.

Installation of Dhcp packages for setup the dhcp server


We can install the packages in three ways:
1. By using the RPM commands
2. By using the tar packages (UNIX standard)
3. By using the yum server
1. By using the RPM commands: The DHCP package is available in the Linux enterprise edition and you can install it from there by using the following commands.
Insert the DVD of the Red Hat Enterprise edition in the DVD-ROM drive.
Mount the CD-ROM by using the following command:
# mount /dev/cdrom /media
After mounting the drive, open the media directory and then open the Server directory.

Install the packages by using the rpm commands described above.

The 4 packages are installed. After installing the packages, copy the sample configuration file to the server's conf file location and do the server settings in that file, as follows.

After copying the file, open it as shown in the figure below: # vi /etc/dhcpd.conf

After opening the file, fill in the information in the file according to your needs and remove the # (comment marks) from the lines which you want to use. In this file fill in the following information about the network (a sample of these lines is shown below):
Router IP address
Subnet mask used in the network
Domain name (if configured)
IP of the domain name server
Range of the IP addresses which you want to distribute in the network
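As a minimal illustration of what these lines look like in /etc/dhcpd.conf, the subnet, router, domain and address-range values below are made-up examples and must be replaced with the values of your own network:

subnet 192.168.1.0 netmask 255.255.255.0 {
        option routers              192.168.1.1;
        option subnet-mask          255.255.255.0;
        option domain-name          "example.com";
        option domain-name-servers  192.168.1.1;
        range 192.168.1.100 192.168.1.200;
        default-lease-time 21600;
        max-lease-time 43200;
}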

After doing the above settings, save the file and exit. Now start and stop the dhcpd service.
To start the service, run the following command: # service dhcpd restart
To make the service start permanently (at boot), run the following command: #chkconfig dhcpd on
To see the status of the server, use the following command: # tail /var/log/messages
To stop the service, use the following command: # service dhcpd stop

About FTP

FTP (File Transfer Protocol) is a client/server protocol that allows a user to transfer files to and from a remote network site.

It works with TCP and is most commonly used on the Internet, although it can also be used on a LAN. An FTP site is a computer that is running FTP server software (also known as an FTP daemon, or ftpd). A public ftp site can usually be accessed by anybody by logging in as anonymous or ftp. There are many excellent public ftp sites that make repositories of free Unix software available. By learning how to use FTP, you give yourself access to an indispensable resource. Private FTP sites require a user name and password. If you have a shell account with your ISP, you may be able to access your files via FTP (contact your system administrator to check on this). An FTP client is the userland application that provides access to FTP servers. There are many FTP clients available. Some are graphical, and some are text-based. Short for File Transfer Protocol, FTP is the protocol for exchanging files over the Internet. FTP works in the same way as HTTP for transferring Web pages from a server to a user's browser and SMTP for transferring electronic mail across the Internet in that, like these technologies, FTP uses the Internet's TCP/IP protocols to enable data transfer. FTP is most commonly used to download a file from a server using the Internet or to upload a file to a server.

What is FTP Server

FTP (File Transfer Protocol) is the simplest and most secure way to exchange files over the Internet. Whether you know it or not, you most likely use FTP all the time. The most common use for FTP is to download files from the Internet. Because of this, FTP is the backbone of the MP3 music craze, and vital to most online auction and game enthusiasts. In addition, the ability to transfer files back and forth makes FTP essential for anyone creating a Web page, amateurs and professionals alike. When downloading a file from the Internet you're actually transferring the file to your computer from another computer over the Internet. This is why the T (transfer) is in FTP. You may not know where the computer is that the file is coming from, but you most likely know its URL or Internet address. An FTP address looks a lot like an HTTP, or Website, address except it uses the prefix ftp:// instead of http://.
Example Website address: http://www.ftpplanet.com/
Example FTP site address: ftp://ftp.ftpplanet.com/

Types of FTP

From a networking perspective, the two main types of FTP are active and passive. In active FTP, the FTP server initiates a data transfer connection back to the client. For passive FTP, the connection is initiated from the FTP client. These are illustrated in the figure "Active and Passive FTP Illustrated".

From a user management perspective there are also two types of FTP: regular FTP, in which files are transferred using the username and password of a regular user account on the FTP server, and anonymous FTP, in which general access is provided to the FTP server using a well-known universal login method.

Active FTP
The sequence of events for active FTP is:

1. Your client connects to the FTP server by establishing an FTP control connection to port 21 of the server. Your commands such as 'ls' and 'get' are sent over this connection.
2. Whenever the client requests data over the control connection, the server initiates data transfer connections back to the client. The source port of these data transfer connections is always port 20 on the server, and the destination port is a high port (greater than 1024) on the client.
3. Thus the ls listing that you asked for comes back over the port 20 to high port connection, not the port 21 control connection.
FTP active mode therefore transfers data in a counter-intuitive way relative to the TCP standard, as it selects port 20 as its source port (not a random high port that's greater than 1024) and connects back to the client on a random high port that has been pre-negotiated on the port 21 control connection. Active FTP may fail in cases where the client is protected from the Internet via many-to-one NAT (masquerading). This is because the firewall will not know which of the many clients behind it should receive the return connection.

Passive FTP
Passive FTP works differently:
1. Your client connects to the FTP server by establishing an FTP control connection to port 21 of the server. Your commands such as ls and get are sent over that connection.
2. Whenever the client requests data over the control connection, the client initiates the data transfer connections to the server. The source port of these data transfer connections is always a high port on the client with a destination port of a high port on the server.
Passive FTP should be viewed as the server never making an active attempt to connect to the client for FTP data transfers. Because the client always initiates the required connections, passive FTP works better for clients protected by a firewall. As Windows defaults to active FTP, and Linux defaults to passive, you'll probably have to accommodate both forms when deciding upon a security policy for your FTP server.
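As a practical illustration, many command-line FTP clients, including the classic ftp client shipped with Linux, let you switch between the two modes; a brief sketch follows, and the host name used is only an example:

$ ftp ftp.example.com
ftp> passive        (toggles passive mode on or off)
ftp> ls             (directory listings now use client-initiated data connections)
ftp> quit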

Regular FTP
By default, the VSFTPD package allows regular Linux users to copy files to and from their home directories with an FTP client using their Linux usernames and passwords as their login credentials. VSFTPD also has the option of allowing this type of access to only a group of Linux users, enabling you to restrict the addition of new files to your system to authorized personnel. The disadvantage of regular FTP is that it isn't suitable for general download distribution of software as everyone either has to get a unique Linux user account or has to use a shared username and password. Anonymous FTP allows you to avoid this difficulty.

Anonymous FTP
Anonymous FTP is the choice of Web sites that need to exchange files with numerous unknown remote users. Common uses include downloading software updates and MP3s and uploading diagnostic information for a technical support engineer's attention. Unlike regular FTP where you log in with a preconfigured Linux username and password, anonymous FTP requires only a username of anonymous and your email address for the password. Once logged in to a VSFTPD server, you automatically have access to only the default anonymous FTP directory (/var/ftp in the case of VSFTPD) and all its subdirectories.

What is an FTP Site

An FTP site is like a large filing cabinet. With a traditional filing cabinet, the person who does the filing has the option to label and organize the

files however they see fit. They also decide which files to keep locked and which remain public. It is the same with an FTP site. The virtual 'key' to get into an FTP site is the UserID and Password. If the creator of the FTP site is willing to give everyone access to the files, the UserID is 'anonymous' and the Password is your e-mail address (e.g. name@domain.com). If the FTP site is not public, there will be a unique UserID and Password for each person who is granted access. When connecting to an FTP site that allows anonymous logins, you're frequently not prompted for a name and password. Hence, when downloading from the Internet, you most likely are using an anonymous FTP login and you don't even know it. To make an FTP connection you can use a standard Web browser (Internet Explorer, Netscape, etc.) or a dedicated FTP software program, referred to as an FTP 'Client'. When using a Web browser for an FTP connection, FTP uploads are difficult, or sometimes impossible, and downloads are not protected (not recommended for uploading or downloading large files). When connecting with an FTP Client, uploads and downloads couldn't be easier, and you have added security and additional features. For one, you're able to resume a download that did not successfully finish, which is a very nice feature for people using dial-up connections who frequently lose their Internet connection.

What is an FTP Client?

An FTP Client is software that is designed to transfer files back-and-forth between two computers over the Internet. It needs to be installed on your computer and can only be used with a live connection to the Internet. The classic FTP Client look is a two-pane design. The pane on the left displays the files on your computer and the pane on the right displays the files on the remote computer.

File transfers are as easy as dragging-and-dropping files from one pane to the other or by highlighting a file and clicking one of the direction arrows located between the panes. Additional features of the FTP Client include: multiple file transfer; the auto re-get or resuming feature; a queuing utility; the scheduling feature; an FTP find utility; a synchronize utility; and for the advanced user, a scripting utility. All of these features will be explained in later tutorials. First you need to download and install an FTP Client.

Connection methods

FTP runs exclusively over TCP. It defaults to listen on port 21 for incoming connections from FTP clients. A connection to this port from the FTP Client forms the control stream on which commands are passed to the FTP server from the FTP client and on occasion from the FTP server to the FTP client. FTP uses out-of-band control, which means it uses a separate connection for control and data. Thus, for the actual file transfer to take place, a different connection is required which is called the data stream. Depending on the transfer mode, the process of setting up the data stream is different. In active mode, the FTP client opens a dynamic port, sends the FTP server the dynamic port number on which it is listening over the control stream and waits for a connection from the FTP server. When the FTP server initiates the data connection to the FTP client it binds the source port to port 20 on the FTP server. In passive mode, the FTP server opens a dynamic port, sends the FTP client the server's IP address to connect to and the port on which it is listening (a 16-bit value broken into a high and low byte, as explained above) over the control stream and waits for a connection from the FTP client. In this case, the FTP client binds the source port of the connection to a dynamic port. To use passive mode, the client sends the PASV command to which the server would reply with something similar to "227 Entering Passive Mode (127,0,0,1,192,52)". The syntax of the IP address and port are the same as for the argument to the PORT command. While data is being transferred via the data stream, the control stream sits idle. This can cause problems with large data transfers through firewalls which time out sessions after lengthy periods

of idleness. While the file may well be successfully transferred, the control session can be disconnected by the firewall, causing an error to be generated. The FTP protocol supports resuming of interrupted downloads using the REST command. The client passes the number of bytes it has already received as argument to the REST command and restarts the transfer. In some command-line clients for example, there is an often-ignored but valuable command, "reget" (meaning "get again") that will cause an interrupted "get" command to be continued, hopefully to completion, after a communications interruption. Resuming uploads is not as easy. Although the FTP protocol supports the APPE command to append data to a file on the server, the client does not know the exact position at which a transfer got interrupted. It has to obtain the size of the file some other way, for example over a directory listing or using the SIZE command. In ASCII mode (see below), resuming transfers can be troublesome if client and server use different end-of-line characters. The objectives of FTP, as outlined by its RFC, are:
1. To promote sharing of files (computer programs and/or data).
2. To encourage indirect or implicit use of remote computers.
3. To shield a user from variations in file storage systems among different hosts.
4. To transfer data reliably and efficiently.

Criticisms of FTP

Passwords and file contents are sent in clear text, which can be intercepted by eavesdroppers. There are protocol enhancements that remedy this, for instance by using SSL, TLS or Kerberos.

Multiple TCP/IP connections are used, one for the control connection, and one for each download, upload, or directory listing. Firewalls may need additional logic and/or configuration changes to account for these connections.

It is hard to filter active mode FTP traffic on the client side by using a firewall, since the client must open an arbitrary port in order to receive the connection. This problem is largely resolved by using passive mode FTP.

It is possible to abuse the protocol's built-in proxy features to tell a server to send data to an arbitrary port of a third computer; see FXP. FTP is a high latency protocol due to the number of commands needed to initiate a transfer. No integrity check on the receiver side. If a transfer is interrupted, the receiver has no way to know if the received file is complete or not. Some servers support extensions to calculate for example a file's MD5 sum (e.g. using the SITE MD5 command), XCRC, XMD5, XSHA or CRC checksum, however even then the client has to make explicit use of them. In the absence of such extensions, integrity checks have to be managed externally.

No date/timestamp attribute transfer. Uploaded files are given a new current timestamp, unlike other file transfer protocols such as SFTP, which allow attributes to be included. There is no way in the standard FTP protocol to set the time-last-modified (or time-created) datestamp that most modern filesystems preserve. There is a draft of a proposed extension that adds new commands for this, but as of yet, most of the popular FTP servers do not support it.

Installing (Setup ) FTP Server (VSFTPD) packages

You need the vsftpd-1.2.1-5.i386.rpm package for Red Hat/Fedora to install the new FTP server in Red Hat Linux. You can install the FTP server by using two methods, as follows:
1. By using the RPM command
2. By using the YUM server
1. By using the RPM command: to install the package, write the following command: #rpm -ivh vsftpd* --aid --force
2. By using the yum server: to install the package, do as follows: #yum install vsftpd*
After installing the FTP server: VSFTPD reads the contents of its vsftpd.conf configuration file only when it starts, so you'll have to restart VSFTPD each time you edit the file in order for the changes to take effect. The file may be located in either the /etc or the /etc/vsftpd directory, depending on your Linux distribution. After installing vsftpd, open the vsftpd configuration file in the vi editor as follows: # vi /etc/vsftpd/vsftpd.conf

The vsftpd.conf File

Basic view of the vsftpd.conf file


# Allow anonymous FTP?
anonymous_enable=YES
...
# The directory which vsftpd will try to change
# into after an anonymous login. (Default = /var/ftp)
anon_root=/data/directory
...
# Uncomment this to allow local users to log in.
local_enable=YES
...
# Uncomment this to enable any form of FTP write command.
# (Needed even if you want local users to be able to upload files)
write_enable=YES
...
# Uncomment to allow the anonymous FTP user to upload files. This only
# has an effect if global write enable is activated. Also, you will
# obviously need to create a directory writable by the FTP user.
#anon_upload_enable=YES
...
# Uncomment this if you want the anonymous FTP user to be able to create
# new directories.
#anon_mkdir_write_enable=YES
...
# Activate logging of uploads/downloads.
xferlog_enable=YES
...
# You may override where the log file goes if you like.
# The default is shown below.
xferlog_file=/var/log/vsftpd.log
...

Default settings of the vsftpd.conf file


VSFTPD reads the contents of its vsftpd.conf configuration file only when it starts, so you'll have to restart VSFTPD each time you edit the file in order for the changes to take effect. The file may be located in either the /etc or the /etc/vsftpd directory, depending on your Linux distribution. This file uses a number of default settings you need to know about.

VSFTPD runs as an anonymous FTP server. Unless you want any remote user to log in to your default FTP directory using the username anonymous and a password that's the same as their email address, I would suggest turning this off. The configuration file's anonymous_enable directive can be set to no to disable this feature. You'll also need to simultaneously enable local users to be able to log in by removing the comment symbol (#) before the local_enable directive.

If you enable anonymous FTP with VSFTPD, remember to define the root directory that visitors will visit. This is done with the anon_root directive.

anon_root=/data/directory

VSFTPD allows only anonymous FTP downloads to remote users, not uploads from them. This can be changed by modifying the anon_upload_enable directive shown later.

VSFTPD doesn't allow anonymous users to create directories on your FTP server. You can change this by modifying the anon_mkdir_write_enable directive.

VSFTPD logs FTP access to the /var/log/vsftpd.log log file. You can change this by modifying the xferlog_file directive.

By default VSFTPD expects files for anonymous FTP to be placed in the /var/ftp directory. You can change this by modifying the anon_root directive.

There is always the risk with anonymous FTP that users will discover a way to write files to your anonymous FTP directory. You run the risk of filling up your /var partition if you use the default setting. It is best to make the anonymous FTP directory reside in its own dedicated partition.
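As an illustration of these defaults, the snippet below is a minimal sketch of a vsftpd.conf that keeps anonymous FTP disabled and allows uploads only from local users. The directive names are standard vsftpd options, but the exact values shown here are assumptions for illustration rather than the configuration used in this project.

# /etc/vsftpd/vsftpd.conf : sketch, local users only, no anonymous access
anonymous_enable=NO              # refuse the "anonymous" login entirely
local_enable=YES                 # allow local system accounts to log in
write_enable=YES                 # permit uploads for those local users
xferlog_enable=YES               # log every upload/download
xferlog_file=/var/log/vsftpd.log # default transfer log location

Remember to restart the service (# service vsftpd restart) after editing the file so the changes take effect.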

Yum server

About the YUM server: it is the server on which we place all the Linux packages that we want to install on the client computers as well as on the server (YUM server) computer itself, as shown in the figure.

Before setting up the YUM server, we used the RPM command to install packages, which is difficult to work with because all the packages cannot be installed in one go: the rpm command has to be run separately for every dependency of a package, and the CD or DVD containing the packages is needed every time something is installed. With a YUM server, the contents of the CD are copied onto the system once, so the CD or DVD is no longer needed for each installation. After setting up the YUM server, there is no need to install all the dependencies of a package by hand; just run the following command and the package will be installed.
# yum install <package-name>

Installation (setup) of the yum server


Method for installing the yum Server
1. Copy all RPM packages from the Linux DVD into a directory on the system
2. Install the vsftpd and createrepo packages in Linux
3. Create the database of all packages
4. Create the *.repo file for the YUM server
5. Start the YUM server

To set up the YUM server, first copy all the Linux packages into one directory; for example, we copy all the packages from the Server directory of the DVD as follows.

1. Insert the Red Hat Linux DVD into the CD-ROM drive.
2. Mount the CD-ROM on the /media directory using the following commands:

# mount /dev/cdrom /media
# cd /media
# cd Server

3. After opening the Server directory, copy all the packages from it into the /var/ftp/pub/ directory on the computer by using the following command:

# cp -var /media/Server /var/ftp/pub

Installing the vsftpd and createrepo packages in Linux

Before setting up the YUM server, we install the vsftpd package on the system to configure the FTP server, because FTP is needed to transfer the packages over the network, and the createrepo package, which builds the database that yum uses to install packages. To install these packages, use the following commands.
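The original report showed these commands only as a screenshot. A plausible sketch, assuming the packages are installed with rpm straight from the copied DVD tree (the exact package file names vary between Red Hat releases):

# cd /var/ftp/pub/Server
# rpm -ivh vsftpd-*.rpm        # FTP server, used to export the repository
# rpm -ivh createrepo-*.rpm    # tool that builds the YUM package database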

After installing the above packages, go to the location where all the packages were copied and create the database of all the packages by using the following command:

# createrepo -v /var/ftp/pub/Server

The computer takes a few minutes to create the database of all the packages in this directory.

Creating the *.repo file to set up the YUM server


This file contains all the settings of the YUM server and the location where the packages are placed on the server. The file is created in two places, as follows:
1. On the server computer
2. On the client computer

1. On the server side: on the server, open the /etc/yum.repos.d/ directory and make the .repo file by using the vi editor, as follows.

In this file, write the following entries:

name: the repository name; you can write any name here.
baseurl: the location of all the packages, i.e. where they are placed on the computer system.
enabled: if it is 1 the repository is enabled; if it is 0 it is disabled.
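The server-side file itself appears only as a screenshot in the report. The sketch below is an assumed example that points the baseurl at the local directory created above; the stanza name and file name are illustrative only:

# vi /etc/yum.repos.d/server.repo
[server]
name=local
baseurl=file:///var/ftp/pub/Server
enabled=1
gpgcheck=0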

After making the above file, save it, then start the YUM server and refresh it by using the following commands.

After this, start the FTP and YUM services as follows:

# yum clean all
# service vsftpd restart
# chkconfig vsftpd on

Now the yum server is ready to use

Yum server setting on the client computer


On the client computer, create the following file by using the vi editor:

# vi /etc/yum.repos.d/preet.repo

In this file, write the following entries:

[class]
name=harry
baseurl=ftp://192.168.0.50/pub/server
enabled=1
gpgcheck=0

Then save the file and exit.

name: the repository name; you can write any name here.
baseurl: the FTP location of all the packages, i.e. where they are placed on the server computer.
enabled: if it is 1 the repository is enabled; if it is 0 it is disabled.

Refresh the client's package cache by using the following command:

# yum clean all

Apache HTTP Server


APACHE LOGO

The Apache HTTP Server is web server software notable for playing a key role in the initial growth of the World Wide Web. Released under the Apache License, Apache is characterized as open source software.

Features
Apache supports a variety of features, many implemented as compiled modules which extend the core functionality. These can range from server-side programming language support to authentication schemes. Some common language interfaces support Perl, Python, Tcl, and PHP. Popular authentication modules include mod_access, mod_auth, mod_digest, and mod_auth_digest, the successor to mod_digest. Other features include SSL and TLS support (mod_ssl), a proxy module (mod_proxy), a URL rewriter (also known as a rewrite engine, implemented under mod_rewrite), custom log files (mod_log_config), and filtering support (mod_include and mod_ext_filter).

Virtual hosting allows one Apache installation to serve many different actual websites. For example, one machine with one Apache installation could simultaneously serve www.example.com, www.test.com, test47.testserver.test.com, etc.

Apache is used for many other tasks where content needs to be made available in a secure and reliable way. One example is sharing files from a personal computer over the Internet. A user who has Apache installed on their desktop can put arbitrary files in Apache's document root, which can then be shared.

The main design goal of Apache is not to be the "fastest" web server, but rather to implement almost all of the standards. The default configuration file installed with the Apache HTTP Server works without alteration for most situations. This chapter outlines many of the directives found within its configuration file (/etc/httpd/conf/httpd.conf) to aid those who require a custom configuration.

Features of Apache HTTP Server 2.0


The arrival of Apache HTTP Server 2.0 brings with it a number of new features, including:

New Apache API: Modules utilize a new, more powerful set of Application Programming Interfaces (APIs).
Filtering: Modules can act as content filters.
IPv6 Support: The next-generation IP addressing format is supported.
Simplified Directives: A number of confusing directives have been removed while others have been simplified.
Multilingual Error Responses: When using Server Side Include (SSI) documents, customizable error response pages can be delivered in multiple languages.
Multiprotocol Support: Multiple protocols are supported.

Secure Apache
Apache HTTP Secure Server Configuration
The mod_ssl module is a security module for the Apache HTTP Server. The mod_ssl module uses the tools provided by the OpenSSL Project to add a very important feature to the Apache HTTP Server: the ability to encrypt communications. In contrast, regular HTTP communications between a browser and a Web server are sent in plain text, which could be intercepted and read by someone along the route between the browser and the server.

The mod_ssl configuration file is located at /etc/httpd/conf.d/ssl.conf. For this file to be loaded, and hence for mod_ssl to work, you must have the statement Include conf.d/*.conf in the /etc/httpd/conf/httpd.conf file. This statement is included by default in the default Apache HTTP Server configuration file.
An Overview of Security-Related Packages

To enable the secure server, you must have the following packages installed at a minimum:

httpd: The httpd package contains the httpd daemon and related utilities, configuration files, icons, Apache HTTP Server modules, man pages, and other files used by the Apache HTTP Server.

mod_ssl: The mod_ssl package includes the mod_ssl module, which provides strong cryptography for the Apache HTTP Server via the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols.

openssl: The openssl package contains the OpenSSL toolkit. The OpenSSL toolkit implements the SSL and TLS protocols, and also includes a general-purpose cryptography library.

An Overview of Certificates and Security


Your secure server provides security using a combination of the Secure Sockets Layer (SSL) protocol and (in most cases) a digital certificate from a Certificate Authority (CA). SSL handles the encrypted communications as well as the mutual authentication between browsers and your secure server. The CA-approved digital certificate provides authentication for your secure server (the CA puts its reputation behind its certification of your organization's identity). When your browser is communicating using SSL encryption, the https:// prefix is used at the beginning of the Uniform Resource Locator (URL) in the navigation bar.

The BIND DNS Server


On most modern networks, including the Internet, users locate other computers by name. This frees users from the daunting task of remembering the numerical network address of network resources. The most effective way to configure a

network to allow such name-based connections is to set up a Domain Name Service (DNS), or nameserver, which resolves hostnames on the network to numerical addresses and vice versa. This chapter reviews the nameserver included in Red Hat Enterprise Linux, the Berkeley Internet Name Domain (BIND) DNS server, with an emphasis on the structure of its configuration files and how it may be administered both locally and remotely.

In Red Hat Enterprise Linux, BIND is provided by the named service. You can manage it via the Services Configuration Tool (system-config-services).

DNS associates hostnames with their respective IP addresses, so that when users want to connect to other machines on the network, they can refer to them by name, without having to remember IP addresses. Use of DNS also has advantages for system administrators, allowing the flexibility to change the IP address for a host without affecting name-based queries to the machine. Conversely, administrators can shuffle which machines handle a name-based query.

DNS is normally implemented using centralized servers that are authoritative for some domains and refer to other DNS servers for other domains. When a client host requests information from a nameserver, it usually connects to port 53. The nameserver then attempts to resolve the name requested. If the nameserver does not have an authoritative answer about the name which the host requested, and does not already have the answer cached from an earlier query, it queries other nameservers, called root nameservers, to determine which nameservers are authoritative for the name in question. Then, with that information, it queries the authoritative nameservers to get the requested name.

Nameserver Zones

In a DNS server such as BIND, all information is stored in basic data elements called resource records. A resource record is usually the fully qualified domain name (FQDN) of a host. Resource records are broken down into multiple sections. These sections are organized into a tree-like hierarchy consisting of a main trunk, primary branches, secondary branches, and so forth. Consider the following resource record:
bob.sales.example.com

When looking at how a resource record is resolved to find, for example, the IP address that relates to a particular system, read the name from right to left. Each level of the hierarchy is divided by a period (often called a "dot": . ). In this example, therefore, com defines the top-level domain for this resource record. The name example is a sub-domain under com, while sales is a sub-domain under

example. The name furthest to the left, bob, identifies a resource record which is part of the sales.example.com domain.

Except for the first (leftmost) part of the resource record (bob), each section is called a zone. Each zone defines a specific namespace. A zone contains definitions of resource records, which usually hold host-to-IP address mappings and IP address-to-host mappings (the latter are called reverse records).

Zones are defined on authoritative nameservers through the use of zone files, which define the resource records in that zone. Zone files are stored on primary nameservers (also called master nameservers), where changes are made to the files, and secondary nameservers (also called slave nameservers), which receive zone definitions from the primary nameservers. Both primary and secondary nameservers are authoritative for the zone and look the same to clients. Any nameserver can be a primary or secondary nameserver for multiple zones at the same time; it all depends on how the nameserver is configured.

Nameserver Types

There are two nameserver configuration types:

authoritative: This category includes both primary (master) and secondary (slave) servers. These servers answer only for resource records that are part of their zones.

recursive: Offers resolution services, but is not authoritative for any zone. Answers for all resolutions are cached in memory for a fixed period of time, which is specified by the retrieved resource record.

BIND as a Nameserver

BIND is a set of DNS-related programs. It contains a monolithic nameserver called /usr/sbin/named, an administration utility called /usr/sbin/rndc, and a DNS debugging utility called /usr/bin/dig. BIND stores its configuration files in the following locations:
/etc/named.conf

The configuration file for the named daemon


/var/named/ directory

The named working directory which stores zone and statistic files

2.1 Installation & configuration of Apache HTTP


Type: System V-managed service
Packages: httpd, httpd-devel, httpd-manual
Daemon: /usr/sbin/httpd
Script: /etc/init.d/httpd
Ports: 80/tcp (http), 443/tcp (https)
Configuration: /etc/httpd/*, /var/www/*
Related: system-config-httpd, mod_ssl

The Apache HTTP server is configured here to serve the needs of making the mail server fully functional. Here, we cover the installation of the Apache HTTP server on station49. We configure the server using its configuration files to cater to the needs of displaying websites in the web browser. Then the process of Name Virtualization is shown in brief. The Apache HTTP server is also needed to implement the GUI mailing service called SquirrelMail, use of which will be shown as we proceed through the document.

1)Installation
The packages listed above are installed by the YUM utility as shown below:
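The command from the report's screenshot is not reproduced; a minimal sketch, assuming the YUM repository configured earlier is available:

# yum install httpd httpd-devel httpd-manual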

All the packages associated with httpd will be installed using this command and their dependencies will be resolved.

2)Configuration

The first step in configuring httpd is, of course, writing the index.html pages. In our project we display 5 websites designed with HTML coding. These will be shown after successful implementation of the Apache HTTP server. The sites are made in the following directories:

station49.example.com - /var/www/html
nike.example.com - /var/www/html/nike
reebok.example.com - /var/www/html/reebok
ndtv.example.com - /var/www/ndtv
audi.example.com - /var/www/audi

Now we configure Name Virtualization in the Apache server through the configuration file /etc/httpd/conf/httpd.conf.

Line 972 of the file is edited as shown below to enable name-based virtual hosting (NameVirtualHost) by specifying the IP address on port 80, the httpd port.
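The edited line is shown in the report only as a screenshot. A sketch of the directive, assuming station49 uses the address 192.168.0.49 (the actual IP address in the project may differ):

NameVirtualHost 192.168.0.49:80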

Entries for the above-mentioned sites are added by successively defining one <VirtualHost> ... </VirtualHost> block for each site. The code is given below.
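The blocks themselves are not reproduced in the report; the sketch below shows two of them, again assuming the 192.168.0.49 address and the directory layout listed earlier. The remaining sites follow the same pattern.

<VirtualHost 192.168.0.49:80>
    ServerName station49.example.com
    DocumentRoot /var/www/html
</VirtualHost>

<VirtualHost 192.168.0.49:80>
    ServerName nike.example.com
    DocumentRoot /var/www/html/nike
</VirtualHost>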

The following command must be run for the changes made in the configuration file to take effect.

service httpd restart

After successful implementation of above command, our server is ready to use.

3)Implementation
The following sites were accessible through the web browser, Firefox in our case, by typing the name of the website in the URL bar. For example, station49.example.com gives you the following display.

Similarly the displays of sites made by us are given below in the following order, nike.example.com, reebok.example.com, ndtv.example.com, audi.example.com.

nike.example.com

reebok.example.com

ndtv.example.com

audi.example.com

2.2 Installation & configuration of Apache Encrypted Web Server (https)

Apache can provide encrypted communications using the mod_ssl Apache module. To make use of encrypted communications, a client must request the https protocol, which uses port 443. The configuration file for mod_ssl in Red Hat Enterprise Linux is /etc/httpd/conf.d/ssl.conf.

1)Installation
The https-related components can be installed by using the YUM utility. The package name for https support is mod_ssl.
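The install command appeared only as a screenshot; a minimal sketch:

# yum install mod_ssl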

The packages were successfully installed.

2)Configuration
Configuring https involves creating a certificate for the encryption. This certificate must then be referenced in /etc/httpd/conf.d/ssl.conf.

Here, we first change our working directory to /etc/pki/tls/certs. Implementing SSL requires creating a certificate. We create the certificate using the make command, naming it certificate.crt, as shown below. While creating the certificate, we fill in its details, which include the country, state, organization, email address, hostname, etc.
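The screenshot of this step is not reproduced. A sketch of the commands, assuming the Makefile shipped in /etc/pki/tls/certs on Red Hat systems is used to build the self-signed certificate (the prompts then ask for the country, state, organization, hostname and so on):

# cd /etc/pki/tls/certs
# make certificate.crt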

After creating the certificate, the necessary changes are made to the configuration file. As noted, the configuration file for https is /etc/httpd/conf.d/ssl.conf. We edit this file using the vi text editor.
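In a typical ssl.conf the two directives in question are the certificate and key paths. A sketch of how they would read after editing, assuming the file names used above:

SSLCertificateFile /etc/pki/tls/certs/certificate.crt
SSLCertificateKeyFile /etc/pki/tls/private/certificate.key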

The changes are made to the two directives shown above, so that the configuration file points to the exact path and name of the certificate. The file is then saved.

The next step is to move the key file generated along with certificate.crt, named certificate.key, to the /etc/pki/tls/private directory, as referenced in the configuration file.

The httpd service is then restarted so the changes made in the configuration file take effect. It should be noted that when we restart the service, the shell asks for the certificate passphrase. This signifies that our Apache server is now secured.

3)Implementation
The encrypted Apache HTTP server causes the web browser to display a warning when entering a protected site. This is shown below.

Here we use a private, self-signed certificate. It is not recognized by the browser until we add an exception for it by clicking on the link shown above. The certificate details can be seen below:

2.3 Installation and configuration of Domain Name System (DNS)


Name server
A name server consists of a program or computer server that implements a name-service protocol. It maps a human-recognizable identifier to a system-internal, often numeric, identification or addressing component. The most prominent types of name servers in operation today are the name servers of the Domain Name System (DNS), one of the two principal name spaces of the Internet. The most important function of these DNS servers is the translation (resolution) of humanly memorable domain names and hostnames into the corresponding numeric Internet Protocol (IP) addresses, the second principal Internet name space, used to identify and locate computer systems and resources on the Internet.

Configuration Files:
/var/named/chroot/etc/named.conf
/var/named/chroot/var/named/f.zone
/var/named/chroot/var/named/r.zone
Type: System V-managed
Daemon: /usr/sbin/named
Ports: 53 (named), 953 (rndc)

The Domain Name System (DNS) is a hierarchical naming system for computers, services, or any resource connected to the Internet or a private network. It associates various information with domain names assigned to each of the participants. Most importantly, it translates domain names meaningful to humans into the numerical (binary) identifiers associated with networking equipment for the purpose of locating and addressing these devices worldwide.

An often-used analogy to explain the Domain Name System is that it serves as the "phone book" for the Internet by translating human-friendly computer hostnames into IP addresses. For example, www.example.com translates to 192.0.32.10. DNS is also known as a distributed database that provides mapping between IP addresses and host names.

The Domain Name System makes it possible to assign domain names to groups of Internet users in a meaningful way, independent of each user's physical location. Because of this, World Wide Web (WWW) hyperlinks and

Internet contact information can remain consistent and constant even if the current Internet routing arrangements change or the participant uses a mobile device. Internet domain names are easier to remember than IP addresses such as 208.77.188.166 (IPv4) or 2001:db8::1f70:6e8 (IPv6). People take advantage of this when they recite meaningful URLs and e-mail addresses without having to know how the machine will actually locate them.

The Domain Name System distributes the responsibility of assigning domain names and mapping those names to IP addresses by designating authoritative name servers for each domain. Authoritative name servers are assigned to be responsible for their particular domains, and in turn can assign other authoritative name servers for their sub-domains. This mechanism has made the DNS distributed and fault tolerant and has helped avoid the need for a single central register to be continually consulted and updated. In general, the Domain Name System also stores other types of information, such as the list of mail servers that accept email for a given Internet domain. By providing a worldwide, distributed keyword-based redirection service, the Domain Name System is an essential component of the functionality of the Internet.

A hostname is a label that is assigned to a device connected to a computer network and that is used to identify the device in various forms of electronic communication such as the World Wide Web, e-mail or Usenet. Hostnames may be simple names consisting of a single word or phrase, or they may include the name of a Domain Name System (DNS) domain at the end, separated from the host-specific label by a full stop (dot). In the latter form, a hostname is also called a domain name. If the domain name is completely specified, including a top-level domain of the Internet, the hostname is said to be a fully qualified domain name (FQDN).

A fully qualified domain name (FQDN), sometimes referred to as an absolute domain name, is a domain name that specifies its exact location in the tree hierarchy of the Domain Name System (DNS). It specifies all domain levels, including the top-level domain, relative to the root domain. A fully qualified domain name is distinguished by this absoluteness in the name space. For example, given a device with a local hostname myhost and a parent domain name example.com, the fully qualified domain name is written as myhost.example.com. This fully qualified domain name therefore uniquely identifies the host: while there may be many resources in the world called myhost, there is only one myhost.example.com.

In the Domain Name System, and most notably in DNS zone files, a fully qualified domain name is specified with a trailing dot. For example,

somehost.example.com

This specifies an absolute domain name that ends with an empty top-level domain label. The DNS root domain is unnamed, which is expressed by an empty label, resulting in a domain name ending with the dot separator.

However, many DNS resolvers will process a domain name that contains a dot in any position as being fully qualified, or will add the final dot needed for the root of the DNS tree. Resolvers will process a domain name without a dot as unqualified and automatically append the system's default domain name and the final dot. Some applications, such as web browsers, will try to resolve the domain name part of a Uniform Resource Locator (URL), if the resolver cannot find the specified domain or if it is clearly not fully qualified, by appending frequently used top-level domains and testing the result. Some applications, however, never use trailing dots to indicate absoluteness, because the underlying protocols require the use of FQDNs, such as Simple Mail Transfer Protocol (email).

Structure
The domain name space

(Figure: the hierarchical domain name system, organized into zones, each served by a name server.)

The domain name space consists of a tree of domain names. Each node or leaf in the tree has zero or more resource records, which hold information

associated with the domain name. The tree sub-divides into zones beginning at the root zone. A DNS zone consists of a collection of connected nodes authoritatively served by an authoritative nameserver. (Note that a single nameserver can host several zones.) Administrative responsibility over any zone may be divided, thereby creating additional zones. Authority is said to be delegated for a portion of the old space, usually in form of sub-domains, to another nameserver and administrative entity. The old zone ceases to be authoritative for the new zone.

1)Installation
You may install caching* and bind* in the DNS server as shown below:
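The screenshot is omitted; a minimal sketch of the install command, assuming the YUM repository configured earlier (yum accepts the same wildcards used in the text):

# yum install bind* caching*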

2)Configuration

After installing bind* and caching* on the DNS server, go to the configuration files and follow the steps shown below:

Open the named.rfc1912.zones file.

Copy the content of lines 21 to 31 of named.rfc1912.zones into named.conf, as illustrated below.
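Those lines are not reproduced in the report. A sketch of what the copied stanzas typically look like once adapted for the example.com domain; the zone names, the reverse network 192.168.0.0/24 and the file names are assumptions based on the f.zone and r.zone files used below:

zone "example.com" IN {
        type master;
        file "f.zone";
        allow-update { none; };
};

zone "0.168.192.in-addr.arpa" IN {
        type master;
        file "r.zone";
        allow-update { none; };
};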

Now copy localhost.zone to f.zone in /var/named/chroot/var/named/ and named.local to r.zone in the same path.

This is how f.zone and r.zone would look after the modifications. Here we have set example.com as the domain name.
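The modified zone files appeared only as screenshots. The forward zone below is an assumed sketch for example.com, mapping station49 and the virtual-host names from the Apache section to 192.168.0.49; the addresses and the serial number are illustrative:

$TTL 86400
@           IN SOA  station49.example.com. root.example.com. (
                        2010052001 ; serial
                        28800      ; refresh
                        7200       ; retry
                        604800     ; expire
                        86400 )    ; minimum
            IN NS   station49.example.com.
station49   IN A    192.168.0.49
nike        IN A    192.168.0.49
reebok      IN A    192.168.0.49
ndtv        IN A    192.168.0.49
audi        IN A    192.168.0.49

The reverse file r.zone follows the same layout but maps PTR records such as 49 IN PTR station49.example.com. back to the names.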

Now you may try pinging all the sites made in Apache.

CONCLUSION
In our project, apart from the regular curriculum, we tried our level best to experiment on the topics and managed to achieve the following:

Successfully configured DNS and Apache. We can host multiple sites on a single IP. Access to the sites was set up from the local system as well as from remote systems. We have shown an elaborate configuration of the PROXY SERVER.

Success.

Apache helped to support DNS with its web-serving features. DNS maintained the link between IP addresses and domain names. We successfully managed to create a LAN and configure the PROXY SERVER. Even though we were working on a small network at our home and college, we had a delightful and complete experience of networking.

Our Limitations.

Working on a small network that was not registered with an ISP meant our multiple websites could not be published alongside the best sites existing today, like yahoo.com, gmail.com, etc.

Because of time limitations we could not implement scheduling of websites to other systems on the network in our project.

PLATFORM USED: LINUX

References:
1. www.google.co.in
2. www.wikipedia.org
3. Linux.org
4. Redhat.com

