This page describes how to repair a computer whose kernel panics at boot. It concerns the core
OS kernel and the first part of the boot routine. (For graphical interface problems, program
freeze-ups, and the like, save yourself wasted time and effort and look elsewhere.)
Definition
A decent definition of kernel panic comes to us from Wikipedia, which states in part: "A kernel
panic is an action taken by an operating system upon detecting an internal fatal error from which
it cannot safely recover; the term is largely specific to Unix and Unix-like systems. The
equivalent in Microsoft Windows operating systems is the Blue Screen of Death."
What to do
Basically, the problem is that the operating system does not start correctly. This can show up in
various ways: the computer may freeze, the operating system may print an error message of some
sort, or you may not end up where you expected (a command prompt, the desktop, or whatever else).
This will require some basic troubleshooting from the command line, if you can boot to it, or
from a boot disk if that will get you a command prompt or your favorite interface.
Troubleshooting
To make troubleshooting easier, ensure that the kernel is not in quiet mode: remove 'quiet' from
the kernel line in GRUB, if it is present. Upon boot, check the output immediately before the
panic, and decide whether it contains any useful information. There are probably too many causes
of kernel panics to document well in this wiki. Make sure that your system's configuration in
/boot is correct, and that none of the computer's hardware is faulty - it is a good idea to run
memtest from the Arch install/rescue CD or another utility (red entries are bad). If you believe
the configuration in /boot may be erroneous, try Option 1 to repair your bootloader setup. If you
believe the kernel panic is the fault of the kernel itself, follow Option 2 to reinstall the
existing version or an earlier kernel.
The first step is booting the installation CD. Once booted, you are presented with an
automatically logged-in virtual console as the root user.
When booted, you are in a minimal but functional live GNU/Linux environment with some basic
tools. Now, you have to mount your normal root disk (or partition) to /mnt.
If you are using legacy IDE drives, the device will be named /dev/hdXN; for example:
# mount /dev/hda3 /mnt
(adapt the device name to your own root partition)
This is a good point to stop and gather your information onto another drive or partition so that it
can be analyzed and/or emailed for outside viewing before the files change again. Simply create
a separate directory on your main partition or mount a USB drive to contain the files. Then you
may copy any files you will need to keep unchanged during the next boot with your new kernel.
If you do not want to use the Bash shell, remove /bin/bash from the arch-chroot command (i.e. run arch-chroot /mnt instead of arch-chroot /mnt /bin/bash).
If you keep your downloaded pacman packages, you can now easily roll back. If you did not keep
them, you have to find a way to get a previous kernel version onto your system now.
Let us suppose you kept the previous versions. We will now install the last working one.
# pacman -U /var/cache/pacman/pkg/linux-4.xx-x.pkg.tar.xz
(Of course, make sure that you adapt this line to your own kernel version. You can find the ones
you still have in your cache by examining the directory above.)
Reboot
Note: If you choose to do anything else before you reboot, remember that you are still in the
chroot environment and will likely have to exit and login again.
Now is the time to reboot and see if the system modifications have stopped the panic. If reverting
to an older kernel works, do not forget to check the Arch news page to see what went wrong with
the kernel build. If there is no mention of the problem there, go to the bug tracker and search
for it. If you still do not find it, open a new bug report and attach the files you saved during
the troubleshooting step above.
File recovery
Related articles
Post recovery tasks#Photorec
This article lists data recovery and undeletion options for Linux.
Contents
1 Special notes
    1.1 Before you start
    1.2 Failing drives
    1.3 Backup flash media/small partitions
    1.4 Working with digital cameras
2 Foremost
3 Scalpel
4 Extundelete
    4.1 Installation
    4.2 Usage
5 Testdisk and PhotoRec
    5.1 Installation
    5.2 Usage
    5.3 Files recovered by photorec
    5.4 See also
6 e2fsck
    6.1 Installation
    6.2 See also
7 Working with raw disk images
    7.1 Mount the entire disk
    7.2 Mounting partitions
        7.2.1 Getting disk geometry
    7.3 Using QEMU to Repair NTFS
8 Text file recovery
9 See also
Special notes
Before you start
This page is mostly intended to be used for educational purposes. If you have accidentally
deleted or otherwise damaged your valuable and irreplaceable data and have no previous
experience with data recovery, turn off your computer immediately (Just press and hold the off
button or pull the plug; do not use the system shutdown function) and seek professional help. It is
quite possible and even probable that, if you follow any of the steps described below without
fully understanding them, you will worsen your situation.
Failing drives
In the area of data recovery, it is best to work on images of disks rather than physical disks
themselves. Generally, a failing drive's condition worsens over time. The goal ought to be to first
rescue as much data as possible as early as possible in the failure of the disk and to then abandon
the disk. The ddrescue and dd_rescue utilities, unlike dd, will repeatedly try to recover from
errors and will read the drive front to back, then back to front, attempting to salvage data. They
keep log files so that recovery can be paused and resumed without losing progress.
The image files created from a utility like ddrescue can then be mounted like a physical device
and can be worked on safely. Always make a copy of the original image so that you can revert if
things go sour!
A tried and true method of improving failing drive reads is to keep the drive cold. A bit of time
in the freezer is appropriate, but be careful to avoid bringing the drive from cold to warm too
quickly, as condensation will form. Keeping the drive in the freezer with cables connected to the
recovering PC works great.
Do not attempt a filesystem check on a failing drive, as this will likely make the problem worse.
Mount it read-only.
Backup flash media/small partitions
As an alternative to working with a 'live' partition (mounted or not), it is often preferable to work
with an image, provided that the filesystem in question is not too large and that you have
sufficient free HDD space to accommodate the image file. For example, flash memory devices
like thumb drives, digital cameras, portable music players, cellular phones, etc. are likely to be
small enough to image in many cases.
Be sure to read the man pages for the utilities listed below to verify that they are capable of
working with image files.
# dd if=/dev/target_partition of=/home/user/partition.image
Working with digital cameras
In order for some of the utilities listed in the next section to work with flash media, the device in
question needs to be mounted as a block device (i.e., listed under /dev). Digital cameras
operating in PTP (Picture Transfer Protocol) mode will not work in this regard. PTP cameras are
transparently handled by libgphoto and/or libptp. In this case, "transparently" means that PTP
devices do not get block devices. The alternative to PTP mode, USB Mass Storage (UMS) mode,
is not supported by all cameras. Some cameras have a menu item that allows switching between
the two modes; refer to your camera's user manual. If your camera does not support UMS mode
and therefore cannot be accessed as a block device, your only alternative is to use a flash media
reader and physically remove the storage media from your camera.
Foremost
Foremost is a console program to recover files based on their headers, footers, and internal data
structures. This process is commonly referred to as data carving. Foremost can work on disk
image files (such as those generated by dd, Safeback, Encase, etc.) or directly on a drive. The
headers and footers can be specified by a configuration file or command line switches can be
used to specify built-in file types. These built-in types look at the data structures of a given file
format, allowing for more reliable and faster recovery.
Scalpel
Scalpel is a console file-carving program originally based on Foremost, although significantly
more efficient. Originally developed by Golden G. Richard III, it allows an examiner to specify a
number of headers and footers to recover filetypes from a piece of media. Licensed under the
Apache licence, Scalpel is maintained by Golden G. Richard III and Lodovico Marziale.
Extundelete
Extundelete is a terminal-based utility designed to recover deleted files from ext3 and ext4
partitions. It can recover all recently deleted files from a partition, or specific files given by
relative path or inode information. Note that it works only when the partition is unmounted. The
recovered files are saved in the current directory, under a folder named RECOVERED_FILES/.
Installation
Install the extundelete package.
Usage
To recover data from a specific partition, the device name for the partition, which will be in the
format /dev/sdXN (where X is a letter and N is a number), must be known. The example used here is
/dev/sda4, but your system might use something different (for example, MMC card readers use
/dev/mmcblkNpN as their naming scheme) depending on your filesystem and device configuration. If
you are unsure, run df, which prints currently mounted partitions.
Once you have determined which partition to recover data from, simply run (for example):
# extundelete /dev/sda4 --restore-all
Any subdirectories must be specified, and the command runs from the top level of the partition.
So, to recover a file in /home/SomeUserName/, assuming /home is on its own partition, run (the
file name here is only an example):
# extundelete /dev/sda4 --restore-file SomeUserName/example.txt
For advanced users: to manually recover blocks or inodes with extundelete, debugfs can be used
to find the inode to be recovered; then run:
# extundelete /dev/sda4 --restore-inode inode
Here inode stands for any valid inode number. Additional inodes to recover can be listed in an
unspaced, comma-separated fashion.
Testdisk and PhotoRec
TestDisk is primarily designed to help recover lost partitions and/or make non-booting disks
bootable again when these symptoms are caused by faulty software, certain types of viruses, or
human error, such as the accidental deletion of partition tables.
PhotoRec is file recovery software designed to recover lost files, including photographs (hence
the name), videos, documents, and archives from hard disks and CD-ROMs. PhotoRec ignores the
filesystem and goes after the underlying data, so it will still work even with re-formatted or
severely damaged filesystems and/or partition tables.
Installation
Install the testdisk package, which provides both TestDisk and PhotoRec.
Usage
After running e.g. ddrescue to create image.img, photorec image.img will open a terminal UI
where you can select what file types to search for and where to put the recovered files.
Files recovered by photorec
The photorec utility stores recovered files with random names (for most files) under numbered
directories, e.g. ./recup_dir.1/f872690288.jpg,
./recup_dir.1/f864563104_wmclockmon-0.1.0.tar.gz.
See also
e2fsck
e2fsck is the ext2/ext3 filesystem checker included in the base install of Arch. e2fsck relies on a
valid superblock. A superblock is a description of the entire filesystem's parameters. Because this
data is so important, several copies of the superblock are distributed throughout the partition.
With the -b option, e2fsck can take an alternate superblock argument; this is useful if the main,
first superblock is damaged.
To determine where the superblocks are, run dumpe2fs -h on the target, unmounted partition.
Superblocks are spaced differently depending on the filesystem's blocksize, which is set when
the filesystem is created.
An alternate method to determine the locations of superblocks is to use the -n option with
mke2fs. Be sure to use the -n flag, which, according to the mke2fs manpage, "Causes mke2fs to
not actually create a filesystem, but display what it would do if it were to create a filesystem.
This can be used to determine the location of the backup superblocks for a particular filesystem,
so long as the mke2fs parameters that were passed when the filesystem was originally created
are used again. (With the -n option added, of course!)".
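A minimal sketch of finding backup superblocks, run against a scratch image file (the image name, size, and block size are arbitrary choices for this demo; on a real system the target is an unmounted partition such as /dev/sdX1):

```shell
# Create a small ext2 filesystem inside a regular file.
dd if=/dev/zero of=/tmp/fs.img bs=1M count=64 status=none
mke2fs -F -q -b 1024 /tmp/fs.img        # -F: target is a file, not a device
# List where the backup superblocks live.
dumpe2fs /tmp/fs.img 2>/dev/null | grep 'Backup superblock' | head -n 2
# A routine forced check; if the primary superblock were damaged, you would
# add e.g. -b 8193 to make e2fsck use the first backup instead.
e2fsck -f -p /tmp/fs.img
```

With a 1 KiB block size, the first backup superblock lands at block 8193 (the start of the second block group), which is why -b 8193 is the classic fallback argument.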
Installation
Both e2fsck and dumpe2fs are included in the base Arch install as part of e2fsprogs.
See also
e2fsck man page: http://phpunixman.sourceforge.net/index.php/man/e2fsck/8
dumpe2fs man page:
http://phpunixman.sourceforge.net/index.php?parameter=dumpe2fs&mode=man
Working with raw disk images
If you have backed up a drive using ddrescue or dd and you need to mount this image as a
physical drive, see this section.
Mount the entire disk
To mount a complete disk image to the next free loop device, use the losetup command:
# losetup -f -P /path/to/image
Tip:
The -f flag attaches the image to the next available loop device.
The -P flag creates additional loop devices for every partition in the image.
Mounting partitions
In order to be able to mount a partition of a whole disk image, follow the steps above.
Once the whole disk image is attached, a normal mount command can be used on the loop
device:
# mount /dev/loop0p1 /mnt/example
This command mounts the first partition of the image attached to loop0 at the mountpoint
/mnt/example. Remember that the mountpoint directory must exist!
Getting disk geometry
Once the entire disk image has been attached as a loopback device, its drive layout can be
inspected.
Using QEMU to Repair NTFS
If a disk image contains one or more NTFS partitions that need to be checked by Windows'
chkdsk (no fully reliable NTFS filesystem checker exists for Linux), QEMU can present the raw
disk image to a virtual machine as a real hard disk, for example:
# qemu-system-x86_64 -drive file=/path/to/image,format=raw -cdrom /path/to/windows.iso -boot d -m 2G
Warning: Do not use an older version of Windows to check NTFS partitions created by a newer
version. For example, Windows XP can damage NTFS partitions created by Windows 8 by "fixing"
metadata features it does not support; unsupported entries will be removed or misconfigured.
Text file recovery
Use grep to search for fixed strings (-F) directly on the partition:
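The pattern, demonstrated here on a scratch file (on a real system the input would be the partition device such as /dev/sdX1, run as root, and the output must be written to a different partition so the search does not overwrite the very data it is looking for):

```shell
# Scratch file standing in for the raw partition.
printf 'binary junk\nunique phrase from my lost file\nmore junk\n' > /tmp/part.demo
# -a: treat binary input as text; -F: match a fixed string;
# -C 1: keep 1 line of context on each side (use e.g. -C 200 on a real partition).
grep -a -F -C 1 'unique phrase from my lost file' /tmp/part.demo > /tmp/OutputFile
cat /tmp/OutputFile
```

The fixed string should be something you remember verbatim from the deleted file and that is unlikely to occur elsewhere on the partition.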
Hopefully, the content of the deleted file is now in OutputFile, which can be extracted from the
surrounding context manually.
Note: The -C 200 option tells grep to print 200 lines of context from before and after each match of
the string. Alternatives are the -A and -B flags, which print context only from after and before each
match, respectively. You may need to adjust the number of lines if the file you are looking for is very
long.
See also
GRUB Legacy
Contents
1 Installation
2 Upgrading to GRUB2
    2.1 Is upgrading necessary?
    2.2 How to upgrade
    2.3 Differences
        2.3.1 Backup important data
    2.4 Converting GRUB Legacy's config file to the new format
    2.5 Restore GRUB Legacy
3 Configuration
    3.1 Finding GRUB's root
    3.2 Dual booting with Windows
    3.3 Dual booting with GNU/Linux
    3.4 chainloader and configfile
    3.5 Dual booting with GNU/Linux (GRUB2)
4 Bootloader installation
    4.1 Manual recovery of GRUB libs
    4.2 General notes about bootloader installation
    4.3 Installing to the MBR
    4.4 Installing to a partition
    4.5 Alternate method (grub-install)
5 Tips and tricks
    5.1 Graphical boot
    5.2 Framebuffer resolution
        5.2.1 GRUB recognized value
        5.2.2 hwinfo
    5.3 Naming partitions
        5.3.1 By Label
        5.3.2 By UUID
    5.4 Boot as root (single-user mode)
    5.5 Password protection
    5.6 Restart with named boot choice
    5.7 LILO and GRUB interaction
    5.8 GRUB boot disk
    5.9 Hide GRUB menu
6 Advanced debugging
7 Troubleshooting
    7.1 GRUB Error 17
    7.2 /boot/grub/stage1 not read correctly
    7.3 Accidental install to a Windows partition
    7.4 Edit GRUB entries in the boot menu
    7.5 device.map error
    7.6 KDE reboot pull-down menu fails
    7.7 GRUB fails to find or install to any virtio /dev/vd* or other non-BIOS devices
8 See also
Installation
GRUB Legacy has been dropped from the official repositories in favor of GRUB version 2.x, but
is still available as the grub-legacy package in the AUR.
Additionally, GRUB must be installed to the boot sector of a drive or partition to serve as a
bootloader. This is covered in the Bootloader installation section.
Upgrading to GRUB2
Is upgrading necessary?
The short answer is no. GRUB Legacy will not be removed from your system and will stay fully
functional. However, as with any other package that is no longer supported, bugs are unlikely to
be fixed, so you should consider upgrading to GRUB version 2.x or to one of the other supported
boot loaders. Note also that GRUB Legacy does not support GPT disks, the Btrfs filesystem, or
UEFI firmware.
How to upgrade
Upgrading from GRUB Legacy to GRUB version 2.x is much the same as installing GRUB on a
running Arch Linux. Detailed instructions are covered in the GRUB article.
Differences
There are differences in the commands of GRUB Legacy and GRUB. Familiarize yourself with
GRUB commands before proceeding (e.g. "find" has been replaced with "search").
GRUB is now modular and no longer requires "stage 1.5". As a result, the bootloader itself is
limited -- modules are loaded from the hard drive as needed to expand functionality (e.g. for
LVM or RAID support).
Device naming has changed between GRUB Legacy and GRUB. Partitions are numbered from 1
instead of 0 while drives are still numbered from 0, and prefixed with partition-table type. For
example, /dev/sda1 would be referred to as (hd0,msdos1) (for MBR) or (hd0,gpt1) (for
GPT).
GRUB is noticeably bigger than GRUB legacy (occupies ~13 MB in /boot). If you are booting
from a separate /boot partition, and this partition is smaller than 32 MB, you will run into disk
space issues, and pacman will refuse to install new kernels.
Although a GRUB installation should run smoothly, it is strongly recommended to keep the
GRUB Legacy files before upgrading to GRUB v2.
# mv /boot/grub /boot/grub-legacy
Backup the MBR, which contains the boot code and partition table (replace /dev/sdX with your
actual disk path):
# dd if=/dev/sdX of=/path/to/backup/mbr_backup bs=512 count=1
Only 446 bytes of the MBR contain boot code, the next 64 contain the partition table. If you do
not want to overwrite your partition table when restoring, it is strongly advised to backup only
the MBR boot code:
# dd if=/dev/sdX of=/path/to/backup/bootcode_backup bs=446 count=1
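A sketch of both backups, demonstrated on a scratch file standing in for the disk (in real use if= would be /dev/sdX and the commands would run as root):

```shell
# 2 KiB scratch file in place of the real disk.
dd if=/dev/zero of=/tmp/disk.demo bs=512 count=4 status=none
# Whole first sector: 446 bytes boot code + 64-byte partition table + 2-byte signature.
dd if=/tmp/disk.demo of=/tmp/mbr_backup bs=512 count=1 status=none
# Boot code only: restoring this backup cannot clobber the partition table.
dd if=/tmp/disk.demo of=/tmp/bootcode_backup bs=446 count=1 status=none
ls -l /tmp/mbr_backup /tmp/bootcode_backup
```

The resulting files are exactly 512 and 446 bytes, which is worth verifying before relying on a backup.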
For example:
/boot/grub/menu.lst
default=0
timeout=5
If you forgot to create a GRUB /boot/grub/grub.cfg config file and simply rebooted into
GRUB Command Shell, type:
# mv /boot/grub /boot/grub.nonfunctional
Warning: This command also restores the partition table, so be careful of overwriting a modified
partition table with the old one. It will mess up your system.
# dd if=/path/to/backup/first-sectors of=/dev/sdX bs=512 count=1
Configuration
The configuration file is located at /boot/grub/menu.lst. Edit this file to suit your needs.
timeout # -- time to wait (in seconds) before the default operating system is automatically
loaded.
default # -- the default boot entry that is chosen when the timeout has expired.
/boot/grub/menu.lst
# general configuration:
timeout 5
default 0
color light-blue/black light-cyan/blue
# (1) Windows
#title Windows
#rootnoverify (hd0,0)
#makeactive
#chainloader +1
GRUB must be told where its files reside on the system, since multiple instances may exist (i.e.,
in multi-boot environments). GRUB files always reside under /boot, which may be on a
dedicated partition.
Note: GRUB defines storage devices differently than conventional kernel naming does.
Hard disks are defined as (hdX); this also refers to any USB storage devices.
Device and partitioning numbering begin at zero. For example, the first hard disk recognized in
the BIOS will be defined as (hd0). The second device will be called (hd1). This also applies to
partitions. So, the second partition on the first hard disk will be defined as (hd0,1).
If you are unaware of the location of /boot, use the GRUB shell find command to locate the
GRUB files. Enter the GRUB shell as root by:
# grub
The following example is for systems without a separate /boot partition, wherein /boot is
merely a directory under /:
grub> find /boot/grub/stage1
GRUB will find the file and output the location of the stage1 file. For example:
(hd0,0)
This value should be entered on the root line in your configuration file. Type quit to exit the
shell.
Dual booting with Windows
Add the following to the end of your /boot/grub/menu.lst (assuming that your Windows
partition is on the first partition of the first drive):
/boot/grub/menu.lst
title Windows
rootnoverify (hd0,0)
makeactive
chainloader +1
Note:
If you are attempting to dual-boot with Windows 7, you should comment out the line
makeactive.
Windows 2000 and later versions do NOT need to be on the first partition to boot (contrary to
popular belief). If the Windows partition changes (i.e. if you add a partition before the Windows
partition), you will need to edit the Windows boot.ini file to reflect the change (see this
article for details on how to do that).
If Windows is located on another hard disk, the map command must be used. This will make your
Windows install think it is actually on the first drive. Assuming that your Windows partition is
on the first partition of the second drive:
/boot/grub/menu.lst
title Windows
map (hd0) (hd1)
map (hd1) (hd0)
rootnoverify (hd1,0)
makeactive
chainloader +1
Note: If you are attempting to dual-boot with Windows 7, you should comment out the line
makeactive.
Dual booting with GNU/Linux
This can be done the same way that an Arch Linux install is defined. For example:
/boot/grub/menu.lst
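For instance, an entry for another distribution might look like the following (the device, kernel, and initrd names are assumptions to adapt to the other install):

```
# (2) Other GNU/Linux
title Other Linux
root (hd0,2)
kernel /boot/vmlinuz root=/dev/sda3 ro
initrd /boot/initrd.img
```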
chainloader and configfile
The chainloader command will load another bootloader (rather than a kernel image); useful if
another bootloader is installed in a partition's boot sector (GRUB, for example). This allows one
to install a "main" instance of GRUB to the MBR and distribution-specific instances of GRUB to
each partition boot record (PBR).
The configfile command will instruct the currently running GRUB instance to load the
specified configuration file. This can be used to load another distribution's menu.lst without a
separate GRUB installation. The caveat of this approach is that the other menu.lst may not be
compatible with the installed version of GRUB, since some distributions heavily patch their
versions of GRUB.
For example, GRUB is to be installed to the MBR and some other bootloader (be it GRUB or
LILO) is already installed to the boot sector of (hd0,2).
---------------------------------------------
| | | | % |
| M | | | B % |
| B | (hd0,0) | (hd0,1) | L % (hd0,2) |
| R | | | % |
| | | | % |
---------------------------------------------
| ^
| chainloading |
-----------------------------
The chainloader command can also be used to load the MBR of a second drive:
Dual booting with GNU/Linux (GRUB2)
If the other Linux distribution uses GRUB2 (e.g. Ubuntu 9.10+), and you installed its boot loader
to its root partition, you can add an entry like this one to your /boot/grub/menu.lst:
/boot/grub/menu.lst
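For example (assuming the other distribution's GRUB2 is installed to /dev/sda3, i.e. (hd0,2); core.img is GRUB2's loadable image):

```
title Other distribution (GRUB2)
root (hd0,2)
kernel /boot/grub/core.img
```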
Selecting this entry at boot will load the other distribution's GRUB2 menu assuming that the
distribution is installed on /dev/sda3.
Bootloader installation
Manual recovery of GRUB libs
The *stage* files are expected to be in /boot/grub, which may not be the case if the bootloader
was not installed during system installation or if the partition/filesystem was damaged,
accidentally deleted, etc.
# cp -a /usr/lib/grub/i386-pc/* /boot/grub
Note: Do not forget to mount the system's boot partition if your setup uses a separate one! The above
assumes that either the boot partition resides on the root filesystem or is mounted to /boot on the root
file system!
GRUB may be installed from a separate medium (e.g. a LiveCD) or directly from a running
Arch install. The GRUB bootloader seldom needs to be reinstalled; in particular, installation
is not necessary when:
Installing to the MBR
# grub
Use the root command with the output from the find command (see Finding GRUB's root) to
instruct GRUB which partition contains stage1 (and therefore /boot), then install to the MBR;
for example:
grub> root (hd0,0)
grub> setup (hd0)
Installing to a partition
The following example installs GRUB to the boot sector of the first partition of the first drive:
grub> root (hd0,0)
grub> setup (hd0,0)
After running setup, enter quit to exit the shell. If you chrooted, exit your chroot and unmount
partitions. Now reboot to test.
Alternate method (grub-install)
Note: This procedure is known to be less reliable; the recommended method is to use the GRUB shell.
Use the grub-install command followed by the location to install the bootloader. For example
to install the GRUB bootloader to the MBR of the first drive:
# grub-install /dev/sda
GRUB will indicate whether it successfully installs. If it does not, you will have to use the
GRUB shell method.
Tips and tricks
Graphical boot
For those desiring eye candy, see grub-gfx. GRUB also offers enhanced graphical capabilities,
such as background images and bitmap fonts.
Framebuffer resolution
One can use the resolution given in menu.lst, but you might want to run your wide-screen LCD
at its full native resolution. Here is what you can do to achieve this:
On Wikipedia there is a list of extended framebuffer resolutions (i.e. beyond the ones in the
VBE standard). But, for example, the code for 1440x900 (vga=867) does not necessarily work.
This is because graphics card manufacturers are free to choose any number they wish, as this is
not part of the VBE 3 standard; these codes therefore change from one card to another (possibly
even for the same manufacturer).
So instead of using that table, you can use one of the tools mentioned below to get the correct
code:
GRUB recognized value
This is an easy way to find the resolution code using only GRUB itself.
On the kernel line, specify vga=ask so that the kernel asks you which mode to use.
Now reboot. GRUB will now present a list of suitable codes to use and the option to scan for
even more.
You can pick the code you would like to use (do not forget it, it is needed for the next step) and
boot using it.
Now replace ask in the kernel line with the correct one you have picked.
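The sequence above looks like this in menu.lst (the root device here is an assumption):

```
# 1) Ask at boot which mode to use:
kernel /boot/vmlinuz-linux root=/dev/sda1 ro vga=ask
# 2) Once you know the code, make it permanent:
kernel /boot/vmlinuz-linux root=/dev/sda1 ro vga=<code>
```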
hwinfo
Naming partitions
By Label
If you alter (or plan to alter) partition sizes from time to time, you might want to consider
referring to your drives/partitions by label. You can label ext2, ext3, and ext4 partitions with:
e2label /dev/drive|partition label
The label name can be up to 16 characters long but cannot contain spaces for GRUB to understand
it. Then refer to it in your menu.lst, for example:
kernel /boot/vmlinuz-linux root=/dev/disk/by-label/Arch_Root ro
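The labeling step can be sketched on a scratch image (the image and label names are arbitrary; on a real system the argument would be a partition device such as /dev/sda2):

```shell
# Scratch ext2 filesystem inside a regular file.
dd if=/dev/zero of=/tmp/labeled.img bs=1M count=4 status=none
mke2fs -F -q /tmp/labeled.img
e2label /tmp/labeled.img Arch_Root   # set the label (max 16 chars, no spaces)
e2label /tmp/labeled.img             # read it back
```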
By UUID
The UUID (Universally Unique IDentifier) of a partition may be discovered with blkid or
ls -l /dev/disk/by-uuid. It is then used in menu.lst with either:
kernel /boot/vmlinuz-linux root=/dev/disk/by-uuid/<UUID> ro
or:
kernel /boot/vmlinuz-linux root=UUID=<UUID> ro
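UUID discovery can likewise be sketched on a scratch image (on a real system you would point blkid at the partition device, or simply run blkid with no arguments):

```shell
# Scratch ext2 filesystem; mke2fs assigns it a fresh UUID.
dd if=/dev/zero of=/tmp/uuid.img bs=1M count=4 status=none
mke2fs -F -q /tmp/uuid.img
blkid -s UUID -o value /tmp/uuid.img   # prints just the filesystem's UUID
```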
Boot as root (single-user mode)
At the boot loader, select an entry and edit it (e key). Append the following parameter to the
kernel options:
single
This will start in single-user mode (init 1), i.e. you will end up at a root prompt without being
asked for a password. This may be useful for recovery tasks, like resetting the root password.
However, this is a huge security flaw if you have not set up any #Password protection for GRUB.
Password protection
You can enable password protection in the GRUB configuration file for operating systems you
wish to have protected. Bootloader password protection may be desired if your BIOS lacks such
functionality and you need the extra security.
First, choose a password you can remember and then encrypt it:
# grub-md5-crypt
Password:
Retype password:
$1$ZOGor$GABXUQ/hnzns/d5JYqqjw
Then add your password to the beginning of the GRUB configuration file at
/boot/grub/menu.lst (the password must be at the beginning of the configuration file for
GRUB to be able to recognize it):
password --md5 $1$ZOGor$GABXUQ/hnzns/d5JYqqjw
# general configuration
timeout 5
default 0
color light-blue/black light-cyan/blue
Then for each operating system you wish to protect, add the lock command:
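For example, locking the Windows entry shown earlier (lock goes right after the title line):

```
title Windows
lock
rootnoverify (hd0,0)
makeactive
chainloader +1
```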
It is always possible to reset BIOS settings by setting the appropriate jumper on the
motherboard (see your motherboard's manual, as this is specific to each model). So if others
have physical access to the hardware, there is basically no way to prevent them from bypassing
boot protection.
Restart with named boot choice
If you often need to switch to some other, non-default OS (e.g. Windows), having to reboot and
wait for the GRUB menu to appear is tedious. GRUB offers a way to record your OS choice when
restarting, instead of waiting for the menu, by designating a temporary new default which is
reset as soon as it has been used.
/boot/grub/menu.lst
# general configuration:
timeout 10
default 0
color light-blue/black light-cyan/blue
# (0) Arch
title Arch Linux
root (hd0,1)
kernel /boot/vmlinuz-linux root=/dev/disk/by-label/ARCH ro
initrd /boot/initramfs-linux.img
# (1) Windows
title Windows XP
rootnoverify (hd0,0)
makeactive
chainloader +1
Arch is the default (0). We want to restart into Windows. Change default 0 to default saved
-- this will record the current default in a default file in the GRUB directory whenever the
savedefault command is used. Now add the line savedefault 0 to the bottom of the Windows
entry. Whenever Windows is booted, it will reset the default to Arch, making the change of
default to Windows temporary.
Now all that is needed is a way to easily change the default manually. This can be accomplished
using the command grub-set-default. So, to reboot into Windows, enter the following
commands:
# grub-set-default 1
Then reboot.
For ease of use, you might wish to implement the "Allow users to shutdown" fix (including
/sbin/grub-set-default among the commands the user is allowed to issue without supplying a
password).
LILO and GRUB interaction
If the LILO package is installed on your system, remove it, as some tasks (e.g. kernel
compilation using make all) will make a LILO call, and LILO will then be installed over
GRUB. LILO may have been included in your base system, depending on your installer media
version and whether you selected or deselected it during the package selection stage.
Note: Removing the lilo package will not remove LILO from the MBR if it has been installed there;
it will merely remove the package. The LILO bootloader installed to the MBR will be overwritten
when GRUB (or another bootloader) is installed over it.
GRUB boot disk
# fdformat /dev/fd0
# mke2fs /dev/fd0
# umount /mnt/fl
Now you should be able to restart your computer with the disk in the drive, and it should boot to
GRUB. Make sure that, in your BIOS, the floppy drive is set to a higher boot priority than your
hard drive.
Hide GRUB menu
The hiddenmenu option can be used in order to hide the menu by default. That way no menu is
displayed and the default option is going to be automatically selected after the timeout passes.
Still, you are able to press Esc and the menu shows up. To use it, just add to your
/boot/grub/menu.lst:
hiddenmenu
Advanced debugging
See dedicated article.
Troubleshooting
GRUB Error 17
The first check to do is to unplug any external drives. It seems obvious, but it is easy to
forget.
If your partition table gets messed up, an unpleasant "GRUB error 17" message might be the
only thing that greets you on your next reboot. There are a number of reasons why the partition
table could get messed up. Commonly, users who manipulate their partitions with GParted --
particularly logical drives -- can cause the order of the partitions to change. For example, you
delete /dev/sda6 and resize /dev/sda7, then finally re-create what used to be /dev/sda6 only
now it appears at the bottom of the list, /dev/sda9 for example. Although the physical order of
the partitions/logical drives has not changed, the order in which they are recognized has changed.
Fixing the partition table is easy. Boot from your Arch CD/DVD/USB, login as root and fix the
partition table:
# fdisk /dev/sda
Once in fdisk, enter e[x]tra/expert mode, [f]ix the partition order, then [w]rite the table and exit.
You can verify that the partition table was indeed fixed by issuing an fdisk -l. Now you just
need to fix GRUB. See the Bootloader installation section.
Basically you need to tell GRUB the correct location of your /boot then re-write GRUB to the
MBR on the disk.
For example:
# grub
grub> root (hd0,6)
grub> setup (hd0)
grub> quit
If you see this error message while trying to set up GRUB, and you are not using a fresh partition
table, it is worth checking it.
# fdisk -l /dev/sda
This will show you the partition table for /dev/sda. So check here, whether the "Id" values of
your partitions are correct. The "System" column will show you the description of the "Id"
values.
If your boot partition is marked as being "HPFS/NTFS", for example, then you have to change it
to "Linux". To do this, go to fdisk,
# fdisk /dev/sda
change a partition's system id with t, select your partition number and type in the new system id
(Linux = 83). You can also list all available system ids by typing L instead of a system id.
If you have changed a partition's system id, you should [v]erify your partition table and then
[w]rite it.
To fix this you will need to use the Windows Recovery Console for your Windows release.
Because many computer manufacturers do not include this with their product (many choose to
use a recovery partition) Microsoft has made them available for download. If you use XP, look at
this page to be able to turn the floppy disks to a Recovery CD. Boot the Recovery CD (or enable
Windows Recovery mode) and run fixboot to repair the partition boot sector. After this, you
will have to install GRUB again---this time, to the MBR, not to the Windows partition---to boot
Linux.
See Dual boot with Windows#Restoring a Windows boot record for more information.
Once you have selected an entry in the boot menu, you can edit it by pressing the e key. Use tab
completion if you need to discover devices, then Esc to exit. Then you can try to boot by
pressing b.
device.map error
Run grub-install with the --recheck option to force GRUB to recheck the device map, even if it
already exists. This may be necessary after resizing partitions or adding/removing drives.
If you have opened a sub-menu with the list of all operating systems configured in GRUB,
selected one, and upon restart, you still booted your default OS, then you might want to check if
you have the line:
default saved
in /boot/grub/menu.lst.
GRUB fails to find or install to any virtio /dev/vd* or other non-BIOS devices
I had trouble installing GRUB while installing Arch Linux in a virtual KVM machine using a
virtio device for the hard drive. To install GRUB, I figured out the following: enter a virtual
console by typing Ctrl+Alt+F2 (or any other F-key for a free virtual console). This assumes that
your root file system is mounted at /mnt and the boot file system is either mounted at, or stored
in, /mnt/boot.
1. Ensure that all needed GRUB files are present in your boot directory (assuming it is mounted
at /mnt/boot), by issuing the command:
# ls /mnt/boot/grub
2. If the /mnt/boot/grub folder already contains all the needed files, jump to step 3. Otherwise,
run the following commands (replacing /mnt, your_kernel and your_initrd with the real paths
and file names). You should also have the menu.lst file written to this folder:
# grub --device-map=/dev/null
4. Enter the following commands. Replace /dev/vda, and (hd0,0) with the correct device and
partition corresponding to your setup.
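The commands referred to are presumably the standard GRUB legacy shell sequence for mapping and installing to a device; a sketch, assuming /dev/vda holds /boot on its first partition:

```
grub> device (hd0) /dev/vda
grub> root (hd0,0)
grub> setup (hd0)
grub> quit
```

The device line tells GRUB to treat /dev/vda as (hd0), since virtio devices are not visible through the BIOS.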
5. If GRUB reports no error messages, you are probably done. You will also need to add
appropriate modules to the ramdisk; for more information, refer to QEMU#Preparing an
(Arch) Linux guest.
1 General procedures
o 1.1 Attention to detail
o 1.2 Questions/checklist
o 1.3 Be more specific
o 1.4 Additional support
2 Boot problems
o 2.1 Console messages
2.1.1 Flow control
2.1.2 Scrollback
2.1.3 Debug output
o 2.2 Recovery shells
o 2.3 Blank screen with Intel video
o 2.4 Stuck while loading the kernel
o 2.5 Debugging kernel modules
o 2.6 Debugging hardware
o 2.7 See also
3 Package management
4 fuser
5 Session permissions
6 error while loading shared libraries
7 file: could not find any magic files!
8 See also
General procedures
Attention to detail
In order to resolve an issue that you are having, it is absolutely crucial to have a firm basic
understanding of how that specific subsystem functions. How does it work, and what does it
need to run without error? If you cannot comfortably answer these questions, it is best to
review the ArchWiki article for the subsystem that you are having trouble with. Once you feel
you have understood it, it will be easier to pinpoint the cause of the problem.
Questions/checklist
The following is a list of questions to work through whenever dealing with a malfunctioning
system. Under each question there are notes explaining how to answer it, followed by some
brief examples of how to gather data and which tools can be used to review logs and the
journal.
Be as precise as possible. This will help you not get confused and/or side-tracked when looking
up specific information.
Copy and paste full outputs that contain error messages related to your issue into a separate
file, such as $HOME/issue.log. For example, to forward the output of the following mkinitcpio
command to $HOME/issue.log:
$ mkinitcpio -p linux >> $HOME/issue.log
4. When did you first encounter these issues, and what changed between then and when the
system was operating without error?
If it occurred right after an update, list all packages that were updated, including version
numbers; also paste the entire update from pacman.log (/var/log/pacman.log). Also take
note of the statuses of any service(s) needed to support the malfunctioning application(s) using
systemd's systemctl tools. For example, to forward the output of the following systemd
command to $HOME/issue.log:
$ systemctl status dhcpcd@eth0.service >> $HOME/issue.log
Note: Using >> will ensure any previous text in $HOME/issue.log will not be overwritten.
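As a quick illustration of these redirection semantics (echo and ls stand in for the real commands; note that error messages go to stderr, so 2>&1 is often needed to capture them):

```shell
log="$(mktemp)"

echo "first run"  >> "$log"   # appends; creates the file if missing
echo "second run" >> "$log"   # previous line is preserved

# A single '>' would truncate the file instead:
echo "only line" > "$log"

# Error messages go to stderr, so add 2>&1 to capture them as well:
ls /nonexistent >> "$log" 2>&1 || true

cat "$log"
rm -f "$log"
```

After the `>` redirection only "only line" remains, and the failed ls appends its error message to the log.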
Be more specific
Additional support
With all the information in front of you you should have a good idea as to what is going on with
the system and you can now start working on a proper fix.
If you require any additional support, it can be found on the forums or on IRC at
irc.freenode.net in #archlinux. See IRC channels for other options.
When asking for support post the complete output/logs, not just what you think are the
significant sections. Sources of information include:
Full output of any command involved - don't just select what you think is relevant.
Output from systemd's journalctl. For more extensive output, use the
systemd.log_level=debug boot parameter.
Log files (have a look in /var/log)
Relevant configuration files
Drivers involved
Versions of packages involved
Kernel: dmesg. For a boot problem, at least the last 10 lines displayed, preferably more
Networking: Exact output of commands involved, and any configuration files
Xorg: /var/log/Xorg.0.log, and prior logs if you have overwritten the problematic one
Pacman: If a recent upgrade broke something, look in /var/log/pacman.log
One of the better ways to post this information is to use an online pastebin. You can install the
pbpst or gist package to automatically upload information. For example, to upload the content of
your systemd journal from this boot you would do:
Additionally, before posting your question, you may wish to review how to ask smart questions.
See also Code of conduct.
Boot problems
Diagnosing errors during the boot process involves changing the kernel parameters, and
rebooting the system.
If booting the system is not possible, boot from a live image and change root to the existing
system.
Console messages
After the boot process, the screen is cleared and the login prompt appears, leaving users unable
to read init output and error messages. This default behavior may be modified using methods
outlined in the sections below.
Note that regardless of the chosen option, kernel messages can be displayed for inspection after
booting by using dmesg or all logs from the current boot with journalctl -b.
Flow control
This is basic management that applies to most terminal emulators, including virtual consoles
(vc):
This pauses not only the output, but also programs which try to print to the terminal, as they will
block on the write() calls for as long as the output is paused. If your init appears frozen, make
sure the system console is not paused.
To see error messages which are already displayed, see Getty#Have boot messages stay on tty1.
Scrollback
Scrollback allows the user to go back and view text which has scrolled off the screen of a text
console. This is made possible by a buffer created between the video adapter and the display
device called the scrollback buffer. By default, the key combinations of Shift+PageUp and
Shift+PageDown scroll the buffer up and down.
If scrolling up all the way does not show you enough information, you need to expand your
scrollback buffer to hold more output. This is done by tweaking the kernel's framebuffer console
(fbcon) with the kernel parameter fbcon=scrollback:Nk, where N is the desired buffer size in
kilobytes. The default size is 32k.
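For example, a GRUB legacy kernel line requesting a 128 KiB scrollback buffer might look like this; the kernel path and root device are illustrative:

```
kernel /vmlinuz-linux root=/dev/sda1 ro fbcon=scrollback:128k
```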
If this does not work, your framebuffer console may not be properly enabled. Check the
Framebuffer Console documentation for other parameters, e.g. for changing the framebuffer
driver.
Debug output
Most kernel messages are hidden during boot. You can see more of these messages by adding
different kernel parameters. The simplest ones are:
debug enables debug messages for both the kernel and systemd
ignore_loglevel forces all kernel messages to be printed
Other parameters you can add that might be useful in certain situations are:
earlyprintk=vga,keep prints kernel messages very early in the boot process, in case the
kernel would crash before output is shown. You must change vga to efi for EFI systems
log_buf_len=16M allocates a larger (16MB) kernel message buffer, to ensure that debug
output is not overwritten
There are also a number of separate debug parameters for enabling debugging in specific
subsystems e.g. bootmem_debug, sched_debug. Check the kernel parameter documentation for
specific information.
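Putting the parameters above together, a kernel line for a maximally verbose boot might look like the following sketch (kernel path and root device are illustrative; remember to switch vga to efi on EFI systems):

```
kernel /vmlinuz-linux root=/dev/sda1 ro debug ignore_loglevel log_buf_len=16M earlyprintk=vga,keep
```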
Note: If you cannot scroll back far enough to view the desired boot output, you should increase the size
of the scrollback buffer.
Recovery shells
Getting an interactive shell at some stage in the boot process can help you pinpoint exactly where
and why something is failing. There are several kernel parameters for doing so, but they all
launch a normal shell which you can exit to let the kernel resume what it was doing:
rescue launches a shell shortly after the root filesystem is remounted read/write
emergency launches a shell even earlier, before most filesystems are mounted
init=/bin/sh (as a last resort) changes the init program to a root shell. rescue and
emergency both rely on systemd, but this should work even if systemd is broken
Another option is systemd's debug-shell which adds a root shell on tty9 (accessible with
Ctrl+Alt+F9). It can be enabled by either adding systemd.debug-shell to the kernel
parameters, or by enabling debug-shell.service. Take care to disable the service when done
to avoid the security risk of leaving a root shell open on every boot.
Debugging hardware
You can display extra debugging information about your hardware by following udev#Debug
output.
Ensure that Microcode updates are applied on your system.
Test your device's RAM with Memtest86+. Unstable RAM may lead to some extremely odd
issues, ranging from random crashes to data corruption.
See also
List of Tools for UBCD - Can be added to custom menu.lst like memtest
Wikipedia's page on BIOS Boot partition
QA/Sysrq - Using sysrq
systemd documentation: Debug Logging to a Serial Console
How to Isolate Linux ACPI Issues
Package management
See Pacman#Troubleshooting for general topics, and pacman/Package signing#Troubleshooting
for issues with PGP keys.
fuser
fuser is a command-line utility for identifying processes using resources such as files, filesystems
and TCP/UDP ports.
fuser is provided by the psmisc package, which should already be installed as part of the base
group.
Session permissions
Note: You must be using systemd as your init system for local sessions to work.[1] It is required for
polkit permissions and ACLs for various devices (see /usr/lib/udev/rules.d/70-uaccess.rules
and [2])
This should contain Remote=no and Active=yes in the output. If it does not, make sure that X
runs on the same tty where the login occurred. This is required in order to preserve the logind
session.
A D-Bus session should also be started along with X. See D-Bus#Starting the user session for
more information on this.
Basic polkit actions do not require further set-up. Some polkit actions require further
authentication, even with a local session. A polkit authentication agent needs to be running for
this to work. See polkit#Authentication agents for more information on this.
Note: The program may also need to be rebuilt after a soname bump.
Use pacman or pkgfile to search for the package that owns the missing library:
extra/libusb-compat 0.1.5-1
usr/lib/libusb-0.1.so.4
Example: After an everyday routine update, or following the installation of a package, you are
given the following error:
This will most likely leave your system crippled. Any attempts to recompile/reinstall the
package(s) responsible for the breakage will fail, and any attempt to rebuild the initramfs will
result in the following:
# mkinitcpio -p linux
==> Building image from preset: 'default'
  -> -k /boot/vmlinuz-linux -c /etc/mkinitcpio.conf -g /boot/initramfs-linux.img
file: could not find any magic files!
==> ERROR: invalid kernel specifier: `/boot/vmlinuz-linux'
==> Building image from preset: 'fallback'
  -> -k /boot/vmlinuz-linux -c /etc/mkinitcpio.conf -g /boot/initramfs-linux-fallback.img -S autodetect
file: could not find any magic files!
==> ERROR: invalid kernel specifier: `/boot/vmlinuz-linux'
Note: arch-chroot leaves mounting the /boot partition up to the user.
# mkinitcpio -p linux
1. Reboot back to your installed system.
2. Once booted, reinstall the package that was responsible for leaving your system inoperable
using:
# pacman -S <package>
See also
1 Configuration
2 Security
3 Networking
o 3.1 Improving performance
o 3.2 TCP/IP stack hardening
4 Virtual memory
5 MDADM
6 Troubleshooting
o 6.1 Small periodic system freezes
7 See also
Configuration
Note: From version 207 and 21x, systemd only applies settings from /etc/sysctl.d/*.conf
and /usr/lib/sysctl.d/*.conf. If you had customized /etc/sysctl.conf, you need to
rename it as /etc/sysctl.d/99-sysctl.conf. If you had e.g. /etc/sysctl.d/foo, you need
to rename it to /etc/sysctl.d/foo.conf.
# sysctl --system
which will also output the applied hierarchy. A single parameter file can also be loaded explicitly
with
# sysctl -p filename.conf
See the new configuration files and more specifically sysctl.d(5) for more information.
The parameters available are those listed under /proc/sys/. For example, the kernel.sysrq
parameter refers to the file /proc/sys/kernel/sysrq on the file system. The sysctl -a
command can be used to display all currently available values.
Note: If you have the kernel documentation installed (linux-docs), you can find detailed
information about sysctl settings in /usr/lib/modules/$(uname -r)/build/Documentation/sysctl/.
It is highly recommended to read these before changing sysctl settings.
Settings can be changed through file manipulation or using the sysctl utility. For example, to
temporarily enable the magic SysRq key:
# sysctl kernel.sysrq=1
or, equivalently, by writing to the corresponding file:
# echo "1" > /proc/sys/kernel/sysrq
Tip: Some parameters that can be applied may depend on kernel modules which in turn might
not be loaded. For example parameters in /proc/sys/net/bridge/* depend on the
br_netfilter module. If it is not loaded at runtime (or after a reboot), those will silently not be
applied. See Kernel modules.
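To make a setting such as the SysRq example persistent, drop it into a file under /etc/sysctl.d/; the file name below is illustrative:

```
# /etc/sysctl.d/99-sysrq.conf (illustrative)
kernel.sysrq = 1
```

Reload all such files with sysctl --system, as described above.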
Security
See Security#Kernel hardening.
Networking
Improving performance
# The maximum size of the receive queue.
# The received frames will be stored in this queue after taking them from the
# ring buffer on the NIC.
# Use a high value for high-speed cards to prevent losing packets.
# In real-time applications like a SIP router, a long queue must be paired
# with a fast CPU, otherwise the data in the queue will become stale.
net.core.netdev_max_backlog = 65536

# The maximum ancillary buffer size allowed per socket.
# Ancillary data is a sequence of struct cmsghdr structures with appended
# data.
net.core.optmem_max = 65536

# The upper limit on the value of the backlog parameter passed to the listen
# function.
# Setting this higher is only needed on a single highly loaded server where
# the new connection rate is high/bursty.
net.core.somaxconn = 16384

# The default and maximum amount for the receive/send socket memory.
# By default the Linux network stack is not configured for high-speed large
# file transfer across WAN links; this is done to save memory resources.
# You can easily tune the Linux network stack by increasing network buffer
# sizes for high-speed networks that connect server systems, to handle more
# network packets.
net.core.rmem_default = 1048576
net.core.wmem_default = 1048576
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.udp_rmem_min = 16384
net.ipv4.udp_wmem_min = 16384

# TCP Fast Open is an extension to the transmission control protocol (TCP)
# that helps reduce network latency by enabling data to be exchanged during
# the sender's initial TCP SYN.
# If both your server and client run Linux 3.7.1 or higher, you can turn on
# fast_open for lower latency.
net.ipv4.tcp_fastopen = 3

# The maximum queue length of pending connections awaiting acknowledgment.
# In the event of a SYN flood DoS attack, this queue can fill up quickly, at
# which point tcp_syncookies will kick in, allowing your system to continue
# responding to legitimate traffic and giving you a chance to block the
# malicious IPs.
# If the server suffers from overloads at peak times, you may want to
# increase this value a little.
net.ipv4.tcp_max_syn_backlog = 65536

# The maximum number of sockets in the TIME_WAIT state.
# After reaching this number, the system will start destroying sockets in
# this state. Increase this to prevent simple DoS attacks.
net.ipv4.tcp_max_tw_buckets = 65536

# Whether TCP should start at the default window size only for new
# connections, or also for existing connections that have been idle for too
# long. This kills persistent single-connection performance and should be
# turned off.
net.ipv4.tcp_slow_start_after_idle = 0

# Whether TCP should reuse an existing connection in the TIME_WAIT state for
# a new outgoing connection, if the new timestamp is strictly bigger than the
# most recent timestamp recorded for the previous connection.
# This helps avoid running out of available network sockets.
net.ipv4.tcp_tw_reuse = 1

# Fast-fail FIN connections, which are useless.
net.ipv4.tcp_fin_timeout = 15

# TCP keepalive is a mechanism that helps determine whether the other end of
# a TCP connection has stopped responding.
# TCP will send keepalive probes containing null data to the network peer
# several times after a period of idle time. If the peer does not respond,
# the socket will be closed automatically.
# By default, the TCP keepalive process waits for two hours (7200 seconds)
# of socket inactivity before sending the first keepalive probe, and then
# resends it every 75 seconds. As long as TCP/IP socket communication is
# going on and active, no keepalive packets are needed.
# With the following settings, your application will detect dead TCP
# connections after 120 seconds (60s + 10s + 10s + 10s + 10s + 10s + 10s).
net.ipv4.tcp_keepalive_time = 60
net.ipv4.tcp_keepalive_intvl = 10
net.ipv4.tcp_keepalive_probes = 6

# The larger the MTU, the better for performance, but the worse for
# reliability, because a lost packet means more data to retransmit, and
# because many routers on the Internet cannot deliver very long packets.
# Enable smart MTU discovery when an ICMP black hole is detected.
net.ipv4.tcp_mtu_probing = 1

# Turn timestamps off to reduce performance spikes related to timestamp
# generation.
net.ipv4.tcp_timestamps = 0
The following specifies a parameter set to tighten the kernel's network security options for the
IPv4 protocol, along with related IPv6 parameters where an equivalent exists.
For some use cases, for example using the system as a router, other parameters may be useful
or required as well.
/etc/sysctl.d/51-net.conf
Virtual memory
There are several key parameters to tune the operation of the virtual memory (VM) subsystem of
the Linux kernel and the write out of dirty data to disk. See the official Linux kernel
documentation for more information. For example:
vm.dirty_ratio = 3
Contains, as a percentage of total available memory that contains free pages and
reclaimable pages, the number of pages at which a process which is generating disk
writes will itself start writing out dirty data.
vm.dirty_background_ratio = 2
Contains, as a percentage of total available memory that contains free pages and
reclaimable pages, the number of pages at which the background kernel flusher threads
will start writing out dirty data.
As noted in the comments for the parameters, one needs to consider the total amount of RAM
when setting these values. For example, simplifying by taking the installed system RAM instead
of available memory:
Consensus is that setting vm.dirty_ratio to 10% of RAM is a sane value if RAM is say
1 GB (so 10% is 100 MB). But if the machine has much more RAM, say 16 GB (10% is
1.6 GB), the percentage may be out of proportion as it becomes several seconds of
writeback on spinning disks. A more sane value in this case is 3 (3% of 16 GB is
approximately 491 MB).
Similarly, setting vm.dirty_background_ratio to 5 may be just fine for small memory
values, but again, consider and adjust accordingly for the amount of RAM on a particular
system.
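The RAM arithmetic above can be checked with plain shell arithmetic; 16 GB is taken as 16 × 1024 MB, and the percentages are the examples from the text:

```shell
ram_mb=$((16 * 1024))   # 16 GB of RAM, expressed in MB

echo "10% of ${ram_mb} MB = $((ram_mb * 10 / 100)) MB"   # 1638 MB, i.e. ~1.6 GB
echo "3%  of ${ram_mb} MB = $((ram_mb * 3 / 100)) MB"    # 491 MB
```

This matches the "approximately 491 MB" figure quoted for vm.dirty_ratio = 3 on a 16 GB machine.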
vm.vfs_cache_pressure = 60
The value controls the tendency of the kernel to reclaim the memory which is used for
caching of directory and inode objects (VFS cache). Lowering it from the default value of
100 makes the kernel less inclined to reclaim VFS cache (do not set it to 0, this may
produce out-of-memory conditions).
MDADM
When the kernel performs a resync operation on a software RAID device, it tries not to create a
high system load by restricting the speed of the operation. Using sysctl, it is possible to
change the lower and upper speed limits.
If mdadm is compiled as a module (md_mod), the above settings are available only after the
module has been loaded. If the settings are to be applied on boot via /etc/sysctl.d, the md_mod
module may need to be loaded beforehand through /etc/modules-load.d.
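The limits in question are exposed as the dev.raid.* sysctl parameters; a sketch of a persistent configuration file, with illustrative values in KiB/s per device:

```
# /etc/sysctl.d/60-mdadm.conf (illustrative values)
dev.raid.speed_limit_min = 50000
dev.raid.speed_limit_max = 200000
```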
Troubleshooting
Small periodic system freezes
vm.dirty_background_bytes = 4194304
vm.dirty_bytes = 4194304
See also
1 Configuration
o 1.1 Syslinux
o 1.2 systemd-boot
o 1.3 GRUB
o 1.4 GRUB Legacy
o 1.5 LILO
o 1.6 rEFInd
o 1.7 EFISTUB
o 1.8 Hijacking cmdline
2 Parameter list
3 See also
Configuration
Note:
You can check the parameters your system was booted up with by running cat
/proc/cmdline and see if it includes your changes.
The Arch Linux installation medium uses Syslinux for BIOS systems, and systemd-boot
for UEFI systems.
Kernel parameters can be set either temporarily by editing the boot menu when it shows up, or
by modifying the boot loader's configuration file.
The following examples add the quiet and splash parameters to Syslinux, systemd-boot,
GRUB, GRUB Legacy, LILO, and rEFInd.
Syslinux
Press Tab when the menu shows up and add them at the end of the string:
systemd-boot
Press e when the menu appears and add the parameters to the end of the string:
GRUB
Press e when the menu shows up and add them on the linux line:
To make the change persistent after reboot, while you could manually edit
/boot/grub/grub.cfg with the exact line from above, the best practice is to:
GRUB Legacy
Press e when the menu shows up and add them on the kernel line:
For more information on configuring GRUB Legacy, see the GRUB Legacy article.
LILO
image=/boot/vmlinuz-linux
...
quiet splash
rEFInd
To make the change persistent after reboot, edit /boot/refind_linux.conf and append
them to all/required lines, for example
If you have disabled auto-detection of OSes in rEFInd and are defining OS stanzas
instead in esp/EFI/refind/refind.conf to load your OSes, you can edit it like:
EFISTUB
Hijacking cmdline
Even without access to your bootloader it is possible to change your kernel parameters to enable
debugging (if you have root access). This can be accomplished by overwriting /proc/cmdline
which stores the kernel parameters. However /proc/cmdline is not writable even as root, so this
hack is accomplished by using a bind mount to mask the path.
The -n option skips adding the mount to /etc/mtab, so it will work even if root is mounted
read-only. You can cat /proc/cmdline to confirm that your change was successful.
Parameter list
This list is not comprehensive. For a complete list of all options, please see the kernel
documentation.
parameter             Description
root=                 Root filesystem.
rootflags=            Root filesystem mount options.
ro                    Mount root device read-only on boot (default 1).
rw                    Mount root device read-write on boot.
initrd=               Specify the location of the initial ramdisk.
init=                 Run specified binary instead of /sbin/init (symlinked to systemd in Arch) as init process.
init=/bin/sh          Boot to shell.
systemd.unit=         Boot to a specified target.
resume=               Specify a swap device to use when waking from hibernation.
nomodeset             Disable Kernel mode setting.
zswap.enabled         Enable Zswap.
video=<videosetting>  Override framebuffer video defaults.
1 mkinitcpio uses ro as the default value when neither rw nor ro is set by the boot loader. Boot
loaders may set the value to use; for example, GRUB uses rw by default (see FS#36275 as a
reference).
See also
01.org
Note
This document is obsolete. In most cases, rather than using patch manually, you’ll almost
certainly want to look at using Git instead.
A frequently asked question on the Linux Kernel Mailing List is how to apply a patch to the
kernel or, more specifically, what base kernel a patch for one of the many trees/branches should
be applied to. Hopefully this document will explain this to you.
In addition to explaining how to apply and revert patches, a brief description of the different
kernel trees (and examples of how to apply their specific patches) is also provided.
What is a patch?
A patch is a small text document containing a delta of changes between two different versions of
a source tree. Patches are created with the diff program.
To correctly apply a patch you need to know what base it was generated from and what new
version the patch will change the source tree into. These should both be present in the patch file
metadata or be possible to deduce from the filename.
Patches for the Linux kernel are generated relative to the parent directory holding the kernel
source dir.
This means that paths to files inside the patch file contain the name of the kernel source
directories it was generated against (or some other directory names like “a/” and “b/”).
Since this is unlikely to match the name of the kernel source dir on your local machine (but is
often useful info to see what version an otherwise unlabeled patch was generated against) you
should change into your kernel source directory and then strip the first element of the path from
filenames in the patch file when applying it (the -p1 argument to patch does this).
To revert a previously applied patch, use the -R argument to patch. So, if you applied a patch like
this:
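A minimal end-to-end sketch of generating, applying, and then reverting a patch, using throwaway files in a temporary directory (all names are illustrative; the patch program must be installed):

```shell
cd "$(mktemp -d)"

# Two versions of a tiny "source tree".
mkdir -p linux-a linux-b
echo "old line" > linux-a/file.c
echo "new line" > linux-b/file.c

# Generate the patch relative to the parent directory, as kernel patches are.
# (diff exits non-zero when the files differ, hence the || true.)
diff -u linux-a/file.c linux-b/file.c > my.patch || true

# Apply it from inside the tree, stripping the first path element (-p1).
cd linux-a
patch -p1 < ../my.patch
cat file.c            # now contains "new line"

# Revert the same patch with -R.
patch -R -p1 < ../my.patch
cat file.c            # back to "old line"
```

The same -p1/-R mechanics apply unchanged to real kernel patches.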
In all the examples below I feed the file (in uncompressed form) to patch via stdin using the
following syntax:
If you just want to be able to follow the examples below and don’t want to know of more than
one way to use patch, then you can stop reading this section here.
Patch can also get the name of the file to use via the -i argument, like this:
If your patch file is compressed with gzip or xz and you don’t want to uncompress it before
applying it, then you can feed it to patch like this instead:
If you wish to uncompress the patch file by hand first before applying it (what I assume you’ve
done in the examples below), then you simply run gunzip or xz on the file – like this:
gunzip patch-x.y.z.gz
xz -d patch-x.y.z.xz
Which will leave you with a plain text patch-x.y.z file that you can feed to patch via stdin or the
-i argument, as you prefer.
A few other nice arguments for patch are -s which causes patch to be silent except for errors
which is nice to prevent errors from scrolling out of the screen too fast, and --dry-run which
causes patch to just print a listing of what would happen, but doesn’t actually make any changes.
Finally --verbose tells patch to print more information about the work being done.
Checking that the file looks like a valid patch file, and checking that the code around the bits
being modified matches the context provided in the patch, are just two of the basic sanity
checks patch does.
If patch encounters something that doesn’t look quite right it has two options. It can either refuse
to apply the changes and abort or it can try to find a way to make the patch apply with a few
minor changes.
One example of something that's not 'quite right' that patch will attempt to fix up is if all the
context matches, the lines being changed match, but the line numbers are different. This can
happen, for example, if the patch makes a change in the middle of the file but for some reason a
few lines have been added or removed near the beginning of the file. In that case everything
looks good; it has just moved up or down a bit, and patch will usually adjust the line numbers
and apply the patch.
Whenever patch applies a patch that it had to modify a bit to make it fit, it will tell you about it
by saying the patch applied with fuzz. You should be wary of such changes, since even though
patch probably got it right, it does not /always/ get it right, and the result will sometimes be
wrong.
When patch encounters a change that it can’t fix up with fuzz it rejects it outright and leaves a
file with a .rej extension (a reject file). You can read this file to see exactly what change
couldn’t be applied, so you can go fix it up by hand if you wish.
If you don’t have any third-party patches applied to your kernel source, but only patches from
kernel.org and you apply the patches in the correct order, and have made no modifications
yourself to the source files, then you should never see a fuzz or reject message from patch. If you
do see such messages anyway, then there’s a high risk that either your local source tree or the
patch file is corrupted in some way. In that case you should probably try re-downloading the
patch and if things are still not OK then you’d be advised to start with a fresh tree downloaded in
full from kernel.org.
Let’s look a bit more at some of the messages patch can produce.
If patch stops and presents a File to patch: prompt, then patch could not find a file to be
patched. Most likely you forgot to specify -p1 or you are in the wrong directory. Less often,
you’ll find patches that need to be applied with -p0 instead of -p1 (reading the patch file should
reveal if this is the case – if so, then this is an error by the person who created the patch but is not
fatal).
If you get Hunk #2 succeeded at 1887 with fuzz 2 (offset 7 lines). or a message
similar to that, then it means that patch had to adjust the location of the change (in this example it
needed to move 7 lines from where it expected to make the change to make it fit).
The resulting file may or may not be OK, depending on the reason the file was different than
expected.
This often happens if you try to apply a patch that was generated against a different kernel
version than the one you are trying to patch.
If you get a message like Hunk #3 FAILED at 2387., then it means that the patch could not be
applied correctly and the patch program was unable to fuzz its way through. This will generate a
.rej file with the change that caused the patch to fail and also a .orig file showing you the
original content that couldn’t be changed.
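A failed hunk can be demonstrated the same way; in this sketch (throwaway files again) the target shares no context with what the patch expects, so patch writes a reject file:

```shell
#!/bin/sh
# Toy demonstration of a rejected hunk and the resulting .rej file.
set -e
cd "$(mktemp -d)"
seq 1 20 > old
sed 's/^10$/ten/' old > new
diff -u old new > change.patch || true
seq 31 50 > target                    # nothing in common with the expected context
patch target < change.patch || true   # patch reports the hunk FAILED
cat target.rej                        # the change that could not be applied
```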
If you get Reversed (or previously applied) patch detected! Assume -R? [n] then
patch detected that the change contained in the patch seems to have already been made.
If you actually did apply this patch previously and you just re-applied it in error, then just say
[n]o and abort this patch. If you applied this patch previously and actually intended to revert it,
but forgot to specify -R, then you can say [y]es here to make patch revert it for you.
This can also happen if the creator of the patch reversed the source and destination directories
when creating the patch, and in that case reverting the patch will in fact apply it.
As I already mentioned above, these errors should never happen if you apply a patch from
kernel.org to the correct version of an unmodified source tree. So if you get these errors with
kernel.org patches then you should probably assume that either your patch file or your tree is
broken and I’d advise you to start over with a fresh download of a full kernel tree and the patch
you wish to apply.
Interdiff will let you move from something like 4.7.2 to 4.7.3 in a single step. The -z flag to interdiff
will even let you feed it patches in gzip or bzip2 compressed form directly without the use of
zcat or bzcat or manual decompression.
Although interdiff may save you a step or two you are generally advised to do the additional
steps since interdiff can get things wrong in some cases.
Another alternative is ketchup, which is a python script for automatic downloading and applying
of patches (http://www.selenic.com/ketchup/).
Other nice tools are diffstat, which shows a summary of changes made by a patch; lsdiff, which
displays a short listing of affected files in a patch file, along with (optionally) the line numbers of
the start of each patch; and grepdiff, which displays a list of the files modified by a patch where
the patch contains a given regular expression.
If regressions or other serious flaws are found, then a -stable fix patch will be released (see
below) on top of this base. Once a new 4.x base kernel is released, a patch is made available that
is a delta between the previous 4.x kernel and the new one.
To apply a patch moving from 4.6 to 4.7, you’d do the following (note that such patches do NOT
apply on top of 4.x.y kernels but on top of the base 4.x kernel – if you need to move from 4.x.y
to 4.x+1 you need to first revert the 4.x.y patch).
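As a runnable sketch of that workflow (a toy one-file tree and a locally generated diff stand in for the real linux-4.6 source and kernel.org's patch-4.7):

```shell
#!/bin/sh
# Toy version of applying a base-release patch: apply with -p1 from inside
# the source tree, then rename the tree to match its new version.
set -e
cd "$(mktemp -d)"
mkdir linux-4.6
echo 'VERSION = 4.6' > linux-4.6/Makefile
echo 'VERSION = 4.7' > new-makefile
diff -u linux-4.6/Makefile new-makefile > patch-4.7 || true
# Rewrite the headers to the usual a/ and b/ prefixes found in kernel patches:
sed -i -e 's|^--- linux-4.6/Makefile|--- a/Makefile|' \
       -e 's|^+++ new-makefile|+++ b/Makefile|' patch-4.7
cd linux-4.6
patch -p1 < ../patch-4.7    # -p1 strips the leading a/ or b/ path component
cd ..
mv linux-4.6 linux-4.7      # rename the source dir to match its new version
grep 'VERSION = 4.7' linux-4.7/Makefile
```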
This is the recommended branch for users who want the most recent stable kernel and are not
interested in helping test development/experimental versions.
If no 4.x.y kernel is available, then the highest numbered 4.x kernel is the current stable kernel.
Note
The -stable team usually do make incremental patches available as well as patches against the
latest mainline release, but I only cover the non-incremental ones below. The incremental ones
can be found at https://www.kernel.org/pub/linux/kernel/v4.x/incr/
These patches are not incremental, meaning that for example the 4.7.3 patch does not apply on
top of the 4.7.2 kernel source, but rather on top of the base 4.7 kernel source.
So, in order to apply the 4.7.3 patch to your existing 4.7.2 kernel source you have to first back
out the 4.7.2 patch (so you are left with a base 4.7 kernel source) and then apply the new 4.7.3
patch.
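The back-out-then-apply sequence can be sketched with toy files (the patch-4.7.2 and patch-4.7.3 here are generated on the spot against a one-file "tree", not real kernel patches):

```shell
#!/bin/sh
# Toy version of moving 4.7.2 -> 4.7.3: revert the old -stable patch with
# -R to return to the base 4.7 tree, then apply the new -stable patch.
set -e
cd "$(mktemp -d)"
echo 'base 4.7'  > base
echo 'fix 4.7.2' > v472
echo 'fix 4.7.3' > v473
diff -u base v472 > patch-4.7.2 || true
diff -u base v473 > patch-4.7.3 || true
# Give both patches the usual a/ b/ header prefixes:
sed -i -e 's|^--- base|--- a/file|' -e 's|^+++ v472|+++ b/file|' patch-4.7.2
sed -i -e 's|^--- base|--- a/file|' -e 's|^+++ v473|+++ b/file|' patch-4.7.3
mkdir tree && cp base tree/file       # "tree" starts as a base 4.7 source
cd tree
patch -p1 < ../patch-4.7.2            # tree is now at 4.7.2
patch -R -p1 < ../patch-4.7.2         # back out 4.7.2: base 4.7 again
patch -p1 < ../patch-4.7.3            # apply 4.7.3 on the base tree
grep 'fix 4.7.3' file
```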
These kernels are not stable and you should expect occasional breakage if you intend to run
them. This is however the most stable of the main development branches and is also what will
eventually turn into the next stable kernel, so it is important that it be tested by as many people as
possible.
This is a good branch to run for people who want to help out testing development kernels but do
not want to run some of the really experimental stuff (such people should see the sections
about -next and -mm kernels below).
The -rc patches are not incremental, they apply to a base 4.x kernel, just like the 4.x.y patches
described above. The kernel version before the -rcN suffix denotes the version of the kernel that
this -rc kernel will eventually turn into.
So, 4.8-rc5 means that this is the fifth release candidate for the 4.8 kernel and the patch should be
applied on top of the 4.7 kernel source.
In the past, the -mm tree was also used to test subsystem patches, but this function is now handled
by the linux-next <https://www.kernel.org/doc/man-pages/linux-next.html> tree. Subsystem
maintainers push their patches first to linux-next and, during the merge window, send them
directly to Linus.
The -mm patches serve as a sort of proving ground for new features and other experimental
patches that aren’t merged via a subsystem tree. Once such a patch has proved its worth in -mm
for a while, Andrew pushes it on to Linus for inclusion in mainline.
The linux-next tree is updated daily and includes the -mm patches. Both are in constant flux and
contain many experimental features, a lot of debugging patches not appropriate for mainline,
etc., and are the most experimental of the branches described in this document.
These patches are not appropriate for use on systems that are supposed to be stable and they are
more risky to run than any of the other branches (make sure you have up-to-date backups – that
goes for any experimental kernel but even more so for -mm patches or a kernel from the
linux-next tree).
Testing of -mm patches and linux-next is greatly appreciated since the whole point of those trees
is to weed out regressions, crashes, data corruption bugs, build breakage (and any other bug in
general) before changes are merged into the more stable mainline Linus tree.
But testers of -mm and linux-next should be aware that breakages are more common than in any
other tree.
This concludes this list of explanations of the various kernel trees. I hope you are now clear on
how to apply the various patches and help test the kernel.
Thank you’s to Randy Dunlap, Rolf Eike Beer, Linus Torvalds, Bodo Eggert, Johannes
Stezenbach, Grant Coady, Pavel Machek and others that I may have forgotten for their reviews
and contributions to this document.
yolinux.com
See Distribution errata and security fixes (See Yolinux home page for list). [e.g. Red Hat
Linux Errata]
Update your system where appropriate.
o Red Hat/CentOS:
yum check-update
(Print list of packages to be updated.)
yum update
Note that this can be automated using the /etc/init.d/yum-updatesd service
(RHEL/CentOS 5) or by creating a cron job /etc/cron.daily/yum.cron:
#!/bin/sh
/usr/bin/yum -R 120 -e 0 -d 0 -y update yum
/usr/bin/yum -R 10 -e 0 -d 0 -y update
o Ubuntu/Debian:
apt-get update
(Update package list to the latest version associated with that release of the
OS.)
apt-get upgrade
Reduce the number of network services exposed. These will be started by scripts in
/etc/rc.d/rc*.d/ directories. (See full list of services in: /etc/init.d/) There may
be no need to run sendmail (mail server), portmap (RPC listener required by NFS), lpd
(Line printer server daemon. Hackers probe my system for this service all the time.), innd
(News server), linuxconf etc. For example, sendmail can be removed from the boot
process using the command: chkconfig --del sendmail or by using the configuration
tool ntsysv. The service can be terminated using the command
/etc/rc.d/init.d/sendmail stop. At the very least one should run the command
chkconfig --list to see what processes are configured to be operable after boot-up.
See the YoLinux init process tutorial
Verify your configuration. List the open ports and processes which hold them: netstat
-punta (Also try netstat -nlp)
List RPC services: [root]# rpcinfo -p localhost
Ideally you would NOT be running portmapper so no RPC services would be available.
Turn off portmapper: service portmap stop (or: /etc/init.d/portmap stop) and
remove it from the system boot sequence: chkconfig --del portmap (Portmap is
required by NFS.)
Anonymous FTP (Using wu_ftpd - Last shipped with RH 8.0. RH 9 and FC use vsftpd):
By default Red Hat comes configured for anonymous FTP. This allows users to ftp to
your server and log in with the login anonymous and use an email address as the
password. If you wish to turn off this feature edit the file /etc/ftpaccess and change:
class all real,guest,anonymous *
to
class all real,guest *
For more on FTP configuration see: YoLinux Web server FTP configuration tutorial
Use the find command to locate vulnerabilities - find suid and sgid files (which can
execute with the privileges of their owner or group, often root) as well as world writable files and directories. For example:
o find / -xdev \( -perm -4000 -o -perm -2000 \) -type f -print
Remove suid privileges on executable programs with the command: chmod -s
filename
o find / -xdev \( -nouser -o -nogroup \) -print
Find files not owned by a valid user or group.
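The effect of the suid scan and of chmod -s can be sketched safely in a scratch directory (the file name is hypothetical; never strip suid bits from real system binaries blindly):

```shell
#!/bin/sh
# Toy demonstration: find a file carrying the suid bit, then strip it.
set -e
cd "$(mktemp -d)"
touch suspicious
chmod 4755 suspicious                      # set the suid bit
find . -xdev -perm -4000 -type f -print    # the scan reports it
chmod -s suspicious                        # remove suid/sgid privileges
find . -xdev -perm -4000 -type f -print    # now reports nothing
```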
Use the commands chattr and lsattr to make a sensitive security file un-modifiable over
and above the usual permissions.
Make a file un-modifiable: chattr +i /bin/ls
Make directories un-modifiable: chattr -R +i /bin /sbin /boot /lib
Make a file append only: chattr +a /var/log/messages
Use "tripwire" [sourceforge: tripwire] for security monitoring of your system for signs of
unauthorized file changes. Tripwire is offered as part of the base Red Hat and Ubuntu
distributions. Tripwire configuration is covered below.
Watch your log files especially /var/log/messages and /var/log/secure.
Avoid generic account names such as guest.
Use PAM network wrapper configurations to disallow passwords which can be found
easily by crack or other hacking programs. PAM authentication can also disallow root
network login access. (Default Red Hat configuration. You must login as a regular user
and su - to obtain root access. This is NOT the default for ssh and must be changed as
noted below.)
See YoLinux Network Admin Tutorial on using PAM
Remote access should NOT be done with clear text telnet but with an encrypted
connection using ssh. (Later in this tutorial)
Proc file settings for defense against attacks. This includes protective measures against IP
spoofing, SYN flood or syncookie attacks.
DDoS (Distributed Denial of Service) attacks: The only thing you can do is have gobs of
bandwidth and processing power/firewall. Lots of processing power or a firewall are
useless without gobs of bandwidth as the network can get overloaded from a distributed
attack.
Also see:
o Turn off ICMP (look invisible to network scans)
o Monitor the attack with tcpdump
Unfortunately the packets are usually spoofed and in my case the FBI didn't care. If the
server is a remote server, have a dial-up modem or a second IP address and route for
access because the attacked route is blocked by the flood of network attacks. You can
also request that your ISP drop ICMP traffic to the IP addresses of your servers. (and
UDP if all you are running is a web server. DNS name servers use UDP.) For very
interesting reading see "The Strange Tale" of the GRC.com DDoS attack. (Very
interesting read about the anatomy of the hacker bot networks.)
Remove un-needed users from the system. See /etc/passwd. By default Red Hat
installations have many user accounts created to support various processes. If you do not
intend to run these processes, remove the users. i.e. remove user ids games, uucp, rpc,
rpcd, ...
xinetd:
It is best for security reasons that you reduce the number of inetd network services
exposed. The more services exposed, the greater your vulnerability. Reduce the number
of network services accessible through the xinetd or inetd daemon by:
o inetd: (Red Hat 7.0 and earlier) Comment out un-needed services in the
/etc/inetd.conf file.
Sample: (FTP is the only service I run)
o ftp stream tcp nowait root /usr/sbin/tcpd in.ftpd -l -a
o xinetd: (Red Hat 7.1 and later) All network services are turned off by default
during an upgrade. Sample file: /etc/xinetd.d/wu-ftpd:
o service ftp
o {
o disable = yes - Controls whether xinetd serves this
service (yes = service disabled)
o socket_type = stream
o wait = no
o user = root
o server = /usr/sbin/in.ftpd
o server_args = -l -a
o log_on_success += DURATION USERID
o log_on_failure += USERID
o nice = 10
o }
Tip:
List init settings including all xinetd controlled services: chkconfig --list
List status of services (Red Hat/Fedora Core based systems): service --status-all
Kernel Configuration:
Use Linux firewall rules to protect against attacks. (iptables: kernel 2.6, 2.4 or ipchains:
kernel 2.2) Access denial rules can also be implemented on the fly by portsentry.
(Place at the end of /etc/rc.d/rc.local to be executed upon system boot, or some
other appropriate script)
o iptables script:
o # Allow loopback access. These rules must come before the rules denying port access!!
o iptables -A INPUT -i lo -p all -j ACCEPT - This rule is essential if you want your own computer to be able to access itself through the loopback interface
o iptables -A OUTPUT -o lo -p all -j ACCEPT
o iptables -A INPUT -p tcp -s 0/0 -d 0/0 --dport 2049 -j DROP - Block NFS
o iptables -A INPUT -p udp -s 0/0 -d 0/0 --dport 2049 -j DROP - Block NFS
o iptables -A INPUT -p tcp -s 0/0 -d 0/0 --dport 6000:6009 -j DROP - Block X-Windows
o iptables -A INPUT -p tcp -s 0/0 -d 0/0 --dport 7100 -j DROP - Block X-Windows font server
o iptables -A INPUT -p tcp -s 0/0 -d 0/0 --dport 515 -j DROP - Block printer port
o iptables -A INPUT -p udp -s 0/0 -d 0/0 --dport 515 -j DROP - Block printer port
o iptables -A INPUT -p tcp -s 0/0 -d 0/0 --dport 111 -j DROP - Block Sun rpc/NFS
o iptables -A INPUT -p udp -s 0/0 -d 0/0 --dport 111 -j DROP - Block Sun rpc/NFS
o iptables -A INPUT -p all -s localhost -i eth0 -j DROP - Deny outside packets from the internet which claim to be from your loopback interface.
o ipchains script:
o ipchains -A input -p tcp -s 0/0 -d 0/0 2049 -y -j REJECT - Block NFS
o ipchains -A input -p udp -s 0/0 -d 0/0 2049 -j REJECT - Block NFS
o ipchains -A input -p tcp -s 0/0 -d 0/0 6000:6009 -y -j REJECT - Block X-Windows
o ipchains -A input -p tcp -s 0/0 -d 0/0 7100 -y -j REJECT - Block X-Windows font server
o ipchains -A input -p tcp -s 0/0 -d 0/0 515 -y -j REJECT - Block printer port
o ipchains -A input -p udp -s 0/0 -d 0/0 515 -j REJECT - Block printer port
o ipchains -A input -p tcp -s 0/0 -d 0/0 111 -y -j REJECT - Block Sun rpc/NFS
o ipchains -A input -p udp -s 0/0 -d 0/0 111 -j REJECT - Block Sun rpc/NFS
o ipchains -A input -j REJECT -p all -s localhost -i eth0 -l - Deny and log ("-l") outside packets from the internet which claim to be from your loopback interface.
Note:
o iptables uses the chain rule "INPUT" and ipchains uses the lower case descriptor
"input".
o View rules with iptables -L or ipchains -L command.
o iptables man page
o When running an internet web server it is best, from a security point of view, that
one NOT run printing, X-Window, NFS or any services which may be exploited
if a vulnerability is discovered or if they are mis-configured, regardless of firewall rules.
Also see:
o cat /proc/sys/kernel/exec-shield
o cat /proc/sys/kernel/randomize_va_space
It is well known that there are various blocks of IP addresses where nefarious hackers and spam
bots reside. These IP blocks were often once owned by legitimate corporations and organizations
but have fallen into an unsupervised realm or have been hijacked and sold to criminal spammers.
These IP blocks should be blocked by firewall rules.
There are various friendly services which seek and discover these IP blocks to firewall and deny
and they share this information with us. Thanks!
The Spamhaus drop list: This is a script to download the total drop list and generate an iptables
filter script to block these very IP addresses:
#!/bin/bash
# Blacklist of hacker zones and bad domains from spamhaus.org
FILE=drop.lasso
/bin/rm -f $FILE
wget http://www.spamhaus.org/drop/drop.lasso
blocks=$(cat $FILE | egrep -v '^;' | awk '{ print $1}')
echo "#!/bin/bash" > Spamhaus-drop.lasso.sh
for ipblock in $blocks
do
echo "iptables -I INPUT -s $ipblock -j DROP" >> Spamhaus-drop.lasso.sh
done
chmod ugo+x Spamhaus-drop.lasso.sh
echo "...Done"
To block the IP addresses just execute the script on each of your servers:
./Spamhaus-drop.lasso.sh
At the very minimum, these blocks of IP addresses should be denied by all servers.
Block or allow by country: One can deny access by certain countries or the inverse, allow only
certain countries to access your server.
Block forum and comment list spammers: Use the list generated from honeypots operated by
StopForumSpam.com
#!/bin/bash
# Big list of IP addresses to block
# IPs gathered from the last 30 days
# Over 100k IP addresses
rm -f listed_ip_30.zip
wget http://www.stopforumspam.com/downloads/listed_ip_30.zip
rm -f listed_ip_30.txt
unzip listed_ip_30.zip
Be aware that this is an extremely long list and can take hours to run. It is also a rapidly changing
list which is updated constantly.
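A hedged sketch of the generation step, mirroring the Spamhaus script above (the output script name is made up, and a tiny stand-in list is fabricated here so the sketch is self-contained; in real use listed_ip_30.txt comes from the unzip step):

```shell
#!/bin/sh
# Turn the downloaded IP list into a script of iptables DROP rules.
set -e
cd "$(mktemp -d)"
printf '192.0.2.1\n198.51.100.7\n' > listed_ip_30.txt   # stand-in data
echo '#!/bin/bash' > block-forum-spammers.sh
awk '{ print "iptables -I INPUT -s " $1 " -j DROP" }' listed_ip_30.txt \
    >> block-forum-spammers.sh
chmod u+x block-forum-spammers.sh
cat block-forum-spammers.sh   # review, then run as root on each server
```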
[Potential Pitfall]:
I found that by slowing down the execution of the script, I can avoid this error. I added a bash
echo to write each line to the screen and it behaved much better although also much slower.
#!/bin/bash
set -x verbose
/sbin/iptables -I INPUT -s XX.XX.XX.XX -j DROP
...
Identify the enemy:
Apache modules: Turn off modules you are not going to use. With past ssl exploits,
those using this philosophy did not get burned.
o Red Hat EL 5/CentOS 5 Apache 2.2: The configuration file
/etc/httpd/conf.d/ssl.conf enables SSL by default. This file is picked up
from the line Include conf.d/*.conf in the file
/etc/httpd/conf/httpd.conf. Rename the file /etc/httpd/conf.d/ssl.conf
to ssl.conf_OFF to turn off SSL (any file ending with ".conf" is included in the
web server configuration).
o Ubuntu 8.04: a2dismod ssl
This will disable the loading of SSL. The Ubuntu distribution has a fairly frugal
use of modules by default.
The default configuration has SSL turned off.
o Apache 1.3.x config file /etc/httpd/conf/httpd.conf
o #<IfDefine HAVE_SSL>
o #LoadModule ssl_module modules/libssl.so
o #</IfDefine>
o ...
o ...
o #<IfDefine HAVE_SSL>
o #AddModule mod_ssl.c
o #</IfDefine>
o ...
o ...
o <IfDefine HAVE_SSL>
o Listen 80
o #Listen 443
o </IfDefine>
o ...
o ...
o #<IfModule mod_ssl.c>
o #...
o #...
o ...
o #<VirtualHost _default_:443>
o #...
o #...
o ...
Comment out the use of the ssl module by placing a "#" in the first column.
o One can also block the https port 443 using firewall rules:
o iptables -A INPUT -p tcp -s 0/0 -d 0/0 --dport 443 -j DROP
o iptables -A INPUT -p udp -s 0/0 -d 0/0 --dport 443 -j DROP
Apache version exposure: (Version 1.3+) Don't allow hackers to learn which version of
the web server software you are running by inducing an error and thus an automated
server response. Attacks are often version specific. Spammers also trigger errors to find
email addresses.
...
ServerAdmin webmaster at megacorp dot com
ServerSignature Off
...
The response may be meaningless anyway if you are using the web server as a proxy to
another.
Block hackers and countries which will never use your website. Use the Apache directive
Deny from to block access.
<Directory /home/projectx/public_html>
...
...
...
Order allow,deny
# Block form bots
Deny from 88.191.0.0/16 193.200.193.0/24 194.8.74.0/23
Allow from all
</Directory>
For extensive lists of IP addresses to block, see the Wizcrafts.net block list
The SSH protocol suite of network connectivity tools is used to encrypt connections across the
internet. SSH encrypts all traffic, including logins and passwords, to effectively eliminate network
sniffing, connection hijacking, and other network-level attacks. In a regular telnet session the
password is transmitted across the Internet un-encrypted.
SSH on Linux refers to OpenSSH secure shell terminal and sftp/scp file transfer connections.
SSH is also a commercial product but available freely for non-commercial use from SSH
Communications Security at http://www.ssh.com/. Two versions are available, SSH1 (now very
old) and SSH2 (current). The commercial version of SSH can be purchased and/or downloaded
from their web site. Note that SSH1 does have major vulnerability issues. The "woot-project"
web site cracking and defacing gang uses this vulnerability. DO NOT USE SSH1
PROTOCOL!!!!! ("woot-project" exploit/attack description/recovery)
OpenSSH was developed by the OpenBSD Project and is freely available. OpenSSH is
compatible with SSH1 and SSH2. OpenSSH relies on the OpenSSL project for the encrypted
communications layer. Current releases of Linux come with OpenSSH/OpenSSL.
Links:
OpenSSH:
Download:
o Download OpenSSH RPM's (sourceforge) - statically linked with OpenSSL 0.9.5
- Pick this one for an easy complete RPM install
o Download OpenSSH source (tgz)
o Red Hat Linux 6.x Open SSL RPM downloads (redhat.com) (SSL only)
Note: SSH and SSL are included with Red Hat Linux 7.0+
Installation:
o Common to Client and Server:
Red Hat/Fedora/CentOS:
rpm -ivh openssh-2.xxx-x.x.x86.rpm
Ubuntu/Debian:
apt-get install ssh
o Client:
Red Hat/Fedora/CentOS:
rpm -ivh openssh-askpass-2.xxx-x.x.x86.rpm
rpm -ivh openssh-clients-2.xxx-x.x.x86.rpm
rpm -ivh openssh-askpass-gnome-2.xxx-x.x.x86.rpm - Gnome
desktop users
Ubuntu/Debian:
apt-get install openssh-client ssh-askpass-gnome
o Server:
Red Hat/Fedora/CentOS:
rpm -ivh openssh-server-2.xxx-x.x.x86.rpm
Ubuntu/Debian:
apt-get install openssh-server
If upgrading from SSH1 you may have to use the RPM option --force.
The rpm will install the appropriate binaries, configuration files and openssh-server will
install the init script /etc/rc.d/init.d/sshd so that sshd will start upon system boot.
Configuration:
o Client configuration file /etc/ssh/ssh_config: (Default)
o # $OpenBSD: ssh_config,v 1.9 2001/03/10 12:53:51 deraadt Exp $
o
o # This is ssh client system wide configuration file. See ssh(1) for more
o # information. This file provides defaults for users, and the values can
o # be changed in per-user configuration files or on the command line.
o
o # Configuration data is parsed as follows:
o # 1. command line options
o # 2. user-specific file
o # 3. system-wide file
o # Any configuration value is only changed the first time it is set.
o # Thus, host-specific definitions should be at the beginning of the
o # configuration file, and defaults at the end.
o
o # Site-wide defaults for various options
o
o # Host *
o # ForwardAgent no
o # ForwardX11 no
o # RhostsAuthentication no
o # RhostsRSAAuthentication yes
o # RSAAuthentication yes
o # PasswordAuthentication yes
o # FallBackToRsh no
o # UseRsh no
o # BatchMode no
o # CheckHostIP yes
o # StrictHostKeyChecking yes
o # IdentityFile ~/.ssh/identity
o # IdentityFile ~/.ssh/id_rsa
o # IdentityFile ~/.ssh/id_dsa
o # Port 22
o # Protocol 2,1 - Change this line to: Protocol 2
o # Cipher 3des
o # Ciphers aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour,aes192-cbc,aes256-cbc
o # EscapeChar ~
o Host *
o ForwardX11 yes
If changes are made to the configuration file, restart the "sshd" daemon to
pick up the new configuration:
Ubuntu: /etc/init.d/ssh restart
Red Hat: /etc/init.d/sshd restart or service sshd restart
The changes above were suggested because: SSH protocol version 1 is not as
secure; it should not take 10 minutes to type your password; and if someone
logs in as root without first logging in as a particular user, traceability
is lost when there are multiple admins.
Setting "PermitRootLogin no" mandates that remote logins use an
undetermined user login. This removes root, a known login on all Linux
systems, from the list of dictionary attacks available.
It is a good idea to change the "Banner" so that a login greeting and legal
disclaimer is presented to the user. i.e. change file /etc/issue.net
contents to:
[Potential Pitfall]: Slow ssh logins - If you get the "login" prompt quickly
but the "password" prompt takes 30 seconds to a minute, then you have a
DNS lookup delay. Set UseDNS no in the config file
/etc/ssh/sshd_config and then restart sshd. The IP address of eth0 (or
the NIC used) should also refer to your own hostname in /etc/hosts
Generate system keys: /etc/ssh/
o ssh-keygen -q -t rsa -f /etc/ssh/ssh_host_rsa_key -C '' -N ''
o ssh-keygen -q -t dsa -f /etc/ssh/ssh_host_dsa_key -C '' -N ''
o Private keys generated: chmod 600 /etc/ssh/ssh_host_dsa_key /etc/ssh/ssh_host_rsa_key
o Public keys generated: chmod 644 /etc/ssh/ssh_host_dsa_key.pub /etc/ssh/ssh_host_rsa_key.pub
o For SELinux:
/sbin/restorecon /etc/ssh/ssh_host_rsa_key.pub
/sbin/restorecon /etc/ssh/ssh_host_dsa_key.pub
Generate user keys:
o Client:
Use the command: /usr/bin/ssh-keygen -t rsa
o Generating public/private rsa key pair.
o Enter file in which to save the key (/home/user-id/.ssh/id_rsa):
o Enter passphrase (empty for no passphrase):
o Enter same passphrase again:
o Your identification has been saved in /home/user-id/.ssh/id_rsa.
o Your public key has been saved in /home/user-id/.ssh/id_rsa.pub.
o The key fingerprint is:
o XX:bl:ab:la:bl:aX:XX:af:90:8f:dc:65:0d:XX:XX:XX:XX:XX user-id@node-name
Files generated:
$HOME/.ssh/id_rsa - binary
$HOME/.ssh/id_rsa.pub - ssh-rsa ...223564257432 email address - Multiple keys/lines allowed.
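For scripted use, the same key generation can be done non-interactively; a sketch writing a throwaway RSA key pair to a scratch directory (-N '' means an empty passphrase, which is convenient for a demo but for a real login key you would normally set one):

```shell
#!/bin/sh
# Generate a throwaway RSA user key pair without any prompts.
set -e
d=$(mktemp -d)
ssh-keygen -q -t rsa -N '' -C 'demo-key' -f "$d/id_rsa"
ls "$d"    # id_rsa (private key) and id_rsa.pub (public key)
```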
Command options:
To use a different user name for the login, state it on the command line: ssh -l
username name-of-server
sftp
Red Hat Open SSH Guide - Also scp, sftp, Gnome ssh-agent
Linux Journal: OpenSSH Part I
SSH Notes:
The sshd daemon should not be started using xinetd/inetd due to the time necessary to
perform calculations when it initializes.
ssh client will suid to root. sshd on the server is run as root. Root privileges are required
to communicate on ports lower than 1024. The -p option may be used to run SSH on a
different port.
RSA is used for key exchange, and a conventional cipher (default Blowfish) is used for
encrypting the session.
Encryption is started before authentication, and no passwords or other information is
transmitted in the clear.
Authentication:
o Login is invoked by the user. The client tells the server the public key that the
user wishes to use for authentication.
o The server then checks if this public key is admissible. If so, it generates a
random number, encrypts it with the public key and sends the value to the client.
o The client then decrypts the number with its private key and computes a
checksum. The checksum is sent back to the server.
o The server computes a checksum from the data and compares the checksums.
o Authentication is accepted if the checksums match.
SSH will use $HOME/.rhosts (or $HOME/.shosts)
To establish a secure network connection on another TCP port, use "tunneling" options
with the ssh command:
o Forward TCP local port to hostport on the remote-host:
ssh remote-host -L port:localhost:hostport command
Man pages:
/usr/share/doc/openssh-XXX/
/usr/share/doc/openssh-askpass-XXX/
/usr/share/doc/openssl-0.XXX/
Test:
The network sniffer Ethereal (now Wireshark) was used to sniff network transmissions between
the client and server for both telnet and ssh with the following results:
Note that the entire login and password exchange was encrypted.
Any site on the public internet will be subjected to dictionary password attacks, which constantly
try new words and ASCII sequences from automated attack programs running on compromised
servers. Use fail2ban to block these attempts. Fail2ban will examine log files to find repeated,
failed login attempts and either temporarily or permanently block the IP addresses of the
attacking system. The default configuration of fail2ban looks over the sshd log file
/var/log/secure to find the attacking system and will allow for 5 failed login attempts before
blocking for 600 seconds (10 minutes).
Installation:
Configuration:
/etc/fail2ban/fail2ban.conf
[Definition]
# 1 = ERROR
# 2 = WARN
# 3 = INFO
# 4 = DEBUG
loglevel = 3
# Values: STDOUT STDERR SYSLOG file Default: /var/log/fail2ban.log
# Only one log target can be specified.
logtarget = SYSLOG
socket = /var/run/fail2ban/fail2ban.sock
pidfile = /var/run/fail2ban/fail2ban.pid
Note: if your server is under attack, fail2ban may deliver a lot of email. You may want to
remove the sendmail-whois statement. [DEFAULT] directives:
Directive Description
ignoreip  IP addresses to never ban, like your gateway system. Multiple IPs are separated by a space. This is your white list. Default: 127.0.0.1 (localhost)
findtime  Time period during which failures are counted, e.g. 600 means that maxretry failures occurring within this findtime period will cause a ban. Default: 600 seconds
maxretry  Number of failures before an IP gets banned. Default: 3
bantime   Number of seconds that an IP is banned
enabled   true = monitor the specified process; false for no monitoring. Default is true only for sshd
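Pulled together, a minimal jail stanza using these directives might look like the sketch below. The section name [ssh-iptables] and all of the values are illustrative only; exact section names, filters and actions vary across fail2ban versions and distributions, so check the jail.conf shipped with your package:

```ini
[ssh-iptables]
enabled  = true
maxretry = 5
findtime = 600
bantime  = 600
ignoreip = 127.0.0.1
```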
Configure init to start fail2ban upon boot: sudo chkconfig --level 345 fail2ban on
[host]# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
fail2ban-SSH tcp -- anywhere anywhere tcp dpt:ssh
Links:
FTP uses clear text access to your server. This is fine if all systems in the datacenter are secure
and no one can sniff the network. Router and switch configurations make it almost impossible to
sniff most networks these days, but a security compromise at the datacenter on another server
can cause potential problems for your servers if you allow the open un-encrypted passwords used
by FTP.
VsFTPd also allows one to limit the user's view of the filesystem to their own directories. This is
good. OpenSSH "sftp" does not provide this capability (until version 4.9. RHEL/CentOS 5 use
OpenSSH 4.3). The "sftp" file transfer does encrypt the passwords (good) but also requires shell
access (bash, csh, ...) for the account which allows full access to the filesystem (bad). The rssh
shell can be used with sftp, scp, cvs, rsync, and rdist and can chroot users to their own
directories and limit function to sftp access only (deny full shell access).
For newer systems (RHEL6/CentOS6/Fedora 11) with OpenSSH 4.9+ see the preferred chrooted
sftp configuration for OpenSSH 4.9+.
rssh
as your shell with OpenSSH "sftp":
This installs:
/usr/bin/rssh
/etc/rssh.conf
also the support program /usr/libexec/rssh_chroot_helper and man pages
rssh -v
Configuration:
Security note: Also be aware of the setting AllowTcpForwarding which controls port
forwarding.
1. User login id
2. First set of three numbers represents the umask
3. Second set of five numbers represents the bitmask of services to allow:
   1     1     1    1    1
   rsync rdist cvs  sftp scp
4. Specify the global chrooted directory for all using rssh. If omitted, then not chrooted. Can
be overwritten by user configuration.
Note: User configuration overrides the shared chroot settings. Omitted user settings do
not default to shared chroot settings.
5. Configuring the chrooted directory: This is true for a global user chroot or individual
chroot. In this example we will show a user chrooted to their own home directory
/home/user1. When chrooted, the user does not have access to the rest of the filesystem
and thus is blind to all of its executables and libraries. It will therefore be necessary to
copy local executables and libraries for their local use.
o Once chroot() takes place, programs will not have access to the regular log target.
Specify a chrooted syslog socket target which can be accessed. The number of
sockets are limited and thus configuring rssh for each user is not a good idea for a
large number of users. For use with many users, use the shared chrooted jail
defined by the rssh directive: chrootpath.
Blocking FTP: Setting up rssh does not turn off or block FTP access to your system. You must
still turn off vsftpd: /etc/init.d/vsftpd stop. There is little point in setting up secure chrooted
sftp access with rssh while also running an FTP service.
Debugging:
One can pull in directories from the full root path by issuing bind mounts:
o mount --bind /dev /home/user1/dev
o mount --bind /lib /home/user1/lib
o mount --bind /lib64 /home/user1/lib64
o mount --bind /usr /home/user1/usr
This technique can be used to narrow down the error to find which directory has the
missing files. It should not be used as a final solution.
Unmount when done: umount /home/user1/dev
If authenticating to LDAP, NIS, etc., pull in the appropriate NSS libraries. You can test with all:
cp -p /lib/libnss_* /home/user1/lib
This can be performed for /lib64 as well.
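Rather than copying whole library directories, the libraries an individual binary actually needs can be found with ldd and copied one by one. The sketch below demonstrates the technique against a temporary directory and the ls binary; on a real system you would point CHROOT at the user's jail (e.g. /home/user1) and repeat for each binary the user needs.

```shell
# Copy one binary plus every shared library ldd reports into a chroot tree.
# CHROOT and BIN are illustrative; adjust for your system.
CHROOT=$(mktemp -d)
BIN=$(command -v ls)
mkdir -p "$CHROOT$(dirname "$BIN")"
cp -p "$BIN" "$CHROOT$BIN"
# ldd prints lines like "libc.so.6 => /lib/libc.so.6 (0x...)"; keep only
# the absolute paths and mirror them under the chroot.
for lib in $(ldd "$BIN" | awk '{for (i=1;i<=NF;i++) if ($i ~ /^\//) print $i}')
do
    mkdir -p "$CHROOT$(dirname "$lib")"
    cp -p "$lib" "$CHROOT$lib"
done
echo "populated $CHROOT"
```

This keeps the jail minimal compared to bind-mounting whole directories, at the cost of re-running the copy after library upgrades.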
Check log files for errors: /var/log/messages
Man pages:
Links:
SentryTools: PortSentry
This tool will monitor the network probes and attacks against your server. It can be configured to
log and counter these probes and attacks. PortSentry can modify your /etc/hosts.deny (TCP
wrappers) file and issue IP firewall commands automatically to block hackers.
PortSentry can be loaded as an RPM, but this tutorial covers compiling PortSentry from source to
configure more suitable system logging.
Note: Version 1.2 of portsentry can issue iptables, ipchains or route commands to thwart attacks.
Iptables/Ipchains is a Linux firewall system built into the Linux kernel. Linux kernel 2.6/2.4 uses
iptables, kernel 2.2 (old) uses ipchains. References to ipfwadm are for even older Linux kernels.
Route commands can be used by any Unix system including those non-Linux systems which do
not support Iptables/Ipchains.
Set file paths and configure separate log file for Portsentry:
Set options:
In /etc/syslog.conf, add an extra log facility for portsentry messages so they are written
to their own file and filtered out of the regular syslog output file /var/log/messages.
Change:
*.info;mail.none;news.none;authpriv.none;cron.none                /var/log/messages
To:
*.info;mail.none;news.none;authpriv.none;cron.none;local6.none    /var/log/messages
local6.*                                                          /var/log/portsentry.log
Edit file: portsentry.conf to set paths for configuration files and ports to
monitor.
...
...
IGNORE_FILE="/opt/portsentry/portsentry.ignore"
HISTORY_FILE="/opt/portsentry/portsentry.history"
BLOCKED_FILE="/opt/portsentry/portsentry.blocked"
#KILL_ROUTE="/sbin/route add -host $TARGET$ reject" - Generic Unix KILL_ROUTE.
I prefer the iptables/ipchains options below.
ADVANCED_EXCLUDE_TCP="21,22,25,53,80,110,113,119"      # server
ADVANCED_EXCLUDE_UDP="21,22,53,110,520,138,137,68,67"  # server
OR
ADVANCED_EXCLUDE_TCP="113,139"                         # workstation
ADVANCED_EXCLUDE_UDP="520,138,137,68,67"               # workstation
PAM options:
KILL_HOSTS_DENY="ALL: $TARGET$"
Note on Red Hat 7.1: During installation/upgrade the firewall configuration tool
/usr/bin/gnome-lokkit may be invoked. It will configure a firewall using
ipchains and will add this to your boot process. To see if ipchains and the Lokkit
configuration is invoked during system boot, use the command:
chkconfig --list | grep ipchains
You can NOT use portsentry to issue iptables rules if your kernel is configured
to use ipchains rules.
More info on iptables and ipchains support/configuration in Red Hat 7.1 and
kernel 2.4.
Entries for portsentry.ignore:
127.0.0.1
0.0.0.0
Your IP address
The @Home network routinely scans for news servers on port 119 from a server
named authorized-scan1.security.home.net. Adding the IP address of this server
(24.0.0.203) greatly reduces the logging. I also added their BOOTP server
(24.9.139.130).
Edit the Makefile to set the install directory:
INSTALLDIR = /opt
and remove (or comment out) the troublesome line under the "install" target:
# /bin/rmdir $(INSTALLDIR)
Note: It is possible to have all logging sent to a logging daemon on a single server. This will
allow the administrator to check the logs on only one server rather than individually on many.
Instead of using a firewall command (ipchains/iptables), a false route is used: /sbin/route add
-host $TARGET$ gw 127.0.0.1.
My init script calls the portsentry executable twice with the appropriate command line arguments
to monitor tcp and udp ports. The Red Hat 7.1 init script uses the file
/etc/portsentry/portsentry.modes and a for loop in the init script to call portsentry the
appropriate number of times. Their init script also recreates the portsentry.ignore file each
time portsentry is started by including the IP addresses found with ifconfig and the addresses
0.0.0.0 and localhost. Persistent addresses must be placed above the line stating "Do NOT edit
below this", otherwise they are not included in the creation of the new file.
The Red Hat 7.1 Powertools portsentry version logs everything to /var/log/messages. My
configuration avoids log clutter by logging to a separate file.
iptables:
o List firewall rules: iptables -L
o Clear firewall rules: iptables -F
ipchains:
o List firewall rules: ipchains -L
o Clear firewall rules: ipchains -F
#!/bin/bash
# Purge and re-assign chain rules
ipchains -F
# NFS
ipchains -A input -p tcp -s 0/0 -d 0/0 2049 -y -j REJECT
ipchains -A input -p udp -s 0/0 -d 0/0 2049 -j REJECT
# X Windows
ipchains -A input -p tcp -s 0/0 -d 0/0 6000:6009 -y -j REJECT
# X font server
ipchains -A input -p tcp -s 0/0 -d 0/0 7100 -y -j REJECT
# printer (lpd)
ipchains -A input -p tcp -s 0/0 -d 0/0 515 -y -j REJECT
ipchains -A input -p udp -s 0/0 -d 0/0 515 -j REJECT
# sunrpc/portmapper
ipchains -A input -p tcp -s 0/0 -d 0/0 111 -y -j REJECT
ipchains -A input -p udp -s 0/0 -d 0/0 111 -j REJECT
# Reject (and log: -l) spoofed packets claiming to be from localhost on eth0
ipchains -A input -j REJECT -p all -s localhost -i eth0 -l
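On a kernel 2.4/2.6 system the same policy can be sketched with iptables. The rules below are a hypothetical translation of the ipchains script (the iptables `--syn` flag matches connection attempts like the ipchains `-y` flag); they are written to a temporary file for review here, since actually applying them requires root on the target machine.

```shell
# Sketch of an iptables equivalent to the ipchains rules above,
# saved to a file for review before running it as root.
RULES=$(mktemp)
cat > "$RULES" <<'EOF'
#!/bin/bash
# Purge and re-assign INPUT chain rules
iptables -F INPUT
iptables -A INPUT -p tcp --dport 2049 --syn -j REJECT      # NFS
iptables -A INPUT -p udp --dport 2049 -j REJECT            # NFS
iptables -A INPUT -p tcp --dport 6000:6009 --syn -j REJECT # X Windows
iptables -A INPUT -p tcp --dport 7100 --syn -j REJECT      # X font server
iptables -A INPUT -p tcp --dport 515 --syn -j REJECT       # printer (lpd)
iptables -A INPUT -p udp --dport 515 -j REJECT             # printer (lpd)
iptables -A INPUT -p tcp --dport 111 --syn -j REJECT       # portmapper
iptables -A INPUT -p udp --dport 111 -j REJECT             # portmapper
# Reject spoofed packets claiming to be from localhost on eth0
iptables -A INPUT -i eth0 -s 127.0.0.1 -j REJECT
EOF
sh -n "$RULES" && echo "syntax OK"
```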
Also see:
Sourceforge: Portsentry Home Page - PortSentry, Logcheck and HostSentry home page.
Portsentry description
FAQ: Firewall Forensics - Robert Graham
#!/bin/bash
#
# Startup script for PortSentry
#
# chkconfig: 345 85 15
# description: PortSentry monitors TCP and UDP ports for network attacks
#
# processname: portsentry
# pidfile: /var/run/portsentry.pid
# config: /opt/portsentry/portsentry.conf
# config: /opt/portsentry/portsentry.ignore
# config: /opt/portsentry/portsentry.history
# config: /opt/portsentry/portsentry.blocked
exit 0
Logrotate Configuration:
File:
/etc/logrotate.d/portsentry
/var/log/portsentry.log {
rotate 12
monthly
errors root@localhost
missingok
postrotate
/usr/bin/killall -HUP portsentry 2> /dev/null || true
endscript
}
Also see the YoLinux Sys Admin tutorial covering logrotate.
Tests:
Portscan your workstation - Use your web browser to go to this site. Select "Probe my
ports" and it will scan you. You can then look at the file
/opt/portsentry/portsentry.blocked.atcp to see that portsentry dropped the
scanning site:
nmap: portscanner - This is the hacker tool responsible for many of the portscans you
may be receiving.
Command arguments:
Argument                Description
-sO                     IP protocol scan. Determine which IP protocols are supported.
-sT                     TCP scan. Full connection made.
-sS                     SYN scan (half open scan). This scan is typically not logged on the
                        receiving system.
-sP                     Ping ICMP scan.
-sU                     UDP scan.
-P0                     Don't ping before scan.
-PT                     Use ping to determine which hosts are available.
-F                      Fast scan. Scan for ports listed in configuration.
-T                      Set timing of scan to use values to avoid detection.
-O                      Determine operating system.
-p 1000-1999,5000-5999  Scan port ranges specified.
Also see: nmap man page for a full listing of nmap command line arguments.
Examples:
Add the option -v (verbose) or -vv (super verbose) for more info.
The ports will be determined to be open, filtered or firewalled.
Scan your own system the same way the hackers do.
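Several of the arguments above can be combined into one invocation. The command below is a hypothetical example (the target address is illustrative; only scan hosts you administer), composed and printed for review rather than executed:

```shell
# Build a scan command from the argument table: SYN scan, no ping first,
# OS detection, verbose, restricted port range. Printed, not run.
TARGET=192.168.1.10
NMAP_CMD="nmap -sS -P0 -O -v -p 1-1024 $TARGET"
echo "$NMAP_CMD"
```

If portsentry is running on the target, this scan should appear in /var/log/portsentry.log and trigger the configured blocking response.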
Nmap and nmapfe are available with distribution or on the Red Hat Powertools CD for
older (7.1) releases:
o nmap-XXX.i386.rpm
o nmap-frontend-XXX.i386.rpm
Links:
Tripwire monitors your file system for changes. Tripwire is used to create an initial database of
information on all the system files then runs periodically (cron) to compare the system to the
database.
Use the command tripwire --version or rpm -q tripwire to determine the version.
Red Hat includes Tripwire as an optional package during install. The Ubuntu/Debian install is as
easy as apt-get install tripwire. Upon installation it will proceed to scan your entire
filesystem to create a default database of what your system looks like (files, sizes, etc.). It took
about ten minutes to run on my server!
These files are first edited and then processed by the script
/etc/tripwire/twinstall.sh which configures Tripwire after the installation of the
Tripwire RPM package.
Change:
LOOSEDIRECTORYCHECKING =false
to
LOOSEDIRECTORYCHECKING =true
Change:
severity = $(SIG_XXX)
to
severity = $(SIG_XXX),
emailto = root@localhost
or
severity = $(SIG_XXX),
emailto = root@localhost;admin@isp.com
where XXX is the severity level. This will cause Tripwire to email a report of
discrepancies for the rule edited. Set the email address to one appropriate for you.
I also added:
o "User binaries" rule: directory /opt/bin
o "Libraries" rule: directory /opt/lib
I removed/commented out:
o the rule "System boot changes" as it reports changes due to system boot.
o Rule: "Root config files": Many of the non-existent files listed under /root were
commented out to reduce the number of errors reported.
o Rule "File System and Disk Administraton Programs": Many of the non-existent
binaries listed under /sbin were commented out to reduce the number of errors
reported.
After configuration files have been edited run the script: /etc/tripwire/twinstall.sh
The script will ask for a "passphrase" for the site and local system. This is a similar
concept to a password - remember it!
If at any point you want to make configuration/policy changes, edit these files and re-run
the configuration script. The script will generate the true configuration files used by
Tripwire:
o /etc/tripwire/tw.cfg
(View with command: twadmin --print-cfgfile)
o /etc/tripwire/tw.pol
(View with command: twadmin --print-polfile)
o /etc/tripwire/site.key
o /etc/tripwire/ServerName-a-local.key
Tripwire initialization:
If at any time you change the configuration file to monitor your system differently or install an
upgrade (changes a whole lot of files which will "trip" tripwire into reporting all changes) you
may want to generate a new database.
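The re-initialization can be sketched as a short root script using the Tripwire 2.x command names (twpol.txt as the editable policy file is an assumption; adjust to your installation). It is written to a temporary file for review here rather than executed, since the real run prompts for your passphrases:

```shell
# Save the Tripwire re-initialization steps to a file for review.
REINIT=$(mktemp)
cat > "$REINIT" <<'EOF'
#!/bin/sh
# Re-sign the edited policy, rebuild the baseline database, then verify.
# Run as root on the monitored system.
twadmin --create-polfile /etc/tripwire/twpol.txt
tripwire --init
tripwire --check
EOF
sh -n "$REINIT" && echo "syntax OK"
```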
Tripwire 2.3.0-58:
File: /etc/cron.daily/tripwire-check
#!/bin/sh
HOST_NAME=`uname -n`
if [ ! -e /var/lib/tripwire/${HOST_NAME}.twd ] ; then
    echo "**** Error: Tripwire database for ${HOST_NAME} not found. ****"
    echo "**** Run /etc/tripwire/twinstall.sh and/or tripwire --init. ****"
else
    test -f /etc/tripwire/tw.cfg && /usr/sbin/tripwire --check
fi
You may move this cron script to the directory /etc/cron.weekly/ to reduce reporting
from a daily to a weekly event.
Tripwire reports will be written to: /var/lib/tripwire/report/HostName-Date.twr
Tripwire 1.2-3:
File: /etc/cron.daily/tripwire.verify script which runs the command:
/usr/sbin/tripwire -loosedir -q
Note: You may want to move the script to /etc/cron.weekly/tripwire.verify to
reduce email reporting to root.
Interactive mode:
Tripwire 1.2-3:
Update tripwire database - run: tripwire -interactive
This will allow you to respond Y/N to files if they should be permanently updated in the
tripwire database. This will still run tripwire against the whole file system. I ran it from
/root and it updated /root/databases/tw.db_ServerName. You must then cp -p it to
/var/spool/tripwire/ to update the tripwire database.
Default configuration file:
ROOT =/usr/sbin
POLFILE =/etc/tripwire/tw.pol
DBFILE =/var/lib/tripwire/$(HOSTNAME).twd
REPORTFILE =/var/lib/tripwire/report/$(HOSTNAME)-$(DATE).twr
SITEKEYFILE =/etc/tripwire/site.key
LOCALKEYFILE =/etc/tripwire/$(HOSTNAME)-local.key
EDITOR =/bin/vi
LATEPROMPTING =false
LOOSEDIRECTORYCHECKING =false
MAILNOVIOLATIONS =true
EMAILREPORTLEVEL =3
REPORTLEVEL =3
MAILMETHOD =SENDMAIL
SYSLOGREPORTING =false
MAILPROGRAM =/usr/sbin/sendmail -oi -t
# Log file
@@define LOGFILE_M E+pugn
# Config file
@@define CONF_M E+pinugc
# Binary
@@define BIN_M E+pnugsci12
# Directory
@@define DIR_M E+pnug
# Data file (same as BIN_M currently)
@@define DATA_M E+pnugsci12
# Device files
@@define DEV_M E+pnugsc
# exclude all of /proc
=/proc E
#=/dev @@DIR_M
/dev @@DEV_M
#=/etc @@DIR_M
/etc @@CONF_M
# Binary directories
#=/usr/sbin @@DIR_M
/usr/sbin @@BIN_M
#=/usr/bin @@DIR_M
/usr/bin @@BIN_M
#=/sbin @@DIR_M
/sbin @@BIN_M
#=/bin @@DIR_M
/bin @@BIN_M
#=/lib @@DIR_M
/lib @@BIN_M
#=/usr/lib @@DIR_M
/usr/lib @@BIN_M
Add:
=/usr/src E
=/tmp @@DIR_M
Delete/comment out:
#/dev @@DEV_M
This eliminated the reporting of too much junk due to a reboot of the system.
Man pages:
Tripwire 2.3.0-58:
Also see:
TripwireSecurity.com
Tripwire.org
Tripwire documentations
/usr/doc/tripwire-1.2/docs/designdoc.ps
ViperDB - Alternative to Tripwire.
Red Hat 7.1 tripwire manual
Tripwire will monitor your filesystems for intrusion or addition of a file so you may determine
what changes have occurred on your system in sensitive areas. Chkrootkit will scan your system
for known exploits, Trojan commands, and worms used to compromise a system.
Note:
This software is constantly being upgraded and updated to include scans for new exploits.
If running portsentry, chkrootkit may return a false error while performing the bindshell
test.
Let me start by saying that this should only be performed on your own systems. It is considered
an attack to run this against the systems of others, and legal action may be taken against you for
performing such an audit. This is not a simple scan like NMAP. NESSUS will search for and locate
vulnerabilities on your system by actively trying to perform known exploits against the system.
Nessus is amazingly complete and effective. In fact it is awesome!! It will identify services on
your system and try to exploit them. If a vulnerability is found it will make recommendations
about upgrades, configuration changes and where to find patches. It will also explain any causes
for concern in detail and explain why your system is vulnerable. And that's not all! It can output
reports in various formats including HTML with pie charts and bar charts!! The HTML reports
will have hyperlinks to the security reports, upgrades and patches. (I'm impressed) It can scan
Unix, Linux and Windows systems for vulnerabilities.
Note:
Running "Dangerous Plugins" may cause a crash of the system being audited!!
nessus-client-....rpm
nessus-common-....rpm
nessus-plugins-....rpm : Nessus plugins which are used to perform the various checks.
(Scripts in the NASL scripting language.)
nessus-server-....rpm : The nessusd server. Note that the RPM installs an init script which
starts nessusd during boot. Disable with chkconfig --del nessusd
nessus-devel-....rpm : Nessus development libraries and headers.
Running NESSUS:
You may also consider a popular branch of Nessus, OpenVAS: Open Vulnerability Assessment
System
Books:
"Linux Firewalls"
by Robert L. Ziegler, Carl Constantine
ISBN #0735710996, New Riders 10/2001
"Linux Firewalls"
Robert L. Ziegler
ISBN #0-7357-0900-9, New Riders 11/1999