wiki.archlinux.org

Kernel Panics - ArchWiki



This article or section is out of date.

This page describes how to repair a computer whose kernel panics at boot. This has to do with
the very basic OS kernel and the first part of the boot routine. (For issues regarding graphical
interface problems or program freeze-ups, etc., save yourself some wasted effort and time, and
please look elsewhere.)

Definition
A decent definition of kernel panic comes to us from Wikipedia, which states in part: "A kernel
panic is an action taken by an operating system upon detecting an internal fatal error from which
it cannot safely recover; the term is largely specific to Unix and Unix-like systems. The
equivalent in Microsoft Windows operating systems is the Blue Screen of Death."

See also Wikipedia:Kernel panic.

What to do
Basically, the problem is that the operating system does not start correctly. This can show up in
various ways: the computer may freeze, the operating system may print an error message of some
sort, or you may not end up where you expected (a command prompt, a desktop, or whatever).
This will require some basic troubleshooting from the command line, if you can boot to it, or
from a boot disk if it will get you a command prompt or your favorite interface.

Troubleshooting
To make troubleshooting easier, ensure that the kernel is not in quiet mode. Remove 'quiet' from
the kernel line in GRUB, if it is found there. Upon boot, check the output immediately before the
panic, and decide whether there is any useful information. There are probably too many causes
for a kernel panic to keep well-documented in this wiki. Make sure that your system's
configuration in /boot is correct, and that none of the computer's hardware is faulty - it is a good
idea to run memtest from the Arch install/rescue CD or another utility (red entries are bad). If
you believe the configuration in /boot may be erroneous, try Option 1 to repair your bootloader
setup. If you believe the kernel panic is the fault of the kernel itself, follow Option 2 in order to
reinstall the existing version or an earlier kernel.
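As an illustration of removing quiet mode, a menu.lst boot entry might change like this (the partition and paths below are examples, not taken from your system):

```
title Arch Linux
root (hd0,0)
# before: kernel /vmlinuz-linux root=/dev/sda2 ro quiet
kernel /vmlinuz-linux root=/dev/sda2 ro
initrd /initramfs-linux.img
```

With quiet gone, the kernel prints its full boot log, so the messages immediately before the panic stay visible.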

Option 1: Check bootloader configuration


Another possibility is an error in the bootloader's configuration. For example,
repartitioning hard drives can change the order of partitions. GRUB users may recall whether
repartitioning has occurred recently and make sure the root and kernel lines match the
new partitioning scheme. Also examine the file for typos and extraneous characters: an extra
space, or a character in the wrong place, will cause a kernel panic.
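Concretely, one way to review the partition-naming lines is to pull them out with grep and compare each against the current layout reported by lsblk -f or blkid. The menu.lst below is a made-up sample so the commands are safe to copy; on a real system you would grep /boot/grub/menu.lst instead:

```shell
# Create a throwaway sample menu.lst (illustrative content only)
mkdir -p /tmp/grub-check
cat > /tmp/grub-check/menu.lst <<'EOF'
title Arch Linux
root (hd0,0)
kernel /vmlinuz-linux root=/dev/sda2 ro
initrd /initramfs-linux.img
EOF

# Pull out every line that names a device or partition; check each one
# against the real partition layout (lsblk -f or blkid)
grep -nE '^[[:space:]]*(root|kernel)' /tmp/grub-check/menu.lst
```

Each matched line names a drive, partition, or root= device that must still exist under that name after any repartitioning.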

Option 2: Reinstall kernel


Reinstalling the kernel is probably the best bet when no other major system modifications have
taken place recently.

Start from the installation CD

The first step is booting the installation CD. Once booted, you are presented with an
automatically logged-in virtual console as the root user.

Mount your partitions

When booted, you are in a minimal but functional live GNU/Linux environment with some basic
tools. Now, you have to mount your normal root disk (or partition) to /mnt.

# mount /dev/sdXY /mnt

If you are using legacy IDE drives, then use the command:

# mount /dev/hdXY /mnt

If you use a separate boot partition, do not forget to mount it with:

# mount /dev/sdXZ /mnt/boot

Gather your files for later troubleshooting

This is a good point to stop and gather your information onto another drive or partition so that it
can be analyzed and/or emailed for outside viewing before the files change again. Simply create
a separate directory on your main partition or mount a USB drive to contain the files. Then you
may copy any files you will need to keep unchanged during the next boot with your new kernel.
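A hedged sketch of that step follows; the destination directory and the file list are examples rather than a canonical set, so adjust them to whatever you want preserved:

```shell
# Collect files of interest into one place before they change on the next boot.
# DEST is an example path; on a mounted rescue target it might instead be
# /mnt/panic-logs or a directory on a mounted USB drive.
DEST=${DEST:-/tmp/panic-logs}
mkdir -p "$DEST"
dmesg > "$DEST/dmesg.txt" 2>/dev/null || true   # kernel messages, if readable
for f in /mnt/var/log/pacman.log /mnt/boot/grub/menu.lst; do
    [ -f "$f" ] && cp "$f" "$DEST/"             # copy only files that exist
done
ls -l "$DEST"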

Chroot to your normal root


Now, you will have to chroot to the partition mounted in /mnt. Newer kernels use an initial
ramdisk to set up the kernel environment: when you reinstall a kernel, that initial ramdisk will be
regenerated with mkinitcpio. One of mkinitcpio's features is that it does automatic detection to
find out what kernel modules are required for starting up your computer. For this autodetection
to work, /dev, /sys, and /proc need to be mounted in your chroot; make sure to read Change root.

To chroot to your normal root mounted at /mnt, run this command:

# arch-chroot /mnt /bin/bash

If you do not want to use the Bash shell, remove /bin/bash from the arch-chroot command.

Roll back to previous kernel version

If you keep your downloaded pacman packages, you can now easily roll back. If you did not
keep them, you have to find a way to get a previous kernel version onto your
system now.

Let us suppose you kept the previous versions. We will now install the last working one.

First, you need to get the kernel details:

# find /var/cache/pacman/pkg -name 'linux-4*'

Now, use the kernel details in the command below.

# pacman -U /var/cache/pacman/pkg/linux-4.xx-x.pkg.tar.xz

(Of course, make sure that you adapt this line to your own kernel version. You can find the ones
you still have in your cache by examining the directory above.)

Reboot
Note: If you choose to do anything else before you reboot, remember that you are still in the
chroot environment and will likely have to exit and login again.

Now is the time to reboot and see if the system modifications have stopped the panic. If reverting
to an older kernel works, do not forget to check the Arch news page to see what went wrong
with the kernel build. If there is no mention of the problem there, go to the bug reporting
area and search for it there. If you still do not find it, open a new bug report and attach the files
you saved during the troubleshooting step above.

File recovery
Related articles
 Post recovery tasks#Photorec

This article lists data recovery and undeletion options for Linux.

Contents
 1 Special notes
o 1.1 Before you start
o 1.2 Failing drives
o 1.3 Backup flash media/small partitions
o 1.4 Working with digital cameras
 2 Foremost
 3 Scalpel
 4 Extundelete
o 4.1 Installation
o 4.2 Usage
 5 Testdisk and PhotoRec
o 5.1 Installation
o 5.2 Usage
o 5.3 Files recovered by photorec
o 5.4 See also
 6 e2fsck
o 6.1 Installation
o 6.2 See also
 7 Working with raw disk images
o 7.1 Mount the entire disk
o 7.2 Mounting partitions
 7.2.1 Getting disk geometry
o 7.3 Using QEMU to Repair NTFS
 8 Text file recovery
 9 See also

Special notes
Before you start

This page is mostly intended to be used for educational purposes. If you have accidentally
deleted or otherwise damaged your valuable and irreplaceable data and have no previous
experience with data recovery, turn off your computer immediately (Just press and hold the off
button or pull the plug; do not use the system shutdown function) and seek professional help. It is
quite possible and even probable that, if you follow any of the steps described below without
fully understanding them, you will worsen your situation.

Failing drives
In the area of data recovery, it is best to work on images of disks rather than physical disks
themselves. Generally, a failing drive's condition worsens over time. The goal ought to be to first
rescue as much data as possible as early as possible in the failure of the disk and to then abandon
the disk. The ddrescue and dd_rescue utilities, unlike dd, will repeatedly try to recover from
errors and will read the drive front to back, then back to front, attempting to salvage data. They
keep log files so that recovery can be paused and resumed without losing progress.

See Disk cloning.
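The usual two-pass ddrescue invocation looks like the sketch below (flags as documented in the GNU ddrescue manual; the device and file names are placeholders). It is demonstrated against a scratch file so the commands are harmless to run as-is; in real use, the input would be the failing device:

```shell
# Stand-in for the failing device so this sketch is safe to execute;
# in real use the first argument would be something like /dev/sdX
truncate -s 1M /tmp/failing.img

if command -v ddrescue >/dev/null 2>&1; then
    # Pass 1 (-n): grab the easy data first, skipping the slow scraping phase
    ddrescue -f -n /tmp/failing.img /tmp/rescue.img /tmp/rescue.map
    # Pass 2 (-r3): go back and retry the bad areas up to three times
    ddrescue -f -r3 /tmp/failing.img /tmp/rescue.img /tmp/rescue.map
fi
```

The map file (/tmp/rescue.map here) is what lets an interrupted rescue resume without re-reading areas already salvaged.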

The image files created from a utility like ddrescue can then be mounted like a physical device
and can be worked on safely. Always make a copy of the original image so that you can revert if
things go sour!

A tried and true method of improving failing drive reads is to keep the drive cold. A bit of time
in the freezer is appropriate, but be careful to avoid bringing the drive from cold to warm too
quickly, as condensation will form. Keeping the drive in the freezer with cables connected to the
recovering PC works great.

Do not attempt a filesystem check on a failing drive, as this will likely make the problem worse.
Mount it read-only.

Backup flash media/small partitions

As an alternative to working with a 'live' partition (mounted or not), it is often preferable to work
with an image, provided that the filesystem in question is not too large and that you have
sufficient free HDD space to accommodate the image file. For example, flash memory devices
like thumb drives, digital cameras, portable music players, cellular phones, etc. are likely to be
small enough to image in many cases.

Be sure to read the man pages for the utilities listed below to verify that they are capable of
working with image files.

To make an image, one can use dd as follows:

# dd if=/dev/target_partition of=/home/user/partition.image

Working with digital cameras

In order for some of the utilities listed in the next section to work with flash media, the device in
question needs to be mounted as a block device (i.e., listed under /dev). Digital cameras
operating in PTP (Picture Transfer Protocol) mode will not work in this regard. PTP cameras are
transparently handled by libgphoto and/or libptp. In this case, "transparently" means that PTP
devices do not get block devices. The alternative to PTP mode, USB Mass Storage (UMS) mode,
is not supported by all cameras. Some cameras have a menu item that allows switching between
the two modes; refer to your camera's user manual. If your camera does not support UMS mode
and therefore cannot be accessed as a block device, your only alternative is to use a flash media
reader and physically remove the storage media from your camera.

Foremost
Foremost is a console program to recover files based on their headers, footers, and internal data
structures. This process is commonly referred to as data carving. Foremost can work on disk
image files (such as those generated by dd, Safeback, Encase, etc.) or directly on a drive. The
headers and footers can be specified by a configuration file or command line switches can be
used to specify built-in file types. These built-in types look at the data structures of a given file
format, allowing for more reliable and faster recovery.

See Foremost article.

Scalpel
Scalpel is a console file-carving program originally based on Foremost, although significantly
more efficient. Originally developed by Golden G. Richard III, it allows an examiner to specify a
number of headers and footers to recover filetypes from a piece of media. Licensed under the
Apache licence, Scalpel is maintained by Golden G. Richard III and Lodovico Marziale.

Article about Scalpel on Forensicswiki

scalpel-gitAUR is available in the AUR.

Extundelete
Extundelete is a terminal-based utility designed to recover deleted files from ext3 and ext4
partitions. It can recover all recently deleted files from a partition and/or specific files
given by relative path or inode information. Note that it works only when the partition is
unmounted. The recovered files are saved in the current directory under a folder named
RECOVERED_FILES/.

Installation

extundelete is available in the official repositories.

Usage

Derived from the post on Linux Poison.

To recover data from a specific partition, you must know the device name for the partition,
which will be in the format /dev/sdXN (X is a letter and N is a number). The example used here is
/dev/sda4, but your system might use something different (for example, MMC card readers use
/dev/mmcblkNpN as their naming scheme) depending on your filesystem and device
configuration. If you are unsure, run df, which prints the currently mounted partitions.

Once you have determined which partition to recover data from, simply run:

# extundelete /dev/sda4 --restore-file directory/file

Any subdirectories must be specified, and the command runs from the highest level of the
partition, so, to recover a file in /home/SomeUserName/, assuming /home is on its own partition,
run:

# extundelete /dev/sda4 --restore-file SomeUserName/SomeFile

To speed up multi-file recovery, extundelete has a --restore-files option as well.

To recover an entire directory, run:

# extundelete /dev/sda4 --restore-directory SomeUserName/SomeDirectory

For advanced users, to manually recover blocks or inodes with extundelete, debugfs can be used
to find the inode to be recovered; then, run:

# extundelete --restore-inode inode

inode stands for any valid inode. Additional inodes to recover can be listed in an unspaced,
comma-separated fashion.

Finally, to recover all deleted files from an entire partition, run:

# extundelete /dev/sda4 --restore-all

Testdisk and PhotoRec


TestDisk and PhotoRec are both open-source data recovery utilities licensed under the terms of
the GNU General Public License (GPL).

TestDisk is primarily designed to help recover lost partitions and/or make non-booting disks
bootable again when these symptoms are caused by faulty software, certain types of viruses, or
human error, such as the accidental deletion of partition tables.

PhotoRec is file recovery software designed to recover lost files, including photographs (hence
the name), videos, documents, and archives, from hard disks and CD-ROMs. PhotoRec
ignores the filesystem and goes after the underlying data, so it will still work even with a
reformatted or severely damaged filesystem and/or partition table.

Installation
Install the testdisk package, which provides both TestDisk and PhotoRec.

Usage

After running e.g. ddrescue to create image.img, photorec image.img will open a terminal UI
where you can select what file types to search for and where to put the recovered files.

Files recovered by photorec

The photorec utility stores recovered files with random names (for most of the files) under
numbered directories, e.g. ./recup_dir.1/f872690288.jpg,
./recup_dir.1/f864563104_wmclockmon-0.1.0.tar.gz.
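Since photorec dumps everything into flat recup_dir.N folders, a small post-processing loop can group the output by extension. This sketch is not part of photorec itself, and the stand-in files below merely mimic its naming scheme:

```shell
cd /tmp
mkdir -p recup_dir.1 sorted
touch recup_dir.1/f872690288.jpg recup_dir.1/f864563104.tar.gz  # fake recovered files

for f in recup_dir.1/*; do
    ext=${f##*.}              # crude: keeps only the last dot-suffix (tar.gz -> gz)
    mkdir -p "sorted/$ext"
    cp "$f" "sorted/$ext/"
done
ls sorted
```

Refinements (handling files without extensions, or two-part suffixes like .tar.gz) are left as an exercise; the PhotoRec FAQ linked below covers recovering original filenames.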

See also

 How to get the original filenames: PhotoRec FAQ


 Wiki (TestDisk): http://www.cgsecurity.org/wiki/TestDisk
 Wiki (Photorec): http://www.cgsecurity.org/wiki/PhotoRec
 Homepage: http://www.cgsecurity.org/

e2fsck
e2fsck is the ext2/ext3 filesystem checker included in the base install of Arch. e2fsck relies on a
valid superblock. A superblock is a description of the entire filesystem's parameters. Because this
data is so important, several copies of the superblock are distributed throughout the partition.
With the -b option, e2fsck can take an alternate superblock argument; this is useful if the main,
first superblock is damaged.

To determine where the superblocks are, run dumpe2fs -h on the target, unmounted partition.
Superblocks are spaced differently depending on the filesystem's blocksize, which is set when
the filesystem is created.

An alternate method to determine the locations of superblocks is to use the -n option with
mke2fs. Be sure to use the -n flag, which, according to the mke2fs manpage, "Causes mke2fs to
not actually create a filesystem, but display what it would do if it were to create a filesystem.
This can be used to determine the location of the backup superblocks for a particular filesystem,
so long as the mke2fs parameters that were passed when the filesystem was originally created
are used again. (With the -n option added, of course!)".
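The whole procedure can be sketched on a throwaway image file (requires e2fsprogs; the backup superblock location depends on the filesystem parameters, so treat the block number 8193 as illustrative for a 1 KiB blocksize rather than universal):

```shell
# Make a small ext2 filesystem in a regular file (-F: target is not a block device)
truncate -s 16M /tmp/fs.img
mke2fs -F -q -b 1024 /tmp/fs.img

# -n shows where the backup superblocks would go without writing anything
mke2fs -F -n -b 1024 /tmp/fs.img | grep -i superblock

# Check the filesystem using the first backup superblock; -B gives its blocksize.
# e2fsck exits 1 when it merely made corrections, hence the || true.
e2fsck -fy -b 8193 -B 1024 /tmp/fs.img || true
```

On a real system the same two commands would target the unmounted partition (e.g. /dev/sdX1) instead of /tmp/fs.img, with the parameters the filesystem was originally created with.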

Installation

Both e2fsck and dumpe2fs are included in the base Arch install as part of e2fsprogs.

See also
 e2fsck man page: http://phpunixman.sourceforge.net/index.php/man/e2fsck/8
 dumpe2fs man page:
http://phpunixman.sourceforge.net/index.php?parameter=dumpe2fs&mode=man

Working with raw disk images

This article or section is a candidate for merging with QEMU.


If you have backed up a drive using ddrescue or dd and you need to mount this image as a
physical drive, see this section.

Mount the entire disk

To mount a complete disk image to the next free loop device, use the losetup command:

# losetup -f -P /path/to/image
Tip:

 The -f flag mounts the image to the next available loop device.
 The -P flag creates additional devices for every partition.

See also more information about loop devices.

Mounting partitions

In order to mount a partition of a whole disk image, follow the steps above.

Once the whole disk image is mounted, a normal mount command can be used on the loop
device:

# mount /dev/loop0p1 /mnt/example

This command mounts the first partition of the image attached to loop0 at the mountpoint
/mnt/example. Remember that the mountpoint directory must exist!

Getting disk geometry

Once the entire disk image has been mounted as a loopback device, its drive layout can be
inspected.
Using QEMU to Repair NTFS

If a disk image contains one or more NTFS partitions that need to be checked (chkdsk) by
Windows, since no good NTFS filesystem checker for Linux exists, QEMU can use the raw disk
image as a real hard disk inside a virtual machine:

# qemu -hda /path/to/primary.img -hdb /path/to/DamagedDisk.img

Then, assuming Windows is installed on primary.img, it can be used to check partitions on
/path/to/DamagedDisk.img.

Warning: Do not use an older version of Windows to check NTFS partitions created by a newer
version, e.g. Windows XP can damage NTFS partitions created by Windows 8 by "fixing" metadata
it does not understand; unsupported entries may be removed or misconfigured.

Text file recovery


It is possible to find deleted plain text files on a hard drive by directly searching on the block
device. A preferably unique string from the file you are trying to recover is needed.

Use grep to search for fixed strings (-F) directly on the partition:

$ grep -a -C 200 -F 'Unique string in text file' /dev/sdXN > OutputFile

Hopefully, the content of the deleted file is now in OutputFile, which can be extracted from the
surrounding context manually.

Note: The -C 200 option tells grep to print 200 lines of context from before and after each match of
the string. Alternatives are the -A and -B flags, which print context only from after and before each
match, respectively. You may need to adjust the number of lines if the file you are looking for is very
long.
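The same technique can be tried safely on a regular file first (the file and string below are invented for the demonstration; the real command targets the block device /dev/sdXN):

```shell
# Fake "raw device" containing our lost text surrounded by other data
printf 'binary junk\nUnique string in text file\nrest of the lost document\nmore junk\n' > /tmp/rawdump

# Same flags as above, with a smaller context window for the demo
grep -a -C 1 -F 'Unique string in text file' /tmp/rawdump > /tmp/OutputFile
cat /tmp/OutputFile
```

The output contains the matched line plus one line of context on each side; on a real partition a wide window like -C 200 is needed because the file's blocks may not be adjacent to the match.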

See also

GRUB Legacy
 1 Installation
 2 Upgrading to GRUB2
o 2.1 Is upgrading necessary?
o 2.2 How to upgrade
o 2.3 Differences
 2.3.1 Backup important data
o 2.4 Converting GRUB Legacy's config file to the new format
o 2.5 Restore GRUB Legacy
 3 Configuration
o 3.1 Finding GRUB's root
o 3.2 Dual booting with Windows
o 3.3 Dual booting with GNU/Linux
o 3.4 chainloader and configfile
o 3.5 Dual booting with GNU/Linux (GRUB2)
 4 Bootloader installation
o 4.1 Manual recovery of GRUB libs
o 4.2 General notes about bootloader installation
o 4.3 Installing to the MBR
o 4.4 Installing to a partition
o 4.5 Alternate method (grub-install)
 5 Tips and tricks
o 5.1 Graphical boot
o 5.2 Framebuffer resolution
 5.2.1 GRUB recognized value
 5.2.2 hwinfo
o 5.3 Naming partitions
 5.3.1 By Label
 5.3.2 By UUID
o 5.4 Boot as root (single-user mode)
o 5.5 Password protection
o 5.6 Restart with named boot choice
o 5.7 LILO and GRUB interaction
o 5.8 GRUB boot disk
o 5.9 Hide GRUB menu
 6 Advanced debugging
 7 Troubleshooting
o 7.1 GRUB Error 17
o 7.2 /boot/grub/stage1 not read correctly
o 7.3 Accidental install to a Windows partition
o 7.4 Edit GRUB entries in the boot menu
o 7.5 device.map error
o 7.6 KDE reboot pull-down menu fails
o 7.7 GRUB fails to find or install to any virtio /dev/vd* or other non-BIOS devices
 8 See also

Installation
GRUB Legacy has been dropped from the official repositories in favor of GRUB version 2.x but
is still available from the grub-legacyAUR package.

Additionally, GRUB must be installed to the boot sector of a drive or partition to serve as a
bootloader. This is covered in the Bootloader installation section.

Upgrading to GRUB2
Is upgrading necessary?

The short answer is No. GRUB legacy will not be removed from your system and will stay fully
functional.

However, as with any other package that is no longer supported, bugs are unlikely to be fixed.
You should therefore consider upgrading to GRUB version 2.x, or to one of the other supported
boot loaders.

GRUB Legacy does not support GPT disks, the Btrfs filesystem, or UEFI firmware.

How to upgrade

Upgrading from GRUB Legacy to GRUB version 2.x is much the same as installing GRUB on a
running Arch Linux. Detailed instructions are covered here.

Differences

 There are differences in the commands of GRUB Legacy and GRUB. Familiarize yourself with
GRUB commands before proceeding (e.g. "find" has been replaced with "search").
 GRUB is now modular and no longer requires "stage 1.5". As a result, the bootloader itself is
limited -- modules are loaded from the hard drive as needed to expand functionality (e.g. for
LVM or RAID support).
 Device naming has changed between GRUB Legacy and GRUB. Partitions are numbered from 1
instead of 0 while drives are still numbered from 0, and prefixed with partition-table type. For
example, /dev/sda1 would be referred to as (hd0,msdos1) (for MBR) or (hd0,gpt1) (for
GPT).
 GRUB is noticeably bigger than GRUB legacy (occupies ~13 MB in /boot). If you are booting
from a separate /boot partition, and this partition is smaller than 32 MB, you will run into disk
space issues, and pacman will refuse to install new kernels.

Backup important data

Although a GRUB installation should run smoothly, it is strongly recommended to keep the
GRUB Legacy files before upgrading to GRUB v2.

# mv /boot/grub /boot/grub-legacy

Backup the MBR which contains the boot code and partition table (replace /dev/sdX with your
actual disk path):

# dd if=/dev/sdX of=/path/to/backup/mbr_backup bs=512 count=1

Only 446 bytes of the MBR contain boot code, the next 64 contain the partition table. If you do
not want to overwrite your partition table when restoring, it is strongly advised to backup only
the MBR boot code:
# dd if=/dev/sdX of=/path/to/backup/bootcode_backup bs=446 count=1
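The byte counts can be verified on a scratch file (real runs target /dev/sdX; the file here is a harmless stand-in):

```shell
# Stand-in for the disk so the dd commands are safe to execute
truncate -s 1M /tmp/fakedisk

dd if=/tmp/fakedisk of=/tmp/mbr_backup bs=512 count=1 2>/dev/null       # whole MBR
dd if=/tmp/fakedisk of=/tmp/bootcode_backup bs=446 count=1 2>/dev/null  # boot code only

ls -l /tmp/mbr_backup /tmp/bootcode_backup   # 512 and 446 bytes respectively
```

The 446-byte copy stops just before the partition table (bytes 446-509), which is what makes it the safer file to restore later.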

If unable to install GRUB2 correctly, see Restore GRUB Legacy.

Converting GRUB Legacy's config file to the new format

If grub-mkconfig fails, convert your /boot/grub/menu.lst file to /boot/grub/grub.cfg using:

# grub-menulst2cfg /boot/grub/menu.lst /boot/grub/grub.cfg


Note: This option works only in BIOS systems, not in UEFI systems.

For example:

/boot/grub/menu.lst

default=0
timeout=5

title Arch Linux Stock Kernel
root (hd0,0)
kernel /vmlinuz-linux root=/dev/sda2 ro
initrd /initramfs-linux.img

title Arch Linux Stock Kernel Fallback
root (hd0,0)
kernel /vmlinuz-linux root=/dev/sda2 ro
initrd /initramfs-linux-fallback.img
/boot/grub/grub.cfg

set default='0'; if [ x"$default" = xsaved ]; then load_env; set default="$saved_entry"; fi
set timeout=5

menuentry 'Arch Linux Stock Kernel' {
set root='(hd0,1)'; set legacy_hdbias='0'
legacy_kernel '/vmlinuz-linux' '/vmlinuz-linux' 'root=/dev/sda2' 'ro'
legacy_initrd '/initramfs-linux.img' '/initramfs-linux.img'
}

menuentry 'Arch Linux Stock Kernel Fallback' {
set root='(hd0,1)'; set legacy_hdbias='0'
legacy_kernel '/vmlinuz-linux' '/vmlinuz-linux' 'root=/dev/sda2' 'ro'
legacy_initrd '/initramfs-linux-fallback.img' '/initramfs-linux-fallback.img'
}

If you forgot to create a GRUB /boot/grub/grub.cfg config file and simply rebooted into
GRUB Command Shell, type:

sh:grub> insmod legacycfg
sh:grub> legacy_configfile ${prefix}/menu.lst

Boot into Arch and re-create the proper GRUB /boot/grub/grub.cfg config file.

Restore GRUB Legacy

 Move GRUB v2 files out of the way:

# mv /boot/grub /boot/grub.nonfunctional

 Copy GRUB Legacy back to /boot:

# cp -af /boot/grub-legacy /boot/grub

 Restore the backed-up MBR (boot code and partition table):

Warning: This command also restores the partition table, so be careful of overwriting a modified
partition table with the old one. It will mess up your system.
# dd if=/path/to/backup/mbr_backup of=/dev/sdX bs=512 count=1

A safer way is to restore only the MBR boot code:

# dd if=/path/to/backup/bootcode_backup of=/dev/sdX bs=446 count=1

Configuration
The configuration file is located at /boot/grub/menu.lst. Edit this file to suit your needs.

 timeout # -- time to wait (in seconds) before the default operating system is automatically
loaded.
 default # -- the default boot entry that is chosen when the timeout has expired.

An example configuration (/boot is on a separate partition):

/boot/grub/menu.lst

# Config file for GRUB - The GNU GRand Unified Bootloader


# /boot/grub/menu.lst

# DEVICE NAME CONVERSIONS


#
# Linux GRUB
# -------------------------
# /dev/fd0 (fd0)
# /dev/sda (hd0)
# /dev/sdb2 (hd1,1)
# /dev/sda3 (hd0,2)
#
# FRAMEBUFFER RESOLUTION SETTINGS
# +-------------------------------------------------+
# | 640x480 800x600 1024x768 1280x1024
# ----+--------------------------------------------
# 256 | 0x301=769 0x303=771 0x305=773 0x307=775
# 32K | 0x310=784 0x313=787 0x316=790 0x319=793
# 64K | 0x311=785 0x314=788 0x317=791 0x31A=794
# 16M | 0x312=786 0x315=789 0x318=792 0x31B=795
# +-------------------------------------------------+
# for more details and different resolutions see
# https://wiki.archlinux.org/index.php/GRUB#Framebuffer_Resolution

# general configuration:
timeout 5
default 0
color light-blue/black light-cyan/blue

# boot sections follow


# each is implicitly numbered from 0 in the order of appearance below
#
# TIP: If you want a 1024x768 framebuffer, add "vga=773" to your kernel line.
#
#-*

# (0) Arch Linux
title Arch Linux
root (hd0,0)
kernel /vmlinuz-linux root=/dev/sda3 ro
initrd /initramfs-linux.img

# (1) Windows
#title Windows
#rootnoverify (hd0,0)
#makeactive
#chainloader +1

Finding GRUB's root

GRUB must be told where its files reside on the system, since multiple instances may exist (i.e.,
in multi-boot environments). GRUB files always reside under /boot, which may be on a
dedicated partition.

Note: GRUB defines storage devices differently than conventional kernel naming does.

 Hard disks are defined as (hdX); this also refers to any USB storage devices.
 Device and partition numbering begin at zero. For example, the first hard disk recognized in
the BIOS will be defined as (hd0). The second device will be called (hd1). This also applies to
partitions. So, the second partition on the first hard disk will be defined as (hd0,1).

If you are unaware of the location of /boot, use the GRUB shell find command to locate the
GRUB files. Enter the GRUB shell as root by:
# grub

The following example is for systems without a separate /boot partition, wherein /boot is
merely a directory under /:

grub> find /boot/grub/stage1

The following example is for systems with a separate /boot partition:

grub> find /grub/stage1

GRUB will find the file, and output the location of the stage1 file. For example:

grub> find /grub/stage1

(hd0,0)

This value should be entered on the root line in your configuration file. Type quit to exit the
shell.

Dual booting with Windows

Add the following to the end of your /boot/grub/menu.lst (assuming that your Windows
partition is on the first partition of the first drive):

/boot/grub/menu.lst

title Windows
rootnoverify (hd0,0)
makeactive
chainloader +1
Note:

 If you are attempting to dual-boot with Windows 7, you should comment out the line
makeactive.
 Windows 2000 and later versions do NOT need to be on the first partition to boot (contrary to
popular belief). If the Windows partition changes (i.e. if you add a partition before the Windows
partition), you will need to edit the Windows boot.ini file to reflect the change (see this
article for details on how to do that).

If Windows is located on another hard disk, the map command must be used. This will make your
Windows install think it is actually on the first drive. Assuming that your Windows partition is
on the first partition of the second drive:

/boot/grub/menu.lst

title Windows
map (hd0) (hd1)
map (hd1) (hd0)
rootnoverify (hd1,0)
makeactive
chainloader +1
Note: If you are attempting to dual-boot with Windows 7, you should comment out the line
makeactive.

Dual booting with GNU/Linux

This can be done the same way that an Arch Linux install is defined. For example:

/boot/grub/menu.lst

title Other Linux
root (hd0,2)
kernel /path/to/kernel root=/dev/sda3 ro
initrd /path/to/initrd
Note: There may be other options that are required, and an initial RAM disk may not be used. Examine
the other distribution's /boot/grub/menu.lst to match boot options, or see chainloader and
configfile (recommended).

chainloader and configfile

To facilitate system maintenance, the chainloader or configfile command should be used to
boot another Linux distribution that provides an "automagic" GRUB configuration mechanism
(e.g. Debian, Ubuntu, openSUSE). This allows the distribution to manage its own menu.lst and
boot options.

 The chainloader command will load another bootloader (rather than a kernel image); useful if
another bootloader is installed in a partition's boot sector (GRUB, for example). This allows one
to install a "main" instance of GRUB to the MBR and distribution-specific instances of GRUB to
each partition boot record (PBR).

 The configfile command will instruct the currently running GRUB instance to load the
specified configuration file. This can be used to load another distribution's menu.lst without a
separate GRUB installation. The caveat of this approach is that other menu.lst may not be
compatible with the installed version of GRUB; some distributions heavily patch their versions of
GRUB.

For example, GRUB is to be installed to the MBR and some other bootloader (be it GRUB or
LILO) is already installed to the boot sector of (hd0,2).

---------------------------------------------
| | | | % |
| M | | | B % |
| B | (hd0,0) | (hd0,1) | L % (hd0,2) |
| R | | | % |
| | | | % |
---------------------------------------------
| ^
| chainloading |
-----------------------------

One can simply include in menu.lst:

title Other Linux
root (hd0,2)
chainloader +1

Or, if the bootloader on (hd0,2) is GRUB:

title Other Linux
root (hd0,2)
configfile /boot/grub/menu.lst

The chainloader command can also be used to load the MBR of a second drive:

title Other drive
rootnoverify (hd1)
chainloader +1

Dual booting with GNU/Linux (GRUB2)

If the other Linux distribution uses GRUB2 (e.g. Ubuntu 9.10+), and you installed its boot loader
to its root partition, you can add an entry like this one to your /boot/grub/menu.lst:

/boot/grub/menu.lst

# other Linux using GRUB2
title Ubuntu
root (hd0,2)
kernel /boot/grub/core.img

Selecting this entry at boot will load the other distribution's GRUB2 menu assuming that the
distribution is installed on /dev/sda3.

Bootloader installation
Manual recovery of GRUB libs

The *stage* files are expected to be in /boot/grub, which may not be the case if the bootloader
was not installed during system installation or if the partition/filesystem was damaged,
accidentally deleted, etc.

Manually copy the GRUB libs like so:

# cp -a /usr/lib/grub/i386-pc/* /boot/grub
Note: Do not forget to mount the system's boot partition if your setup uses a separate one! The above
assumes that either the boot partition resides on the root filesystem or is mounted to /boot on the root
file system!

General notes about bootloader installation

GRUB may be installed from a separate medium (e.g. a LiveCD), or directly from a running
Arch install. The GRUB bootloader is seldom required to be reinstalled and installation is not
necessary when:

 The configuration file is updated.
 The grub-legacyAUR package is updated.

Installation is necessary when:

 A bootloader is not already installed.
 Another operating system overwrites the Linux bootloader.
 The bootloader fails for some unknown reason.

Before continuing, a few notes:

 Be sure that your GRUB configuration is correct (/boot/grub/menu.lst) before proceeding. Refer
to Finding GRUB's root to ensure your devices are defined correctly.
 GRUB must be installed on the MBR (first sector of the hard disk), or the first partition of the
first storage device to be recognized by most BIOSes. To allow individual distributions the ability
to manage their own GRUB menus, multiple instances of GRUB can be used, see chainloader and
configfile.
 Installing the GRUB bootloader may need to be done from within a chrooted environment (e.g.
into the installed environment from a separate medium) for cases like RAID configurations or if you
forgot/broke your GRUB installation. You will need to Change root from a LiveCD or another
Linux installation to do so.

First, enter the GRUB shell:

# grub

Use the root command with the output from the find command (see Finding GRUB's root) to
instruct GRUB which partition contains stage1 (and therefore, /boot):

grub> root (hd1,0)


Tip: The GRUB shell supports tab-completion. If you type 'root (hd' and press Tab twice, you will
see the available storage devices; the same works for partitions. Tab-completion also works from
the GRUB boot menu: if there is an error in your configuration file, you can edit it in the boot menu
and use tab-completion to help find devices and partitions. See #Edit GRUB entries in the boot menu.

Installing to the MBR


The following example installs GRUB to the MBR of the first drive:

grub> setup (hd0)

Installing to a partition

The following example installs GRUB to the first partition of the first drive:

grub> setup (hd0,0)

After running setup, enter quit to exit the shell. If you chrooted, exit your chroot and unmount
partitions. Now reboot to test.

Alternate method (grub-install)

Note: This procedure is known to be less reliable; the recommended method is to use the GRUB shell.

Use the grub-install command followed by the location to install the bootloader. For example
to install the GRUB bootloader to the MBR of the first drive:

# grub-install /dev/sda

GRUB will indicate whether it installed successfully. If it did not, use the GRUB shell method.

Tips and tricks


Additional configuration notes.

Graphical boot

For those desiring eye candy, see grub-gfx. GRUB also offers enhanced graphical capabilities,
such as background images and bitmap fonts.

Framebuffer resolution

One can set the framebuffer resolution via the vga= value in menu.lst, and you may want to run an
LCD wide-screen at its full native resolution. Here is how to achieve this:

On Wikipedia, there is a list of extended framebuffer resolutions (i.e. beyond the ones in the
VBE standard). However, a code taken from that table (for example vga=867 for 1440x900) may not
work. This is because graphics card manufacturers are free to choose any number they wish, as
these codes are not part of the VBE 3 standard, which is why they change from one card to another
(possibly even for the same manufacturer).
So instead of using that table, use one of the tools mentioned below to get the correct
code:

GRUB recognized value

This is an easy way to find the resolution code using only GRUB itself.

On the kernel line, specify that the kernel should ask you which mode to use.

kernel /vmlinuz-linux root=/dev/sda1 ro vga=ask

Now reboot. You will be presented with a list of suitable codes to use and the option to scan for
even more.

You can pick the code you would like to use (do not forget it, it is needed for the next step) and
boot using it.

Now replace ask in the kernel line with the correct one you have picked.

For example, the kernel line for [369] 1680x1050x32 would be:

kernel /vmlinuz-linux root=/dev/sda1 ro vga=0x369

hwinfo

1. Install the hwinfo package.
2. Run hwinfo --framebuffer as root.
3. Pick the code corresponding to the desired resolution.
4. Use the 6-digit code with the 0x prefix in the vga= kernel option in menu.lst, or convert it to
decimal to avoid the 0x prefix.

Example output of hwinfo:

Mode 0x0364: 1440x900 (+1440), 8 bits
Mode 0x0365: 1440x900 (+5760), 24 bits

And the kernel line:

kernel /vmlinuz-linux root=/dev/sda1 ro vga=0x0365
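The hex-to-decimal conversion mentioned in step 4 can be done directly in the shell; 0x0365 here is just the example code from the hwinfo output above:

```shell
# Convert the framebuffer mode code 0x0365 to decimal, so vga= can be
# written without the 0x prefix (vga=869 instead of vga=0x0365).
printf '%d\n' 0x0365   # prints 869
```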

Naming partitions

By Label

If you alter (or plan to alter) partition sizes from time to time, you might want to consider
referring to your drives/partitions by label. You can label ext2, ext3 and ext4 partitions with:

e2label /dev/drive|partition label

The label can be up to 16 characters long but must not contain spaces for GRUB to understand
it. Then define it in your menu.lst:

kernel /boot/vmlinuz-linux root=/dev/disk/by-label/Arch_Linux ro

By UUID

The UUID (Universally Unique IDentifier) of a partition may be discovered with blkid or
ls -l /dev/disk/by-uuid. It is defined in menu.lst with either:

kernel /boot/vmlinuz-linux root=/dev/disk/by-uuid/uuid_number

or:

kernel /boot/vmlinuz-linux root=UUID=uuid_number
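Putting it together, a complete menu.lst entry using the root=UUID= form might look like this (the UUID shown is a placeholder; substitute the value reported by blkid for your own root partition):

```
title Arch Linux
root (hd0,0)
kernel /boot/vmlinuz-linux root=UUID=0a3407de-014b-458b-b5c1-848e92a327a3 ro
initrd /boot/initramfs-linux.img
```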

Boot as root (single-user mode)

At the boot loader, select an entry and edit it (e key). Append the following parameters to the
kernel options:

[...] single init=/bin/bash

This will start the system in single-user mode (init 1), i.e. you will end up at a root prompt without
being asked for a password. This may be useful for recovery tasks, like resetting the root password.
However, it is a huge security flaw if you have not set up #Password protection for GRUB.

Password protection

You can enable password protection in the GRUB configuration file for operating systems you
wish to have protected. Bootloader password protection may be desired if your BIOS lacks such
functionality and you need the extra security.

First, choose a password you can remember and then encrypt it:

# grub-md5-crypt

Password:
Retype password:
$1$ZOGor$GABXUQ/hnzns/d5JYqqjw

Then add your password to the beginning of the GRUB configuration file at
/boot/grub/menu.lst (the password must be at the beginning of the configuration file for
GRUB to be able to recognize it):

# general configuration
timeout 5
default 0
color light-blue/black light-cyan/blue

password --md5 $1$ZOGor$GABXUQ/hnzns/d5JYqqjw


Note: Remember that GRUB uses the standard QWERTY layout for input.

Then for each operating system you wish to protect, add the lock command:

# (0) Arch Linux
title Arch Linux
lock
root (hd0,1)
kernel /boot/vmlinuz-linux root=/dev/disk/by-label/Arch_Linux ro
initrd /boot/initramfs-linux.img
Warning: If you disable booting from other boot devices (like a CD drive) in the BIOS's settings and then
password protect all your operating system entries, it could be difficult to re-enable booting back into
the operating systems if the password is forgotten.

It is always possible to reset the BIOS settings by setting the appropriate jumper on the
motherboard (see your motherboard's manual, as this is specific to every model). So if others have
physical access to the hardware, there is essentially no way to prevent them from bypassing the
boot protection.

Restart with named boot choice

If you often need to switch to some other non-default OS (e.g. Windows), having to reboot and
wait for the GRUB menu to appear is tedious. GRUB offers a way to record your OS choice when
restarting instead of waiting for the menu, by designating a temporary new default which is reset
as soon as it has been used.

Supposing a simple menu.lst setup like this:

/boot/grub/menu.lst

# general configuration:
timeout 10
default 0
color light-blue/black light-cyan/blue

# (0) Arch
title Arch Linux
root (hd0,1)
kernel /boot/vmlinuz-linux root=/dev/disk/by-label/ARCH ro
initrd /boot/initramfs-linux.img

# (1) Windows
title Windows XP
rootnoverify (hd0,0)
makeactive
chainloader +1
Arch is the default (0). We want to restart into Windows. Change default 0 to default saved
-- this records the current default in a default file in the GRUB directory whenever the
savedefault command is used. Now add the line savedefault 0 to the bottom of the Windows
entry. Whenever Windows is booted, it will reset the default to Arch, so choosing Windows as the
default is only temporary.
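With those two changes applied, the relevant parts of the menu.lst above would read (a sketch; only the changed lines are shown with their surrounding context):

```
# general configuration:
timeout 10
default saved

# (1) Windows
title Windows XP
rootnoverify (hd0,0)
makeactive
chainloader +1
savedefault 0
```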

Now all that is needed is a way to easily change the default manually. This can be accomplished
using the command grub-set-default. So, to reboot into Windows, enter the following
commands:

# grub-set-default 1

Then reboot.

For ease of use, you might wish to implement the "Allow users to shutdown fix" (including
/sbin/grub-set-default amongst the commands the user is allowed to issue without
supplying a password).

LILO and GRUB interaction

If the LILO package is installed on your system, remove it, as some tasks (e.g. kernel
compilation using make all) will invoke LILO, which would then be installed over GRUB. LILO
may have been included in your base system, depending on your installer media version and
whether you selected/deselected it during the package selection stage.

Note: Removing liloAUR will not remove LILO from the MBR if it has been installed there; it will merely
remove the liloAUR package. The LILO bootloader installed to the MBR will be overwritten when GRUB (or
another bootloader) is installed over it.

GRUB boot disk

First, format a floppy disk:

# fdformat /dev/fd0
# mke2fs /dev/fd0

Now mount the disk:

# mount -t ext2 /dev/fd0 /mnt/fl

Install GRUB to the disk:

# grub-install --root-directory=/mnt/fl '(fd0)'

Copy your menu.lst file to the disk:

# cp /boot/grub/menu.lst /mnt/fl/boot/grub/menu.lst

Now unmount your floppy:

# umount /mnt/fl

Now you should be able to restart the computer with the disk in the drive and it should boot to
GRUB. Make sure, of course, that in your BIOS the floppy drive is set to a higher boot priority
than the hard drive.

See also: Super GRUB Disk.

Hide GRUB menu

The hiddenmenu option hides the menu by default: no menu is displayed, and the default entry is
selected automatically once the timeout passes. You can still press Esc to make the menu show
up. To use it, just add to your /boot/grub/menu.lst:

hiddenmenu

Advanced debugging
See dedicated article.

Troubleshooting
GRUB Error 17

Note: the solution below works also for GRUB Error 15

The first thing to check is whether any external drives are plugged in; unplug them. It seems
obvious, but is easy to overlook.

If your partition table gets messed up, an unpleasant "GRUB error 17" message might be the
only thing that greets you on your next reboot. There are a number of reasons why the partition
table could get messed up. Commonly, users who manipulate their partitions with GParted --
particularly logical drives -- can cause the order of the partitions to change. For example, you
delete /dev/sda6 and resize /dev/sda7, then finally re-create what used to be /dev/sda6 only
now it appears at the bottom of the list, /dev/sda9 for example. Although the physical order of
the partitions/logical drives has not changed, the order in which they are recognized has changed.

Fixing the partition table is easy. Boot from your Arch CD/DVD/USB, login as root and fix the
partition table:
# fdisk /dev/sda

Once in fdisk, enter e[x]tra/expert mode, [f]ix the partition order, then [w]rite the table and exit.

You can verify that the partition table was indeed fixed by issuing an fdisk -l. Now you just
need to fix GRUB. See the Bootloader installation section.

Basically you need to tell GRUB the correct location of your /boot then re-write GRUB to the
MBR on the disk.

For example:

# grub
grub> root (hd0,6)
grub> setup (hd0)
grub> quit

See [1] for a more in-depth summary of this section.

/boot/grub/stage1 not read correctly

If you see this error message while trying to set up GRUB, and you are not using a fresh partition
table, it is worth checking it.

# fdisk -l /dev/sda

This will show you the partition table for /dev/sda. So check here, whether the "Id" values of
your partitions are correct. The "System" column will show you the description of the "Id"
values.

If your boot partition is marked as being "HPFS/NTFS", for example, then you have to change it
to "Linux". To do this, go to fdisk,

# fdisk /dev/sda

change a partition's system id with t, select your partition number and type in the new system id
(Linux = 83). You can also list all available system ids by typing L instead of a system id.

If you have changed a partition's system id, you should [v]erify your partition table and then
[w]rite it.

Now try to set up GRUB again.

See also the forum post reporting this problem.

Accidental install to a Windows partition


If you accidentally install GRUB to a Windows partition, GRUB will write some information to
the boot sector of the partition, erasing the reference to the Windows bootloader. (This is true for
NTLDR, the bootloader for Windows XP and earlier; later versions may behave differently.)

To fix this you will need to use the Windows Recovery Console for your Windows release.
Because many computer manufacturers do not include it with their product (many use a recovery
partition instead), Microsoft has made the Recovery Console available for download. If you use
XP, see this page on turning the floppy disks into a Recovery CD. Boot the Recovery CD (or enable
Windows Recovery mode) and run fixboot to repair the partition boot sector. After this, you
will have to install GRUB again---this time to the MBR, not to the Windows partition---to boot
Linux.

See Dual boot with Windows#Restoring a Windows boot record for more information.

Edit GRUB entries in the boot menu

Once you have selected an entry in the boot menu, you can edit it by pressing the e key. Use
tab-completion if you need to discover devices, then Esc to exit. You can then try to boot the
entry by pressing b.

Note: These settings will not be saved.

device.map error

If an error is raised mentioning /boot/grub/device.map during installation or boot, run:

# grub-install --recheck /dev/sda

to force GRUB to recheck the device map, even if it already exists. This may be necessary after
resizing partitions or adding/removing drives.

KDE reboot pull-down menu fails

If you have opened a sub-menu with the list of all operating systems configured in GRUB,
selected one, and upon restart, you still booted your default OS, then you might want to check if
you have the line:

default saved

in /boot/grub/menu.lst.

GRUB fails to find or install to any virtio /dev/vd* or other non-BIOS devices

GRUB may fail to install when Arch Linux is installed in a KVM virtual machine that uses a
virtio device for the hard drive. In that case, the following procedure can be used. Enter a virtual
console by typing Ctrl+Alt+F2 (or any other F-key for a free virtual console). This assumes that
your root file system is mounted at /mnt and the boot file system is either mounted at or stored in
/mnt/boot.

1. Ensure that all needed GRUB files are present in your boot directory (assuming it is mounted at
/mnt/boot), by issuing the command:

# ls /mnt/boot/grub

2. If the /mnt/boot/grub folder already contains all the needed files, jump to step 3. Otherwise,
run the following commands (replacing /mnt, your_kernel and your_initrd with the real paths
and file names). You should also have the menu.lst file written to this folder:

# mkdir -p /mnt/boot/grub # if the folder is not yet present
# cp -r /boot/grub/stage1 /boot/grub/stage2 /mnt/boot/grub
# cp -r your_kernel your_initrd /mnt/boot

3. Start the GRUB shell with the following command:

# grub --device-map=/dev/null

4. Enter the following commands. Replace /dev/vda, and (hd0,0) with the correct device and
partition corresponding to your setup.

device (hd0) /dev/vda
root (hd0,0)
setup (hd0)
quit

5. If GRUB reports no error messages, you are probably done. You will also need to add
appropriate modules to the ramdisk; for more information, refer to QEMU#Preparing an
(Arch) Linux guest.

 1 General procedures
o 1.1 Attention to detail
o 1.2 Questions/checklist
o 1.3 Be more specific
o 1.4 Additional support
 2 Boot problems
o 2.1 Console messages
 2.1.1 Flow control
 2.1.2 Scrollback
 2.1.3 Debug output
o 2.2 Recovery shells
o 2.3 Blank screen with Intel video
o 2.4 Stuck while loading the kernel
o 2.5 Debugging kernel modules
o 2.6 Debugging hardware
o 2.7 See also
 3 Package management
 4 fuser
 5 Session permissions
 6 error while loading shared libraries
 7 file: could not find any magic files!
 8 See also

General procedures
Attention to detail

In order to resolve an issue, it is crucial to have a firm basic understanding of how the specific
subsystem functions. How does it work, and what does it need to run without error? If you cannot
comfortably answer these questions, review the ArchWiki article for the subsystem you are having
trouble with. Once you feel you have understood it, it will be easier to pinpoint the cause of the
problem.

Questions/checklist

The following is a checklist of questions to work through whenever dealing with a malfunctioning
system. Under each question are notes explaining how to answer it, followed by some brief
examples of how to gather output and which tools can be used to review logs and the journal.

1. What is the issue(s)?

Be as precise as possible. This will help you not get confused and/or side-tracked when looking
up specific information.

2. Are there any error messages?

Copy and paste full outputs that contain error messages related to your issue into a separate
file, such as $HOME/issue.log. For example, to forward the output of the following mkinitcpio
command to $HOME/issue.log:
$ mkinitcpio -p linux >> $HOME/issue.log

3. Can you reproduce the issue?

If so, give exact step-by-step instructions/commands needed to do so.

4. When did you first encounter these issues, and what was changed between then and when the
system was operating without error?

If the issue occurred right after an update, list all packages that were updated, including version
numbers, and paste the entire update from pacman.log (/var/log/pacman.log). Also note the
status of any services needed to support the malfunctioning application, using systemd's
systemctl tools. For example, to forward the output of the following systemctl command to
$HOME/issue.log:
$ systemctl status dhcpcd@eth0.service >> $HOME/issue.log
Note: Using >> will ensure any previous text in $HOME/issue.log will not be overwritten.
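The difference between > and >> can be demonstrated with a quick test (using a throwaway file name):

```shell
echo "first"  >  /tmp/issue-demo.log   # '>' truncates the file first
echo "second" >> /tmp/issue-demo.log   # '>>' appends, keeping "first"
cat /tmp/issue-demo.log                # prints "first" then "second"
```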

Be more specific

When attempting to resolve an issue, never approach it as:

Application X does not work.

Instead, look at it in its entirety:

Application X produces Y error(s) when performing Z tasks under conditions A and B.

Additional support

With all the information in front of you, you should have a good idea as to what is going on with
the system, and you can now start working on a proper fix.

If you require any additional support, it can be found on the forums or on IRC at irc.freenode.net
#archlinux. See IRC channels for other options.

When asking for support post the complete output/logs, not just what you think are the
significant sections. Sources of information include:

 Full output of any command involved - don't just select what you think is relevant.
 Output from systemd's journalctl. For more extensive output, use the
systemd.log_level=debug boot parameter.
 Log files (have a look in /var/log)
 Relevant configuration files
 Drivers involved
 Versions of packages involved
 Kernel: dmesg. For a boot problem, at least the last 10 lines displayed, preferably more
 Networking: Exact output of commands involved, and any configuration files
 Xorg: /var/log/Xorg.0.log, and prior logs if you have overwritten the problematic one
 Pacman: If a recent upgrade broke something, look in /var/log/pacman.log

One of the better ways to post this information is to use an online pastebin. You can install the
pbpst or gist package to automatically upload information. For example, to upload the content of
your systemd journal from this boot you would do:

# journalctl -xb | pbpst -S


A link will then be output that you can paste to the forum or IRC.

Additionally, before posting your question, you may wish to review how to ask smart questions.
See also Code of conduct.

Boot problems
Diagnosing errors during the boot process involves changing the kernel parameters, and
rebooting the system.

If booting the system is not possible, boot from a live image and change root to the existing
system.

Console messages

After the boot process, the screen is cleared and the login prompt appears, leaving users unable
to read init output and error messages. This default behavior may be modified using methods
outlined in the sections below.

Note that regardless of the chosen option, kernel messages can be displayed for inspection after
booting by using dmesg or all logs from the current boot with journalctl -b.

Flow control

This is basic management that applies to most terminal emulators, including virtual consoles
(vc):

 Press Ctrl+S to pause the output
 And Ctrl+Q to resume it

This pauses not only the output, but also programs which try to print to the terminal, as they will
block on the write() calls for as long as the output is paused. If your init appears frozen, make
sure the system console is not paused.

To see error messages which are already displayed, see Getty#Have boot messages stay on tty1.

Scrollback

Scrollback allows the user to go back and view text which has scrolled off the screen of a text
console. This is made possible by a buffer created between the video adapter and the display
device called the scrollback buffer. By default, the key combinations of Shift+PageUp and
Shift+PageDown scroll the buffer up and down.

If scrolling up all the way does not show you enough information, you need to expand your
scrollback buffer to hold more output. This is done by tweaking the kernel's framebuffer console
(fbcon) with the kernel parameter fbcon=scrollback:Nk, where N is the desired buffer size in
kilobytes. The default size is 32k.
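For example, a menu.lst kernel line requesting a 128-kilobyte scrollback buffer might look like this (the root device is a placeholder for your own setup):

```
kernel /vmlinuz-linux root=/dev/sda1 ro fbcon=scrollback:128k
```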

If this does not work, your framebuffer console may not be properly enabled. Check the
Framebuffer Console documentation for other parameters, e.g. for changing the framebuffer
driver.

Debug output

Most kernel messages are hidden during boot. You can see more of these messages by adding
different kernel parameters. The simplest ones are:

 debug enables debug messages for both the kernel and systemd
 ignore_loglevel forces all kernel messages to be printed

Other parameters you can add that might be useful in certain situations are:

 earlyprintk=vga,keep prints kernel messages very early in the boot process, in case the
kernel crashes before output is shown. Change vga to efi on EFI systems
 log_buf_len=16M allocates a larger (16MB) kernel message buffer, to ensure that debug
output is not overwritten

There are also a number of separate debug parameters for enabling debugging in specific
subsystems e.g. bootmem_debug, sched_debug. Check the kernel parameter documentation for
specific information.
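Combining several of these, a temporary debugging kernel line might look like this (edit it at the boot menu rather than committing it to menu.lst; the root device is a placeholder):

```
kernel /vmlinuz-linux root=/dev/sda1 ro debug ignore_loglevel log_buf_len=16M
```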

Note: If you cannot scroll back far enough to view the desired boot output, you should increase the size
of the scrollback buffer.

Recovery shells

Getting an interactive shell at some stage in the boot process can help you pinpoint exactly where
and why something is failing. There are several kernel parameters for doing so, but they all
launch a normal shell which you can exit to let the kernel resume what it was doing:

 rescue launches a shell shortly after the root filesystem is remounted read/write
 emergency launches a shell even earlier, before most filesystems are mounted
 init=/bin/sh (as a last resort) changes the init program to a root shell. rescue and
emergency both rely on systemd, but this should work even if systemd is broken

Another option is systemd's debug-shell which adds a root shell on tty9 (accessible with
Ctrl+Alt+F9). It can be enabled by either adding systemd.debug-shell to the kernel
parameters, or by enabling debug-shell.service. Take care to disable the service when done
to avoid the security risk of leaving a root shell open on every boot.

Blank screen with Intel video


This is most likely due to a problem with kernel mode setting. Try disabling modesetting or
changing the video port.

Stuck while loading the kernel

Try disabling ACPI by adding the acpi=off kernel parameter.

Debugging kernel modules

See Kernel modules#Obtaining information.

Debugging hardware

 You can display extra debugging information about your hardware by following udev#Debug
output.
 Ensure that Microcode updates are applied on your system.
 Test your device's RAM with Memtest86+. Unstable RAM may lead to some extremely odd
issues, ranging from random crashes to data corruption.

See also

 List of Tools for UBCD - Can be added to custom menu.lst like memtest
 Wikipedia's page on BIOS Boot partition
 QA/Sysrq - Using sysrq
 systemd documentation: Debug Logging to a Serial Console
 How to Isolate Linux ACPI Issues

Package management
See Pacman#Troubleshooting for general topics, and pacman/Package signing#Troubleshooting
for issues with PGP keys.

fuser


fuser is a command-line utility for identifying processes using resources such as files, filesystems
and TCP/UDP ports.

fuser is provided by the psmisc package, which should be already installed as part of the base
group.
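As a quick sketch of typical usage (the paths and port below are examples only): hold a file open in a background process, then ask fuser who is using it.

```shell
# Create a file and keep it open in a background process.
tmpfile=$(mktemp)
sleep 5 < "$tmpfile" &
holder=$!

# fuser prints the PIDs of processes using the file (the path itself goes
# to stderr); -v adds a table with the user, PID and access type.
fuser "$tmpfile"
fuser -v "$tmpfile"

# fuser can also name the process bound to a TCP port, e.g. port 80:
# fuser -n tcp 80
kill "$holder"
```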
Session permissions
Note: You must be using systemd as your init system for local sessions to work.[1] It is required for
polkit permissions and ACLs for various devices (see /usr/lib/udev/rules.d/70-uaccess.rules
and [2])

First, make sure you have a valid local session within X:

$ loginctl show-session $XDG_SESSION_ID

This should contain Remote=no and Active=yes in the output. If it does not, make sure that X
runs on the same tty where the login occurred. This is required in order to preserve the logind
session.

A D-Bus session should also be started along with X. See D-Bus#Starting the user session for
more information on this.

Basic polkit actions do not require further set-up. Some polkit actions require further
authentication, even with a local session. A polkit authentication agent needs to be running for
this to work. See polkit#Authentication agents for more information on this.

error while loading shared libraries


If, while using a program, you get an error similar to:

error while loading shared libraries: libusb-0.1.so.4: cannot open shared object file: No such file or directory

Use pacman or pkgfile to search for the package that owns the missing library:

$ pacman -Fs libusb-0.1.so.4

extra/libusb-compat 0.1.5-1
usr/lib/libusb-0.1.so.4

In this case, the libusb-compat package needs to be installed.


The error could also mean that the package that you used to install your program does not list the
library as a dependency in its PKGBUILD (if it is an official package, report a bug; if it is an
AUR package, report it to the maintainer using its page on the AUR website), or that the program
needs to be rebuilt after a soname bump.

file: could not find any magic files!


Example: after a routine update or the installation of a package, you are given the following
error:

# file: could not find any magic files!

This will most likely leave your system crippled: any attempt to recompile/reinstall the
package(s) responsible for the breakage will fail, and any attempt to rebuild the initramfs will
result in the following:

# mkinitcpio -p linux
==> Building image from preset: 'default'
  -> -k /boot/vmlinuz-linux -c /etc/mkinitcpio.conf -g /boot/initramfs-linux.img
file: could not find any magic files!
==> ERROR: invalid kernel specifier: `/boot/vmlinuz-linux'
==> Building image from preset: 'fallback'
  -> -k /boot/vmlinuz-linux -c /etc/mkinitcpio.conf -g /boot/initramfs-linux-fallback.img -S autodetect
file: could not find any magic files!
==> ERROR: invalid kernel specifier: `/boot/vmlinuz-linux'

Typically a previously installed application had placed a configuration file within
/etc/ld.so.conf.d/ or had made changes to /etc/ld.so.conf which are now invalid.

1. Boot into the Arch Linux live CD / installation media.
2. Mount your root (/) partition to /mnt and chroot into your system using arch-chroot.

Note: arch-chroot leaves mounting the /boot partition up to the user.

3. Examine /etc/ld.so.conf and remove any invalid lines found.
4. Examine the files located inside the directory /etc/ld.so.conf.d/ and remove all invalid
files.
5. Rebuild the initramfs:

# mkinitcpio -p linux

6. Reboot back into your installed system.
7. Once booted, reinstall the package that was responsible for leaving your system inoperable
using:

# pacman -S <package>

See also
 1 Configuration
 2 Security
 3 Networking
o 3.1 Improving performance
o 3.2 TCP/IP stack hardening
 4 Virtual memory
 5 MDADM
 6 Troubleshooting
o 6.1 Small periodic system freezes
 7 See also

Configuration
Note: From version 207 and 21x, systemd only applies settings from /etc/sysctl.d/*.conf
and /usr/lib/sysctl.d/*.conf. If you had customized /etc/sysctl.conf, you need to
rename it as /etc/sysctl.d/99-sysctl.conf. If you had e.g. /etc/sysctl.d/foo, you need
to rename it to /etc/sysctl.d/foo.conf.

The sysctl preload/configuration file can be created at /etc/sysctl.d/99-sysctl.conf. For
systemd, /etc/sysctl.d/ and /usr/lib/sysctl.d/ are drop-in directories for kernel sysctl
parameters. The naming and source directory decide the order of processing, which is important
since the last parameter processed may override earlier ones. For example, parameters in
/usr/lib/sysctl.d/50-default.conf will be overridden by equal parameters in
/etc/sysctl.d/50-default.conf and any configuration file processed later from either
directory.

To load all configuration files manually, execute

# sysctl --system

which will also output the applied hierarchy. A single parameter file can also be loaded explicitly
with

# sysctl -p filename.conf

See the new configuration files and more specifically sysctl.d(5) for more information.
The parameters available are those listed under /proc/sys/. For example, the kernel.sysrq
parameter refers to the file /proc/sys/kernel/sysrq on the file system. The sysctl -a
command can be used to display all currently available values.

Note: If you have the kernel documentation installed (linux-docs), you can find detailed
information about sysctl settings in /usr/lib/modules/$(uname -
r)/build/Documentation/sysctl/. It is highly recommended to read these before changing
sysctl settings.

Settings can be changed through file manipulation or using the sysctl utility. For example, to
temporarily enable the magic SysRq key:

# sysctl kernel.sysrq=1

or:

# echo "1" > /proc/sys/kernel/sysrq

To preserve changes between reboots, add or modify the appropriate lines in
/etc/sysctl.d/99-sysctl.conf or another applicable parameter file in /etc/sysctl.d/.
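For example, to make the SysRq setting from above survive reboots, the drop-in file could contain (a minimal sketch):

```
# /etc/sysctl.d/99-sysctl.conf
kernel.sysrq = 1
```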

Tip: Some parameters that can be applied may depend on kernel modules which in turn might
not be loaded. For example parameters in /proc/sys/net/bridge/* depend on the
br_netfilter module. If it is not loaded at runtime (or after a reboot), those will silently not be
applied. See Kernel modules.
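As a sketch of that workflow, the module can be listed in a modules-load.d drop-in so that a matching sysctl.d file applies cleanly at boot. Both file names below are illustrative examples, not required names; net.bridge.bridge-nf-call-iptables is a real bridge parameter, but the value shown is only an example:

/etc/modules-load.d/br_netfilter.conf

br_netfilter

/etc/sysctl.d/90-bridge.conf

## applies only once br_netfilter is loaded
net.bridge.bridge-nf-call-iptables = 0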

Security
See Security#Kernel hardening.

Networking
Improving performance

# The maximum size of the receive queue.
# The received frames will be stored in this queue after taking them from the
# ring buffer on the NIC.
# Use a high value for high speed cards to prevent losing packets.
# In real time applications like a SIP router, a long queue must be paired
# with a high speed CPU, otherwise the data in the queue will be out of date
# (old).
net.core.netdev_max_backlog = 65536
# The maximum ancillary buffer size allowed per socket.
# Ancillary data is a sequence of struct cmsghdr structures with appended
# data.
net.core.optmem_max = 65536
# The upper limit on the value of the backlog parameter passed to the listen
# function.
# Setting it higher is only needed on a single high-loaded server where the
# new connection rate is high/bursty.
net.core.somaxconn = 16384
# The default and maximum amount for the receive/send socket memory.
# By default the Linux network stack is not configured for high speed large
# file transfer across WAN links; this is done to save memory resources.
# You can easily tune the Linux network stack by increasing the network
# buffer sizes for high-speed networks that connect server systems, to handle
# more network packets.
net.core.rmem_default = 1048576
net.core.wmem_default = 1048576
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.udp_rmem_min = 16384
net.ipv4.udp_wmem_min = 16384
# An extension to the transmission control protocol (TCP) that helps reduce
# network latency by enabling data to be exchanged during the sender's
# initial TCP SYN.
# If both your server and client run Linux 3.7.1 or later, you can turn on
# fast_open for lower latency.
net.ipv4.tcp_fastopen = 3
# The maximum queue length of pending connections 'Waiting Acknowledgment'.
# In the event of a SYN flood DOS attack, this queue can fill up quickly, at
# which point tcp_syncookies will kick in, allowing your system to continue
# responding to legitimate traffic and letting you block malicious IPs.
# If the server suffers from overloads at peak times, you may want to
# increase this value a little.
net.ipv4.tcp_max_syn_backlog = 65536
# The maximum number of sockets in 'TIME_WAIT' state.
# After reaching this number the system will start destroying sockets in
# this state.
# Increase this to prevent simple DOS attacks.
net.ipv4.tcp_max_tw_buckets = 65536
# Whether TCP should start at the default window size only for new
# connections or also for existing connections that have been idle for too
# long.
# It kills persistent single-connection performance and should be turned off.
net.ipv4.tcp_slow_start_after_idle = 0
# Whether TCP should reuse an existing connection in the TIME-WAIT state for
# a new outgoing connection, if the new timestamp is strictly bigger than the
# most recent timestamp recorded for the previous connection.
# This helps avoid running out of available network sockets.
net.ipv4.tcp_tw_reuse = 1
# Fast-fail FIN connections which are useless.
net.ipv4.tcp_fin_timeout = 15
# TCP keepalive is a mechanism for TCP connections that helps determine
# whether the other end has stopped responding.
# TCP sends keepalive probes containing null data to the network peer several
# times after a period of idle time. If the peer does not respond, the socket
# is closed automatically.
# By default, the TCP keepalive process waits two hours (7200 secs) for
# socket activity before sending the first keepalive probe, and then resends
# it every 75 seconds. As long as TCP/IP socket communication is going on and
# active, no keepalive packets are needed.
# With the following settings, your application will detect dead TCP
# connections after 120 seconds (60s + 10s + 10s + 10s + 10s + 10s + 10s).
net.ipv4.tcp_keepalive_time = 60
net.ipv4.tcp_keepalive_intvl = 10
net.ipv4.tcp_keepalive_probes = 6
# The longer the MTU the better for performance, but the worse for
# reliability: a lost packet means more data to retransmit, and many routers
# on the Internet cannot deliver very long packets.
# Enable smart MTU discovery when an ICMP black hole is detected.
net.ipv4.tcp_mtu_probing = 1
# Turn timestamps off to reduce performance spikes related to timestamp
# generation.
net.ipv4.tcp_timestamps = 0
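The 120-second figure quoted in the keepalive comment above is just arithmetic on the three settings, one idle period plus six probe intervals:

```shell
# Dead-peer detection time = keepalive_time + keepalive_intvl * keepalive_probes
idle=60; intvl=10; probes=6
echo "$(( idle + intvl * probes ))s"   # -> 120s
```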

 You can use http://www.speedtest.net to benchmark internetwork performance
before/after the change.

TCP/IP stack hardening

The following specifies a parameter set to tighten network security options of the kernel for the
IPv4 protocol and related IPv6 parameters where an equivalent exists.

For some use cases, for example using the system as a router, other parameters may be useful or
required as well.

/etc/sysctl.d/51-net.conf

#### ipv4 networking and equivalent ipv6 parameters ####

## TCP SYN cookie protection (default)
## helps protect against SYN flood attacks
## only kicks in when net.ipv4.tcp_max_syn_backlog is reached
net.ipv4.tcp_syncookies = 1

## protect against tcp time-wait assassination hazards
## drop RST packets for sockets in the time-wait state
## (not widely supported outside of linux, but conforms to RFC)
net.ipv4.tcp_rfc1337 = 1

## sets the kernel's reverse path filtering mechanism to value 1 (on)
## will do source validation of the packets received from all the interfaces
## on the machine
## protects from attackers that are using ip spoofing methods to do harm
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1

## tcp timestamps
## + protect against wrapping sequence numbers (at gigabit speeds)
## + round trip time calculation implemented in TCP
## - causes extra overhead and allows uptime detection by scanners like nmap
## enable @ gigabit speeds
net.ipv4.tcp_timestamps = 0
#net.ipv4.tcp_timestamps = 1

## log martian packets
net.ipv4.conf.default.log_martians = 1
net.ipv4.conf.all.log_martians = 1

## ignore echo broadcast requests to prevent being part of smurf attacks
## (default)
net.ipv4.icmp_echo_ignore_broadcasts = 1

## ignore bogus icmp errors (default)
net.ipv4.icmp_ignore_bogus_error_responses = 1

## send redirects (not a router, disable it)
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.all.send_redirects = 0

## ICMP routing redirects (only secure)
#net.ipv4.conf.default.secure_redirects = 1 (default)
#net.ipv4.conf.all.secure_redirects = 1 (default)
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv6.conf.default.accept_redirects = 0
net.ipv6.conf.all.accept_redirects = 0
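After loading such a file, the applied values can be compared against it with a short script. This is a simplistic sketch (it assumes a Linux /proc/sys and only understands plain "key = value" lines, skipping blanks and comments):

```shell
# Compare the parameters in a sysctl.d-style file against the running kernel.
check_sysctl_file() {
    while IFS='=' read -r key val; do
        key=$(printf '%s' "$key" | tr -d ' \t')
        case "$key" in ''|'#'*) continue ;; esac   # skip blanks and comments
        path="/proc/sys/$(printf '%s' "$key" | tr . /)"
        if [ -r "$path" ]; then
            printf '%s: want %s, have %s\n' "$key" \
                "$(printf '%s' "$val" | tr -d ' \t')" "$(cat "$path")"
        else
            printf '%s: not present on this kernel\n' "$key"
        fi
    done < "$1"
}
if [ -r /etc/sysctl.d/51-net.conf ]; then
    check_sysctl_file /etc/sysctl.d/51-net.conf
fi
```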

Virtual memory
There are several key parameters to tune the operation of the virtual memory (VM) subsystem of
the Linux kernel and the write out of dirty data to disk. See the official Linux kernel
documentation for more information. For example:

 vm.dirty_ratio = 3

Contains, as a percentage of total available memory that contains free pages and
reclaimable pages, the number of pages at which a process which is generating disk
writes will itself start writing out dirty data.

 vm.dirty_background_ratio = 2

Contains, as a percentage of total available memory that contains free pages and
reclaimable pages, the number of pages at which the background kernel flusher threads
will start writing out dirty data.
As noted in the comments for the parameters, one needs to consider the total amount of RAM
when setting these values. For example, simplifying by taking the installed system RAM instead
of available memory:

 Consensus is that setting vm.dirty_ratio to 10% of RAM is a sane value if RAM is say
1 GB (so 10% is 100 MB). But if the machine has much more RAM, say 16 GB (10% is
1.6 GB), the percentage may be out of proportion as it becomes several seconds of
writeback on spinning disks. A more sane value in this case is 3 (3% of 16 GB is
approximately 491 MB).
 Similarly, setting vm.dirty_background_ratio to 5 may be just fine for small memory
values, but again, consider and adjust accordingly for the amount of RAM on a particular
system.
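The figures above are plain percentage arithmetic and can be sanity-checked directly (RAM in MiB, using installed RAM as the simplification in the text):

```shell
ram_mib=$((16 * 1024))             # a 16 GB machine
echo "$(( ram_mib * 10 / 100 ))"   # vm.dirty_ratio=10 -> 1638 MiB (~1.6 GB)
echo "$(( ram_mib * 3 / 100 ))"    # vm.dirty_ratio=3  -> 491 MiB
```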

Another parameter is:

 vm.vfs_cache_pressure = 60

The value controls the tendency of the kernel to reclaim the memory which is used for
caching of directory and inode objects (VFS cache). Lowering it from the default value of
100 makes the kernel less inclined to reclaim VFS cache (do not set it to 0, this may
produce out-of-memory conditions).

MDADM
When the kernel performs a resync operation on a software RAID device, it tries not to create a
high system load by restricting the speed of the operation. Using sysctl it is possible to change
the lower and upper speed limits.

# Set maximum and minimum speed of raid resyncing operations
dev.raid.speed_limit_max = 10000
dev.raid.speed_limit_min = 1000

If mdadm is compiled as a module (md_mod), the above settings are available only after the
module has been loaded. If the settings should be applied at boot via /etc/sysctl.d, the md_mod
module may be loaded beforehand through /etc/modules-load.d.
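That combination can be sketched as a pair of drop-in files; the file names below are illustrative, not mandated:

/etc/modules-load.d/md_mod.conf

md_mod

/etc/sysctl.d/90-raid.conf

dev.raid.speed_limit_max = 10000
dev.raid.speed_limit_min = 1000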

Troubleshooting
Small periodic system freezes

Set dirty bytes to a small enough value (for example 4M):

vm.dirty_background_bytes = 4194304
vm.dirty_bytes = 4194304

Try to change kernel.io_delay_type (x86 only):


 0 - IO_DELAY_TYPE_0X80
 1 - IO_DELAY_TYPE_0XED
 2 - IO_DELAY_TYPE_UDELAY
 3 - IO_DELAY_TYPE_NONE

See also
 1 Configuration
o 1.1 Syslinux
o 1.2 systemd-boot
o 1.3 GRUB
o 1.4 GRUB Legacy
o 1.5 LILO
o 1.6 rEFInd
o 1.7 EFISTUB
o 1.8 Hijacking cmdline
 2 Parameter list
 3 See also

Configuration
Note:

 You can check the parameters your system was booted up with by running cat
/proc/cmdline and seeing if it includes your changes.
 The Arch Linux installation medium uses Syslinux for BIOS systems, and systemd-boot
for UEFI systems.

Kernel parameters can be set either temporarily by editing the boot menu when it shows up, or
by modifying the boot loader's configuration file.

The following examples add the quiet and splash parameters to Syslinux, systemd-boot,
GRUB, GRUB Legacy, LILO, and rEFInd.

Syslinux

 Press Tab when the menu shows up and add them at the end of the string:

linux /boot/vmlinuz-linux root=/dev/sda3 initrd=/boot/initramfs-linux.img quiet splash

Press Enter to boot with these parameters.

 To make the change persistent after reboot, edit /boot/syslinux/syslinux.cfg and
add them to the APPEND line:

APPEND root=/dev/sda3 quiet splash

For more information on configuring Syslinux, see the Syslinux article.

systemd-boot

 Press e when the menu appears and add the parameters to the end of the string:

initrd=\initramfs-linux.img root=/dev/sda2 quiet splash

Press Enter to boot with these parameters.
Note: If you have not set a value for menu timeout, you will need to hold Space while booting
for the systemd-boot menu to appear.

 To make the change persistent after reboot, edit /boot/loader/entries/arch.conf
(assuming you set up your EFI System Partition) and add them to the options line:

options root=/dev/sda2 quiet splash

For more information on configuring systemd-boot, see the systemd-boot article.

GRUB

 Press e when the menu shows up and add them on the linux line:

linux /boot/vmlinuz-linux root=UUID=978e3e81-8048-4ae1-8a06-aa727458e8ff quiet splash

Press Ctrl+x to boot with these parameters.

 To make the change persistent after reboot, while you could manually edit
/boot/grub/grub.cfg with the exact line from above, the best practice is to:

Edit /etc/default/grub and append your kernel options to the
GRUB_CMDLINE_LINUX_DEFAULT line:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
And then automatically re-generate the grub.cfg file with:
# grub-mkconfig -o /boot/grub/grub.cfg

For more information on configuring GRUB, see the GRUB article.

GRUB Legacy

 Press e when the menu shows up and add them on the kernel line:

kernel /boot/vmlinuz-linux root=/dev/sda3 quiet splash

Press b to boot with these parameters.
 To make the change persistent after reboot, edit /boot/grub/menu.lst and add them to
the kernel line, exactly like above.

For more information on configuring GRUB Legacy, see the GRUB Legacy article.

LILO

 Add them to /etc/lilo.conf:

image=/boot/vmlinuz-linux
...
append="quiet splash"

For more information on configuring LILO, see the LILO article.

rEFInd

 To make the change persistent after reboot, edit /boot/refind_linux.conf and append
them to all/required lines, for example

"Boot using default options" "root=PARTUUID=978e3e81-8048-4ae1-8a06-aa727458e8ff rw quiet splash"

 If you have disabled auto-detection of OSes in rEFInd and are defining OS stanzas
instead in esp/EFI/refind/refind.conf to load your OSes, you can edit it like:

menuentry "Arch Linux" {
    ...
    options "root=PARTUUID=978e3e81-8048-4ae1-8a06-aa727458e8ff rw quiet splash"
    ...
}

For more information on configuring rEFInd, see the rEFInd article.

EFISTUB

See EFISTUB#Using UEFI directly.

Hijacking cmdline

Even without access to your bootloader it is possible to change your kernel parameters to enable
debugging (if you have root access). This can be accomplished by overwriting /proc/cmdline
which stores the kernel parameters. However /proc/cmdline is not writable even as root, so this
hack is accomplished by using a bind mount to mask the path.

First create a file containing the desired kernel parameters:

/root/cmdline

root=/dev/disk/by-label/ROOT ro console=tty1 logo.nologo debug

Then use a bind mount to overwrite the parameters

# mount -n --bind -o ro /root/cmdline /proc/cmdline

The -n option skips adding the mount to /etc/mtab, so it will work even if root is mounted
read-only. You can cat /proc/cmdline to confirm that your change was successful.

Parameter list
This list is not comprehensive. For a complete list of all options, please see the kernel
documentation.

Parameter             Description
root=                 Root filesystem.
rootflags=            Root filesystem mount options.
ro                    Mount root device read-only on boot (default [1]).
rw                    Mount root device read-write on boot.
initrd=               Specify the location of the initial ramdisk.
init=                 Run specified binary instead of /sbin/init (symlinked to systemd in Arch) as init process.
init=/bin/sh          Boot to shell.
systemd.unit=         Boot to a specified target.
resume=               Specify a swap device to use when waking from hibernation.
nomodeset             Disable kernel mode setting.
zswap.enabled         Enable zswap.
video=<videosetting>  Override framebuffer video defaults.

[1] mkinitcpio uses ro as the default value when neither rw nor ro is set by the boot loader.
Boot loaders may set the value to use; for example, GRUB uses rw by default (see FS#36275 as a
reference).

See also
01.org

Applying Patches To The Linux Kernel — The Linux Kernel documentation

Original by:

Jesper Juhl, August 2005

Note

This document is obsolete. In most cases, rather than using patch manually, you’ll almost
certainly want to look at using Git instead.

A frequently asked question on the Linux Kernel Mailing List is how to apply a patch to the
kernel or, more specifically, what base kernel a patch for one of the many trees/branches should
be applied to. Hopefully this document will explain this to you.

In addition to explaining how to apply and revert patches, a brief description of the different
kernel trees (and examples of how to apply their specific patches) is also provided.

What is a patch?
A patch is a small text document containing a delta of changes between two different versions of
a source tree. Patches are created with the diff program.

To correctly apply a patch you need to know what base it was generated from and what new
version the patch will change the source tree into. These should both be present in the patch file
metadata or be possible to deduce from the filename.

How do I apply or revert a patch?


You apply a patch with the patch program. The patch program reads a diff (or patch) file and
makes the changes to the source tree described in it.

Patches for the Linux kernel are generated relative to the parent directory holding the kernel
source dir.

This means that paths to files inside the patch file contain the name of the kernel source
directories it was generated against (or some other directory names like “a/” and “b/”).

Since this is unlikely to match the name of the kernel source dir on your local machine (but is
often useful info to see what version an otherwise unlabeled patch was generated against) you
should change into your kernel source directory and then strip the first element of the path from
filenames in the patch file when applying it (the -p1 argument to patch does this).
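The -p1 stripping itself is nothing magical; it just drops the first component of each path named in the patch before patch looks the file up. A shell illustration (plain string handling, not the patch program; the path is a made-up example):

```shell
# A file path as it might appear inside a patch file:
p="linux-4.7/drivers/net/ethernet/foo.c"
# What -p1 reduces it to before looking the file up in the current directory:
echo "${p#*/}"    # -> drivers/net/ethernet/foo.c
# -p0 would leave the path untouched; -p2 would strip 'drivers/' as well.
```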
To revert a previously applied patch, use the -R argument to patch. So, if you applied a patch like
this:

patch -p1 < ../patch-x.y.z

You can revert (undo) it like this:

patch -R -p1 < ../patch-x.y.z

How do I feed a patch/diff file to patch?


This (as usual with Linux and other UNIX like operating systems) can be done in several
different ways.

In all the examples below I feed the file (in uncompressed form) to patch via stdin using the
following syntax:

patch -p1 < path/to/patch-x.y.z

If you just want to be able to follow the examples below and don’t want to know of more than
one way to use patch, then you can stop reading this section here.

Patch can also get the name of the file to use via the -i argument, like this:

patch -p1 -i path/to/patch-x.y.z

If your patch file is compressed with gzip or xz and you don’t want to uncompress it before
applying it, then you can feed it to patch like this instead:

xzcat path/to/patch-x.y.z.xz | patch -p1

zcat path/to/patch-x.y.z.gz | patch -p1

If you wish to uncompress the patch file by hand first before applying it (what I assume you’ve
done in the examples below), then you simply run gunzip or xz on the file – like this:

gunzip patch-x.y.z.gz
xz -d patch-x.y.z.xz

Which will leave you with a plain text patch-x.y.z file that you can feed to patch via stdin or the
-i argument, as you prefer.

A few other nice arguments for patch are -s which causes patch to be silent except for errors
which is nice to prevent errors from scrolling out of the screen too fast, and --dry-run which
causes patch to just print a listing of what would happen, but doesn’t actually make any changes.
Finally --verbose tells patch to print more information about the work being done.

Common errors when patching


When patch applies a patch file it attempts to verify the sanity of the file in different ways.

Checking that the file looks like a valid patch file and checking the code around the bits being
modified matches the context provided in the patch are just two of the basic sanity checks patch
does.

If patch encounters something that doesn’t look quite right it has two options. It can either refuse
to apply the changes and abort or it can try to find a way to make the patch apply with a few
minor changes.

One example of something that’s not ‘quite right’ that patch will attempt to fix up is if all the
context matches, the lines being changed match, but the line numbers are different. This can
happen, for example, if the patch makes a change in the middle of the file but for some reasons a
few lines have been added or removed near the beginning of the file. In that case everything
looks good it has just moved up or down a bit, and patch will usually adjust the line numbers and
apply the patch.

Whenever patch applies a patch that it had to modify a bit to make it fit it’ll tell you about it by
saying the patch applied with fuzz. You should be wary of such changes since even though patch
probably got it right it doesn’t /always/ get it right, and the result will sometimes be wrong.

When patch encounters a change that it can’t fix up with fuzz it rejects it outright and leaves a
file with a .rej extension (a reject file). You can read this file to see exactly what change
couldn’t be applied, so you can go fix it up by hand if you wish.

If you don’t have any third-party patches applied to your kernel source, but only patches from
kernel.org and you apply the patches in the correct order, and have made no modifications
yourself to the source files, then you should never see a fuzz or reject message from patch. If you
do see such messages anyway, then there’s a high risk that either your local source tree or the
patch file is corrupted in some way. In that case you should probably try re-downloading the
patch and if things are still not OK then you’d be advised to start with a fresh tree downloaded in
full from kernel.org.

Let’s look a bit more at some of the messages patch can produce.

If patch stops and presents a File to patch: prompt, then patch could not find a file to be
patched. Most likely you forgot to specify -p1 or you are in the wrong directory. Less often,
you’ll find patches that need to be applied with -p0 instead of -p1 (reading the patch file should
reveal if this is the case – if so, then this is an error by the person who created the patch but is not
fatal).

If you get Hunk #2 succeeded at 1887 with fuzz 2 (offset 7 lines). or a message
similar to that, then it means that patch had to adjust the location of the change (in this example it
needed to move 7 lines from where it expected to make the change to make it fit).
The resulting file may or may not be OK, depending on the reason the file was different than
expected.

This often happens if you try to apply a patch that was generated against a different kernel
version than the one you are trying to patch.

If you get a message like Hunk #3 FAILED at 2387., then it means that the patch could not be
applied correctly and the patch program was unable to fuzz its way through. This will generate a
.rej file with the change that caused the patch to fail and also a .orig file showing you the
original content that couldn’t be changed.

If you get Reversed (or previously applied) patch detected! Assume -R? [n] then
patch detected that the change contained in the patch seems to have already been made.

If you actually did apply this patch previously and you just re-applied it in error, then just say
[n]o and abort this patch. If you applied this patch previously and actually intended to revert it,
but forgot to specify -R, then you can say [y]es here to make patch revert it for you.

This can also happen if the creator of the patch reversed the source and destination directories
when creating the patch, and in that case reverting the patch will in fact apply it.

A message similar to patch: **** unexpected end of file in patch or patch


unexpectedly ends in middle of line means that patch could make no sense of the file you
fed to it. Either your download is broken, you tried to feed patch a compressed patch file without
uncompressing it first, or the patch file that you are using has been mangled by a mail client or
mail transfer agent along the way somewhere, e.g., by splitting a long line into two lines. Often
these warnings can easily be fixed by joining (concatenating) the two lines that had been split.

As I already mentioned above, these errors should never happen if you apply a patch from
kernel.org to the correct version of an unmodified source tree. So if you get these errors with
kernel.org patches then you should probably assume that either your patch file or your tree is
broken and I’d advise you to start over with a fresh download of a full kernel tree and the patch
you wish to apply.

Are there any alternatives to patch?


Yes there are alternatives.

You can use the interdiff program (http://cyberelk.net/tim/patchutils/) to generate a patch
representing the differences between two patches and then apply the result.

This will let you move from something like 4.7.2 to 4.7.3 in a single step. The -z flag to interdiff
will even let you feed it patches in gzip or bzip2 compressed form directly without the use of
zcat or bzcat or manual decompression.

Here’s how you’d go from 4.7.2 to 4.7.3 in a single step:


interdiff -z ../patch-4.7.2.gz ../patch-4.7.3.gz | patch -p1

Although interdiff may save you a step or two you are generally advised to do the additional
steps since interdiff can get things wrong in some cases.

Another alternative is ketchup, which is a python script for automatic downloading and applying
of patches (http://www.selenic.com/ketchup/).

Other nice tools are diffstat, which shows a summary of changes made by a patch; lsdiff, which
displays a short listing of affected files in a patch file, along with (optionally) the line numbers of
the start of each patch; and grepdiff, which displays a list of the files modified by a patch where
the patch contains a given regular expression.

Where can I download the patches?


The patches are available at http://kernel.org/. Most recent patches are linked from the front
page, but they also have specific homes.

The 4.x.y (-stable) and 4.x patches live at

The -rc patches live at

The 4.x kernels


These are the base stable releases released by Linus. The highest numbered release is the most
recent.

If regressions or other serious flaws are found, then a -stable fix patch will be released (see
below) on top of this base. Once a new 4.x base kernel is released, a patch is made available that
is a delta between the previous 4.x kernel and the new one.

To apply a patch moving from 4.6 to 4.7, you’d do the following (note that such patches do NOT
apply on top of 4.x.y kernels but on top of the base 4.x kernel – if you need to move from 4.x.y
to 4.x+1 you need to first revert the 4.x.y patch).

Here are some examples:

# moving from 4.6 to 4.7

$ cd ~/linux-4.6                  # change to kernel source dir
$ patch -p1 < ../patch-4.7        # apply the 4.7 patch
$ cd ..
$ mv linux-4.6 linux-4.7 # rename source dir

# moving from 4.6.1 to 4.7

$ cd ~/linux-4.6.1                # change to kernel source dir
$ patch -p1 -R < ../patch-4.6.1   # revert the 4.6.1 patch
# source dir is now 4.6
$ patch -p1 < ../patch-4.7 # apply new 4.7 patch
$ cd ..
$ mv linux-4.6.1 linux-4.7 # rename source dir

The 4.x.y kernels


Kernels with 3-digit versions are -stable kernels. They contain small(ish) critical fixes for
security problems or significant regressions discovered in a given 4.x kernel.

This is the recommended branch for users who want the most recent stable kernel and are not
interested in helping test development/experimental versions.

If no 4.x.y kernel is available, then the highest numbered 4.x kernel is the current stable kernel.

Note

The -stable team usually do make incremental patches available as well as patches against the
latest mainline release, but I only cover the non-incremental ones below. The incremental ones
can be found at https://www.kernel.org/pub/linux/kernel/v4.x/incr/

These patches are not incremental, meaning that for example the 4.7.3 patch does not apply on
top of the 4.7.2 kernel source, but rather on top of the base 4.7 kernel source.

So, in order to apply the 4.7.3 patch to your existing 4.7.2 kernel source you have to first back
out the 4.7.2 patch (so you are left with a base 4.7 kernel source) and then apply the new 4.7.3
patch.

Here’s a small example:

$ cd ~/linux-4.7.2                # change to the kernel source dir
$ patch -p1 -R < ../patch-4.7.2   # revert the 4.7.2 patch
$ patch -p1 < ../patch-4.7.3 # apply the new 4.7.3 patch
$ cd ..
$ mv linux-4.7.2 linux-4.7.3 # rename the kernel source dir

The -rc kernels


These are release-candidate kernels. These are development kernels released by Linus whenever
he deems the current git (the kernel’s source management tool) tree to be in a reasonably sane
state adequate for testing.

These kernels are not stable and you should expect occasional breakage if you intend to run
them. This is however the most stable of the main development branches and is also what will
eventually turn into the next stable kernel, so it is important that it be tested by as many people as
possible.
This is a good branch to run for people who want to help out testing development kernels but do
not want to run some of the really experimental stuff (such people should see the sections about
-next and -mm kernels below).

The -rc patches are not incremental, they apply to a base 4.x kernel, just like the 4.x.y patches
described above. The kernel version before the -rcN suffix denotes the version of the kernel that
this -rc kernel will eventually turn into.

So, 4.8-rc5 means that this is the fifth release candidate for the 4.8 kernel and the patch should be
applied on top of the 4.7 kernel source.
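That naming rule (4.x-rcN applies on top of 4.(x-1)) can be computed mechanically, at least while the major version stays the same; the version string here is just an example:

```shell
# Derive the base kernel an -rc patch applies to: 4.8-rc5 -> 4.7
rc="4.8-rc5"
ver=${rc%%-*}      # version without the -rcN suffix: 4.8
major=${ver%%.*}   # 4
minor=${ver#*.}    # 8
echo "${major}.$(( minor - 1 ))"   # -> 4.7
```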

Here are 3 examples of how to apply these patches:

# first an example of moving from 4.7 to 4.8-rc3

$ cd ~/linux-4.7                  # change to the 4.7 source dir
$ patch -p1 < ../patch-4.8-rc3    # apply the 4.8-rc3 patch
$ cd ..
$ mv linux-4.7 linux-4.8-rc3 # rename the source dir

# now let's move from 4.8-rc3 to 4.8-rc5

$ cd ~/linux-4.8-rc3              # change to the 4.8-rc3 dir
$ patch -p1 -R < ../patch-4.8-rc3 # revert the 4.8-rc3 patch
$ patch -p1 < ../patch-4.8-rc5 # apply the new 4.8-rc5 patch
$ cd ..
$ mv linux-4.8-rc3 linux-4.8-rc5 # rename the source dir

# finally let's try and move from 4.7.3 to 4.8-rc5

$ cd ~/linux-4.7.3                # change to the kernel source dir
$ patch -p1 -R < ../patch-4.7.3   # revert the 4.7.3 patch
$ patch -p1 < ../patch-4.8-rc5 # apply new 4.8-rc5 patch
$ cd ..
$ mv linux-4.7.3 linux-4.8-rc5 # rename the kernel source dir

The -mm patches and the linux-next tree


The -mm patches are experimental patches released by Andrew Morton.

In the past, the -mm tree was also used to test subsystem patches, but this function is now done
via the linux-next tree (https://www.kernel.org/doc/man-pages/linux-next.html). The subsystem
maintainers push their patches first to linux-next and, during the merge window, send them
directly to Linus.

The -mm patches serve as a sort of proving ground for new features and other experimental
patches that aren't merged via a subsystem tree. Once such a patch has proved its worth in -mm
for a while, Andrew pushes it on to Linus for inclusion in mainline.
The linux-next tree is updated daily and includes the -mm patches. Both are in constant flux and
contain many experimental features and a lot of debugging patches not appropriate for mainline,
making them the most experimental of the branches described in this document.

These patches are not appropriate for use on systems that are supposed to be stable and they are
more risky to run than any of the other branches (make sure you have up-to-date backups – that
goes for any experimental kernel but even more so for -mm patches or using a Kernel from the
linux-next tree).

Testing of -mm patches and linux-next is greatly appreciated, since the whole point of those
trees is to weed out regressions, crashes, data corruption bugs, build breakage (and any other
bugs in general) before changes are merged into the more stable mainline Linus tree.

But testers of -mm and linux-next should be aware that breakages are more common than in any
other tree.

This concludes this list of explanations of the various kernel trees. I hope you are now clear on
how to apply the various patches and help testing the kernel.

Thank you’s to Randy Dunlap, Rolf Eike Beer, Linus Torvalds, Bodo Eggert, Johannes
Stezenbach, Grant Coady, Pavel Machek and others that I may have forgotten for their reviews
and contributions to this document.

yolinux.com

Linux Internet Server Security and Configuration Tutorial
Greg Ippolito

Basic Security Steps / Overview:

Perform the following steps to secure your web site:

 See distribution errata and security fixes (see the YoLinux home page for a list). [e.g. Red Hat
Linux Errata]
Update your system where appropriate.
o Red Hat/CentOS:
 yum check-update
(Print list of packages to be updated.)
 yum update
Note that this can be automated using the /etc/init.d/yum-updatesd service
(RHEL/CentOS 5) or by creating a cron job /etc/cron.daily/yum.cron:

#!/bin/sh
/usr/bin/yum -R 120 -e 0 -d 0 -y update yum
/usr/bin/yum -R 10 -e 0 -d 0 -y update

o Ubuntu/Debian:
 apt-get update
(Update package list to the latest version associated with that release of the
OS.)
 apt-get upgrade
 Reduce the number of network services exposed. These will be started by scripts in
/etc/rc.d/rc*.d/ directories. (See full list of services in: /etc/init.d/) There may
be no need to run sendmail (mail server), portmap (RPC listener required by NFS), lpd
(Line printer server daemon. Hackers probe my system for this service all the time.), innd
(News server), linuxconf etc. For example, sendmail can be removed from the boot
process using the command: chkconfig --del sendmail or by using the configuration
tool ntsysv. The service can be terminated using the command
/etc/rc.d/init.d/sendmail stop. At the very least one should run the command
chkconfig --list to see what processes are configured to be operable after boot-up.
See the YoLinux init process tutorial
 Verify your configuration. List the open ports and processes which hold them: netstat
-punta (Also try netstat -nlp)
 List RPC services: [root]# rpcinfo -p localhost
Ideally you would NOT be running portmapper so no RPC services would be available.
Turn off portmapper: service portmap stop (or: /etc/init.d/portmap stop) and
remove it from the system boot sequence: chkconfig --del portmap (Portmap is
required by NFS.)
 Anonymous FTP (Using wu_ftpd - Last shipped with RH 8.0. RH 9 and FC use vsftpd):
By default Red Hat comes configured for anonymous FTP. This allows users to ftp to
your server and log in with the login anonymous and use an email address as the
password. If you wish to turn off this feature edit the file /etc/ftpaccess and change:
class all real,guest,anonymous *
to
class all real,guest *
For more on FTP configuration see: YoLinux Web server FTP configuration tutorial
 Use the find command to locate vulnerabilities - find setuid and setgid files (which can
execute with the file owner's privileges, often root) as well as world writable files and directories. For example:
o find / -xdev \( -perm -4000 -o -perm -2000 \) -type f -print
Remove suid privileges on executable programs with the command: chmod -s
filename
o find / -xdev \( -nouser -o -nogroup \) -print
Find files not owned by a valid user or group.
 Use the command chattr and lsattr to make a sensitive security file un-modifiable over
and above the usual permissions.
Make a file un-modifiable: chattr +i /bin/ls
Make directories un-modifiable: chattr -R +i /bin /sbin /boot /lib
Make a file append only: chattr +a /var/log/messages

 Use "tripwire" [sourceforge: tripwire] for security monitoring of your system for signs of
unauthorized file changes. Tripwire is offered as part of the base Red Hat and Ubuntu
distributions. Tripwire configuration is covered below.
 Watch your log files especially /var/log/messages and /var/log/secure.
 Avoid generic account names such as guest.
 Use PAM network wrapper configurations to disallow passwords which can be found
easily by crack or other hacking programs. PAM authentication can also disallow root
network login access. (Default Red Hat configuration. You must login as a regular user
and su - to obtain root access. This is NOT the default for ssh and must be changed as
noted below.)
See YoLinux Network Admin Tutorial on using PAM
 Remote access should NOT be done with clear text telnet but with an encrypted
connection using ssh. (Later in this tutorial)
 Proc file settings for defense against attacks. This includes protective measures against IP
spoofing and SYN flood attacks (syncookies).
 DDoS (Distributed Denial of Service) attacks: The only thing you can do is have gobs of
bandwidth and processing power/firewall. Lots of processing power or a firewall are
useless without gobs of bandwidth as the network can get overloaded from a distributed
attack.
Also see:
o Turn off ICMP (look invisible to network scans)
o Monitor the attack with tcpdump

Unfortunately the packets are usually spoofed and in my case the FBI didn't care. If the
server is a remote server, have a dial-up modem or a second IP address and route for
access because the attacked route is blocked by the flood of network attacks. You can
also request that your ISP drop ICMP traffic to the IP addresses of your servers. (and
UDP if all you are running is a web server. DNS name servers use UDP.) For very
interesting reading see "The Strange Tale" of the GRC.com DDoS attack. (Very
interesting read about the anatomy of the hacker bot networks.)

 User access can be restricted with the following configuration files:


o /etc/security/limits.conf
o /etc/security/group.conf
o /etc/security/time.conf

See YoLinux SysAdmin tutorial - restrict users

 Remove un-needed users from the system. See /etc/passwd. By default Red Hat
installations have many user accounts created to support various processes. If you do not
intend to run these processes, remove the users, e.g. remove user ids games, uucp, rpc,
rpcd, ...
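The find-based audits in the list above can be wrapped into a small helper script. This is a sketch only; the audit_perms function name is my own, not a standard tool, and its checks mirror the find invocations given above:

```shell
#!/bin/sh
# Sketch: run the permission audits described above against one
# directory tree. "audit_perms" is a made-up helper name.
audit_perms() {
    dir="$1"
    echo "== setuid/setgid files under $dir =="
    find "$dir" -xdev \( -perm -4000 -o -perm -2000 \) -type f -print 2>/dev/null
    echo "== world-writable files under $dir =="
    find "$dir" -xdev -perm -0002 -type f -print 2>/dev/null
    echo "== files with no valid owner or group under $dir =="
    find "$dir" -xdev \( -nouser -o -nogroup \) -print 2>/dev/null
}
# Example: audit_perms /usr
```

Run it once per mount point (the -xdev flag keeps find from crossing filesystems) and review any setuid binary you cannot account for.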
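As a cross-check on the netstat output mentioned above, listening TCP sockets can also be read straight from /proc/net/tcp. A sketch; the listening_ports name and the optional table argument are my additions (the argument defaults to the real file and exists mainly so the function is easy to test against a saved copy):

```shell
#!/bin/sh
# Sketch: list listening TCP ports from the kernel's own socket table.
# In /proc/net/tcp, socket state 0A means LISTEN, and the local port is
# the hex number after the ':' in the local_address column.
listening_ports() {
    table="${1:-/proc/net/tcp}"
    awk 'NR > 1 && $4 == "0A" { split($2, a, ":"); print a[2] }' "$table" |
    while read -r hex; do
        printf '%d\n' "0x$hex"      # convert the hex port to decimal
    done | sort -nu
}
```

Any port listed here that netstat does not show is worth investigating.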
xinetd:

 It is best for security reasons that you reduce the number of inetd network services
exposed. The more services exposed, the greater your vulnerability. Reduce the number
of network services accessible through the xinetd or inetd daemon by:
o inetd: (Red Hat 7.0 and earlier) Comment out un-needed services in the
/etc/inetd.conf file.
Sample: (FTP is the only service I run)
o ftp stream tcp nowait root /usr/sbin/tcpd
in.ftpd -l -a

Restart the daemon to apply changes: /etc/rc.d/init.d/inetd restart

o xinetd: (Red Hat 7.1 and later) All network services are turned off by default
during an upgrade. Sample file: /etc/xinetd.d/wu-ftpd:
o service ftp
o {
o disable = yes - This line controls whether xinetd
enables the service; the default is disabled
o socket_type = stream
o wait = no
o user = root
o server = /usr/sbin/in.ftpd
o server_args = -l -a
o log_on_success += DURATION USERID
o log_on_failure += USERID
o nice = 10
o }

Turning on/off an xinetd service:

 Edit the file: /etc/xinetd.d/service-name


Changing to the line "disable = yes" turns off an xinetd service.
Changing to the line "disable = no" turns on an xinetd service.
Xinetd configuration must be performed for each and every file in the
directory /etc/xinetd.d/ in order to configure each and every network
service.
Restart the daemon to apply changes: /etc/rc.d/init.d/xinetd
restart
 You may also use the command:
chkconfig wu-ftpd on
OR
chkconfig wu-ftpd off
This will edit the appropriate file (/etc/xinetd.d/wu-ftpd) and restart
the xinetd process.

Tip:
 List init settings including all xinetd controlled services: chkconfig --
list
 List status of services (Red Hat/Fedora Core based systems): service --
status-all
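Since the disable line must be checked file by file, a short loop can summarize the whole directory at once. A sketch; the xinetd_summary name and the directory argument are mine (the argument defaults to /etc/xinetd.d and exists mainly to make the function testable):

```shell
#!/bin/sh
# Sketch: print the "disable =" state of every xinetd service file.
xinetd_summary() {
    dir="${1:-/etc/xinetd.d}"
    for f in "$dir"/*; do
        [ -f "$f" ] || continue
        # take the first "disable" line and strip whitespace from its value
        state=$(awk -F= '/disable/ { gsub(/[ \t]/, "", $2); print $2; exit }' "$f")
        printf '%s: disable = %s\n' "$(basename "$f")" "${state:-unset}"
    done
}
```

Any service reporting "disable = no" is reachable from the network and should be on your intended list.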

Kernel Configuration:

 Use Linux firewall rules to protect against attacks. (iptables: kernel 2.4, 2.6 or ipchains:
kernel 2.2) Access denial rules can also be implemented on the fly by portsentry.
(Place at the end of /etc/rc.d/rc.local to be executed upon system boot, or some
other appropriate script)
o iptables script:
o iptables -A INPUT -p tcp -s 0/0 -d 0/0 --dport 2049 -j DROP
- Block NFS
o iptables -A INPUT -p udp -s 0/0 -d 0/0 --dport 2049 -j DROP
- Block NFS
o iptables -A INPUT -p tcp -s 0/0 -d 0/0 --dport 6000:6009 -j DROP
- Block X-Windows
o iptables -A INPUT -p tcp -s 0/0 -d 0/0 --dport 7100 -j DROP
- Block X-Windows font server
o iptables -A INPUT -p tcp -s 0/0 -d 0/0 --dport 515 -j DROP
- Block printer port
o iptables -A INPUT -p udp -s 0/0 -d 0/0 --dport 515 -j DROP
- Block printer port
o iptables -A INPUT -p tcp -s 0/0 -d 0/0 --dport 111 -j DROP
- Block Sun rpc/NFS
o iptables -A INPUT -p udp -s 0/0 -d 0/0 --dport 111 -j DROP
- Block Sun rpc/NFS
o iptables -A INPUT -p all -s localhost -i eth0 -j DROP
- Deny outside packets from the internet which
claim to be from your loopback interface.

o ipchains script:
o # Allow loopback access. This rule must come before the rules
denying port access!!
o ipchains -A input -i lo -p all -j ACCEPT - This rule is
essential if you want your own computer
o ipchains -A output -i lo -p all -j ACCEPT to be able to
access itself through the loopback interface
o
o ipchains -A input -p tcp -s 0/0 -d 0/0 2049 -y -j REJECT -
Block NFS
o ipchains -A input -p udp -s 0/0 -d 0/0 2049 -j REJECT -
Block NFS
o ipchains -A input -p tcp -s 0/0 -d 0/0 6000:6009 -y -j REJECT -
Block X-Windows
o ipchains -A input -p tcp -s 0/0 -d 0/0 7100 -y -j REJECT -
Block X-Windows font server
o ipchains -A input -p tcp -s 0/0 -d 0/0 515 -y -j REJECT -
Block printer port
o ipchains -A input -p udp -s 0/0 -d 0/0 515 -j REJECT -
Block printer port
o ipchains -A input -p tcp -s 0/0 -d 0/0 111 -y -j REJECT -
Block Sun rpc/NFS
o ipchains -A input -p udp -s 0/0 -d 0/0 111 -j REJECT -
Block Sun rpc/NFS
o ipchains -A input -j REJECT -p all -s localhost -i eth0 -l -
Deny and log ("-l") outside packets from the internet
which claim to be from your loopback interface.

 Note:
o iptables uses the chain rule "INPUT" and ipchains uses the lower case descriptor
"input".
o View rules with iptables -L or ipchains -L command.
o iptables man page
o When running an internet web server it is best from a security point of view, that
one NOT run printing, X-Window, NFS or any services which may be exploited
if a vulnerability is discovered or if mis-configured regardless of firewall rules.

Also see:

o YoLinux Internet Gateway Tutorial


o Red Hat 7.1 firewall GUI configuration tool /usr/sbin/gnome-lokkit
 Use portsentry to monitor network hacker attacks and dynamically assign firewall rules
to thwart attackers. (Later in this tutorial)
 A monolithic and minimal kernel might also provide a small bit of protection (it avoids
Trojan modules), as can running on less common hardware (MIPS, Alpha, etc., so that
common buffer overflow shellcode will not run).
 Kernel Security Enhancements:
o Red Hat/CentOS SELinux: National Security Agency (NSA): Security-Enhanced
Linux - Altered for increased security.
For more see the YoLinux.com Systems Admin and Web site configuration
tutorials.
o Ubuntu Apparmor community wiki
 Enable ExecShield: this is enabled by default on Red Hat EL 5/CentOS 5. ExecShield is
a Linux kernel feature which protects the system against buffer overflow exploits. It
works by randomizing the placement of stack memory and by preventing the execution
of memory used to hold data. ExecShield can be enabled in the Red Hat/CentOS
configuration file /etc/sysctl.conf by adding the following two lines:
 kernel.exec-shield = 1
 kernel.randomize_va_space = 1

The current system configuration can be checked:

o cat /proc/sys/kernel/exec-shield
o cat /proc/sys/kernel/randomize_va_space

Both should be "1". (System default)


Note: Intel XD/AMD NX 32 bit x86 processors only (not x86_64, which can address
more than 4 GB): Enable AMD NX or Intel XD support by use of the PAE (Physical
Address Extension) kernel. The PAE memory extension is required to access the XD/NX
bit. To see if your processor supports NX or XD PAE, use the command: cat
/proc/cpuinfo | grep flags to show a field with "pae" and "nx".
Install a Linux kernel (2.6.8+) with PAE support with the command yum install
kernel-PAE. The boot loader will also have to specify the PAE kernel for boot.
The BIOS will also have to be configured to support it as well.
This kernel should only be installed on a system with a x86 32 bit processor which offers
this support. The 64 bit x86_64 processors which can natively interact with the XD/NX
bit do not need the PAE kernel.
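The /proc/cpuinfo check above can be scripted. A sketch; the check_cpu_flags name and the file argument are mine (the argument defaults to /proc/cpuinfo and is there so the function can be tested against a sample file):

```shell
#!/bin/sh
# Sketch: report whether the CPU flags advertise PAE and NX support.
check_cpu_flags() {
    cpuinfo="${1:-/proc/cpuinfo}"
    for flag in pae nx; do
        if grep -q -w "$flag" "$cpuinfo"; then
            echo "$flag: supported"
        else
            echo "$flag: not reported"
        fi
    done
}
```

If both flags are reported on a 32 bit x86 system, the PAE kernel described above can make use of the NX/XD bit.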
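Returning to the firewall scripts earlier in this section: those rules block known-bad ports one at a time. For a dedicated internet server, the inverse approach of denying all inbound traffic except the services you actually run is often safer. The sketch below generates such a rule script for review before running; the allowed ports (22 and 80) are an assumption on my part and must be edited to match your services, or you may lock yourself out of a remote machine:

```shell
#!/bin/sh
# Sketch: generate a minimal default-deny inbound iptables script.
# Review lockdown.sh before running it as root. The port list is an
# assumed example (ssh and http) -- edit it for your own services.
OUT=lockdown.sh
{
    echo '#!/bin/sh'
    echo 'iptables -P INPUT DROP'
    echo 'iptables -A INPUT -i lo -j ACCEPT'
    echo 'iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT'
    for port in 22 80; do
        echo "iptables -A INPUT -p tcp --dport $port -j ACCEPT"
    done
} > "$OUT"
chmod +x "$OUT"
echo "wrote $OUT"
```

Generating the rules into a file first, as the Spamhaus example in this tutorial also does, lets you read exactly what will be applied before running it as root.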

Firewall Rules to Block Bad IP Blocks:

It is well known that there are various blocks of IP addresses where nefarious hackers and spam
bots reside. These IP blocks were often once owned by legitimate corporations and organizations
but have fallen into an unsupervised realm or have been hijacked and sold to criminal spammers.
These IP blocks should be blocked by firewall rules.

There are various friendly services which seek out these IP blocks to firewall and deny,
and they share this information with us. Thanks!

The Spamhaus drop list: This is a script to download the total drop list and generate an iptables
filter script to block these very IP addresses:

#!/bin/bash
# Blacklist of hacker zones and bad domains from spamhaus.org
FILE=drop.lasso
/bin/rm -f $FILE
wget http://www.spamhaus.org/drop/drop.lasso
blocks=$(cat $FILE | egrep -v '^;' | awk '{ print $1}')
echo "#!/bin/bash" > Spamhaus-drop.lasso.sh
for ipblock in $blocks
do
echo "iptables -I INPUT -s $ipblock -j DROP" >> Spamhaus-drop.lasso.sh
done
chmod ugo+x Spamhaus-drop.lasso.sh
echo "...Done"

To block the IP addresses just execute the script on each of your servers:

./Spamhaus-drop.lasso.sh

At the very minimum, these blocks of IP addresses should be denied by all servers.

Block or allow by country: One can deny access by certain countries or the inverse, allow only
certain countries to access your server.

See these sites to generate lists:


 IpInfoDb.com - generates Apache htaccess or iptables rules
 Country IP block list generator
 IpDeny.com: CIDR lists

Block forum and comment list spammers: Use the list generated from honeypots operated by
StopForumSpam.com

#!/bin/bash
# Big list of IP addresses to block
# IPs gathered from the last 30 days
# Over 100k IP addresses

rm -f listed_ip_30.zip
wget http://www.stopforumspam.com/downloads/listed_ip_30.zip

rm -f listed_ip_30.txt
unzip listed_ip_30.zip

echo "#!/bin/bash" > Stopforumspam-listed_ip_30.sh


cat ./listed_ip_30.txt | awk '{print "/sbin/iptables -I INPUT -s " $1 " -j DROP"}' >> Stopforumspam-listed_ip_30.sh

chmod ugo+x Stopforumspam-listed_ip_30.sh

To block the IP addresses just execute the script: ./Stopforumspam-listed_ip_30.sh

Be aware that this is an extremely long list and can take hours to run. It is also a rapidly changing
list which is updated constantly.

[Potential Pitfall]:

You may get the following error:

iptables: Unknown error 18446744073709551615

I found that by slowing down the execution of the script, I can avoid this error. I added a bash
echo to write each line to the screen and it behaved much better although also much slower.

#!/bin/bash
set -x verbose
/sbin/iptables -I INPUT -s XX.XX.XX.XX -j DROP
...
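A faster alternative, which avoids both the hours-long run time and the error above, is the ipset facility (assuming the ipset package and a kernel with ipset support are available): load the whole list into one in-kernel hash set and match it with a single iptables rule. This sketch builds the input file for ipset restore from the downloaded list; the set name blocklist and the helper name are my own:

```shell
#!/bin/sh
# Sketch: turn a one-IP-per-line file (such as listed_ip_30.txt) into
# an "ipset restore" input file. maxelem is raised because the default
# set size (65536) is smaller than the 100k+ entry list.
build_blocklist() {
    {
        echo 'create blocklist hash:ip maxelem 262144'
        awk '{ print "add blocklist " $1 }' "$1"
    } > blocklist.restore
}

# Then, as root:
#   build_blocklist listed_ip_30.txt
#   ipset restore < blocklist.restore
#   iptables -I INPUT -m set --match-set blocklist src -j DROP
```

Loading the set takes seconds rather than hours, and the kernel tests packets against the hash set in one rule instead of walking 100,000 chain entries.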
Identify the enemy:

Use a geolocation service such as InfoSniper.net to identify and geolocate an IP address.

Apache web server:

 Apache modules: Turn off modules you are not going to use. With past ssl exploits,
those using this philosophy did not get burned.
o Red Hat EL 5/CentOS 5 Apache 2.2: The configuration file
/etc/httpd/conf.d/ssl.conf enables SSL by default. This file is picked up
from the line Include conf.d/*.conf in the file
/etc/httpd/conf/httpd.conf Rename the file /etc/httpd/conf.d/ssl.conf
to ssl.conf_OFF to turn off SSL (any file ending with ".conf" is included in the
web server configuration).
o Ubuntu 8.04: a2dismod ssl
This will disable the loading of SSL. The Ubuntu distribution has a fairly frugal
use of modules by default.
The default configuration has SSL turned off.
o Apache 1.3.x config file /etc/httpd/conf/httpd.conf
o #<IfDefine HAVE_SSL>
o #LoadModule ssl_module modules/libssl.so
o #</IfDefine>
o ...
o ...
o #<IfDefine HAVE_SSL>
o #AddModule mod_ssl.c
o #</IfDefine>
o ...
o ...
o <IfDefine HAVE_SSL>
o Listen 80
o #Listen 443
o </IfDefine>
o ...
o ...
o #<IfModule mod_ssl.c>
o #...
o #...
o ...
o #<VirtualHost _default_:443>
o #...
o #...
o ...

Comment out the use of the ssl module by placing a "#" in the first column.

o One can also block the https port 443 using firewall rules:
o iptables -A INPUT -p tcp -s 0/0 -d 0/0 --dport 443 -j DROP
o iptables -A INPUT -p udp -s 0/0 -d 0/0 --dport 443 -j DROP

 Apache version exposure: (Version 1.3+) Don't allow hackers to learn which version of
the web server software you are running by inducing an error and thus an automated
server response. Attacks are often version specific. Spammers also trigger errors to find
email addresses.
 ...

 ServerAdmin webmaster at megacorp dot com
 ServerSignature Off

 ...

The response may be meaningless anyway if you are using the web server as a proxy to
another.

 Block hackers and countries which will never use your website. Use the Apache directive
Deny from to block access.
 <Directory /home/projectx/public_html>
 ...
 ...
 ...
 Order allow,deny
 # Block form bots
 Deny from 88.191.0.0/16 193.200.193.0/24 194.8.74.0/23
 allow from all
 </Directory>

For extensive lists of IP addresses to block, see the Wizcrafts.net block list
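On the version-exposure point above: ServerSignature Off removes the version from server-generated error pages, but the Server: response header still advertises the full version unless ServerTokens is also restricted. A minimal httpd.conf fragment (both directives are standard Apache, shown here as a sketch):

```apache
# Send only "Server: Apache" in the response header,
# with no version, OS or module details.
ServerTokens Prod
# Drop the server version footer from error pages and listings.
ServerSignature Off
```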

SSH: (Secure Shell)

The SSH protocol suite of network connectivity tools is used to encrypt connections across the
internet. SSH encrypts all traffic, including logins and passwords, to effectively eliminate network
sniffing, connection hijacking, and other network-level attacks. In a regular telnet session the
password is transmitted across the Internet un-encrypted.

SSH on Linux refers to OpenSSH secure shell terminal and sftp/scp file transfer connections.
SSH is also a commercial product but available freely for non-commercial use from SSH
Communications Security at http://www.ssh.com/. Two versions are available, SSH1 (now very
old) and SSH2 (current). The commercial version of SSH can be purchased and/or downloaded
from their web site. Note that SSH1 has major vulnerabilities; the "woot-project"
web site cracking and defacing gang exploits them. DO NOT USE THE SSH1
PROTOCOL! ("woot-project" exploit/attack description/recovery)

OpenSSH was developed by the OpenBSD Project and is freely available. OpenSSH is
compatible with SSH1 and SSH2. OpenSSH relies on the OpenSSL project for the encrypted
communications layer. Current releases of Linux come with OpenSSH/OpenSSL.

Links:

 OpenSSH.org - Shell. Supports SSH1 and SSH2 protocols.


o OpenSSL.org - Encrypted network layer
o FreeSSH.org - SSH for other platforms
 SSH:
o SSH.com - Secure shell
o FreeSSH.org - SSH for other platforms
 Secure Shell IETF working group - (Internet Engineering Task Force) status

OpenSSH:

 Download:
o Download OpenSSH RPM's (sourceforge) - statically linked with OpenSSL 0.9.5
- Pick this one for an easy complete RPM install
o Download OpenSSH source (tgz)
o Red Hat Linux 6.x Open SSL RPM downloads (redhat.com) (SSL only)

Note: SSH and SSL are included with Red Hat Linux 7.0+

 Installation:
o Common to Client and Server:
 Red Hat/Fedora/CentOS:
 rpm -ivh openssh-2.xxx-x.x.x86.rpm

 Ubuntu/Debian:
 apt-get install ssh

o Client:
 Red Hat/Fedora/CentOS:
 rpm -ivh openssh-askpass-2.xxx-x.x.x86.rpm
 rpm -ivh openssh-clients-2.xxx-x.x.x86.rpm
 rpm -ivh openssh-askpass-gnome-2.xxx-x.x.x86.rpm - Gnome
desktop users

 Ubuntu/Debian:
 apt-get install openssh-client ssh-askpass-gnome

o Server:
 Red Hat/Fedora/CentOS:
 rpm -ivh openssh-server-2.xxx-x.x.x86.rpm

 Ubuntu/Debian:
 apt-get install openssh-server

 If upgrading from SSH1 you may have to use the RPM option --force.
 The rpm will install the appropriate binaries, configuration files and openssh-server will
install the init script /etc/rc.d/init.d/sshd so that sshd will start upon system boot.
 Configuration:
o Client configuration file /etc/ssh/ssh_config: (Default)
o # $OpenBSD: ssh_config,v 1.9 2001/03/10 12:53:51 deraadt Exp $
o
o # This is ssh client system wide configuration file. See ssh(1)
for more
o # information. This file provides defaults for users, and the
values can
o # be changed in per-user configuration files or on the command
line.
o
o # Configuration data is parsed as follows:
o # 1. command line options
o # 2. user-specific file
o # 3. system-wide file
o # Any configuration value is only changed the first time it is
set.
o # Thus, host-specific definitions should be at the beginning of
the
o # configuration file, and defaults at the end.
o
o # Site-wide defaults for various options
o
o # Host *
o # ForwardAgent no
o # ForwardX11 no
o # RhostsAuthentication no
o # RhostsRSAAuthentication yes
o # RSAAuthentication yes
o # PasswordAuthentication yes
o # FallBackToRsh no
o # UseRsh no
o # BatchMode no
o # CheckHostIP yes
o # StrictHostKeyChecking yes
o # IdentityFile ~/.ssh/identity
o # IdentityFile ~/.ssh/id_rsa
o # IdentityFile ~/.ssh/id_dsa
o # Port 22
o # Protocol 2,1 - Change this line to: Protocol 2
o # Cipher 3des
o # Ciphers aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour,aes192-cbc,aes256-cbc
o # EscapeChar ~
o Host *
o ForwardX11 yes

Change the line: # Protocol 2,1


to: Protocol 2
This will eliminate use of SSH1 protocol.
Un-comment the options required or accept the hard-coded defaults. The hard
coded defaults for OpenSSH client are compatible with SSH1 client files and sshd
server. An upgrade to OpenSSH client will not require any changes to the files in
$HOME/.ssh/.

o Server configuration file /etc/ssh/sshd_config:


Default:
o # $OpenBSD: sshd_config,v 1.38 2001/04/15 21:41:29 deraadt
Exp $
o
o # This sshd was compiled with PATH=/usr/bin:/bin:/usr/sbin:/sbin
o
o # This is the sshd server system-wide configuration file. See
sshd(8)
o # for more information.
o
o Port 22
o #Protocol 2,1 - Change to: Protocol 2
o #ListenAddress 0.0.0.0
o #ListenAddress ::
o HostKey /etc/ssh/ssh_host_key
o HostKey /etc/ssh/ssh_host_rsa_key
o HostKey /etc/ssh/ssh_host_dsa_key
o ServerKeyBits 768
o LoginGraceTime 600 - Change to:
LoginGraceTime 120
o KeyRegenerationInterval 3600
o PermitRootLogin yes - Change to:
PermitRootLogin no
o #
o # Don't read ~/.rhosts and ~/.shosts files
o IgnoreRhosts yes
o # Un-comment if you don't trust ~/.ssh/known_hosts for
RhostsRSAAuthentication
o #IgnoreUserKnownHosts yes
o StrictModes yes
o X11Forwarding yes
o X11DisplayOffset 10
o PrintMotd yes
o #PrintLastLog no
o KeepAlive yes
o
o # Logging
o SyslogFacility AUTHPRIV
o LogLevel INFO
o #obsoletes QuietMode and FascistLogging
o
o RhostsAuthentication no
o #
o # For this to work you will also need host keys in
/etc/ssh/ssh_known_hosts
o RhostsRSAAuthentication no
o # similar for protocol version 2
o HostbasedAuthentication no
o #
o RSAAuthentication yes
o
o # To disable tunneled clear text passwords, change to no here!
o PasswordAuthentication yes
o PermitEmptyPasswords no
o
o # Un-comment to disable s/key passwords
o #ChallengeResponseAuthentication no
o
o # Un-comment to enable PAM keyboard-interactive authentication
o # Warning: enabling this may bypass the setting of
'PasswordAuthentication'
o #PAMAuthenticationViaKbdInt yes
o
o # To change Kerberos options
o #KerberosAuthentication no
o #KerberosOrLocalPasswd yes
o #AFSTokenPassing no
o #KerberosTicketCleanup no
o
o # Kerberos TGT Passing does only work with the AFS kaserver
o #KerberosTgtPassing yes
o
o #CheckMail yes
o #UseLogin no
o
o #MaxStartups 10:30:60
o #Banner /etc/issue.net
o #ReverseMappingCheck yes
o
o Subsystem sftp /usr/libexec/openssh/sftp-server

 If changes are made to the configuration file, restart the "sshd" daemon to
pick up the new configuration:
Ubuntu: /etc/init.d/ssh restart
Red Hat: /etc/init.d/sshd restart or service sshd restart
 SSH protocol version 1 is not as secure; it should not take 10 minutes
to type your password; and if someone logs in as root directly, without
logging in as a particular user first, traceability is lost when there
are multiple admins. Hence the changes suggested above.
 Setting "PermitRootLogin no" mandates that remote logins use an
undetermined user login. This removes root, a known login on all Linux
systems, from the list of dictionary attacks available.
 It is a good idea to change the "Banner" so that a login greeting and legal
disclaimer is presented to the user. i.e. change file /etc/issue.net
contents to:

Access is granted to this server only to authorized


personnel of Mega Corp.
By default, the /etc/issue.net message presents to the hacker the OS
name, kernel release and information which can be used to determine
potential vulnerabilities.

 [Potential Pitfall]: Slow ssh logins - If you get the "login" prompt quickly
but the "password" prompt takes 30 seconds to a minute, then you have a
DNS lookup delay. Set UseDNS no in the config file
/etc/ssh/sshd_config and then restart sshd. The IP address of eth0 (or
the NIC used) should also refer to your own hostname in /etc/hosts
 Generate system keys: /etc/ssh/
o ssh-keygen -q -t rsa -f /etc/ssh/ssh_host_rsa_key -C '' -N ''
o ssh-keygen -q -t dsa -f /etc/ssh/ssh_host_dsa_key -C '' -N ''
o Private keys generated: chmod 600 /etc/ssh/ssh_host_dsa_key
/etc/ssh/ssh_host_rsa_key
o Public keys generated: chmod 644 /etc/ssh/ssh_host_dsa_key.pub
/etc/ssh/ssh_host_rsa_key.pub
o For SELinux:
 /sbin/restorecon /etc/ssh/ssh_host_rsa_key.pub
 /sbin/restorecon /etc/ssh/ssh_host_dsa_key.pub
 Generate user keys:
o Client:
Use the command: /usr/bin/ssh-keygen -t rsa
o Generating public/private rsa key pair.
o Enter file in which to save the key (/home/user-id/.ssh/id_rsa):
o Enter passphrase (empty for no passphrase):
o Enter same passphrase again:
o Your identification has been saved in /home/user-id/.ssh/id_rsa.
o Your public key has been saved in /home/user-id/.ssh/id_rsa.pub.
o The key fingerprint is:
o XX:bl:ab:la:bl:aX:XX:af:90:8f:dc:65:0d:XX:XX:XX:XX:XX user-
id@node-name

Files generated:

$HOME/.ssh/id_rsa - binary
$HOME/.ssh/id_rsa.pub - ssh-rsa ...223564257432 email
address
- Multiple keys/lines allowed.

Command options:

 -t rsa (for protocol version 2)


 -t dsa (for protocol version 2)
 -t rsa1 (for protocol version 1)
 -b 2048 (specifies the key length in bits)
o Server:
 FTP the file $HOME/.ssh/id_rsa.pub to the server
 cd $HOME/.ssh/
 cat id_rsa.pub >> authorized_keys2
 Using ssh: On client use the following command and login as you normally would with a
telnet session:
ssh name-of-server
The first time you use ssh it will issue the following message:
 The authenticity of host 'node.your-domain.com (XXX.XXX.XXX.XXX)' can't
be established.
 RSA key fingerprint is
XX:bl:ab:la:bl:aX:XX:af:90:8f:dc:65:0d:XX:XX:XX:XX:XX.
 Are you sure you want to continue connecting (yes/no)? yes
 Warning: Permanently added 'node.your-domain.com,XXX.XXX.XXX.XXX' (RSA)
to the list of known hosts.
 user@node.your-domain.com's password:

Answer yes. It won't ask again.

To use a different user name for the login, state it on the command line: ssh -l
username name-of-server

Note: You can now also use the command

sftp

for secure ftp file transfers using ssh.
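The server-side key installation steps above (copy id_rsa.pub over, append it to authorized_keys2) can be captured in a small helper run on the server. A sketch; the install_pubkey name is mine. Note that recent OpenSSH releases also ship ssh-copy-id, which performs the same steps over the ssh connection itself:

```shell
#!/bin/sh
# Sketch: append a public key on the server, with the strict
# permissions that sshd's StrictModes setting expects.
install_pubkey() {
    keyfile="$1"    # e.g. the id_rsa.pub file copied from the client
    sshdir="$2"     # e.g. $HOME/.ssh
    mkdir -p "$sshdir" && chmod 700 "$sshdir"
    cat "$keyfile" >> "$sshdir/authorized_keys2"
    chmod 600 "$sshdir/authorized_keys2"
}
```

Getting the directory and file permissions right matters: with StrictModes enabled, sshd silently ignores a key file that is group- or world-writable.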

OpenSSH Man Pages:

 ssh - OpenSSH SSH client (remote login program)


 sshd - OpenSSH ssh daemon
 ssh-keygen - Used to create RSA keys (host keys and user authentication keys)
 ssh_config - OpenSSH SSH client configuration file
 sshd_config - OpenSSH SSH daemon configuration file
 ssh-add - adds RSA or DSA identities for the authentication agent. Used to register new
keys with the agent.
 scp - secure copy (remote file copy program)
 ssh-agent - authentication agent. This can be used to hold RSA keys for authentication.
 sftp - Secure file transfer program
 sftp-server - SFTP server subsystem

Other OpenSSH Links:

 Red Hat Open SSH Guide - Also scp, sftp, Gnome ssh-agent
 Linux Journal: OpenSSH Part I

SSH for MS/Windows Links:

 PuTTY. Also see PuTTY configuration


 Tera Term

SSH Notes:

 The sshd daemon should not be started using xinetd/inetd due to the time necessary to
perform key calculations when it is initialized.
 ssh client will suid to root. sshd on the server is run as root. Root privileges are required
to communicate on ports lower than 1024. The -p option may be used to run SSH on a
different port.
 RSA is used for key exchange, and a conventional cipher (default Blowfish) is used for
encrypting the session.
 Encryption is started before authentication, and no passwords or other information are
transmitted in the clear.
 Authentication:
o Login is invoked by the user. The client tells the server the public key that the
user wishes to use for authentication.
o The server then checks whether this public key is admissible.
If so, it generates a random number, encrypts it with the public key, and
sends the value to the client.
o The client then decrypts the number with its private key and computes a
checksum. The checksum is sent back to the server.
o The server computes a checksum from the data and compares the checksums.
o Authentication is accepted if the checksums match.
 SSH will use $HOME/.rhosts (or $HOME/.shosts)
 To establish a secure network connection on another TCP port, use "tunneling" options
with the ssh command:
o Forward TCP local port to hostport on the remote-host:
ssh remote-host -L port:localhost:hostport command

Specifying ports lower than 1024 will require root access.


FTP opens various ports and thus is not a good candidate. Port 21 is only used to
establish the connection.
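As a concrete example of the -L form above (the host name and ports are hypothetical, chosen for illustration): forwarding local port 8080 to port 80 on the remote host makes a web server that listens only on the remote machine reachable from the client as http://localhost:8080. The sketch only prints the command, since running it would try to open a real connection:

```shell
#!/bin/sh
# Hypothetical example: forward local port 8080 to port 80 on the
# remote host. -N tells ssh to forward only and run no remote command.
tunnel_cmd='ssh -N -L 8080:localhost:80 remote-host'
echo "$tunnel_cmd"
# While the tunnel is up, http://localhost:8080/ on the client reaches
# the remote server's port 80, with all traffic encrypted by ssh.
```

Using a local port above 1024 (8080 rather than 80) avoids the root requirement noted above.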

Man pages:

 ssh - secure shell client (remote login program)


 sshd - secure shell daemon (server)
 ssh-keygen - Used to create RSA keys (host keys and user authentication keys)
 ssh-keyscan - gather ssh public keys
 ssh-add - adds identities for the authentication agent. Used to register new keys with the
agent.
 scp - secure copy (remote file copy program)
 slogin
 sftp - secure file transfer program client.
 sftp-server - secure file transfer program server.
 ssh-agent - Authentication agent. This can be used to hold RSA keys for authentication.
 telnet - user interface to the TELNET protocol
Documentation:

 /usr/share/doc/openssh-XXX/
 /usr/share/doc/openssh-askpass-XXX/
 /usr/share/doc/openssl-0.XXX/

Test:

The network sniffer Ethereal (now Wireshark) was used to sniff network transmissions between
the client and server for both telnet and ssh with the following results:

 Test telnet clear text login: (port 23)

[Screenshot of the captured telnet session: text sent by the client appears in
green on a black background; the rest was transmitted by the server.]
Note that both the login ("JoeUser") and password ("super-secret-password") were
captured.
 Test ssh encrypted login: (port 22)

Note that the entire login and password exchange was encrypted.

Fail2ban: block repeated failed logins

Any site on the public internet will be subjected to dictionary password attacks: automated
attack programs on compromised servers constantly try new words and character sequences.
Use fail2ban to block these attempts. Fail2ban will examine log files to find repeated
failed login attempts and either temporarily or permanently block the IP addresses of the
attacking system. The default configuration of fail2ban looks at the sshd log file
/var/log/secure to find the attacking system and allows 5 failed login attempts before
blocking for 600 seconds (10 minutes).

Fail2ban can be configured to monitor the following processes:


 sshd
 smtp
 Apache httpd
 lighttp
 vsftpd
 postfix
 bind9 named
 mysqld
 asterisk
 ...

Installation:

 Red Hat: yum install fail2ban


 Ubuntu: sudo apt-get install fail2ban

Configuration:

 /etc/fail2ban/fail2ban.conf
 [Definition]
 # 1 = ERROR
 # 2 = WARN
 # 3 = INFO
 # 4 = DEBUG
 loglevel = 3

 # Values: STDOUT STDERR SYSLOG file Default: /var/log/fail2ban.log
 # Only one log target can be specified.
 logtarget = SYSLOG

 socket = /var/run/fail2ban/fail2ban.sock
 pidfile = /var/run/fail2ban/fail2ban.pid

 /etc/fail2ban/jail.conf (often copied to jail.local and edited for local directives)


 [DEFAULT]
 ignoreip = 127.0.0.1/8
 bantime = 3600
 findtime = 600
 maxretry = 3
 backend = auto
 usedns = no

 [ssh-iptables]
 enabled = true
 filter = sshd
 action = iptables[name=SSH, port=ssh, protocol=tcp]
 sendmail-whois[name=SSH, dest=root,
sender=user@megacorp.com]
 logpath = /var/log/secure
 maxretry = 3

Note: if your server is under attack, fail2ban may deliver a lot of email. You may want to
remove the sendmail-whois statement.

[DEFAULT] directives:

Directive Description
ignoreip  IP addresses to never ban, such as your gateway system. Multiple IPs are
          separated by a space. This is your white list. Default: 127.0.0.1 (localhost)
findtime  Time period during which the failures must occur; e.g. 600 means that
          maxretry failures within this 600 second period trigger a ban. Default: 600 seconds
maxretry  Number of failures before an IP gets banned. Default: 3
bantime   Number of seconds that an IP is banned
enabled   true = monitor the specified process; false = no monitoring. Default is
          true only for sshd

Restart after making configuration changes: sudo service fail2ban restart

Configure init to start fail2ban upon boot: sudo chkconfig --level 345 fail2ban on

Also see log file: /var/log/messages

Verify blocking of hackers:


Show the firewall rules generated by failed logins:

[host]# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
fail2ban-SSH tcp -- anywhere anywhere tcp dpt:ssh

Chain FORWARD (policy ACCEPT)


target prot opt source destination

Chain OUTPUT (policy ACCEPT)


target prot opt source destination

Chain fail2ban-SSH (1 references)


target prot opt source destination
REJECT all -- 122.189.194.238 anywhere reject-with
icmp-port-unreachable
REJECT all -- 183.94.11.208 anywhere reject-with
icmp-port-unreachable
REJECT all -- 58.218.204.132 anywhere reject-with
icmp-port-unreachable
RETURN all -- anywhere anywhere
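To pull just the banned addresses out of a listing like the one above, a small awk filter over the fail2ban chain works. The sample text below is a trimmed copy of the chain shown above; on a live system you would pipe `iptables -L fail2ban-SSH -n` instead. A sketch, not part of fail2ban itself:

```shell
# Extract banned source addresses from an iptables fail2ban chain listing.
# Sample data mirrors the chain above; use `iptables -L fail2ban-SSH -n` live.
cat > /tmp/fail2ban_chain.txt <<'EOF'
Chain fail2ban-SSH (1 references)
target prot opt source destination
REJECT all -- 122.189.194.238 anywhere reject-with icmp-port-unreachable
REJECT all -- 183.94.11.208 anywhere reject-with icmp-port-unreachable
RETURN all -- anywhere anywhere
EOF

# In `iptables -L` output the source address is the fourth column.
awk '$1 == "REJECT" { print $4 }' /tmp/fail2ban_chain.txt
```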
Verify fail2ban status:
Show sshd fail2ban status:

[host]# fail2ban-client status


Status
|- Number of jail: 1
`- Jail list: ssh-iptables

[host]# fail2ban-client status ssh-iptables


Status for the jail: ssh-iptables
|- filter
| |- File list: /var/log/secure
| |- Currently failed: 0
| `- Total failed: 102
`- action
|- Currently banned: 3
| `- IP list: 122.189.194.238 183.94.11.208 58.218.204.132
`- Total banned: 26

Links:

 fail2ban home page


 fail2ban Github site
 fail2ban - a set of server and client programs to limit brute force authentication attempts
 fail2ban-client - configure and control the server
 fail2ban-server - start the server
 fail2ban-regex - test regex option

rssh: Restricted shell for use with OpenSSH sftp

FTP uses clear text access to your server. This is fine if all systems in the datacenter are secure
and no one can sniff the network. Router and switch configurations make it almost impossible to
sniff most networks these days, but a security compromise of another server in the datacenter
can still put your servers at risk if you allow the open, un-encrypted passwords used by FTP.

VsFTPd also allows one to limit a user's view of the filesystem to their own directories. This is
good. OpenSSH "sftp" does not provide this capability until version 4.9 (RHEL/CentOS 5 ships
OpenSSH 4.3). The "sftp" file transfer does encrypt the passwords (good) but also requires shell
access (bash, csh, ...) for the account, which allows full access to the filesystem (bad). The rssh
shell can be used with sftp, scp, cvs, rsync, and rdist and can chroot users to their own
directories and limit function to sftp access only (denying full shell access).

For newer systems (RHEL6/CentOS6/Fedora 11) with OpenSSH 4.9+ see the preferred chrooted
sftp configuration for OpenSSH 4.9+.

The solution is to use rssh as your shell with OpenSSH "sftp":

 rssh Home Page


 rssh RPMs - Dag Wieers

Installation: rpm -ivh rssh-2.3.2-1.2.el5.rf.x86_64.rpm

This installs:

 /usr/bin/rssh
 /etc/rssh.conf
 also support program /usr/libexec/rssh_chroot_helper and man pages

Check the installed configuration: rssh -v

Configuration:

1. OpenSSH configuration: /etc/ssh/sshd_config

...
PermitUserEnvironment no
...
Subsystem sftp /usr/libexec/openssh/sftp-server
...

Security note: Also be aware of the setting AllowTcpForwarding which controls port
forwarding.

11. Add the shell to the list of usable shells: /etc/shells

/bin/sh
/bin/bash
/sbin/nologin
/bin/tcsh
/bin/csh
/bin/ksh
/bin/zsh
/opt/bin/ftponly
/usr/bin/rssh

Ubuntu: You can use the command: add-shell /usr/bin/rssh

21. Change the user's shell to rssh (choose one method)


o chsh -s /usr/bin/rssh user1
o usermod -s /usr/bin/rssh user1
o Assign shell when creating user: useradd -m -s /usr/bin/rssh user1
o Edit /etc/passwd
o user1:x:504:504::/home/user1:/usr/bin/rssh

22. Set the setuid bit so the chroot helper can run with root privileges: chmod u+s /usr/libexec/rssh_chroot_helper


This prevents the following error in /var/log/messages:
Dec 20 00:23:44 nodex rssh_chroot_helper[27450]: chroot() failed, 2: Operation not permitted
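What chmod u+s does is set the setuid bit, so rssh_chroot_helper runs with its owner's (root's) privileges; chroot() is a privileged call, which is why an unprivileged helper fails with the error above. The mode change can be observed on a throwaway file (the file name here is only for demonstration, not the real helper):

```shell
# Show the effect of `chmod u+s`: the owner-execute slot becomes 's'.
# /tmp/suid_demo is a scratch file, not the real rssh_chroot_helper.
touch /tmp/suid_demo
chmod 0755 /tmp/suid_demo
chmod u+s /tmp/suid_demo
stat -c '%A' /tmp/suid_demo    # prints -rwsr-xr-x once the setuid bit is set
```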

24. Set access for rssh: /etc/rssh.conf

logfacility = LOG_USER
allowsftp
umask = 022
#chrootpath = /users/chroot

user="user1:022:00010:/home/user1"

Global security allowable options include: allowscp, allowcvs, allowrdist, allowrsync


Specify global chroot or omit for none.
Specific user security:

1. User login id
2. The first set of three numbers is the umask
3. The second set of five numbers is the bitmask of services to allow:

1     1     1   1    1
rsync rdist cvs sftp scp

4. Specify the global chrooted directory for all using rssh. If omitted, then not chrooted. Can
be overwritten by user configuration.

Note: User configuration overrides the shared chroot settings. Omitted user settings do
not default to shared chroot settings.
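The user="user1:022:00010:/home/user1" field layout above can be unpacked mechanically. This small POSIX shell sketch (the function name is ours, not part of rssh) decodes the colon-separated spec and maps each bitmask digit to its service:

```shell
# Decode an rssh.conf user spec of the form "login:umask:bitmask:chroot".
# Bitmask digits, left to right: rsync rdist cvs sftp scp.
decode_rssh_user() {
    spec=$1
    login=${spec%%:*};  rest=${spec#*:}
    umask_=${rest%%:*}; rest=${rest#*:}
    mask=${rest%%:*};   jail=${rest#*:}
    allowed=""
    i=1
    for name in rsync rdist cvs sftp scp; do
        [ "$(printf '%s' "$mask" | cut -c "$i")" = "1" ] && allowed="$allowed $name"
        i=$((i + 1))
    done
    echo "$login umask=$umask_ chroot=$jail allows:$allowed"
}

decode_rssh_user "user1:022:00010:/home/user1"
```

For the example line above this reports that user1 is chrooted to /home/user1 with only sftp allowed.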

31. Configuring the chrooted directory: This is true for a global user chroot or individual
chroot. In this example we will show a user chrooted to their own home directory
/home/user1. When chrooted, the user does not have access to the rest of the filesystem
and thus is blind to all of its executables and libraries. It will therefore be necessary to
copy local executables and libraries for their local use.

Description                       User directory      System equivalent
System devices                    /home/user1/dev     /dev
Configuration files               /home/user1/etc     /etc
                                                      /etc/ld.so.cache
                                                      /etc/ld.so.cache.d/*
                                                      /etc/ld.so.conf - dynamic linker configuration
                                                      /etc/nsswitch.conf
                                                      /etc/passwd
                                                      /etc/group
                                                      /etc/hosts
                                                      /etc/resolv.conf
Shared libraries (32 and 64 bit)  /home/user1/lib     /lib
                                  /home/user1/lib64   /lib64
Executables and libraries         /home/user1/usr     /usr
                                                      /usr/libexec/openssh/sftp-server
                                                      /usr/libexec/rssh_chroot_helper
Executables                       /home/user1/bin     /bin

32. Use script to add chroot required files: /opt/bin/userchroot


#!/bin/bash
# First and only argument ($1) is user id
if [ -d /home/$1 ];
then
    USERDIR=/home/$1
else
    echo "Error: Directory /home/$1 does not exist"
    exit
fi

mkdir $USERDIR/etc
mkdir $USERDIR/lib
mkdir -p $USERDIR/usr/libexec/openssh
mkdir -p $USERDIR/var/log
mkdir $USERDIR/dev
mknod -m 666 $USERDIR/dev/null c 1 3

cp -p /etc/ld.so.cache $USERDIR/etc
# If directory exists
if [ -d /etc/ld.so.cache.d ];
then
    cp -avRp /etc/ld.so.cache.d $USERDIR/etc
fi
grep $1 /etc/passwd > $USERDIR/etc/passwd
cp -p /etc/ld.so.conf $USERDIR/etc
cp -p /etc/nsswitch.conf $USERDIR/etc
cp -p /etc/group $USERDIR/etc
cp -p /etc/hosts $USERDIR/etc
cp -p /etc/resolv.conf $USERDIR/etc
cp -ap /usr/libexec/openssh/sftp-server $USERDIR/usr/libexec/openssh/sftp-server
cp -ap /usr/libexec/rssh_chroot_helper $USERDIR/usr/libexec/rssh_chroot_helper

# Authentication libraries required for login (32 bit and 64 bit systems)
if [ -d /lib64 ];
then
    mkdir $USERDIR/lib64
    cp -ap /lib64/libnss_files.so.? $USERDIR/lib64
    cp -ap /lib64/libnss_files-*.so $USERDIR/lib64
else
    cp -p /lib/libnss_files.so.? $USERDIR/lib
    cp -p /lib/libnss_files-*.so $USERDIR/lib
fi

FILES=`ldd /usr/libexec/openssh/sftp-server | perl -ne 's:^[^/]+::; s: \(.*\)$::; print;'`
for ii in $FILES
do
    rtdir="$(dirname $ii)"
    [ ! -d $USERDIR$rtdir ] && mkdir -p $USERDIR$rtdir || :
    /bin/cp -p $ii $USERDIR$rtdir
done
FILES=`ldd /usr/libexec/rssh_chroot_helper | perl -ne 's:^[^/]+::; s: \(.*\)$::; print;'`
for ii in $FILES
do
    rtdir="$(dirname $ii)"
    [ ! -d $USERDIR$rtdir ] && mkdir -p $USERDIR$rtdir || :
    /bin/cp -p $ii $USERDIR$rtdir
done
Note:
o Script use: /opt/bin/userchroot user1
o The files and directories reflect the file and path names for Red Hat Enterprise
Linux 5 and CentOS 5.
o Instead of copying files, one can also use a hard link: ln /etc/ld.so.conf
/home/user1/etc/ld.so.conf if the files are on the same hard drive. In that
way, users receive updates to the system.
Symbolic links will not work. See symlinks and chroot for this discussion.
If the user directory is on a separate drive, use the copy as defined in the script.
o Reduce /etc/passwd to a single user (don't have root etc):
o user1:x:504:504::/home/user1:/usr/bin/rssh

o Once chroot() takes place, programs will not have access to the regular log target.
Specify a chrooted syslog socket target which can be accessed. The number of
sockets are limited and thus configuring rssh for each user is not a good idea for a
large number of users. For use with many users, use the shared chrooted jail
defined by the rssh directive: chrootpath.
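The hard-link versus symlink point can be demonstrated without an actual chroot(): a hard link shares the target's inode and stays valid wherever the tree is rooted, while a symbolic link stores an absolute path that a chrooted process cannot resolve. A throwaway /tmp tree (all names here are for illustration only):

```shell
# Hard links share an inode; symlinks store a path that breaks under chroot.
rm -rf /tmp/jail_demo && mkdir -p /tmp/jail_demo/etc
echo "hosts data" > /tmp/jail_demo/hosts.real          # stand-in for /etc/hosts
ln    /tmp/jail_demo/hosts.real /tmp/jail_demo/etc/hosts.hard
ln -s /outside/hosts.real       /tmp/jail_demo/etc/hosts.sym

# Same inode: the hard link is fully self-contained inside the jail.
[ /tmp/jail_demo/hosts.real -ef /tmp/jail_demo/etc/hosts.hard ] && echo "hard link shares inode"
# The symlink records an absolute path, unreachable once chrooted.
readlink /tmp/jail_demo/etc/hosts.sym
```

Note that hard links require both names to be on the same filesystem, which is why the script above falls back to copying.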

Blocking FTP: Setting up rssh does not turn off or block FTP access to your system. You must
still turn off vsftpd: /etc/init.d/vsftpd stop. There is little point in setting up secure chrooted
sftp access with rssh while also running an FTP service.

Debugging:

 One can pull in the full root path by issuing a bind mount:
o mount --bind /dev /home/user1/dev
o mount --bind /lib /home/user1/lib
o mount --bind /lib64 /home/user1/lib64
o mount --bind /usr /home/user1/usr

This technique can be used to narrow down the error and find which directory has the
missing files. It should not be used as a final solution.
Unmount when done: umount /home/user1/dev

 If authenticating to ldap, nis, etc, pull in the appropriate libraries. You can test with all:
cp -p /lib/libnss_* /home/user1/lib
This can be performed for /lib64 as well.
 Check log files for errors: /var/log/messages

Man pages:

 rssh man page


 rssh.conf man page
 sftp man page

Using gFTP as a Linux sftp client:

 Start program through menu or command line: gftp&


 Select "FTP" from toolbar
 Select "Options"
 Select "SSH" tab

 Select "Apply" and "Ok"


 On the upper right hand side of the gftp window, select "SSH" from the pull-down menu.

Using FileZilla as a Linux sftp client:

 Select "File" + "Site Manager"


 Select "New Site" (bottom left)
 Enter "Host:"
 Choose "Servertype:" "SFTP using SSH2"
 Select "Logontype:" "Normal"
 Enter "User:" and click on "Connect".

Links:

 Multi-platform GUI client FileZilla


 MS/Windows client WinSCP (supports sftp)

SentryTools: PortSentry
This tool monitors network probes and attacks against your server. It can be configured to
log and counter these probes and attacks. PortSentry can modify your /etc/hosts.deny (TCP
wrappers) file and issue IP firewall commands automatically to block hackers.

PortSentry can be loaded as an RPM, but this tutorial covers compiling PortSentry from source in
order to configure more flexible system logging.

Note: Version 1.2 of portsentry can issue iptables, ipchains or route commands to thwart attacks.
Iptables/Ipchains is a Linux firewall system built into the Linux kernel. Linux kernel 2.6/2.4 uses
iptables, kernel 2.2 (old) uses ipchains. References to ipfwadm are for even older Linux kernels.
Route commands can be used by any Unix system including those non-Linux systems which do
not support Iptables/Ipchains.

Steps to install and configure portsentry:

1. Download and unzip source code


2. Edit include file and compile
3. Start PortSentry
4. Read logs

1. Download and unzip source code:


o Download: PortSentry source code
o Move to your source directory and unzip: tar -xzf portsentry-1.2.tar.gz
2. Edit include file and compile:
cd portsentry_beta/
Read file README.install. It details the following:
o

Edit file: portsentry_config.h

Set file paths and configure separate log file for Portsentry:

Set options:

 CONFIG_FILE - PortSentry run-time configuration file.


 WRAPPER_HOSTS_DENY - The path and name of TCP wrapper
hosts.deny file.

#define CONFIG_FILE "/opt/portsentry/portsentry.conf"


#define WRAPPER_HOSTS_DENY "/etc/hosts.deny"
#define SYSLOG_FACILITY LOG_DAEMON - Default. Change to
LOG_LOCAL6
#define SYSLOG_LEVEL LOG_NOTICE
(Note: I use /opt/portsentry/ because I like to locate "optional" files/software
there. It allows for an easy backup by separating it from the OS. If you prefer, you
can use /etc/portsentry/ for configurations files and follow the Linux/Unix
file system logic)

The above default, "LOG_DAEMON", will log messages to the


/var/log/messages file.

To log to a separate file dedicated to PortSentry logging: (This will eliminate


logging clutter in the main system logging file)

 Add logging directives to syslogd configuration file: /etc/syslog.conf

Change the following line by adding an extra log facility for portsentry
messages which are not going to be logged to the regular syslog output file
/var/log/messages. This lists what messages to filter out from
/var/log/messages.

*.info;mail.none;news.none;authpriv.none;cron.none;local6.none /var/log/messages

Add the following line to assign a portsentry log facility:

local6.* /var/log/portsentry.log

Note: Use tab not spaces in the syslog configuration file.

Restart syslogd: /etc/init.d/syslog restart

 Set portsentry_config.h entry to new log facility:


Change from default setting:
 #define SYSLOG_FACILITY LOG_DAEMON

To:

#define SYSLOG_FACILITY LOG_LOCAL6

FYI: Options for the SYSLOG_FACILITY are defined in


/usr/include/sys/syslog.h
They include:

SYSLOG_FACILITY Facility Name Description


LOG_LOCAL0 local0 reserved for local use
LOG_LOCAL1 local1 reserved for local use
LOG_LOCAL2 local2 reserved for local use
LOG_LOCAL3 local3 reserved for local use
LOG_LOCAL4 local4 reserved for local use
LOG_LOCAL5 local5 reserved for local use
LOG_LOCAL6 local6 reserved for local use
LOG_LOCAL7 local7 reserved for local use
LOG_USER user random user-level messages
LOG_MAIL mail mail system
LOG_DAEMON daemon system daemons
LOG_SYSLOG syslog messages generated internally by syslogd
LOG_LPR lpr line printer subsystem
LOG_NEWS news network news subsystem
LOG_UUCP uucp UUCP subsystem
LOG_CRON cron clock daemon
LOG_AUTHPRIV authpriv security/authorization messages (private)
LOG_FTP ftp ftp daemon

Options for the SYSLOG_LEVEL include:

SYSLOG_LEVEL Priority Description


LOG_EMERG 0 system is unusable
LOG_ALERT 1 action must be taken immediately
LOG_CRIT 2 critical conditions
LOG_ERR 3 error conditions
LOG_WARNING 4 warning conditions
LOG_NOTICE 5 normal but significant condition
LOG_INFO 6 informational
LOG_DEBUG 7 debug-level messages
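Syslog internally combines these two tables into a single priority value: priority = facility number × 8 + level. With local6 = 22 and daemon = 3 as the facility numbers from <sys/syslog.h>, the arithmetic is easy to check:

```shell
# Compute a syslog priority value from facility and level numbers.
# Facility numbers (from <sys/syslog.h>): daemon=3, local0=16 ... local6=22.
syslog_priority() {
    echo $(( $1 * 8 + $2 ))
}

syslog_priority 22 5    # local6.notice -> 181
syslog_priority 3 5     # daemon.notice (PortSentry's default) -> 29
```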

Edit file: portsentry.conf to set paths for configuration files and ports to
monitor.

TCP_PORTS="1,11,15,20,21,23,25,69,79, ... "


UDP_PORTS="1,7,9,69,161,162,513,635, ... "

...
...
IGNORE_FILE="/opt/portsentry/portsentry.ignore"
HISTORY_FILE="/opt/portsentry/portsentry.history"
BLOCKED_FILE="/opt/portsentry/portsentry.blocked"
#KILL_ROUTE="/sbin/route add -host $TARGET$ reject" - Generic Unix KILL_ROUTE. I prefer the iptables/ipchains options below.

Un-comment and modify if necessary the appropriate statements. The


TCP_PORTS=, UDP_PORTS= lists are ignored for stealth scan detection modes.
Add common but unused services. i.e. add port 25 if the system is not accepting
email as port 25 is included in most scans.
I added UDP port 68 (BOOTP) and TCP 21 (ftp), 22 (ssh), 25 (smtp mail), 53
(dns bind), 80 (http web server), 119 (news) to the
ADVANCED_EXCLUDE_UDP and ADVANCED_EXCLUDE_TCP statements
respectively.

ADVANCED_EXCLUDE_TCP="21,22,25,53,80,110,113,119" - server
ADVANCED_EXCLUDE_UDP="21,22,53,110,520,138,137,68,67"
OR
ADVANCED_EXCLUDE_TCP="113,139" - workstation
ADVANCED_EXCLUDE_UDP="520,138,137,68,67"

PAM options:

 KILL_HOSTS_DENY="ALL: $TARGET$"

For more on PAM see YoLinux network Admin Tutorial

Choose one option: a network "route" command or a firewall command ("iptables"/"ipchains")

2. For those using iptables (Linux Kernel 2.6/2.4+):


KILL_ROUTE="/sbin/iptables -I INPUT -s $TARGET$ -j DROP"
(Note: The default used in portsentry.conf uses the incorrect path for Red Hat. Change
/usr/local/bin/iptables to /sbin/iptables)
3. For Linux 2.2.x kernels (version 2.102+) using ipchains: (Best option)
KILL_ROUTE="/sbin/ipchains -I input -s $TARGET$ -j DENY -l"
OR
KILL_ROUTE="/sbin/ipchains -I input -s $TARGET$ -j DENY"
Note: The second option is without the "-l" or logging option so ipchains won't keep logging the
portscan in /var/log/messages
4. Simple method to drop network return routes if iptables or ipchains are not compiled into
your kernel:
KILL_ROUTE="/sbin/route add -host $TARGET$ reject"
You can check the addresses dropped with the command: netstat -rn They will be routed to
interface "-".

Note on Red Hat 7.1: During installation/upgrade the firewall configuration tool
/usr/bin/gnome-lokkit may be invoked. It will configure a firewall using
ipchains and will add this to your boot process. To see if ipchains and the Lokkit
configuration is invoked during system boot, use the command: chkconfig --list | grep ipchains. You can NOT use portsentry to issue iptables rules if
your kernel is configured to use ipchain rules.
More info on iptables and ipchains support/configuration in Red Hat 7.1 and
kernel 2.4.

Edit file: portsentry.ignore (contains IP addresses to ignore. )

127.0.0.1
0.0.0.0
Your IP address

The @Home network routinely scans for news servers on port 119 from a server
named authorized-scan1.security.home.net. Adding this server's IP address
(24.0.0.203) to the ignore file greatly reduces the logging. I also added their BOOTP server
(24.9.139.130).

I manually issued the iptables (kernel 2.6/2.4) commands on my workstation to
drop the hosts and deny their scans. @Home users may add the commands to the
file /etc/rc.d/rc.local

/sbin/iptables -I INPUT -s 24.0.0.203 -j DROP


/sbin/iptables -I INPUT -s 24.9.139.130 -j DROP

Edit file: Makefile

INSTALLDIR = /opt

And remove the line under "uninstall": (dangerous line!!)

# /bin/rmdir $(INSTALLDIR)
And change the line under "install": (troublesome line!!)

# chmod 700 $(INSTALLDIR)

To:

# chmod 700 $(INSTALLDIR)/$(CHILDDIR)

Compile: make linux

Fix the following compile errors in portsentry.c

 Change the multi-line statement
printf ("Copyright 1997-2003 Craig H. Rowland <craigrowland at users dot sourceforget dot net>\n");
to one line: printf ("Copyright 1997-2003 Craig H. Rowland\n");
 Fix the warning "passing argument 3 of ‘accept’ from incompatible pointer type":
separate and change the declaration of "length" to: unsigned int length;

Install (as root): make install

3. Run PortSentry for advanced UDP/TCP stealth scan detection:


o portsentry -atcp
o portsentry -audp

OR use init scripts below in next section.

4. Check logfile for hacker attacks. See: /var/log/messages or


/var/log/portsentry.log if you are logging to a dedicated file.
Also check /etc/hosts.deny to see a list of IP addresses that PortSentry has deemed to
be attackers.
Check the "HISTORY_FILE" /opt/portsentry/portsentry.history

Note: It is possible to have all logging sent to a logging daemon on a single server. This allows
the administrator to check the logs on one server rather than individually on many.

Note on Red Hat 7.1:


Powertools RPM layout:
 /usr/sbin/portsentry - (chmod 700) executable
 /etc/portsentry/ - (chmod 700) Directory used for configuration files.
 /etc/portsentry/portsentry.conf (chmod 600)
 /etc/portsentry/portsentry.ignore (chmod 600)
 /var/portsentry/portsentry.history
 /var/portsentry/portsentry.blocked

Instead of using a firewall command (ipchains/iptables), a false route is used: /sbin/route add
-host $TARGET$ gw 127.0.0.1.
My init script calls the portsentry executable twice with the appropriate command line arguments
to monitor tcp and udp ports. The Red Hat 7.1 init script uses the file
/etc/portsentry/portsentry.modes and a for loop in the init script to call portsentry the
appropriate number of times. Their init script also recreates the portsentry.ignore file each
time portsentry is started by including the IP addresses found with ifconfig and the addresses
0.0.0.0 and localhost. Persistent addresses must be placed above a line stating: Do NOT edit
below this otherwise it is not included in the creation of the new file.
The Red Hat 7.1 Powertools portsentry version logs everything to /var/log/messages. My
configuration avoids log clutter by logging to a separate file.

Notes on DOS (Denial of Service) possibility: If portsentry is configured to shut down an


attack with firewall rules, an attacker may use this feature to slow down your machine over time
by creating a huge set of firewall rules. It would require the hacker to use (or spoof) a new IP
address each time. It is probably a good idea to monitor or even clear the firewall rules from time
to time.

 iptables:
o List firewall rules: iptables -L
o Clear firewall rules: iptables -F
 ipchains:
o List firewall rules: ipchains -L
o Clear firewall rules: ipchains -F

Clean-up script: /etc/cron.monthly/reset-chainrules


(-rwx------ 1 root root)
This script is run automatically once a month by cron. (Its presence in the /etc/cron.monthly
directory in the Red Hat configuration makes it so.)

#!/bin/bash
# Purge and re-assign chain rules
ipchains -F
ipchains -A input -p tcp -s 0/0 -d 0/0 2049 -y -j REJECT
ipchains -A input -p udp -s 0/0 -d 0/0 2049 -j REJECT
ipchains -A input -p tcp -s 0/0 -d 0/0 6000:6009 -y -j REJECT
ipchains -A input -p tcp -s 0/0 -d 0/0 7100 -y -j REJECT
ipchains -A input -p tcp -s 0/0 -d 0/0 515 -y -j REJECT
ipchains -A input -p udp -s 0/0 -d 0/0 515 -j REJECT
ipchains -A input -p tcp -s 0/0 -d 0/0 111 -y -j REJECT
ipchains -A input -p udp -s 0/0 -d 0/0 111 -j REJECT
ipchains -A input -j REJECT -p all -s localhost -i eth0 -l

Also see:

 Sourceforge: Portsentry Home Page - PortSentry, Logcheck and HostSentry home page.
 Portsentry description
 FAQ: Firewall Forensics - Robert Graham

Other tools to detect portscans and network based hacker attacks:

 scanlogd - Attack detection.


 InterSect Alliance - Intrusion analysis. Identifies malicious or unauthorized access
attempts.
 snort - Instead of monitoring a single server with portsentry, snort monitors the network,
performing real-time traffic analysis and packet logging on IP networks for the detection
of an attack or probe.
Also see: YoLinux IDS and Snort links

Using an init script to start and stop the portsentry program.

Init configuration: /etc/rc.d/init.d/portsentry


The init script needs to be executable: chmod a+x /etc/rc.d/init.d/portsentry
After adding the following script, enter it into the init process with the command: chkconfig --
add portsentry or chkconfig --level 345 portsentry on
See YoLinux Init Tutorial for more information.

#!/bin/bash
#
# Startup script for PortSentry
#
# chkconfig: 345 85 15
# description: PortSentry monitors TCP and UDP ports for network attacks
#
# processname: portsentry
# pidfile: /var/run/portsentry.pid
# config: /opt/portsentry/portsentry.conf
# config: /opt/portsentry/portsentry.ignore
# config: /opt/portsentry/portsentry.history
# config: /opt/portsentry/portsentry.blocked

# Source function library.


. /etc/rc.d/init.d/functions

# Source networking configuration.


. /etc/sysconfig/network

# Check that networking is up.


[ ${NETWORKING} = "no" ] && exit 0
# See how we were called.
case "$1" in
start)
echo -n "Starting portsentry: "
daemon /opt/portsentry/portsentry -atcp
/opt/portsentry/portsentry -audp
echo
touch /var/lock/subsys/portsentry
;;
stop)
echo -n "Shutting down portsentry: "
killproc portsentry
echo
rm -f /var/lock/subsys/portsentry
rm -f /var/run/portsentry.pid
;;
status)
status portsentry
;;
restart)
$0 stop
$0 start
;;
reload)
echo -n "Reloading portsentry: "
killproc portsentry -HUP
echo
;;
*)
echo "Usage: $0 {start|stop|restart|reload|status}"
exit 1
esac

exit 0

Logrotate Configuration:

Create the following file to have your logs rotate.

File:

/etc/logrotate.d/portsentry
/var/log/portsentry.log {
rotate 12
monthly
errors root@localhost
missingok
postrotate
/usr/bin/killall -HUP portsentry 2> /dev/null || true
endscript
}
Also see the YoLinux Sys Admin tutorial covering logrotate.

Tests:

 Portscan your workstation - Use your web browser to go to this site. Select "Probe my
ports" and it will scan you. You can then look at the file
/opt/portsentry/portsentry.blocked.atcp to see that portsentry dropped the
scanning site:

Host: shieldsup.grc.com/207.71.92.221 Port: 23 TCP Blocked

The file /var/log/portsentry.log will show the action taken:

portsentry[589]: attackalert: SYN/Normal scan from host:


shieldsup.grc.com/207.71.92.221 to TCP port: 23
portsentry[589]: attackalert: Host 207.71.92.221 has been blocked via
wrappers with string: "ALL: 207.71.92.221"
portsentry[589]: attackalert: Host 207.71.92.221 has been blocked via
dropped route using command:
"/sbin/ipchains -I input -s 207.71.92.221 -j DENY -l"

 nmap: portscanner - This is the hacker tool responsible for many of the portscans you
may be receiving.

Command arguments:

Argument                Description
-sO                     IP protocol scan. Determines which IP protocols are supported.
-sT                     TCP scan. Full connection made.
-sS                     SYN scan (half-open scan). This scan is typically not logged on the receiving system.
-sP                     Ping ICMP scan.
-sU                     UDP scan.
-P0                     Don't ping before scan.
-PT                     Use ping to determine which hosts are available.
-F                      Fast scan. Scan for ports listed in configuration.
-T                      Set the timing of the scan to avoid detection.
-O                      Determine the operating system.
-p 1000-1999,5000-5999  Scan the port ranges specified.

Also see: nmap man page for a full listing of nmap command line arguments.
Examples:

nmap -sT -F IP-address              TCP connect scan
nmap -sS -F IP-address              SYN scan
nmap -sU -F IP-address              Scan UDP ports
nmap -sF -F IP-address              FIN scan
nmap -O -F IP-address               Determine OS
nmap -p22 -F -O IP-address          Scan port 22 and determine OS
nmap -p 1-30,40-65535 IP-Address    Scan given port ranges
Add the option -v (verbose) or -vv (super verbose) for more info.
The ports will be determined to be open, filtered or firewalled.

Sample output from command: nmap -sS -F -O IP-Address

Starting nmap V. 2.54BETA7 ( www.insecure.org/nmap/ )


...
..
(The 1067 ports scanned but not shown below are in state: closed)
Port State Service
21/tcp open ftp
22/tcp open ssh
25/tcp open smtp
53/tcp open domain
111/tcp open sunrpc - Shut down the portmap (RPC)
daemon: /etc/rc.d/init.d/portmap stop
137/tcp filtered netbios-ns - Turn off netbios services:
/etc/rc.d/init.d/smb stop
138/tcp filtered netbios-dgm
139/tcp filtered netbios-ssn

TCP Sequence Prediction: Class=random positive increments


Difficulty=2727445 (Good luck!)
Remote operating system guess: Linux 2.1.122 - 2.2.16

Nmap run completed -- 1 IP address (1 host up) scanned in 36 seconds


 nmap/nmapfe: nmapfe = nmap front end - a GUI front end to nmap. It's an amazingly easy
and useful tool which will help you make discoveries about your servers before the
hackers do.

Nmap and nmapfe are available with your distribution or on the Red Hat Powertools CD for
older (7.1) releases:

o nmap-XXX.i386.rpm
o nmap-frontend-XXX.i386.rpm

Links:

 nmap man page


 The Art of Port Scanning - by Fyodor
 Gremwell MagicTree - processes NMap and OpenVAS output to generate a report.
Requires OpenOffice.
 ndiff - Compares two nmap scans and outputs the differences. Monitor network for
changes.
Tripwire: (security monitoring)

Tripwire monitors your file system for changes. Tripwire is used to create an initial database of
information on all the system files then runs periodically (cron) to compare the system to the
database.
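The create-a-baseline-then-compare cycle can be sketched with ordinary checksums; this toy example uses sha256sum in place of Tripwire's signed database, and all paths here are scratch files for illustration only:

```shell
# Baseline a directory's checksums, tamper with a file, and detect the drift.
rm -rf /tmp/tw_demo && mkdir /tmp/tw_demo
echo "alpha" > /tmp/tw_demo/a.conf
echo "beta"  > /tmp/tw_demo/b.conf

# Initialize the "database" (cf. tripwire --init).
sha256sum /tmp/tw_demo/*.conf > /tmp/tw_demo.baseline

# Simulate tampering, then run the periodic check (cf. tripwire --check).
echo "alpha MODIFIED" > /tmp/tw_demo/a.conf
sha256sum --check --quiet /tmp/tw_demo.baseline || echo "integrity violation detected"
```

Tripwire adds what this sketch lacks: a policy language for selecting files and attributes, cryptographic signing of the database, and reporting.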

Use the command tripwire --version or rpm -q tripwire to determine the version.

Red Hat includes Tripwire as an optional package during install. The Ubuntu/Debian install is as
easy as apt-get install tripwire. Upon installation it will proceed to scan your entire
filesystem to create a default database of what your system looks like. (files and sizes etc) It took
about ten minutes to run on my server!

Tripwire configuration files:

 Tripwire 2.3.0-58: (Red Hat 7.1)


o /etc/tripwire/twcfg.txt
o /etc/tripwire/twpol.txt

These files are first edited and then processed by the script
/etc/tripwire/twinstall.sh which configures Tripwire after the installation of the
Tripwire RPM package.

Edit and change file: /etc/tripwire/twcfg.txt

Change:
LOOSEDIRECTORYCHECKING =false
to
LOOSEDIRECTORYCHECKING =true

This was recommended in the comments of the file twpol.txt

Edit and change file: /etc/tripwire/twpol.txt

Change:
severity = $(SIG_XXX)
to
severity = $(SIG_XXX),
emailto = root@localhost
or
severity = $(SIG_XXX),
emailto = root@localhost;admin@isp.com

where XXX is the severity level. This will cause Tripwire to email a report of
discrepancies for the rule edited. Set the email address to one appropriate for you.

I also added:
o "User binaries" rule: directory /opt/bin
o "Libraries" rule: directory /opt/lib

I removed/commented out:

o the rule "System boot changes" as it reports changes due to system boot.
o Rule: "Root config files": Many of the non-existent files listed under /root were
commented out to reduce the number of errors reported.
o Rule "File System and Disk Administration Programs": Many of the non-existent
binaries listed under /sbin were commented out to reduce the number of errors
reported.

After configuration files have been edited run the script: /etc/tripwire/twinstall.sh
The script will ask for a "passphrase" for the site and local system. This is a similar
concept to a password - remember it!

If at any point you want to make configuration/policy changes, edit these files and re-run
the configuration script. The script will generate the true configuration files used by
Tripwire:

o /etc/tripwire/tw.cfg
(View with command: twadmin --print-cfgfile)
o /etc/tripwire/tw.pol
(View with command: twadmin --print-polfile)
o /etc/tripwire/site.key
o /etc/tripwire/ServerName-a-local.key

These files are binary and not human readable.

 Tripwire 1.2-3 (Red Hat 6.2 Powertools): /etc/tw.config

Tripwire initialization:

If at any time you change the configuration file to monitor your system differently or install an
upgrade (changes a whole lot of files which will "trip" tripwire into reporting all changes) you
may want to generate a new database.

 Tripwire 2.3.0-58: /usr/sbin/tripwire --init


You will be prompted for your "local passphrase".
This will generate a tripwire database file: /var/lib/tripwire/ServerName-a.twd
 Tripwire 1.2-3: /usr/sbin/tripwire -initialize

This will generate a tripwire database file: ./databases/tw.db_ServerName


If you are in root's home directory, this will create the file
/root/databases/tw.db_ServerName
At this point copy it to a usable location:
cp -p /root/databases/tw.db_ServerName
/var/spool/tripwire/tw.db_ServerName

Don't change /etc/tw.config without first running tripwire -initialize; otherwise
it will show differences caused by the new settings in the tw.config file rather than true differences.

Cron and tripwire:

Cron runs tripwire:

 Tripwire 2.3.0-58:
File: /etc/cron.daily/tripwire-check

#!/bin/sh
HOST_NAME=`uname -n`
if [ ! -e /var/lib/tripwire/${HOST_NAME}.twd ] ; then
    echo "**** Error: Tripwire database for ${HOST_NAME} not found. ****"
    echo "**** Run "/etc/tripwire/twinstall.sh" and/or "tripwire --init". ****"
else
    test -f /etc/tripwire/tw.cfg && /usr/sbin/tripwire --check
fi

You may move this cron script to the directory /etc/cron.weekly/ to reduce reporting
from a daily to a weekly event.
Tripwire reports will be written to: /var/lib/tripwire/report/HostName-Date.twr

 Tripwire 1.2-3:
File: /etc/cron.daily/tripwire.verify script which runs the command:
/usr/sbin/tripwire -loosedir -q
Note: You may want to move the script to /etc/cron.weekly/tripwire.verify to
reduce email reporting to root.

Read tripwire report:

 Tripwire 2.3.0-58: twprint --print-report -r /var/lib/tripwire/report/report-file.twr
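Since report filenames embed the hostname and date, a common chore is printing the most recent one. A minimal sketch, using the report directory named above and guarded (my addition) so it degrades gracefully where Tripwire is absent:

```shell
#!/bin/sh
# Print the newest Tripwire report; the report directory is the one named
# in the text, and the guard makes this safe to run anywhere.
REPORT_DIR=/var/lib/tripwire/report
latest=$(ls -1t "$REPORT_DIR"/*.twr 2>/dev/null | head -1)
if [ -n "$latest" ] && command -v twprint >/dev/null 2>&1; then
    twprint --print-report -r "$latest"
else
    echo "no readable reports in $REPORT_DIR (or twprint not installed)"
fi
```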

Interactive mode:

 Tripwire 1.2-3:
Update the tripwire database by running: tripwire -interactive
This allows you to respond Y/N to each file as to whether it should be permanently updated in the tripwire database, while still running tripwire against the whole file system. Run from /root, it updates /root/databases/tw.db_ServerName; you must then cp -p the file to /var/spool/tripwire/ to update the tripwire database.
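The interactive-update-then-copy sequence can be sketched as a script. Paths are the ones described above; the existence guard is my addition so the sketch does nothing on a machine without Tripwire 1.2:

```shell
#!/bin/sh
# Interactive database update for Tripwire 1.2: review changes, then
# promote the refreshed database into place (paths from the text above).
DB="tw.db_$(uname -n)"
if command -v tripwire >/dev/null 2>&1; then
    cd /root || exit 1
    tripwire -interactive                     # answer Y/N for each changed file
    cp -p "/root/databases/$DB" "/var/spool/tripwire/$DB"
else
    echo "tripwire not installed; would install /var/spool/tripwire/$DB"
fi
```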
Default configuration file:

 Tripwire 2.3.0-58: /etc/twcfg.txt

ROOT =/usr/sbin
POLFILE =/etc/tripwire/tw.pol
DBFILE =/var/lib/tripwire/$(HOSTNAME).twd
REPORTFILE =/var/lib/tripwire/report/$(HOSTNAME)-$(DATE).twr
SITEKEYFILE =/etc/tripwire/site.key
LOCALKEYFILE =/etc/tripwire/$(HOSTNAME)-local.key
EDITOR =/bin/vi
LATEPROMPTING =false
LOOSEDIRECTORYCHECKING =false
MAILNOVIOLATIONS =true
EMAILREPORTLEVEL =3
REPORTLEVEL =3
MAILMETHOD =SENDMAIL
SYSLOGREPORTING =false
MAILPROGRAM =/usr/sbin/sendmail -oi -t
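Tripwire expands $(HOSTNAME) and $(DATE) itself when it reads this file; the sketch below only illustrates what the DBFILE and REPORTFILE settings resolve to on a given host (the DATE format shown is illustrative, not Tripwire's exact stamp):

```shell
# Show what DBFILE and REPORTFILE above resolve to on this machine.
# Tripwire does this substitution internally; the date format is assumed.
HOSTNAME=$(uname -n)
DATE=$(date +%Y%m%d-%H%M%S)
echo "DBFILE     -> /var/lib/tripwire/${HOSTNAME}.twd"
echo "REPORTFILE -> /var/lib/tripwire/report/${HOSTNAME}-${DATE}.twr"
```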

 Tripwire 1.2-3: /etc/tw.config

# Log file
@@define LOGFILE_M E+pugn
# Config file
@@define CONF_M E+pinugc
# Binary
@@define BIN_M E+pnugsci12
# Directory
@@define DIR_M E+pnug
# Data file (same as BIN_M currently)
@@define DATA_M E+pnugsci12
# Device files
@@define DEV_M E+pnugsc
# exclude all of /proc
=/proc E
#=/dev @@DIR_M
/dev @@DEV_M
#=/etc @@DIR_M
/etc @@CONF_M
# Binary directories
#=/usr/sbin @@DIR_M
/usr/sbin @@BIN_M
#=/usr/bin @@DIR_M
/usr/bin @@BIN_M
#=/sbin @@DIR_M
/sbin @@BIN_M
#=/bin @@DIR_M
/bin @@BIN_M
#=/lib @@DIR_M
/lib @@BIN_M
#=/usr/lib @@DIR_M
/usr/lib @@BIN_M
=/usr/src E
=/tmp @@DIR_M

Add:

/var/named @@CONF_M - if you are running a Bind DNS slave
/home/httpd/cgi-bin @@BIN_M

Delete/comment out:

#/dev @@DEV_M

This eliminates the reporting of spurious device-file changes caused by a reboot of the system.
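After any such edit to /etc/tw.config, the baseline must be rebuilt so the next verify reports only true changes. A sketch of that sequence for Tripwire 1.2 (paths from this section; the guard is my addition so it is inert without Tripwire installed):

```shell
#!/bin/sh
# Rebuild the Tripwire 1.2 baseline after editing /etc/tw.config, then
# promote the new database into place (paths from the text above).
DB="tw.db_$(uname -n)"
if command -v tripwire >/dev/null 2>&1; then
    cd /root || exit 1
    tripwire -initialize                       # writes ./databases/$DB
    cp -p "/root/databases/$DB" "/var/spool/tripwire/$DB"
else
    echo "tripwire not installed; baseline $DB not rebuilt"
fi
```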

Man pages:

Tripwire 2.3.0-58:

 tripwire - a file integrity checker for UNIX systems


 twintro - introduction to Tripwire software
 twadmin - Tripwire administrative and utility tool
 twprint - Tripwire database and report printer
 siggen - signature gathering routine for Tripwire
 twconfig - Tripwire configuration file reference
 twpolicy - Tripwire policy file description reference (For file
/etc/tripwire/twpol.txt)
 twfiles - Overview of files used by Tripwire and file backup process

Also see:

 TripwireSecurity.com
 Tripwire.org
 Tripwire documentation
 /usr/doc/tripwire-1.2/docs/designdoc.ps
 ViperDB - an alternative to Tripwire.
 Red Hat 7.1 tripwire manual

CHKROOTKIT: Performing a trojan/worm/virus file scan.

Tripwire monitors your filesystems for intrusions or added files so you can determine what has changed in sensitive areas of your system. Chkrootkit scans your system for known exploits, trojaned commands, and worms used to compromise a system.

Download chkrootkit from http://www.chkrootkit.org. It consists of a shell script, which should be run as root, plus a small collection of C programs.
 Installation:
o make sense (Compile C programs)
o ./chkrootkit (Run shell script and call programs.)
 Usage:
o ./chkrootkit
OR
o ./chkrootkit -h (help)

See the README file for more info.
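The install-and-run steps above can be sketched as one script. It assumes an already downloaded and unpacked chkrootkit source tree in the current directory ('make sense' really is the compile target); the directory check is my addition so the sketch is safe to run anywhere:

```shell
#!/bin/sh
# Build and run chkrootkit from an unpacked source tree; only acts if a
# chkrootkit-* directory exists here, otherwise it just explains itself.
src=$(ls -d chkrootkit-* 2>/dev/null | head -1)
if [ -n "$src" ]; then
    ( cd "$src" && make sense && ./chkrootkit )   # 'make sense' compiles the C helpers
else
    echo "no chkrootkit-* source directory here; download and unpack it first"
fi
```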

Note:

 This software is constantly being upgraded and updated to include scans for new exploits.
 If running portsentry, chkrootkit may return a false positive while performing the bindshell test.

NESSUS: Performing a network vulnerability scan/security assessment of your system.

Let me start by saying that this should only be performed on your own systems. It is considered an attack to run this against the systems of others, and legal action may be taken against you for performing such an audit. This is not a simple port scan like NMAP performs: NESSUS will search out and locate vulnerabilities on your system by actively trying known exploits against the system.

Nessus is amazingly complete and effective. In fact it is awesome!! It will identify services on
your system and try to exploit them. If a vulnerability is found it will make recommendations
about upgrades, configuration changes and where to find patches. It will also explain any causes
for concern in detail and explain why your system is vulnerable. And that's not all! It can output
reports in various formats including HTML with pie charts and bar charts!! The HTML reports
will have hyperlinks to the security reports, upgrades and patches. (I'm impressed) It can scan
Unix, Linux and Windows systems for vulnerabilities.

Note:

 Running "Dangerous Plugins" may cause a crash of the system being audited!!

The NESSUS software is available from http://Nessus.org.


If compiling source:

 Edit file: nessus-core/include/config.h (Set USE_AF_UNIX to define socket type)

It is also available in RPM form (see http://freshrpms.net):

 nessus-client-....rpm
 nessus-common-....rpm
 nessus-plugins-....rpm : Nessus plugins, which are used to perform the various checks. (Scripts in the NASL scripting language)
 nessus-server-....rpm : the nessusd server. Note that the RPM installs an init script which starts nessusd during boot; disable it with: chkconfig --del nessusd
 nessus-devel-....rpm : Nessus development libraries and headers.

Running NESSUS:

Configuration file: /etc/nessus/nessusd.conf
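A typical classic (2.x-era) Nessus session starts the daemon and then the client. This is a hedged sketch of the usual sequence (nessus-adduser, nessusd -D, and nessus are the standard commands of that era); the guard is my addition so the sketch is inert where the package is not installed:

```shell
#!/bin/sh
# Classic Nessus client/server startup sketch; does nothing unless the
# nessusd daemon is actually installed on this machine.
CONF=/etc/nessus/nessusd.conf
if command -v nessusd >/dev/null 2>&1; then
    nessus-adduser        # create a login for the client (interactive)
    nessusd -D            # start the scanning daemon in the background
    nessus                # launch the client and log in to localhost
else
    echo "nessusd not installed; once it is, its settings live in $CONF"
fi
```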

You may also consider OpenVAS (Open Vulnerability Assessment System), a popular fork of Nessus.

Useful links and resources:

 YoLinux List of security Tools and Links


 NSA security guide for Red Hat Enterprise Linux 5 (pdf)
 Kali Linux - Bootable live CD Linux distro pre-configured for penetration testing.
 Bastille-Linux.org - scripts to "harden" or "tighten" the Linux system
 UnicornScan - fast portscanner
Also see onetwopunch.sh to automate UnicornScans.
 Intrusion Detection on Linux: LIDS - LIDS is an intrusion detection and prevention
system that resides within the Linux kernel.
 Openwall.com - Owl (security enhanced Linux) and security patches. This kernel patch
makes the stack of a process non-executable so instructions loaded during a buffer
overflow attack will not run.
 LDP HowTo Guides:
o Linux Networking Overview HOWTO - Daniel Lopez Ridruejo
 News/Usenet Group: comp.os.linux.security - Deja
 Insecure.org: Linux Exploits
 comp.os.linux.security FAQ
 Chkrootkit.org: Links
 RFC 2196: Site Security Handbook
 CERT: UNIX Configuration Guidelines
 Apache.org: Security Tips for Server Configuration
 Unix Security Links
 InfosysSec.org: Security Portal
 SecurityFocus.com - News and Info
 W3C: Security Resources
 Attack Info:
o CERT: Denial of Service Attacks - Description
o CERT: TCP SYN Flooding and IP Spoofing Attacks
o CERT: UDP Port Denial-of-Service Attack
o DOS Attacks: SMURFING
o DOS presentations
o CERT: Problems With The FTP PORT Command
 Security Service Firms:
o SecureScan Perimeter
o QualysGuard

Books:

"Linux Firewalls"
by Robert L. Ziegler, Carl Constaintine
ISBN #0735710996, New Riders 10/2001

This is the newer version. It includes updates on the Linux


2.4 kernel, VPN's and SSH.

"Linux Firewalls"
Robert L. Ziegler
ISBN #0-7357-0900-9, New Riders 11/1999

Most complete Linux firewall/security book in publication.


Covers ipchains, bind and a complete review of possible
firewall configurations.

"Hack Proofing Linux : A Guide to Open Source Security"


by James Stanger, Patrick T. Lane
ISBN #1928994342, Syngress

"Real World Linux Security: Intrusion Prevention,


Detection and Recovery"
by Bob Toxen
ISBN #0130281875, Prentice Hall
"Hacking Linux Exposed"
by Brian Hatch, James B. Lee, George Kurtz
ISBN #0072225645, McGraw-Hill (2nd edition)

From the same authors of "Hacking Exposed".

"Maximum Linux Security: A Hacker's Guide to Protecting


Your Linux Server and Workstation"
by Anonymous and John Ray
ISBN #0672321343, Sams

Covers not only audit and protection methods but also


investigates and explains the attacks and how they work.

"Network Intrusion Detection: An Analyst's Handbook"


by Stephen Northcutt, Donald McLachlan, Judy Novak
ISBN #0735710082, New Riders Publishing

"SSH, the Secure Shell : The Definitive Guide"


by Daniel J. Barrett, Richard Silverman
ISBN #0596000111, O'Reilly & Associates

"Nessus Network Auditing (Jay Beale's Open Source


Security)"
by Renaud Deraison, Noam Rathaus, HD Moore, Raven
Alder, George Theall, Andy Johnston, Jimmy Alderson
ISBN #1931836086, Syngress

"Computer Security Incident Handling Step by Step"


by Stephen Northcutt
ISBN #0967299217
"Security Assessment: Case Studies for Implementing the
NSA IAM"
by Russ Rogers, Greg Miles, Ed Fuller, Ted Dykstra
ISBN #1932266968, Syngress

"Network Security Assessment"


by Chris McNab
ISBN #059600611X, O'Reilly

"A Practical Guide to Security Assessment"


by Sudhanshu Kairab
ISBN #0849317061, Auerbach Publications

"Aggressive Network Self-defense"


by NEIL R. WYLER
ISBN #1931836205, Syngress Publishing

