WIND RIVER LINUX
USER'S GUIDE
3.0
Copyright 2009 Wind River Systems, Inc. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means without the prior written permission of Wind River Systems, Inc. Wind River, Tornado, and VxWorks are registered trademarks of Wind River Systems, Inc. The Wind River logo is a trademark of Wind River Systems, Inc. Any third-party trademarks referenced are the property of their respective owners. For further information regarding Wind River trademarks, please see: www.windriver.com/company/terms/trademark.html This product may include software licensed to Wind River by third parties. Relevant notices (if any) are provided in your product installation at the following location: installDir/product_name/3rd_party_licensor_notice.pdf. Wind River may refer to third-party documentation by listing publications or providing links to third-party Web sites for informational purposes. Wind River accepts no responsibility for the information provided in such third-party documentation.
Corporate Headquarters
Wind River
500 Wind River Way
Alameda, CA 94501-1153
U.S.A.
Toll free (U.S.A.): 800-545-WIND
Telephone: 510-748-4100
Facsimile: 510-749-2010
For additional contact information, see the Wind River Web site: www.windriver.com
For information on how to contact Customer Support, see: www.windriver.com/support
Contents
Configuring and Building the Platform Project
Configuring for a Custom Target
Deploying Runtime Software
2.3 Updating and Debugging
Updating Packages
Updating the Kernel Configuration
Debugging Runtime Software
2.4 Preparing a Product Deployment
Layers in the Development Environment
The layers Directory
Layer Structure and Replaceability
3.5 Templates in the Development Environment
3.5.1 Template Configuration Files
3.5.2 Core Layer Template Directory Structure and Contents
profile Templates
rootfs Templates
board Templates (BSPs)
test Templates
feature Templates
extra Templates
3.5.3 Toolchain Layer Template Directory Structure and Contents
arch Templates
cpu Templates
multilib Templates
3.5.4 Kernel Layer Template Directory Structure and Contents
default Templates
feature Templates
Using Custom Layers
Configuration with Layers
Verifying Layer Processing
6.6 Combining Custom Layers and Templates
Another Custom Profile Example
Specifying Templates in a Custom Layer
Adding Custom Applications to Platform Projects
Referencing External Application Code from a Project
Including the Source in the Package dist Directory
Audit Reporting
Example of Auditing Output
9.4 Reconfiguring and Rebuilding the Kernel
Using GUI Tools for Kernel Modification
Adding a Kernel Fragment File in a Template
Adding a Config Fragment in Your Project Build Directory
9.4.1 Resetting the Original Kernel Configuration
10
Older Method of Adding SRPMs
Create the Local Layer Package Environment
Create the Patch
Build with the Patch
10.4 Adding a Package: rpmbuild with a Classic Package
Preparing to Add a Standard Source Archive with rpmbuild
Adding a Standard Source Archive with rpmbuild
10.5 Adding a Package: the Classic Method
Preparing to Add a Source Archive with the Classic Method
Adding a Source Archive with the Classic Method
10.6 Removing a Package
10.7 Adding a Package to a Running Target
11
No Forced Preemption (Server)
Voluntary Kernel Preemption (Desktop)
Preemptible Kernel (Low-latency Desktop)
Complete Preemption (Real-Time)
11.5 Interrupt Service Routine (ISR) Payload Execution Context
Thread Softirqs
Thread Hardirqs
Preemptible RCU
11.6 Run-time Scheduler Debug Instrumentation
Debug preemptible kernel
Wakeup latency histogram
Non-preemptible critical section latency timing
Interrupts-off critical section latency timing
RT Mutex Integrity Checker
12
Smart Querying for Dependencies
12.7.2 Getting a Footprint Snapshot
13
git and the Kernel
13.4.1 An Overview of git's Role in the Kernel
The Kernel Build Workflow
The kernel-cache
The Kernel Source Tree
13.4.2 Starting to Use git
Types of Commands
Tools Overview
The Kernel Lifecycle and Developer Workflow
13.4.3 Examples
Adding a Patch to the Kernel
Patch Management
BSP Example
Patch Merge
Sharing a Kernel
13.5 Kernel Patching with scc
Kernel Patching Design Philosophy
scc Facilities
scc Files
scc File Examples
Internals
14.2 Deployment
Accessing the Simulation
14.3 Configuration
Ending the Simulation
Command Line Options
Enabling TUN/TAP Networking
14.4 QEMU Example: Deploying initramfs
Building and Running initramfs
Switching the file system from initramfs
15
Configuring DHCP
The DHCP Configuration File
The DHCP Leases File
Starting the DHCP Server
15.3 Configuring TFTP
Making the Kernel Available for Download
The TFTP Configuration File
15.4 Configuring NFS
Making the Root File System Available for Export
Configuring /etc/exports
16
16.4 Example Ramdisk Deployment with U-Boot
Create the initrd Image
Configure U-Boot
Deployment
17
18
19
19.3 Booting Standalone with LinuxLive
Before You Begin
19.3.1 Creating a Platform Project
19.3.2 Preparing the Target's Hard Drive
19.3.3 Placing the File System and Kernel on the Hard Disk
Copying from the Wind River CD-ROM
Copying from a USB Disk
Downloading from a Network Host
19.3.4 Configuring Target System Files and Booting
20
22
22.2 Adding SRPM Packages
22.2.1 Adding the logwatch SRPM
22.4 Adding Classic Packages
22.4.1 Adding Classic Packages with configure
Adding links
22.4.2 Adding Classic Packages without configure
Adding schedutils
22.5 Adding Packages with a GUI Tool
22.6 Adding an RPM Package to a Running Target
23
Adding a Layer to a Platform Project
Adding Another Layer
Overriding Layer Contents with Another Layer
Patching a Host Tools Package
Configuring and Patching the Kernel
Enabling CONFIG_BINFMT_AOUT
Patching the Kernel
Configuring and Building
23.7 Using Feature Templates in Layers
23.8 Modifying a BSP
24
Boot KVM on common_pc_64 (with TAP)
24.3.2 Configuring the KVM Guest
Start the KVM guest (linux) from the KVM host (linux)
24.3.3 Run apache or boa
24.4 Collecting Kernel Core Dumps with Kdump
24.4.1 Kdump Example with x86
Using kexec for Quick Reboot
Issues and Limitations
PART V: APPENDIXES
A Open Source Documentation
A.1 Introduction
A.2 Carrier Grade Linux
A.3 Networking
A.4 Security
A.5 Linux Development
D.2.1 Enabling and Disabling KGDB in the Kernel
Using the Command Line
D.3 Configuring a TIPC Proxy
Configuring Your Workbench Host
Using usermode-agent with TIPC
Preparing the Host
H.2.1 Installing the Simple Executive Layer Prerequisites
H.2.2 Available Documentation
H.3.1 Configuring your Project
H.3.2 Customize your Package List
Using the Feature Templates
Making Changes Manually
Building the Project
Specifying Build Types
Running Simple Executive Applications
H.4.1 Linux Usermode Applications
H.4.2 Standalone Applications
H.5 Simple Executive Layer Technical Notes
H.5.1 Simple Executive Applications as wrlinux Packages
Application Wrapper Makefiles
Application Wrapper Support Files
H.5.2 Miscellaneous Simple Executive Details
H.6 Configuring and Building with Workbench
H.6.1 Adding the SDK Path
H.6.2 Overriding the OCTEON_MODEL Value
H.6.3 Starting Workbench
H.6.4 Configure a Platform Project with Simple Executive Support
H.6.5 Building the Platform Project
H.6.6 Working with the Package List
H.6.7 Changing the OCTEON_TARGET Value for a Package
H.7 Configuring the Kernel with Workbench
H.8 Debugging from the Command Line
H.8.1 Overview
H.8.2 Prerequisites
H.8.3 Available Documentation
H.9 Setting Up the Target
H.9.1 Review: Starting a Standalone Application
H.9.2 Starting an Application for Debugging
H.10 Setting up the Host
H.10.1 Starting GDB
H.10.2 Connecting to a Target
H.11 Debugging Caveats
H.11.1 Single-step and Atomic Operations
Debugging Multiprocessor Applications
Debugging Standalone Images with Linux Running
Debugging the Linux Kernel
H.12 Debugging with Workbench
H.12.1 Prerequisites
H.12.2 Importing the Application to a C/C++ Project (optional)
H.12.3 Creating a Launch Configuration
H.12.4 Debugging the Application
H.12.5 Note(s) on Workflow
H.13 Known Issues, Limitations, and Tips
PART I
Development Workflow
The Development Environment
Configuring and Building
Layer and Template Processing
Custom Layers and Templates
Application Development
1
Introduction
1.1 Introduction
1.2 Wind River Linux Documentation
1.3 Roadmap to the Wind River Linux User's Guide
1.4 Document Conventions
1.5 Overview of Wind River Linux
1.6 Platform Developer and Application Developer
1.7 Kernel and File System Components
1.8 Cross Development Tools
1.9 Supported Run-time Boards
1.10 Additional Resources
1.1 Introduction
Welcome to the Wind River Linux User's Guide. Wind River Linux is a software development environment that creates optimized Linux distributions for embedded devices. Development environments are available on a number of host platforms and support a large and ever-growing set of targets. For details on particular host support, refer to the Release Notes. For supported target boards, refer to Wind River Online Support.
This guide describes Wind River Linux: how to configure it and how to customize it for your needs. It is primarily oriented toward command line usage, but it is also useful to Workbench developers who want to understand some of the underlying design and implementation of the build system. It provides both explanatory and procedural use case material.
The Getting Started guide provides a few brief procedures that you can perform on the command line or with Workbench. Its primary purpose is to orient you to the main ways of using Wind River Linux and to point you to the documentation areas that focus most on the way you will be using the product.
This guide describes how to use Workbench to develop projects, manage targets, and edit, compile, and debug code.
This guide is for Linux-specific use of Workbench, and provides examples on how to configure and build application, platform, and kernel module projects.
Wind River Workbench provides context-sensitive help. To access the full help set, select Help > Help Contents in Wind River Workbench. To see help information for a particular view or dialog box, press the help key when in that view or dialog box. See 1.4 Document Conventions, p.6 for details on the help key.
Reference manual pages (man pages) for the GNU commands on the Wind River Linux development host are accessible through Workbench help with Help > Help Contents > Wind River Documentation > References > Wind River Linux Operating System Reference.
This is a set of documents that describe how to use the Wind River Analysis tools that are provided with Workbench. The tools include a memory use analyzer, an execution profiler, and System Viewer, a logic analyzer for visualizing and troubleshooting complex embedded software. The Wind River System Viewer API Reference is also included.
The host shell is a host-resident shell provided with Workbench that provides a command line interface for debugging targets.
Most of the documentation is available online as PDFs or HTML accessible through Wind River Workbench online help. Links to the PDF files are available by selecting Wind River > Documentation from your operating system start menu. The documentation is also available below your installation directory (called installDir) through the command line as follows:
PDF Versions: To access the PDF, point your PDF reader to the *.pdf file, for example:
installDir/docs/extensions/eclipse/plugins/com.windriver.ide.doc.wr_linux_platforms/wr_linux_users_guide_3.0/wr_linux_users_guide_3.0.pdf

HTML Versions: To access the HTML, point your web browser to the index.html file, for example:
installDir/docs/extensions/eclipse/plugins/com.windriver.ide.doc.wr_linux_platforms/wr_linux_users_guide_3.0/html/index.html
Long command lines that would normally wrap are shown using the backslash (\) followed by ENTER, which produces a secondary prompt, at which you may continue typing. (The secondary prompts are not shown to make it easier to cut and paste from the examples.) In the following example you would enter everything literally except the $ prompt:
$ configure --enable-board=sun_niagara2_sun4v \
    --enable-kernel=standard \
    --enable-rootfs=glibc_std
If a command requires root privileges to run, the prompt is displayed as #. The path to the configure script used to configure a project is generally omitted for brevity. The script is found in installDir/wrlinux-version/wrlinux/. The following naming conventions are used throughout the guide:
/home/user/WindRiver is referred to as installDir.
The directory or folder where you build your projects, for example /home/user/workdir/common_pc (common_pc_prj in Workbench), is referred to as prjbuildDir.
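Putting these conventions together, a typical command-line session might look like the following sketch. The directory and board names here are illustrative only (substitute your own installDir, prjbuildDir, and a board supported by your installation), and wrlinux-3.0 stands in for the versioned wrlinux directory:

```shell
# Create a project build directory (prjbuildDir) outside the install tree
mkdir -p /home/user/workdir/common_pc
cd /home/user/workdir/common_pc

# Run the configure script from the installation (installDir);
# the path and option values below are placeholders
/home/user/WindRiver/wrlinux-3.0/wrlinux/configure \
    --enable-board=common_pc \
    --enable-kernel=standard \
    --enable-rootfs=glibc_std
```

Because the script is run from prjbuildDir rather than installDir, the generated build files stay out of the installation tree.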
Platform Developer
The Platform Developer package is for developers who are intimately concerned with the Linux operating system including:
configuring and rebuilding the kernel and file system
developing or adding device drivers or kernel modules
deploying the kernel and file system to target boards
The Platform Developer package is available for the Linux host systems specified in the release notes.
Platform Developer Package Contents
The Platform Developer package includes the full Wind River Linux:
reference source
reference file system
target libraries
cross-build system
host utilities
GNU toolchain
Board Support Package (BSP) components for supported boards
KGDB kernel mode agent debugging
ptrace user mode agent debugging
Wind River System Viewer
Wind River Analysis Tools
core file analysis
The platform developer can export a sysroot (a portable set of libraries, include files, and other resources) as well as a toolchain to be used by the application developer.
Application Developer
The Application Developer package is for the developer of user-level applications only. It is available for Linux, Solaris, and Windows host systems as listed in the release notes.
Application Developer Package Contents
target libraries
Wind River host utilities
Wind River GNU GCC 4.3.x toolchain
Included is Wind River Workbench, with a debugging and analysis tools subset:
ptrace user mode agent debugging
Wind River System Viewer
Wind River Analysis Tools
NOTE: The sections of this book that deal mainly with the Wind River Linux cross-build system, reference source and file system, and BSP components, are not relevant to the Application Developer Package.
The kernel-BSP-filesystem feature matrix, available on Wind River Online Support, is the foundation of the kernel feature profiles, and documents the supported configurations as tested by Wind River. The matrix consists of kernel feature profiles, supported BSPs, and the types of file systems supported.
A kernel feature profile implements a supported set of kernel features. Each contains features that are compatible with each other and excludes features that are not compatible. Kernel profiles use a combination of kernel configuration, kernel patches, and build system changes to support their features.
NOTE: Kernel feature profiles are not the same as profile templates. Profile templates (or simply profiles) are described in profile Templates, p.25.
The kernel profiles are layered to build a set of increasingly specific or enhanced functionality. The set of features that is available and tested on all boards is called the standard kernel profile. Kernel feature profiles that add or modify the functionality of the standard profile are called enhanced kernel profiles. Enhanced profiles are available on a selected set of boards and are mutually exclusive with other enhanced profiles. A single board may be supported by multiple mutually exclusive (runtime) enhanced profiles along with the standard profile.
NOTE: All features of the standard kernel profile work within any particular enhanced profile.
- standard: all boards support the standard profile. Fundamental kernel features are implemented in this profile to provide a common platform for all boards.
- small: the small kernel profile represents a configuration suitable for resource-constrained deployment. Available on selected boards.
- cgl: the Carrier Grade Linux profile, designed to support the Linux Foundation's CGL 4.0 specification. See http://www.linux-foundation.org/en/Carrier_Grade_Linux for a summary and details on the CGL specification. Available on selected boards. Not available for ARM- or MIPS-based boards.
- ecgl: an extended CGL profile. Kernels with this profile provide the CGL features plus extensions. Available on a subset of CGL boards. Refer to Wind River Online Support for more information.
- preempt_rt: this kernel profile provides the PREEMPT_RT kernel patches to enable conditional hard real-time support for selected boards. For details, refer to http://rt.wiki.kernel.org/index.php.
- rtcore: kernels with this profile support the Real-Time Core guaranteed real-time core extensions. Real-Time Core is an optional product available from Wind River.
For detailed instructions on reconfiguring and customizing Wind River Linux kernels, see chapter 9. Configuring the Kernel.
- Glibc Standard (glibc_std): a full file system, with Glibc but without CGL-relevant packages or extensions.
- Glibc CGL (glibc_cgl): a full file system, with CGL-relevant packages and CGL extensions.
- Glibc Small (glibc_small): a much smaller, BusyBox-based file system, with Glibc.
- uClibc (uclibc_small): the same BusyBox-based file system as glibc_small, but with uClibc, a small C library intended specifically for very small footprint systems.
Run-time components are available both as binary RPMs and source tar files.
Table 1-1 shows which file systems are available with each kernel profile.
Table 1-1 Kernel Profiles and Supported File Systems
(Columns: glibc_std, glibc_cgl, glibc_small, uclibc_small. Rows list the kernel profiles, with Yes/No entries marking the supported combinations.)
a. In some cases this combination may not be supported, as it is not needed on purely networking equipment. Individual board README files contain details.
b. In cases where a board cannot support the cgl kernel profile (for example, MIPS boards), it instead supports the standard kernel profile and the glibc_cgl root file system, with some userspace features failing gracefully for lack of kernel support.

NOTE: Refer to Wind River Online Support for the latest kernel-filesystem-BSP feature matrix to determine which kernel features and file systems are supported for your board.
Wind River Online Support provides updates and enhancements to packages as they become available, which can be downloaded and added to Wind River Linux. Tutorials designed to illustrate Wind River integration with Workbench, as well as sample configuration files to simplify the target board boot process, are also available.
Use Cases
Besides step-by-step instructions in several chapters, this User's Guide includes several tutorial examples in Part IV. Use Cases.
NOTE: Detailed Workbench tutorials are available in the Wind River Workbench User's Guide and Wind River Workbench by Example, Linux Version.
Installation
Complete installation instructions can be found in the Wind River product installation and licensing guides. Go to http://www.windriver.com/licensing and then choose the Site Configuration Documentation link on that page. Any last-minute changes in the installation procedure or host requirements can be found in the Wind River Linux Release Notes or at Wind River Online Support.
2
Development Workflow
2.1 Introduction 13 2.2 Installing, Configuring, and Deploying Run-Time Software 14 2.3 Updating and Debugging 15 2.4 Preparing a Product Deployment 16
2.1 Introduction
This chapter presents an overview of the development workflow for application and platform development using Wind River Linux with Wind River Workbench. The cycle starts at product installation and ends at product deployment. This chapter provides basic instructions for building the run-time system. Each section refers to subsequent chapters for detailed explanations and step-by-step tutorials. Figure 2-1 illustrates the basic stages of product development.
Figure 2-1 Overview of the Product Development Lifecycle

(The figure shows the flow from Product Requirements through the Setup, Develop, Diagnose, and Optimize phases to the Product Deliverables: kernel image, file system image, layers, and sysroots. Setup draws on templates, layers, packages, and RPMs or CVS; Develop covers configure, edit, and compile in on-host Workbench; Diagnose covers deploy, debug, and test with on-host Workbench and the target; Optimize covers system layout, views, profiles, static analysis, cores, and footprint using Workbench, Eclipse, and the target.)
This document is largely concerned with the Setup and Develop phases shown in Figure 2-1. Refer to the Analysis Tools documentation listed in 1.2 Wind River Linux Documentation, p.4 for details on the Diagnose and Optimize phases. You can perform most of the operations involved in the development cycle within the GUI environment of Workbench, although most of the operations described in this book are performed at the command line. For full details, see the Wind River Workbench User's Guide and Wind River Workbench by Example, Linux Version.
Platform projects consist of default or customized kernel and file system combinations, and application projects are targeted for specific platforms. Platform developers create a platform project and then produce a sysroot (with make export-sysroot) for application developers. The sysroot provides the target runtime libraries and header files for use by the application developers on their development hosts. Because the sysroot duplicates application dependencies of the eventual runtime environment, applications are easily deployed after development. Platform developers can incorporate developed applications in a project by placing the application under prjbuildDir/filesystem/fs/ or by using the file system layout feature, and then rebuilding the file system (see 8. Changing Basic Linux Configuration Files and C. File System Layout Configuration for more information).
By configuring and building a platform project, you create a complete run-time system. You typically create a platform project within a work directory, called in this document workdir. Within workdir you create a subdirectory for the particular
project, which will be referred to in this document as prjbuildDir. Your prjbuildDir would typically have some name indicating its contents, for example common_pc_small, or new_powerpc. Within your prjbuildDir, issue a configure command with the necessary options to configure the appropriate build environment and makefiles. You then issue a make command to build a complete platform including the kernel and root file system. For more detailed instructions on the configuration and build process, see chapter 4. Configuring and Building.
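The sequence above can be sketched as follows. The option names shown are representative of this release; run configure --help from your installation to confirm the exact options and supported values:

```shell
# Sketch of the configure-and-build sequence described above.
# Option names and values are representative; confirm with "configure --help".
mkdir -p /home/user/workdir/common_pc
cd /home/user/workdir/common_pc
/home/user/WindRiver/wrlinux-3.0/wrlinux/configure \
    --enable-board=common_pc \
    --enable-kernel=standard \
    --enable-rootfs=glibc_std
make    # builds the complete platform: kernel and root file system
```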
If your target does not exactly match one of the supported boards, you can create a custom board support package, generally based on one of the provided definitions. See the Wind River Linux BSP Developers Guide for detailed instructions.
The process of initially deploying a target entails providing the kernel and file system, setting up the network infrastructure, configuring the bootloader, and so on. Note that the QEMU simulated deployment method (supported only for some boards) makes network infrastructure, bootloaders, and even target hardware unnecessary.
Updating Packages
You can add or remove RPM or source packages from your system, and these packages may be ones provided by Wind River or available from third parties. Package configuration is covered in detail in chapters 8. Changing Basic Linux Configuration Files, 10. Adding Packages, and Part IV. Use Cases. In addition, Wind River Workbench by Example, Linux Version describes how to use the Workbench GUI to add and remove packages, apply patches, and much more.
You can reconfigure, rebuild, and test the kernel. See chapter 9. Configuring the Kernel.
Wind River Linux platforms and applications can be developed and debugged with the wide range of tools available with Linux in general. In addition, Wind River Linux includes the powerful set of tools provided with Wind River Workbench, which consists of the Eclipse-based Workbench GUI customized with debugging and target management tools, as well as a set of analysis tools. There are several advantages to using Workbench for development, rather than just using command-line tools such as gdb and others on existing source code. For example, you can take advantage of the integration of the Editor and source code navigation facilities, create launch configurations, and in general use the GUI and its many different views to control breakpoints, monitor threads and processes, and so on.
Go to the Kernel hacking menu and disable Compile the kernel with debug info. Alternatively, you can use the Kernel Configuration tool in a Workbench platform project, as described in Wind River Workbench by Example, Linux Version. If this option is enabled in a *.cfg file, turn it off there. (Refer to 9. Configuring the Kernel for more on kernel configuration.) You may want to optimize libraries, if appropriate, as described in 12. Configuring Scalable Features, or configure for stand-alone deployment as described in Part III. Deploying your Platform Project. Final test and release completes the process.
3
The Development Environment
3.1 Introduction 17 3.2 Development Environment Directory Structure 18 3.3 Templates and Layers 20 3.4 Layers in the Development Environment 21 3.5 Templates in the Development Environment 23
3.1 Introduction
You build Wind River Linux run-time systems using two different environments:
- The development environment is the installed Wind River code, in its own directory and subdirectory structure.
- The build environment is a completely separate area, where you actually build the run-time system.

When building a run-time system for a supported target board or simulation, you should not have to enter or modify the development environment in any way. The separation of the development and build environments keeps the development environment pristine, and also supports parallel builds. This chapter introduces the structure and content of the development environment. It includes a discussion of the function and contents of the two prominent structural features of Wind River Linux: layers and templates. The build environment, including the use of the configure and make commands to build run-time software, is described in 4. Configuring and Building.
installDir/
    layers/
    sysroots/
The structure shown in Figure 3-1 makes it clear what is part of the build system and therefore cannot be overridden by other layers, and what is actually a layer and therefore can be overridden by your custom configurations.
NOTE: Not all layers are shown in Figure 3-1 and additional layers may be added.
The directories and executables shown in Figure 3-1 are discussed in the following sections.
The startWorkbench.sh executable starts the Workbench GUI. You can start Workbench by clicking a desktop icon or by entering the path and name of the executable at the command line. Workbench is introduced in the Wind River Workbench User's Guide, and examples of its use are in Wind River Workbench by Example, Linux Version. Not specifically shown in the diagram are several directories of interest to Workbench users:
- workbench-3.1: the Wind River Workbench installation.
- workspace: the default Workbench workspace. You can specify an alternate location at Workbench startup or switch workspaces during use.
- docs: the documentation for the online help system. The .html and .pdf files may also be accessed directly by browsing.
- wrlinux-3.0/scripts: scripts useful for Workbench and otherwise, including a script to help in adding packages to a project (see 22.5 Adding Packages with a GUI Tool, p.268).
- wrlinux-3.0/samples: sample projects that can be used in Workbench as well as from the command line.
Use this location for patches and other updates from Wind River.
The sysroots directory contains several pre-built sysroots that are available as build specs out of the box. Sysroots provide board-specific, pre-built target libraries to link against when building a package from source.
This directory contains the configure script you use when configuring a project, and a config directory that contains files setting default configure script behavior. The configure script is a front end to the product-agnostic ldat/configure script: you run the wrlinux/configure script, which calls ldat/configure with the proper parameters.
ldat is an acronym for Linux Distribution Assembly Tool. This directory contains the package- and product-agnostic build infrastructure. It includes text files that list required host tools for supported development hosts.
The scripts subdirectory contains a number of shell scripts for the build system that perform various build tasks, including defining build macros and rules. Within the tools subdirectory are makefiles and configuration scripts to build Wind River-modified host tools including the rpm, bzip2, and elfutils tools used to build target packages; the qemu simulator; the patch and quilt patching tools; and many more.
The wrlinux-3.0/layers directory is described in detail in 3.4 Layers in the Development Environment, p.21.
The Wind River Linux development environment contains separate layers for the kernel, toolchain, and build source files. These layers are directories in installDir/wrlinux-3.0/layers. The Wind River Linux layers have the prefix wrll, for example wrll-wrlinux, which is the layer containing the kernel and file system sources as well as associated configuration directories (templates). A kernel layer (wrll-linux-version) and toolchain layer (wrll-toolchain-version) are also provided in the layers directory. The layers directory may also contain other layers including optional products, such as the Real-Time Core product from Wind River (wrll-rtcore-version). The basic layered development structure is shown in Figure 3-2.
Figure 3-2 Overview of Development Environment Layers

installDir/wrlinux-3.0/layers/
    wrll-analysis-version/
    wrll-host-tools/
    wrll-linux-version/
    wrll-toolchain-version/
    wrll-wrlinux/
    include wrll-toolchain-*/
The sections below describe the contents of the primary layers provided with Wind River Linux.
The wrll-analysis-version Layer
The analysis layer includes Workbench and command-line analysis tools support. The subdirectories and contents are:
- dist/: makefiles and source files for building analysis tools.
- packages/: source packages for the Workbench analysis tools. Refer to the Workbench analysis tools documentation for details on these tools.
- templates/: various analysis tools templates for inclusion during configuration.
- tools/: makefiles and source files for the boot-time analysis tools. See 12. Configuring Scalable Features for more information on the boot-time analysis tools.
The wrll-host-tools Layer
The host tools layer contains the host tool binaries in host-tools, and the makefiles and patches in tools that create the binaries from the source code provided in layers/wrll-wrlinux/packages. A default template adds the host tools to your build configurations.
The wrll-linux-version Layer
The kernel layer contains the default kernel for different board combinations under the boards subdirectory. The packages directory contains the source file archives for host and target kernel tools; host makefiles and patches are in tools, and target makefiles and patches are in dist. The templates directory provides configuration files for different kernel configurations.
The wrll-toolchain-version Layer
The toolchain layer (wrll-toolchain-version) contains toolchain layers for each supported architecture, a common toolchain layer, source, and an include file that includes the architecture-specific toolchain layers. This is an example of a layer using an include file to include additional layers. In this include file, the toolchains for the individual architectures are preceded with a minus (-) sign, meaning that a particular toolchain is not required for a configuration to proceed; the common toolchain entry has no minus sign, meaning that it is required. The toolchain subdirectories contain the complete GNU toolchain, including the cross-compiler and GNU documentation. The toolchain layer was formerly supplied by the wrlinux-version/gnu/ directory.
The wrll-wrlinux Layer
This directory contains the open source files, target package patches and makefiles, and the resulting binaries and has the following subdirectories:
- The packages directory contains compressed tar files and source RPMs for Wind River Linux run-time system applications and host tools. These are copied from their open source project repositories.
- The dist directory contains makefiles and patches for the run-time application source files, each in its own package subdirectory. The makefiles and patches integrate the original pristine source files stored in packages into the Wind River Linux build system.
- The RPMS directory contains the patched run-time applications packaged as binary RPMs. These pre-built packages are used to build the run-time file system unless you make source modifications.
- The templates directory contains a set of templates that control the architecture, CPU, board, and kernel configuration (including patches); the file system configuration; and the package list of each board. For more information on templates, see 3.5 Templates in the Development Environment, p.23.
The kernel (wrll-linux-version) and core (wrll-wrlinux) layers show some of the standard subdirectories that layers can contain, including custom layers that you create. Layers make the development and build environments highly configurable. Layers are replaceable: if you have a different kernel layer, for example, you could specify it as an option to the configure command and override the default kernel layer. For more information on layers, refer to 5. Layer and Template Processing.
- config.sh: defines build environment variables.
- include: lists other templates to include.
- *.cfg: kernel config fragments (partial kernel configuration files).
- uclibc.cfg: a build configuration file for uClibc-based file systems.
- pkglist.*: this set of files controls the ultimate contents of the prjbuildDir/pkglist file:
- pkglist.add: lists packages to be added to the target package list.
- pkglist.remove: lists packages to be removed from the target package list.
- pkglist.only: a list of packages that are to be the beginning of a new target package list, replacing any pkglist assembled up to this point.
- toolslist.add: lists packages to be added to the host tools package list.
- toolslist.remove: lists packages to be removed from the host tools package list.
- toolslist.only: a list of packages that are to be the beginning of a new host tools package list, replacing any toolslist assembled up to this point.
- modlist.*: this set of files controls the ultimate contents of the prjbuildDir/filesystem/fs/etc/modules file:
- modlist.add: lists kernel modules to add to the target for optional loading (for example, with the modprobe command).
- modlist.remove: lists modules to be removed from the target module list.
- modlist.only: a list of modules that are to be the beginning of a new target module list, replacing any modules file assembled up to this point.
- fs/ files: templates may contain an fs subdirectory, the contents of which are used to help assemble the final target file system. For example, an fs/etc/inittab file may contribute the inittab for the target file system.
- bootloader/ files: files associated with bootloader requirements for the specific template.
- README: text describing the template. This is copied to prjbuildDir/READMEs/ when you configure a project that includes the template.
NOTE: Each board/boardname template includes an important README file. This file provides detailed information on the particular board, including information on board-specific features, its bootloader, and booting procedures. This README file is available in the prjbuildDir/READMES directory after you configure a project.
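Putting these conventions together, the following sketch assembles a hypothetical custom template. All names here (my-template, the strace package, the rtl8139 module) are placeholders chosen for illustration, not entries from any shipped template:

```shell
# Sketch: assembling a hypothetical custom template (all names are placeholders)
mkdir -p my-template/fs/etc
echo strace  >> my-template/pkglist.add   # package to add to the target package list
echo rtl8139 >> my-template/modlist.add   # module to list in /etc/modules on the target
# An fs/ file contributed to the target file system:
printf 'id:3:initdefault:\n' > my-template/fs/etc/inittab
```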
Figure 3-3

wrll-wrlinux/
    templates/
        board/
        extra/
        feature/
        profile/
        rootfs/
        test/
The following sections describe the contents of the five primary template directories and the three supplementary template directories of the wrll-wrlinux core layer.
profile Templates
Profiles combine kernel, file system, and other features into groups tailored for specific uses. They can be used as is, as a template to which you may add or subtract features, or as a model for creating custom profiles. Pre-defined profiles provide you with a starting point to create tailored solutions that best fit your needs. They are not necessarily intended to be used as an actual product, but may serve as a model or basis for one. Pre-defined profiles include:
- consumer_premise_equipment: a consumer device profile for customer premises equipment. This profile provides, for example, possible configurations for a set-top box or a home network gateway.
- industrial_equipment: a device profile for industrial personal computers. This profile provides, for example, possible configurations for a network monitor device or an industrial control device.
- mobile_multimedia_device: a consumer device profile for mobile and multimedia devices. This profile provides, for example, possible configurations for a personal media player or a mobile internet device.
- pne: Carrier Grade Linux. The Carrier Grade Linux specification is a standard, owned by the Linux Foundation, that defines features and performance of a Linux distribution suitable for use in carrier grade equipment. Enabling this profile provides a configuration that implements the requirements of the Carrier Grade Linux specification as published at http://www.linuxfoundation.org/en/Carrier_Grade_Linux. The pne profile provides a CGL-registered kernel and userspace configuration. Documentation describing which features are enabled in this configuration can be found on the CGL registration page at http://www.linuxfoundation.org/en/Registration.
- epne: Enhanced Carrier Grade Linux. This profile provides certain enhancements that are not currently part of the Carrier Grade Linux specification but are of use in carrier environments. This includes customized
OOM killer behavior, network traffic statistics gathering, specialized communications mechanisms for high-performance applications inside the same machine, and modified default configuration values.
- lpne: Limited Carrier Grade Linux. This profile is intended for environments where some carrier grade features are required but a full CGL-registered kernel and userspace is not necessary. Expected environments for this profile are small networking appliances and consumer electronics devices where specific components of the CGL specification are required (for example, applications where only the CGL Security requirements are necessary).
For additional information on the profiles and their supported features, refer to the README files and templates associated with each profile in installDir/wrlinux-3.0/wrll-wrlinux/templates/profile. For information on how to configure a project to include a profile, refer to Configuring with Profiles, p.36.
rootfs Templates
- *libc*: template subdirectories of rootfs that contain libc in their names, for example glibc_std, provide the package lists of the particular root file system type in pkglist.add files. They also contain include files that include the appropriate *_fs structure template for the file system (described next), and config.sh files which provide environment variables for the build system.
- file_system_fs: template subdirectories of rootfs that end in the suffix _fs provide file system structure. For example, glibc_small_fs contains some target /etc files and subdirectories, as well as fs-install and pre-cleanup scripts used by the build system, as described in 5.5 Constructing the Target File System, p.62.
- glibc_cgl: provides a full suite of Glibc-based run-time packages, including CGL-relevant packages and CGL extensions.
- glibc_std: provides a full suite of Glibc-based run-time packages, but without CGL-relevant packages and CGL extensions.
- glibc_small: provides a reduced suite of Glibc-based packages in a BusyBox-based run-time system.
- uclibc_small: provides a reduced suite of uClibc-based packages in a BusyBox-based run-time system.
NOTE: For the uclibc_small and glibc_small file systems, additional capabilities such as debug and demo tools must be added at configure time if you configure from the command line; debug and demo features are added by default when you configure using Workbench. For examples of how to add debug and demo features from the command line, see 4.2 Configuring Your Platform Project, p.32.
The templates for the four supported Wind River run-time file systems include a package list file and an include file; the templates listed in the include file (which always include a structure template, discussed next) are processed before the package list. (For details on include file processing, see 5.3.2 Processing Template include Files, p.56.)
rootfs/file_system_fs Templates
The file system structure templates are generally processed first, because they are found at the top of the include files within the four supported file system templates. The three structure templates are:
- glibc_fs: provides a directory structure for all glibc_std- and glibc_cgl-based file systems.
- glibc_small_fs: provides a directory structure for all glibc_small-based file systems.
- uclibc_fs: provides a directory structure for all uclibc_small-based file systems.
board Templates

This directory contains a template for each supported board, defining kernel and file system configurations specific to that board. A board template generally includes a kernel configuration file, a config.sh file, and an include file. It may also have rootfs and kernel subdirectories for more detailed, board-specific configurations. The board template contains the board README file.
NOTE: Some board templates also include a bootloader subdirectory containing bootloader binaries and flashing instructions.
test Templates
This directory contains several templates for different test suites. These tests are designed to validate a specific kernel and run-time system on a specific board. Some tests are board-specific, some are feature-specific, some are kernel and file system specific. There is also a common_tests template, for validation tests common to most boards, kernels and file systems.
feature Templates
The feature directory contains templates for special run-time features, some of which, like BusyBox, are automatically added to specific kernel and file system configurations, and others which you must add manually during the configuration phase, using either the --with-template-dir and --with-template options, or a custom layer.
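For instance, a feature template can be pulled in when you configure a project. This is a sketch only: feature/myfeature is a placeholder name, and the exact option syntax should be checked against configure --help for your installation:

```shell
# Sketch: include a feature template at configure time (run in prjbuildDir).
# "feature/myfeature" is a placeholder; check "configure --help" for syntax.
/home/user/WindRiver/wrlinux-3.0/wrlinux/configure \
    --enable-board=common_pc \
    --with-template=feature/myfeature
```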
extra Templates
The extra templates come from products other than the core Wind River Linux distribution, providing, for example, SNMP support.
arch Templates
There is an arch templates directory for each supported architecture: arm, ia32, mips, ppc, and sparc. With one exception, their contents are not standard. Although they all contain a build environment variable file (config.sh), some also have package list remove files, some have kernel configuration files, and so forth.
cpu Templates
There is a cpu templates directory for each supported architecture: arm, ia32, mips, ppc, and sparc. This directory contains a template for each supported CPU, defining CPU-specific configurations. Unlike the arch templates, the contents of the cpu templates tend to be standard. Each has a config.sh file and an include file; in addition, the CPUs that support uClibc have a uClibc build configuration file (uclibc.cfg).
multilib Templates
This directory provides information and environment variable settings (config.sh files) and includes architecture templates (with include files) to provide multiple library support. Information in the configuration files includes, for example, the valid combinations of the available versus the compatible CPU variants, and information on the available soft versus hard floating point libraries.
wrll-linux-version/
    templates/
        board/
        default/
        feature/
        karch/
        kernel/
The sections below describe the contents of the wrll-linux-version kernel layer template directories.
default Templates
The templates/default directory contains a toolslist.add file. Default templates are always included with the layer, so these host tools are added to your configuration whenever the kernel layer is included.
feature Templates
The templates/feature directory provides configurations for various kernel features including boot-time tracing and kernel debugfs.
karch Templates
These templates provide makefile variables for the different kernel architectures.
kernel Templates
These templates configure the supported kernel types (cgl, ecgl, preempt_rt, rtcore, small, and standard). For more information on templates, see 5. Layer and Template Processing.
4
Configuring and Building
4.1 Introduction 31 4.2 Configuring Your Platform Project 32 4.3 Building Your Platform Project 43
4.1 Introduction
The Wind River Linux Distribution Assembly Tool, or LDAT, is the Wind River Linux cross-build system for producing optimized embedded device software. LDAT has been designed specifically to benefit distributed development environments (many to one), and environments in which there are multiple projects leveraging common code.
If a pre-built kernel and file system is satisfactory for deployment, or for current testing and development, you can build a complete run-time file system in minutes using pre-built kernel and file system binaries (the RPM build method). You can build specific parts from source files, saving time by building only the file system, only the kernel, or a specific package, whichever element is of current interest. Your builds cannot contaminate the original pre-built kernels, RPMs, configuration files, and source packages, because the development environment is kept separate from the build environment. By using custom layers and templates (see 6. Custom Layers and Templates), you can add packages, modify file systems, and reconfigure kernels for repeatable, consistent builds, yet still keep your changes confined for easy removal, replacement, or duplication.
These last two features allow multiple builds, customized builds, and a strict version control system, while keeping the Wind River development environment pristine and intact. You create the build environment as a regular user with the configure command. It is in this environment that you build (make) a Wind River Linux run-time system, either default or customized, using software copied or linked from the development environment.

This chapter describes how you use the configure script to configure your project for the run-time software targeted for your board or simulation. It then describes the various make commands you use to build all or selected parts of the run-time. Although this chapter is oriented toward the command line, it will also give Workbench users a better understanding of the process of creating a Wind River Linux platform project.
Post-installation
Before you create your first run-time system, there is an immediate post-installation step that you must perform only once: installing host updates (if required).
Installing Host Updates
After product installation, you must install any necessary host updates. Read the required-*.txt text files for your host, within installDir/wrlinux-3.0/ldat/. Use the -q flag with the rpm command to check your installed versions. The version numbers within the required text files are the minimum acceptable versions.
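A minimal sketch of the comparison this step implies, using hypothetical version strings; on a real host the installed version would come from a command such as rpm -q --qf '%{VERSION}' packagename:

```shell
installed="4.4.2"   # hypothetical version reported by rpm -q
required="4.4.1"    # hypothetical minimum from a required-*.txt file

# sort -V orders version strings; if the required minimum sorts first
# (or equal), the installed version is acceptable.
lowest=$(printf '%s\n%s\n' "$installed" "$required" | sort -V | head -n1)
if [ "$lowest" = "$required" ]; then
    echo "host package is new enough"
else
    echo "host update needed"
fi
```

The same sort -V idiom works for any dotted version strings, which is why it is a convenient stand-in for the check that configure performs automatically.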
NOTE: The configure command checks for required host updates, and notes them in config.log, a text file within the build directory.
Figure 4-1  Example Work and Project Build Directories

/home/user/
├── workdir/
│   └── common_pc/
└── WindRiver/
    └── wrlinux-3.0/
In this example, the new work directory is named workdir. Within workdir is the common_pc project build directory which will hold the build environment, in this case for a common PC board. Directory names have been chosen for clarity; you can name them as you like. In this document, the variable prjbuildDir refers to your project build directory, which is common_pc in the example in the figure.
NOTE: When using Workbench to create a platform project, by default Workbench creates the installDir/workspace work directory, and the project build directory, with a _prj suffix, beneath it. You may override this default behavior, and select whatever work directory and project build directory structure you choose. For the example shown in Figure 4-1, the Workbench project directory would be in /home/user/WindRiver/workspace/common_pc_prj.
The configure script, found within installDir/wrlinux-3.0/wrlinux/, must be run from within the project build directory, in this example, from within workdir/common_pc/. The configure script is the most important of several key configuration files: it initiates the entire configuration process. It creates a subdirectory structure within the project build directory and populates it with the script framework, configuration files, and tools necessary to build the run-time system. It processes board templates and initial package files, and copies basic run-time file system configuration files (for the etc and root directories) from the development environment.

The script is always run with options. Which options you supply depend on which kernel and file system you wish to build for your board, which features you want to include, and whether you wish to build a complete run-time system, or only a kernel or only a file system.
The configure script produces a plain text log file, config.log, within the project build directory, in this case, workdir/common_pc. This is a very useful file, recording configure options, the automatic checking of host RPM updates, and so on. Workbench saves a similar log file, creation.log, which contains the screen output of the configure command.
Figure 4-2  Project Build Directory Structure

prjbuildDir/
├── build/
├── build-lib/
├── build-tools/
├── export/
│   ├── RPMS/
│   └── dist/
├── host-cross/
├── filesystem/
│   └── fs/
└── scripts/
Selected directories and their contents are described in further detail below:
build
Contains target packages for the default CPU of the build. During an RPM build (make fs), source code for the kernel and its patches are copied to the linux-version subdirectory within build/. During a source build (make build-all), the source code for all packages is copied to and built within each package's named subdirectory within build.
build-lib
This directory appears in cases where the project supports multilibs. For example, for the common_pc_64 board type, the build-x86_32 directory appears here. This is how the build of each multilib for the respective packages is kept separate.
build-tools
A special build directory where any required host tools are built.
export
The build stores its end products in export/. During an RPM build (make fs), the pre-built kernel image is copied to export/. The run-time file system, built from RPMs, is copied to the dist subdirectory, and a compressed run-time file system (a tar.bz2 file) is placed in export/. During a source build (make build-all), this directory also contains the new kernel, the vmlinux file, the System.map file, and a compressed modules file. During a source build (make build-all), the newly created RPMs are placed within the RPMS subdirectory, in addition to the contents placed in export/.
filesystem/fs
Contains run-time system files such as configuration files for etc and boot.
host-cross
Contains tools that run on the host and assist in cross-compiling and using the build environment. This is basically the infrastructure for the project build and includes the toolchain wrappers, toolchain, host tool binaries, and libraries.
scripts
Three directories not shown in Figure 4-2, are dist, packages and tools. These directories allow you to add packages and tools to experimental builds, without the necessity of first creating a custom template or layer. Your project build directory essentially becomes the highest-level layer containing these directories. To use them, populate them with the following:
dist: Makefiles and patches for target packages.
tools: Makefiles and patches for host-tool packages.
packages: The SRPM and classic packages themselves.
The contents of these directories will augment (if unique), or replace (if identically named), the contents of the dist, packages, and tools directories in lower-level layers, including the layers installed with the product. Once validated, the contents should be moved to a custom layer. Each of these directories contains a README file with additional information. Instructions for using these directories, and for adding packages in general, can be found in chapter 10. Adding Packages.
Profiles provide a convenient way to specify multiple options for pre-defined configurations. These configurations typically serve as starting points for more custom development. Pre-defined profiles include the following:
Kernel-only
File System-only
You can specify --enable-kernel or --enable-rootfs or both. If you specify --enable-kernel, you must also specify a board. It is an error to specify both --enable-cpu and --enable-board. A board implies a CPU.
Note that with the use of profiles which contain a kernel and root file system specification, you only need to specify the board and profile on the configure command line. The use of profiles is discussed in more detail in profile Templates, p.25 and 6. Custom Layers and Templates.
NOTE: Do not repeat arguments to configure, because only the last one will be used. For example, if you specify: ... --with-toolchain-version=x --with-toolchain-version=y, configure simply sets the version to y. If you want to specify multiple non-exclusive features, you can use comma-separated lists, for example: ... --with-template=template1,template2, or add features to the root file system with the + shorthand, such as: ... --enable-rootfs=glibc_small+feature1+feature2+feature3.
Within the project build directory, type the following on the command line:
$ configure \
    --enable-board=common_pc \
    --enable-kernel=standard \
    --enable-rootfs=glibc_std \
    --enable-test=yes \
    --with-test=bsp
The configure command is a script located in installDir/wrlinux-3.0/wrlinux/. If this directory is not in your PATH, include the absolute or relative path to the configure command in the examples given in this guide.
NOTE: The configure command fails with an error if you have "." in your PATH environment variable. In addition to being a security issue, a "." in your PATH can cause problems with the build. Remove "." from your PATH (for example, by editing and reinitializing your .bashrc, .cshrc, or other startup file) before issuing the configure command.
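A small sketch of the check this note implies, using a hypothetical PATH value; in a real session you would test "$PATH" itself (note that an empty PATH entry, "::", also means the current directory):

```shell
path="/usr/local/bin:.:/usr/bin"   # hypothetical PATH for illustration

# Wrap the value in colons so every entry, including the first and
# last, is delimited the same way, then look for a bare "." entry.
case ":$path:" in
    *:.:*) dot_in_path=yes ;;
    *)     dot_in_path=no ;;
esac
echo "PATH contains '.': $dot_in_path"
```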
Press ENTER to configure the project build directory. This takes two or three minutes. When configuration is finished, type:
$ make fs
This creates a complete run-time file system from pre-built RPMs, in approximately 20 to 30 minutes, depending on your configuration and environment. The system copies the pre-built kernel for this project from installDir/wrlinux-3.0/layers/wrll-linux-version/boards/board-name/kernel-type/ to the export/ directory within the project build directory.
Three basic sets of configure options for building run-time systems for the common PC are shown below. An additional set is shown for building a run-time system for the Platform CD ARM Versatile AB-926EJS.
NOTE: For details on developing platform projects for the optional Real-Time Core
The following command configures a complete run-time system (kernel and file system) for the common_pc BSP:

$ configure \
    --enable-board=common_pc \
    --enable-kernel=standard \
    --enable-rootfs=glibc_std
Alternatively, you could use a profile instead of specifying the kernel and root file system on the configure command line as described in Configuring with Profiles, p.36.
The following command configures only a kernel for the common_pc BSP:

$ configure \
    --enable-board=common_pc \
    --enable-kernel=standard
You can then use the source build method (make build-all) to build the kernel.
The following command configures a file system only for the common_pc:

$ configure \
    --enable-cpu=x86_32_i686 \
    --enable-rootfs=glibc_std
NOTE: Correct CPU codes for each board can be found in the wrll-wrlinux/templates/board/boardname/include file.
You can configure a complete run-time system (kernel and file system) for the ARM Versatile AB-926EJS, with subsequent creation of a flash file system enabled, using either the RPM (make fs) or source build (make build-all) method:

$ configure \
    --enable-board=arm_versatile_926ejs \
    --enable-kernel=small \
    --enable-rootfs=glibc_small \
    --enable-bootimage=flash
NOTE: In this example, no debug or demo templates have been added to the small file system configuration. This makes for a smaller run-time, but one that does not have debug capabilities, such as usermode-agent, built in. In the next example, debug capabilities are added.
You can configure a complete run-time system (kernel and file system) for the ARM Versatile AB-926EJS, with subsequent debugging enabled, using either the RPM (make fs) or source build (make build-all) method:

$ configure \
    --enable-board=arm_versatile_926ejs \
    --enable-kernel=small \
    --enable-rootfs=glibc_small \
    --with-template=feature/debug
The final option in the example, --with-template=feature/debug, adds application debugging features to the file system. Note that a shorthand way of adding file system profile templates is available, and you could specify the file system with --enable-rootfs=glibc_small+debug. Therefore, an equivalent configuration command for this example is:
$ configure \
    --enable-board=arm_versatile_926ejs \
    --enable-kernel=small \
    --enable-rootfs=glibc_small+debug
Similarly, to add demo capability (graphics capabilities) to a uclibc_small file system, you could either include the --with-template=feature/demo option on the configure command, or just specify the file system as --enable-rootfs=uclibc_small+demo:
$ configure \
    --enable-board=arm_versatile_926ejs \
    --enable-kernel=small \
    --enable-rootfs=uclibc_small+demo
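The + shorthand above can be sketched as a simple string split; the names here are taken from the examples, and the real configure script's parsing may of course differ:

```shell
spec="glibc_small+debug+demo"   # value passed to --enable-rootfs

rootfs=${spec%%+*}              # text before the first "+": the base rootfs
rest=${spec#*+}                 # everything after the first "+"
features=$(echo "$rest" | tr '+' ' ')

echo "rootfs template:  rootfs/$rootfs"
for f in $features; do
    # each extra name selects a feature template, e.g. feature/debug
    echo "feature template: feature/$f"
done
```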
NOTE: For more information on the features provided by the debug and demo templates, see the installDir/wrlinux-3.0/wrll-wrlinux/templates/feature/demo and debug directories.
Analysis tools are primarily used with Workbench, as documented in the online Analysis Tools and Workbench documentation. Like other Wind River Linux configuration commands, you can perform the following through Workbench or the command line. Note that if you create projects through the command line, you must then import them into Workbench for them to become visible. Backtracing, which is used by the analysis tools, is performed differently on MIPS boards than on non-MIPS boards, so two examples of configuring builds for analysis tools are presented below.
Non-MIPS Targets
You can use the following configure command to add analysis tools support to, for example, a SUN CP3020 target:
$ configure --enable-board=sun_cp3020 \
    --enable-rootfs=glibc_std \
    --enable-kernel=standard \
    --with-template=feature/analysis \
    --enable-build=profiling
NOTE: The --enable-build=profiling option enables frame pointers for the backtrace code. (The --enable-build=debug option also enables frame pointers, which enables backtrace.) To build this project, you must perform a build-all to rebuild all the packages with the new flag.

MIPS Targets
You can use the following configure command to add analysis tools support to, for example, the following Cavium Octeon target:
$ .../configure --enable-board=cavium_octeon_cn38xx_evb_nic4 \
    --enable-rootfs=glibc_std \
    --enable-kernel=standard \
    --with-template=feature/analysis
NOTE: The MIPS boards do not need additional configure options because they use a different method for backtracing. You can use make fs to build this configuration.
This section describes some of the more commonly used configure options.
A full platform configuration requires that you specify a board, kernel, and root file system.

--enable-board=boardname
    Specifies the target board. The list of board support packages that are currently installed is given in the --help output. A full list of supported boards can be
found at Wind River Online Support. A board specification implicitly includes cpu and arch because the board template includes defaults through include files. (For details on include files, see 5.3.2 Processing Template include Files, p.56.) This option is equivalent to specifying --with-template=board/boardname.

--enable-kernel=kernel
    Specifies the kernel. This option is equivalent to specifying the --with-template=kernel/kernel option.

--enable-rootfs=rootfs
    Specifies the file system. This option is equivalent to specifying the --with-template=rootfs/rootfs option.
--enable-ldat-checksum=[yes|no]
    Rebuilds packages from source when the checksum of package meta data changes, instead of using the prebuilt RPMs. Meta data includes build system makefiles, the tar packages, patches, version, toolchain information, and so on. The default is yes.

--enable-cpu=cpu
    Specifies the CPU. Typically, you do not specify this option because there is a default CPU for the board you choose with --enable-board.

--enable-jobs=number
    Specifies the maximum number of parallel jobs that make should perform. This should be set to the number of CPUs your system has available.

--enable-bootimage=iso
    Enables the subsequent build of an ISO boot image. Note that after the build completes, you must run a further command, make boot-image, to actually build the .iso image (found within export).

--enable-bootimage=flash
    Enables the subsequent build of a flash file system. Note that after the build completes, you must run a further command, make boot-image, to actually build the image file (found within export).

--enable-test=yes
    Includes that file system's and kernel's standard suite of test packages.

--with-test=testname
    Includes a specific test.

--with-template=template1,template2,template3...
    Appends the specified templates to the usual template list created by the configure options. When used with the --with-template-dir option, it can be used to include a custom template.

--with-template-dir=templatedirectory
    Specifies a user-selected directory for a custom template, to be processed after the usual templates in wrll-wrlinux/templates.

--with-layer=layer1,layer2,layer3...
    Specifies custom layers. The system will process any template of the same name found within a layer instead of the regular template within the
development environment. (The regular template may, however, be included by the template in the custom layer.)

--enable-quilt=yes
    Applies the quilt model instead of patch when applying patches. This is the default.

--with-package-dir=packagedirectory
    Specifies the location and name of the directory containing package source files. Without this option, configure defaults to wrlinux-3.0/layers/wrll-wrlinux/packages/.

--with-toolchain-dir=toolchaindirectory
    Specifies the location and name of the directory containing the toolchain. Without this option, configure defaults to installDir/wrlinux-3.0/layers/wrll-toolchain-version/.

--enable-build=debug or --enable-build=production
    When doing a source build (make build-all), debug compiles and installs binaries and libraries with debugging information (-g). This also lowers default optimizations. Use production (the default) to optimize and strip installed libraries and binaries.
Arguments to the configure command allow you to rebuild the toolchain or libc from source.
WARNING: While building the toolchain or libc from source is supported, the resulting binaries are not supported; all defects must be reproduced using the prebuilt binary toolchain and libc (glibc or uclibc).
Building libc from Source
To build libc from source, rather than using the pre-built version from the toolchain, add the feature/build_libc template with --with-template=feature/build_libc, or use the shorthand method of adding templates when specifying your root file system, for example: --enable-rootfs=glibc_std+build_libc
NOTE: The --enable-build-libc argument is deprecated. Use the build_libc template to build libc.
To change the options for the C library, specify them in the package list (prjbuildDir/pkglist). For example, if you wanted to build glibc with frame pointers, you would modify the glibc entry in pkglist to read:
glibc EXTRA_CFLAGS=-fno-omit-frame-pointer
To build the toolchain from source, use --with-template=feature/build_toolchain. When using this option, once the project is configured, perform a make toolchain. The system will give you further instructions.
NOTE: The --enable-build-toolchain argument is deprecated. Use the build_toolchain template to build the toolchain.
To configure the project to build the host tools, use --enable-prebuilt-tools=no. Note that this does not rebuild the toolchain components. Use make host-tools to build the host tools.
The RPM build method (make fs). This method uses pre-built kernels, and builds run-time file systems from pre-built RPMs where available, otherwise from source packages.
The source build method (make build-all). This method always builds both kernels and file systems from source packages.
NOTE: You do not have to rebuild everything if you modify individual package sources or meta data. See Rebuilding Packages with Changed Checksums, p.45 for more information.
Run the configure command within the project build directory to create a board, kernel, and file system-specific build environment, separate from the development environment created upon product installation. As part of this process, makefiles, configuration files, and a directory structure are configured for the new build environment. For example, you could configure a complete platform project as follows:
$ configure --enable-board=common_pc --enable-kernel=standard \
    --enable-rootfs=glibc_std
Step 2: Build the run-time file system.

Run the make fs command. This will typically take 10 to 20 minutes, depending on the complexity of the file system.
NOTE: The commands make, make fs, and make all do the same thing: they build the file system from RPMs where possible, and from source otherwise, and create a link to the default kernel. These and associated files are placed in prjbuildDir/export.
The output you see in the RPM build follows this sequence:

1. The sysroot contents are updated.

2. The kernel source from the development environment is unpacked into prjbuildDir/build/linux-version-type/, and all of the platform, file system, board, and feature-specific patches are applied.

3. It creates a list of the package RPMs necessary to build the run-time file system. It then creates the run-time file system by extracting the package RPMs into prjbuildDir/export/dist/. File system information from prjbuildDir/filesystem/fs is extracted and copied to prjbuildDir/export/dist/ last, to be able to overwrite files from the RPMs.

4. It compresses that run-time file system and installs the file into prjbuildDir/export as a tar.bz2 file, so that it can be easily copied to an NFS-exported, or other, directory. For example, the compressed run-time file system for the common_pc BSP is prjbuildDir/export/common_pc-rootfs-kernel-dist.tar.bz2.

For convenience, the prebuilt kernel for the particular platform and board is automatically copied from its location in the development environment (installDir/wrlinux-3.0/layers/wrll-linux-version/boards/board/kernel) to prjbuildDir/export/.
Rebuilding Packages with Changed Checksums
If you modify the source or meta data of a package that is part of your current configuration, this will be detected when you rebuild your file system as long as you have the configuration option --enable-ldat-checksum set to yes (the default). You will be prompted to perform a distclean and rebuild of the package. You can set LDAT_FORCE_CLEAN=distclean on the command line to distclean it without intervention, for example:
$ make package_name LDAT_FORCE_CLEAN=distclean
If you have LDAT_FORCE_CLEAN=distclean set in your environment, you will not be prompted to distclean, you will not have to put it on the command line, and the build system will automatically rebuild packages with changed checksums.

The meta data of each package contains the following:

package_MD5SUM
package_DEPENDS
package_CONFIG_VAR
package_CONFIG_OPT
package_MAKE_VAR
package_MAKE_OPT
package_EXTRACONFIGS
package_TEMPLATE_DIRS
package_DIST
TARGET_FUNDAMENTAL_CFLAGS

as well as the tar packages, patches, version, and toolchain information, and so on. Note that packages that are dependencies are part of the checksum so that, for example, a change to glibc will affect the checksum of all dependent packages.
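The checksum idea can be sketched as follows, with invented file names and contents: hash the package meta data, and if the result differs from the checksum stored at the last build, the package needs a distclean and rebuild.

```shell
pkg=$(mktemp -d)
printf 'all: build\n' > "$pkg/Makefile"
printf -- '--- a/f\n+++ b/f\n' > "$pkg/fix.patch"

# Hash the meta data files together into a single checksum.
checksum() { cat "$pkg/Makefile" "$pkg/fix.patch" | md5sum | cut -d' ' -f1; }

stored=$(checksum)                              # recorded at the previous build
printf 'all: build install\n' > "$pkg/Makefile" # the meta data changes...
current=$(checksum)

if [ "$current" != "$stored" ]; then
    rebuild=yes     # a real build would prompt for (or force) a distclean
else
    rebuild=no
fi
echo "rebuild needed: $rebuild"
rm -rf "$pkg"
```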
Dynamically Removing a Package
If you want to dynamically remove a particular package from the build, remove it from the pkglist file before the build.
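A sketch of dropping a package entry before the build, using an invented pkglist; the package name is matched at the start of its line so that similarly named packages are untouched:

```shell
pkglist=$(mktemp)
printf 'busybox\ntar\nglibc EXTRA_CFLAGS=-g\n' > "$pkglist"

# Remove the "tar" entry (name alone, or name followed by options).
grep -v -E '^tar( |$)' "$pkglist" > "$pkglist.new"
mv "$pkglist.new" "$pkglist"

remaining=$(cat "$pkglist")
echo "$remaining"
rm -f "$pkglist"
```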
Step 1: Create the build environment.

This step is identical to the RPM method with the configure command as shown in 4.3.2 Using the RPM Build Method (make fs), p.44.
Step 2: Build the kernel and run-time file system.
Run the command make build-all to build a new board-specific kernel from source, and generate a new set of RPMs from source files to build a compressed run-time file system image:
$ make build-all
This may easily take an hour or more for the first build, although subsequent builds should be faster. The make build-all command first generates a new set of RPMs from open source archive files and source RPMs. Then it compiles the source files to generate new binary executables, and bundles the executables into RPMs. Finally, it builds the run-time file system from the new RPMs.
The process of building from source is described in more detail and sequentially below.
Step 1: Unpack, patch and compile packages.
This two-phase step proceeds package-by-package, that is, each phase is completed for one package, and the system then moves on to the next package.
Phase One
The source files for a specific board are unpacked from the original development environment (wrlinux-3.0/layers/wrll-wrlinux/packages/), directly to their own named subdirectory within the build directory. They are then patched if necessary (patches integrate packages into the build environment, add functionality, enable cross-compilation, or repair defects in the original source).
Phase Two
Each unpacked and patched package is configured and compiled. The binaries and any configuration files are then installed into each package's prjbuildDir/build/INSTALL_STAGE/package/ directory.
Step 2: Generate RPMs.
One or more binary RPMs are generated from each package, and installed in prjbuildDir/export/RPMS/processor or prjbuildDir/export/RPMS/noarch. The
noarch subdirectory holds binaries designed to run on all architectures, instead of a given processor type.
Step 3: Build the run-time system.
The next step builds the kernel and compressed run-time system file from the newly created RPMs, and installs them in export/. It creates the Linux kernel, compresses the file system and modules into a tar file, and compresses the modules alone into a separate bzipped tar file. It also creates a separate vmlinux file for debugging and a System.map file. Examples of these files, for the common_pc board, are:
The run-time system's kernel and file system are ready to be exported to a target once the file system is unpacked and the kernel and file system are copied to their respective download and export directories.
NOTE: QEMU simulates deployment of supported boards directly from the project build directory, using the file system in export/dist. See chapter 14. Simulated Deployment with QEMU.
To build only the kernel from source, perform the following procedure.
Step 1: Create the build environment.
Run make -C build linux to build a new board-specific kernel from source. The first build typically takes 20 to 30 minutes, though it can take an hour or more; subsequent builds should be considerably faster.
To build only the file system from source, specify the file system and the CPU, for example:
$ configure --enable-rootfs=glibc_small --enable-cpu=x86_32
You can find the default CPU for a board by viewing the include file in the board template. For example:
$ cat \
installDir/wrlinux-3.0/layers/wrll-wrlinux/templates/board/common_pc/include
cpu/x86_32_i686
karch/i386
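A small sketch of extracting the default CPU from such an include file, using a temporary copy of the contents shown above:

```shell
inc=$(mktemp)
printf 'cpu/x86_32_i686\nkarch/i386\n' > "$inc"   # copy of the example include file

# The line beginning "cpu/" names the default CPU template for the board.
default_cpu=$(grep '^cpu/' "$inc" | sed 's|^cpu/||')
echo "default CPU: $default_cpu"
rm -f "$inc"
```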
Run make -C build package_name. For example, to build the tar package, enter make -C build tar.
5
Layer and Template Processing
5.1 Introduction 49
5.2 Understanding Layers 50
5.3 Understanding Templates 52
5.4 Processing Template Components 60
5.5 Constructing the Target File System 62
5.1 Introduction
Wind River Linux includes layers, and some optional products from Wind River are implemented as layers. In addition, you can create your own custom layers, and include layers created by others. The layers that are provided by the Wind River Linux development environment were described in 3.4 Layers in the Development Environment, p.21.

This chapter will help you understand how layers and templates are used in the Wind River Linux build system. Refer to 6. Custom Layers and Templates for details on how you can customize the default build environment with your own layers and templates.

You have already used layers and templates to build kernels and file systems if you have performed any of the examples in the Getting Started, or in 4. Configuring and Building. When you create a project with the configure utility, you do so using the available templates and layers. The configuration process creates a list of available layers, and then searches them to obtain any required templates. If a required template is not found, it is an error.

Layers provide templates and packages, while templates provide configuration. For example, a new package becomes available to the build system when you add it to a layer, but it only becomes part of a given project when you configure in the template that selects it. The template does not contain the package, it merely marks the package for inclusion.

Throughout this discussion, the terms higher and lower are used to describe the priority layers or templates have. A higher-level template (or layer) takes precedence over a lower-level one, and is thus more specific, rather than less specific.
When configure searches for components, it selects higher-level components first. When configure applies multiple components, it applies lower-level components first; this design allows higher-level components to override lower-level components. For example, a given BSP's kernel configuration fragment is at a higher level than the generic standard kernel configuration. The BSP-specific kernel configuration settings can then override more generic kernel configuration settings.
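The lower-first application order can be sketched with two invented kernel configuration fragments: the higher-level (BSP) fragment is applied after the generic one, so its setting for CONFIG_HZ wins.

```shell
work=$(mktemp -d)
printf 'CONFIG_SMP=y\nCONFIG_HZ=250\n' > "$work/standard.cfg"   # lower level
printf 'CONFIG_HZ=1000\n'              > "$work/bsp.cfg"        # higher level

# Concatenate lower first, then higher; for each option the last
# assignment seen is the one that takes effect.
merged=$(cat "$work/standard.cfg" "$work/bsp.cfg" |
    awk -F= '{v[$1]=$2} END {for (k in v) print k "=" v[k]}' | sort)
echo "$merged"
rm -rf "$work"
```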
As layers are added to the layer search list, any include files they provide are processed, inserting the included layers on the list below the layer that includes them.
The result is a prjbuildDir/layers file, ordered from top (highest priority) to bottom (lowest priority) that looks like this:
/path/toplayer
/path/middlelayer
/path/bottomlayer
/path/scopetools-version/wrlinux
/path/wrll-linux-version
/path/wrll-toolchain-version
/path/wrll-toolchain-version/wrll-toolchain-version-arm
/path/wrll-toolchain-version/wrll-toolchain-version-ia
/path/wrll-toolchain-version/wrll-toolchain-version-mips
/path/wrll-toolchain-version/wrll-toolchain-version-powerpc
/path/wrll-toolchain-version/wrll-toolchain-version-sparc
/path/wrll-toolchain-version/wrll-toolchain-version-common
/path/wrll-wrlinux
/path/wrll-host-tools
Note that some layers are included by default, for example a default kernel layer, because no alternative was specified. Also note the toolchain layers for each architecture; these specific layers are included by an include file in the wrll-toolchain-version layer. Layers are searched for specific templates based on your configuration command, as described next.
You may also have an environment variable called LDAT_LAYER_PATH, which is a comma-separated list of locations to look for layers before the default installDir/wrlinux-3.0/layers directory.
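The resulting search order can be sketched as follows, using invented paths: the comma-separated LDAT_LAYER_PATH entries come first, then the default layers directory.

```shell
LDAT_LAYER_PATH="/home/user/mylayers,/opt/sharedlayers"   # hypothetical value
default_dir="/opt/WindRiver/wrlinux-3.0/layers"           # hypothetical installDir

# Split the comma-separated list onto separate lines, then append the
# default directory so it is searched last.
search_dirs="$(echo "$LDAT_LAYER_PATH" | tr ',' '\n')
$default_dir"

echo "layer search order:"
echo "$search_dirs"
```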
Layers contain various configuration and source directories as well as directories containing pre-built packages that are recognized by the build system.
Configuration and Source Directories
templates/: Configuration templates.
dist/: Makefiles and patches for tools and packages.
tools/: Host tool source packages.
packages/: Target packages.
Pre-Built Directories
host-tools/: Pre-built host tools, mirrored into prjbuildDir/host-cross/.
boards/: Pre-built kernels, mirrored into prjbuildDir/export/.
RPMS/: Pre-built target packages, mirrored into prjbuildDir/export/RPMS/.
Any layer can have any or all of these components, and these components then augment or override components in lower layers.
When you specify a configure command line option such as the following:

--enable-rootfs=glibc_std

you are specifying a root file system template, in this case rootfs/glibc_std.
Similarly, when you specify a kernel with --enable-kernel, you are specifying a kernel template such as kernel/standard, and when you specify a board with --enable-board you are specifying a board template such as board/fsl_hpcii. Therefore, the following configure options:
--enable-rootfs=glibc_std --enable-kernel=standard --enable-board=fsl_hpcii

are equivalent to specifying the rootfs/glibc_std, kernel/standard, and board/fsl_hpcii templates with the --with-template option.
In addition, when you specify a typical board, you implicitly specify additional templates because the specified board template includes them. For example, the fsl_hpcii board template contains an include file, which causes it to include cpu, multilib, and arch templates. These templates are included implicitly to save you from having to specify them explicitly on the configure command line. Exactly how templates include other templates with include files is described in 5.3.2 Processing Template include Files, p.56.
The configure process builds the initial list of templates to search for from explicit and implicit configure command options, arranging them in the following order:

1. rootfs
2. kernel
3. profile
4. board
5. default
6. command line
where later templates have a higher priority over earlier templates. This would be equivalent to putting them all on the command line in the order:
--with-template=rootfs/type,kernel/type,profile/name,board/type,default,whatever...
because the last templates specified in this syntax have the highest priority.
NOTE: You do not have to place your arguments on the configure command line in the correct order; they are ordered correctly in the template list that the configuration process constructs.
What is important is to recognize the priority of templates. So, for example, if profile/name contained an include file listing rootfs and kernel templates, the templates listed in the profile/name include file would override the --enable-rootfs and --enable-kernel templates you gave on the command line, because profile templates have a higher priority than the rootfs and kernel templates in the ordered search list constructed by configure.
Templates are searched for in the following order:

1. template as a path relative to your project build directory (or an absolute path, but absolute paths can only come from the --with-template option or include files)
2. board/template in each layer, from top to bottom
3. cpu/template in each layer, from top to bottom
4. arch/template in each layer, from top to bottom
5. template in each layer, from top to bottom
Each template is searched for using each member of this template path list in turn until it is found. If it is not found in any form in any layer, it is an error. (But some included templates may be specified as optional, as described in Marking an Included Template Optional, p.59.)
Template Processing
A template is processed when it is found, unless the template contains an include file listing one or more other templates. The templates listed in any include files must be processed before the template that contains the include file. Once a template has been found and all include files processed, it is processed, and then the search for the next template (if any) begins. While template processing order and therefore priority may seem at first confusing, it is this design that gives templates their power, allowing you to replace system and other templates, or selectively add and remove components from them. You can always determine the order in which your templates were processed by viewing the templates and template-paths files as described in the next section.
Consider a configure command line that specifies a glibc_small root file system, a standard kernel, a common_pc board, and a feature/glibc_small_debug template. Based on this command line, configure creates an initial ordered search list:

1. rootfs/glibc_small
2. kernel/standard
3. board/common_pc
4. default
5. feature/glibc_small_debug
The configure process then inserts each template from the initial list into the ordered template path search list:

1. prjbuildDir/template
2. templates/board/template in each layer
3. templates/cpu/template in each layer
4. templates/arch/template in each layer
5. templates/template in each layer
So, for our command line example, configure first searches for rootfs/glibc_small as:

1. rootfs/glibc_small in your project build directory
2. board/rootfs/glibc_small in the templates/ directory of the highest priority layer
3. cpu/rootfs/glibc_small in the templates/ directory of the highest priority layer
4. arch/rootfs/glibc_small in the templates/ directory of the highest priority layer
5. rootfs/glibc_small in the templates/ directory of the highest priority layer
If a rootfs/glibc_small template is found and it does not contain an include file, the template is processed and the next template is searched for. If it does contain an include file, the include chain is processed as described in 5.3.2 Processing Template include Files, p.56 and then the template is processed. If a glibc_small template is not found in any template path variation after searching in the highest priority layer, configure then searches for it in the next highest priority layer, and so on until a rootfs/glibc_small has been found. If it is not found and all layers have been searched, it is a configuration error.
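The layer-by-layer search can be pictured as a small shell loop. The following is an illustrative toy only, not the actual configure implementation; the layer names (layer_high, layer_low) and directory layout are invented for the example.

```shell
# Toy sketch of the template search: within each layer (highest priority
# first), try each template-path variation in order until one exists.
root=$(mktemp -d)
mkdir -p "$root/layer_high/templates" \
         "$root/layer_low/templates/rootfs/glibc_small"

found=""
for layer in layer_high layer_low; do          # layers, highest priority first
    for prefix in board cpu arch ""; do        # path variations, in search order
        cand="$root/$layer/templates/${prefix:+$prefix/}rootfs/glibc_small"
        if [ -z "$found" ] && [ -d "$cand" ]; then
            found="$cand"
        fi
    done
done
echo "${found#"$root"/}"
```

Here the template is absent from the highest-priority layer, so the search falls through to the lower layer, where the plain rootfs/glibc_small path matches.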
An Example of Template Search Results
The result of the search for the templates in the layers is recorded in your prjbuildDir, in the layers, templates, and template_paths files. The template_paths file for the example is given in Example 5-1. In the template_paths file shown, templates that contain include files are shown in bold text.
Example 5-1 Example of template_paths and include Files

installDir/wrlinux-3.0/layers/wrll-wrlinux/templates/rootfs/glibc_small_fs
installDir/wrlinux-3.0/layers/wrll-wrlinux/templates/feature/busybox
installDir/wrlinux-3.0/layers/wrll-wrlinux/templates/rootfs/glibc_small
installDir/wrlinux-3.0/layers/wrll-linux-version/templates/kernel/standard
installDir/wrlinux-3.0/layers/wrll-wrlinux/templates/arch/ia32
installDir/wrlinux-3.0/layers/wrll-toolchain-version/i586/templates/multilib/x86_32
installDir/wrlinux-3.0/layers/wrll-toolchain-version/i586/templates/cpu/x86_32_i686
installDir/wrlinux-3.0/layers/wrll-linux-version/templates/karch/i386
installDir/wrlinux-3.0/layers/wrll-wrlinux/templates/board/common_pc
installDir/wrlinux-3.0/layers/wrll-host-tools/templates/default
installDir/wrlinux-3.0/layers/wrll-linux-version/templates/default
installDir/workbench-3.1/analysis/wrlinux/templates/default
installDir/wrlinux-3.0/layers/wrll-wrlinux/templates/feature/debug
installDir/wrlinux-3.0/layers/wrll-wrlinux/templates/feature/small_debug
installDir/wrlinux-3.0/layers/wrll-wrlinux/templates/feature/glibc_small_debug
For example, starting at the top of the template_paths file, we can see in the third line that configure found a rootfs/glibc_small template in the core layer (wrll-wrlinux). That template contains an include file listing a glibc_small_fs structure template and a busybox feature template, so they were found and processed first, before the rootfs/glibc_small template. Figure 5-1 illustrates the prjbuildDir/templates file for this example, showing which templates included others.
Figure 5-1
rootfs/glibc_small_fs
feature/busybox
rootfs/glibc_small
kernel/standard
arch/ia32
multilib/x86_32
cpu/x86_32_i686
karch/i386
board/common_pc
default
default
default
feature/debug
feature/small_debug
feature/glibc_small_debug
Note that in addition to the templates explicitly provided by configure command line arguments, there are several included templates, and also all default templates encountered in the layers. The highest priority template is listed at the bottom of the templates and template_paths files. This is feature/glibc_small_debug from the command line in the example.
Consider a simple example in which template1's include file lists templateA and templateB, and templateA's include file lists template2. The templates are then processed in this order:

1. template2
2. templateA
3. templateB
4. template1
A slightly more complex example illustrates this more clearly. This time, template1 includes four templates in its include file:
template1/include:
templateA
templateB
templateC
templateD
In this case, the depth-first search of template1 encounters an include file, which causes it to include templateA. But templateA also has an include file, containing an entry for template2, so it includes template2. template2 has no include file, so template2 is processed, then templateA is processed, and then the next entry in template1's include file, templateB, is examined. templateB is then processed (unless it contains an include file, in which case the included template(s) are first examined for include files, and so on). The end result for our example is a processing of templates in this order:

1. template2
2. templateA
3. templateB
4. template3
5. template4
6. templateC
7. templateD
8. template1
This gives template1 the highest priority of these templates, and it can override actions performed by any of the templates processed before it. The highest priority template is processed last.

As an example of how priority of processing can affect outcome, consider two files, pkglist.add and pkglist.remove, as they might occur in some of these templates. These files cause packages to be added to or removed from a package list that is created as the templates are processed. When processing of all templates is complete, the result is the contents of the prjbuildDir/pkglist file.

When processing the templates in the order shown in the example above, packages in any pkglist.add file in template2 are added to the package list, then packages in any pkglist.remove file in template2 are removed from the list. Then any packages in any pkglist.add file in templateA are added, then packages in any pkglist.remove file in templateA are removed. This may cause packages added in either template2 or templateA to be removed from the list by the pkglist.remove in templateA, although they could be added back by a later (higher priority) template.
Then templateB is processed and so on. Finally, the packages in any pkglist.add file in template1 are added, and the packages in any pkglist.remove in template1 are removed. Note that you can also completely control the entire package list with a custom template that includes a pkglist.only file, which restarts the package list with the contents of the file. See 5.4 Processing Template Components, p.60 for more details on template component processing.
Templates are protected from multiple inclusion while processing through any included templates (called processing the include chain). When configure first looks for a template it will find the first version with that name. If that template includes a template of the same name, it will then find the next one with the same name after the first one it found, and so on. In other words, it will go through the same search algorithm again, but this time skipping the one it already found and finding the next one of that name.
NOTE: Note that templates are not protected against inclusion at other times, so the same template may be included more than once in an overall configuration.
In practice, including templates of the same name is a common and useful technique. For example, if a board template contains a rootfs/glibc_std directory which in turn has an include file naming the rootfs/glibc_std template, the process is this:

1. When the search for rootfs/glibc_std begins, the board template containing a rootfs/glibc_std directory is found and its include file processed.
2. The include file contains an entry for rootfs/glibc_std, so a search is made once again for rootfs/glibc_std.
3. The same rootfs/glibc_std in the board directory is found again, but since it has already been found in this search, it is skipped.
4. The search continues and the next rootfs/glibc_std template found is processed (unless it contains an include file, the contents of which would be processed first).
5. Finally, the board directory's rootfs/glibc_std template (which was the first one found) is processed.
Note that if the second rootfs/glibc_std template does contain an include file, the templates listed in that file are processed before the other contents of the template, and so on for each template encountered. The result is that the entire include chain is processed depth-first, so that the first template that was found in the chain is processed last, giving it the highest priority. The depth-first application of include files ensures that the more-specific versions of templates are able to override or replace components of more generic ones. Note that you cannot specify which layer's version of a template to include; the build system automatically seeks out the highest-level layer containing a template which has the right name, and which is not already being processed.
A minus sign (-) at the beginning of an include file line means "it is not an error if this included template does not exist." For example, if you have an include file that contains this line:

-feature/superfeatures

configure will search for the feature/superfeatures template as always, using the usual search procedure, and include it if it exists, but will not produce an error if it does not exist. (See the toolchain layer in the development environment for an example of an include file with optional templates.) If an include file lists a template that does not exist and is not preceded by a minus sign, an error results.

Figure 5-2 summarizes template include file processing.
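The optional-template convention can be illustrated with a tiny line classifier. This is a sketch of the include-file syntax only, not configure's actual parsing code; the function name is invented.

```shell
# Toy parser for include-file lines: a leading "-" marks the template
# optional (missing is tolerated); any other line names a required template.
classify_include_line() {
    case "$1" in
        -*) printf 'optional:%s\n' "${1#-}" ;;  # strip the "-" marker
        *)  printf 'required:%s\n' "$1" ;;
    esac
}

classify_include_line "-feature/superfeatures"   # optional: no error if missing
classify_include_line "feature/debug"            # required: missing is an error
```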
Figure 5-2 Processing Template include Files
File fragments, such as config.sh or *.cfg.
File system changes, in the fs directories.
Package list handling *.add, *.remove, and *.only files.
Processing of each of these different kinds of template components is discussed in this section.
The config.sh and *.cfg files are fragments that are concatenated to produce the final config.sh and kernel .config files used by the build system. The config.sh fragments contain build environment variables as described in G. Build Variables, and the *.cfg fragments contain kernel configuration options and are discussed in 9. Configuring the Kernel. Template components processed first appear first in the concatenations. So, for example, if an earlier (lower priority) template sets a kernel config option that is set differently by a later template, the setting in the later template overrides the earlier setting, due to the way the final .config file is processed.
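The "later fragment wins" behavior can be pictured with a toy concatenation. This is a simplification for illustration only; the real kernel configuration merge is more involved, and the fragment file names here are invented.

```shell
# Toy illustration: .cfg fragments are concatenated in processing order,
# and the last line mentioning an option determines its final value.
dir=$(mktemp -d)
printf 'CONFIG_DEBUG_KERNEL=y\n'            > "$dir/low_priority.cfg"
printf '# CONFIG_DEBUG_KERNEL is not set\n' > "$dir/high_priority.cfg"

# Lower-priority fragment first, higher-priority fragment last:
cat "$dir/low_priority.cfg" "$dir/high_priority.cfg" > "$dir/dot_config"

# Emulate "last setting wins" for one option:
grep 'CONFIG_DEBUG_KERNEL' "$dir/dot_config" | tail -n 1
```

The higher-priority fragment's line appears last in the concatenation, so its value (the option disabled) is the one that takes effect.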
Files found in the fs subdirectories of templates are applied in template processing order, with files created by the last template processed overriding files created by previously-processed templates. There is no concatenation of fragments, only replacement, so the fs files in a template override identically-named files of previously processed templates.
These are the *.add, *.remove, and *.only files for packages, host tools, and kernel modules. They determine the list of packages that will comprise the set of packages, host tools, or kernel modules, and are:

pkglist.add
pkglist.remove
pkglist.only
toolslist.add
toolslist.remove
toolslist.only
modlist.add
modlist.remove
modlist.only
As each template is processed, first any packages in *.add files are added to the package list, then any packages in *.remove files are removed from it. If there is a *.only file, its contents become the start of a new package list, effectively making any *.add or *.remove in the template or any preceding templates meaningless.

Note that the order of processing of the *.add and *.remove files may produce results noticeably different from simply appending all of the *.add and *.remove files. For example, imagine a pair of templates, called A and B, that contain pkglist.add and pkglist.remove files. A's include file specifies B, so B is included from A. Thus, B's pkglist.add and pkglist.remove files are processed first. Here are the files:

A/pkglist.add:    package_2
A/pkglist.remove: package_3
B/pkglist.add:    package_1
B/pkglist.remove: package_2

If the package list files were simply appended, the results would be that pkglist.add would add packages 1 and 2, then pkglist.remove would remove packages 2 and 3. However, this would result in the included template (B) overriding the including template (A). Instead, each pkglist.add and pkglist.remove pair is processed in turn. Thus, after B is processed, package 1 has been added, and package 2 has been removed. When A is processed, package 2 is added back to the project, and package 3 is removed. This produces the desired result; the including template overrides the included template.

Figure 5-3 illustrates how combining templates in layers can contribute to a final product.
Figure 5-3

(The figure shows a low-priority layer, layerlow, contributing a rootfs/glibc_std template, and a high-priority layer, layerhigh, contributing a board/myboard/rootfs/glibc_std template; their .add and .remove files combine into the processed glibc_std configuration.)
In addition to constructing the package lists, config.sh, and .config files during template processing, the configure process performs the following steps for each template:
Step 1: Runs the pre-cleanup script.
During the processing of a template, the configuration process changes directory to the prjbuildDir/filesystem/fs directory and runs the path_to_template/fs/pre-cleanup script if it exists.
Step 2: Populates the fs directory.
During the processing of a template, the configuration process copies the path_to_template/fs directory to prjbuildDir/filesystem/fs.
5 Layer and Template Processing 5.5 Constructing the Target File System
Step 3: Runs the post-cleanup script.

During the processing of a template, the configuration process changes directory to the prjbuildDir/filesystem/fs directory and runs the path_to_template/fs/post-cleanup script if it exists.
Step 4: Processes the fs-install* scripts.
During the processing of a template, the configuration process appends the contents of path_to_template/fs/fs-install if it exists to the prjbuildDir/filesystem/fs/fs-install script. If a path_to_template/fs/fs-install-only script exists, any existing prjbuildDir/filesystem/fs/fs-install script is overwritten with its contents.
NOTE: Using these scripts is the preferred way of adding to or overwriting pieces of the target file system. They are part of the work of the configuration utility, and therefore are included in the RPM configuration database as discussed in the next section. Note, however, that they can't remove things from the target file system (the contents of export/dist/ and *.dist.tar.gz); for that you would have to use an fs_final script as described in Build Time File System Construction (export/dist), p.63.
The final steps of file system construction occur during the build process in the following order:
Step 1: Determine that all RPMS are available.
The build process uses rpm to install the file system in prjbuildDir/export/dist. If source needs to be recompiled for the RPM (for example, if the package metadata has changed, as described in Rebuilding Packages with Changed Checksums, p.45, or only source exists at this point), the source is compiled and the RPM produced.
Step 2: Begin populating the export/dist directory.
The package RPMs are installed in the prjbuildDir/export/dist directory. This directory will ultimately contain the file system that is exported when using QEMU, and which is compressed into the compressed tar file export/*.dist.tar.bz2 that you can download to your target.
Step 3: Run the fs-install script.
The fs-install script is executed (called from prjbuildDir/filesystem/Makefile) in a pseudo-root environment so it can operate as the root user and do such things as assign root permissions to files and directories. (See Viewing the Target File Settings, p.64 for more on the pseudo-root environment.)
Step 4: Install the configuration rpm.
The configuration rpm contains the prjbuildDir/filesystem constructed by the configuration utility (see Configure Time File System Construction (filesystem/fs), p.62). It is installed over the contents of export/dist so that it can overwrite anything required.
Step 5: Run the fs_final script.

The fs_final script, if it exists, is run. Note that this script can do whatever you want it to do. It can, for example, remove files from the target file system, unlike fs-install, which can only add or replace contents.

! CAUTION: Actions performed by fs-install are reflected in the RPM configuration database, which is a database of files and the packages that own those files. Actions performed by fs_final are not reflected in the database, so they may cause it to differ from the contents of export/dist. It is assumed you know what you are doing if you use fs_final scripts.

Step 6: Create the new compressed tar file.

The target file system assembled in the prjbuildDir/export/dist directory is also stored as a compressed tar file in prjbuildDir/export/arch-rootfs-kernel.dist.tar.bz2.
Ownership and permission settings are managed in parallel by the pseudo tool, so that files you create are owned by you on the host, but may become root files on the target. You can enter this pseudo environment on your host to examine the target-specific ownership and permission settings. For example, if you were to examine target file ownership without pseudo, it might look like this:
$ pwd
prjbuildDir
$ ls -l export/dist/bin/sh
lrwxrwxrwx 1 user user 7 Feb  2 10:38 export/dist/bin/sh -> busybox
To examine the file with its settings as they will appear on the target, you can supply the same command to pseudo:
$ host-cross/bin/pseudo ls -l export/dist/bin/sh
pseudo: Warning: PSEUDO_PREFIX unset, defaulting to prjbuildDir/host-cross.
lrwxrwxrwx 1 root root 7 Feb  2 10:38 export/dist/bin/sh -> busybox
Note that ownership now shows as root. You can also enter a pseudo shell to move around the target file system and view multiple settings:
$ host-cross/bin/pseudo sh
pseudo: Warning: PSEUDO_PREFIX unset, defaulting to prjbuildDir/host-cross.
$ cd export/dist/bin
$ ls -l s*
lrwxrwxrwx 1 root root 7 Feb  2 10:38 sh -> busybox
lrwxrwxrwx 1 root root 7 Feb  2 10:38 sleep -> busybox
$ exit
Exit the pseudo shell to return to your normal shell. See 8. Changing Basic Linux Configuration Files for information on making changes to filesystem/fs/ configuration files.
You can query the RPM database to find which RPM supplies a particular file, for example:
# rpm -qf /bin/hostname
There may be more than a single file with the same name supplied by different packages. In that case, the file associated with the default CPU type will be the only version installed. If more than one RPM in the same CPU configuration supplies a file, the file must be identical in the two versions or it produces a hard error.
6
Custom Layers and Templates
6.1 Introduction 67
6.2 Creating Custom Templates 67
6.3 Using Custom Templates 71
6.4 Creating Custom Layers 73
6.5 Using Custom Layers 78
6.6 Combining Custom Layers and Templates 79
6.1 Introduction
This chapter describes how you can create and use custom templates, create and use custom layers, and how you can combine your custom templates and layers.
You can populate your custom templates with the same types of files used in templates in the development environment including:
*.cfg kernel configuration fragments
pkglist.*, toolslist.*, and modlist.* package lists
include files listing other templates
config.sh files listing environment variables
Refer to 3.5 Templates in the Development Environment, p.23 for more details on the contents of templates. Although you can place custom templates anywhere, you would typically locate them outside of both the development and build environments. Figure 6-1 illustrates one possible example.
Figure 6-1 A Possible Organization of Development, Build, and Template Directories
(The figure shows the directory /home/user/ containing my_templates/ with feature/my_feature, profile/my_profile, and board/my_board subdirectories (the custom templates), workdir/prjbuildDir/ (the build environment), and installDir/wrlinux-3.0/ (the development environment).)
In Figure 6-1, each template is shown with only one instance, but you could have multiple feature templates under feature/, for example, multiple profiles under profile/, and so on.
Note that the development environment template naming convention does not limit how you can name templates. For example, you could create a template named whatever in your home directory, or create custom directory/subdirectory pairs under my_templates, such as options/one-option, options/another, and so on. You just need to inform configure about your custom template names on your configure command line, as described in 6.3 Using Custom Templates, p.71.
You may want to override existing development environment templates. For example, you may want to specify your own debug feature and not use the one supplied in the development environment, or you may want to customize the supplied version. You can replace the development environment version by creating your own feature/debug template and informing configure about it on the command line.

The most common reason for duplicating a development environment template name, however, is to incorporate, override, or enhance functionality that it provides. By creating a custom template with the same name, you can modify the action of the development environment template without making modifications in the development environment itself. For example, in the case of a custom feature/debug template, you could include the feature/debug template normally found by configure, and add some packages to it. Your template could look like this:
/home/user/my_templates/feature/debug/
    pkglist.add
    include
The include file in your custom template would list feature/debug. When configure found your custom template, it would first process the template listed in the include file. This causes it to search again for feature/debug. It would first find your custom template again but skip it because it was already found, then process the next feature/debug template found, in this case a development environment template with no include file. So the development environment template would be processed; it just contains a pkglist.add file, so the contents of that would be appended to the package list being assembled by configure. It would then finally process your custom feature/debug template, adding the contents of your pkglist.add file. (For details on template processing, see 5.4 Processing Template Components, p.60.)

Of course, you could also remove packages from the list added by the development environment template with a pkglist.remove file in your custom template, and in general perform any of the actions templates can perform.
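The custom feature/debug template described above could be created with commands like the following. The temporary directory stands in for /home/user/my_templates, and the extra package name (strace) is invented for the example.

```shell
# Create a custom feature/debug template that includes the development
# environment's feature/debug template and adds one more package.
base=$(mktemp -d)          # stands in for /home/user/my_templates
mkdir -p "$base/feature/debug"

# Include the same-named dev-env template so its contents are processed first:
printf 'feature/debug\n' > "$base/feature/debug/include"

# Add a hypothetical extra package on top of what the included template adds:
printf 'strace\n' > "$base/feature/debug/pkglist.add"

ls "$base/feature/debug"
```

You would then pass the directory to configure with --with-template-dir and name the template with --with-template=feature/debug.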
The best way to learn what you can put in templates is to look at working templates. The installDir/wrlinux-3.0/layers/wrll-wrlinux layer and other layers at the same level in the development environment provide examples of many types of templates. In addition, include files and default templates can help you structure your templates and reduce duplication of work.
include Files
You can include other templates by listing them in an include file in your custom template. One common use of this is to create a template with the same name as a template in a lower priority layer as described in Duplicating Other Template Names, p.69. You can list multiple templates in include files, and include development environment as well as custom templates.
The contents of a default template are applied if the layer it is in is included; you do not need to specify the default template, and it will be applied if you select the layer.
NOTE: It will also be included if you specify the templates directory it is in on the configure command line with --with-template-dir, as discussed in 6.3 Using Custom Templates, p.71.

You could, for example, have a set of feature templates for various but related purposes, where each feature adds a set of packages but many of the packages are added by all features. Rather than maintain the set of common packages across each feature, you could have a pkglist.add file in a default template, and then just maintain the unique packages in the pkglist.add files in each feature. Figure 6-2 illustrates such a scenario, and also a system file that is common to all the features. Default templates are processed just before the templates you specify on the command line, as described in 5.3 Understanding Templates, p.52.
Figure 6-2 An Example of default Template Usage
(The figure shows a templates/feature/ directory containing default/, feature_1/, feature_2/, and feature_3/ subdirectories, each with its own pkglist.add; the packages common to all features live in the default template's pkglist.add.)
NOTE: When you configure in the layer that contains the default template shown in Figure 6-2, you also configure in the default pkglist.add and the S99test.sh startup scriptregardless of whether or not you also configure in any of the feature or other templates in the layer. (See 6.5 Using Custom Layers, p.78 for details on configuration with layers.)
When you specify a development environment template to configure, you use the --with-template option, and configure searches the development environment for the template. For example, to add the debug feature to a configure command you would specify:
$ configure ... --with-template=feature/debug ...
Your custom templates will typically reside outside of the development environment (to keep the development environment pristine), so you would also specify the directory location of your templates with the --with-template-dir option. For example, to specify the my_profile template shown in Figure 6-1, you would specify the following:
$ configure ... --with-template-dir=/home/user/my_templates \
  --with-template=profile/my_profile ...
You can specify multiple templates with a comma separated list, for example:
$ configure ... --with-template-dir=/home/user/my_templates \
  --with-template=profile/my_profile,feature/my_feature,board/my_board ...
Custom templates are processed last, after all other templates. Because of this, they are especially useful for:
Kernel configuration file fragments that override default kernel options.
Additions to the file system configuration files under fs/, such as the networking configuration files under /etc/sysconfig, that may have to override default values.
Package list files that you want to override previous package list files.
Refer to 5. Layer and Template Processing for details on the order of template processing. You can check the order in which your templates are applied as described next.
Two files in your project build directory show you the order of template processing: templates and template_paths. Both list the templates processed in the order of their priority, from the lowest priority at the top to the highest priority at the bottom. The templates file lists only the template names, and template_paths lists the full paths. For more details on these files, refer to An Example of Template Search Results, p.55. If you are making changes with a template that are not taking effect, check to see if a higher-priority template is overriding your settings.
Wind River Linux release 3.0 introduced profiles based on templates. A profile is a template that typically includes a kernel, a root file system and various other template components. You only need to specify a valid board and a profile that includes a kernel and root file system to the configure script to configure a full platform build environment. When you create a custom profile, you will probably create it in some custom template location or custom layer of your own, so your configure command line will include a --with-template-dir or --with-layer specification as well.
A Simple Custom Profile
As a simple example, consider the following. You create a profile called small_plus that includes the small kernel, the glibc_small file system, and the demo feature. Your directory structure would look like that shown in Figure 6-3.
Figure 6-3 A Simple Custom Profile
$HOME/
    templates/
        profile/
            small_plus/
                include
The following configure command line creates a platform project for the common_pc using this profile:
$ configure --enable-board=common_pc \
  --with-template-dir=$HOME/templates \
  --enable-profile=small_plus
Refer to Another Custom Profile Example, p.80 for a more powerful example of the use of custom profiles in conjunction with layers.
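The small_plus profile's include file might look like the following. The exact template names (kernel/small, rootfs/glibc_small, feature/demo) are assumptions inferred from the description above; check the templates shipped with your installation for the real names.

```shell
# Sketch: create the small_plus profile's include file listing the
# templates it pulls in. Template names are assumed for illustration.
tdir=$(mktemp -d)          # stands in for $HOME/templates
mkdir -p "$tdir/profile/small_plus"
cat > "$tdir/profile/small_plus/include" <<'EOF'
kernel/small
rootfs/glibc_small
feature/demo
EOF
grep -c . "$tdir/profile/small_plus/include"   # three templates listed
```

Because a profile is just a template with an include file, configure processes the listed kernel, root file system, and feature templates exactly as if you had specified them individually.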
Figure 6-4

(The figure shows the directory /home/user/ containing layers/ with layer_name1 and layer_name2 (the custom layers), workdir/prjbuildDir/ (the build environment), and installDir/wrlinux-3.0/ (the development environment).)
In Figure 6-4, two custom layers are placed in a directory named layers. One or both could be configured into a project, and additional layers could be added from elsewhere.
Layers contribute to your build environment exactly what you want them to contribute; no additional files or directories are required. A layer (like a template) has no minimum structure, and it can even be an empty directory, useless as that would be. The installDir/wrlinux-3.0/layers/wrll-wrlinux layer is a good example of what a layer can do. By following the structure of the development environment layers in your layer, using only the parts that you want with the contents that you want, you can layer-over, or overlay, the contents of your layer on the contents of wrll-wrlinux during your project build. You can create layers manually (as described in 6.4.2 Manually Creating Layers, p.77) or you can make changes in your build environment and then package those changes as a custom layer, as described in the following section.
Detailed instructions for adding packages can be found in chapter 10. Adding Packages.
Once your packages are building and installing correctly, you can move them to a custom layer outside of your build environment. You can use the export-layer target as described next, or manually create a layer as described in 6.4.2 Manually Creating Layers, p.77 and move the package files to it.
After making a number of changes in your build environment, it is very useful to be able to create a layer capturing those changes. The layer then provides a way to recreate your current customized build: you just enter the original configure command, this time also specifying the layer that contains the changes. Similarly, other developers can recreate your customized build environment in the same way, or you can modify your build environment by including their layers. To capture your changes to the original configuration of your project build directory, do the following:
$ cd prjbuildDir
$ make export-layer
The first time you create a layer this way, make export-layer must create a reference project for comparison, so it takes longer than it will for future layer creations. The reference project is created from the configure command in the config.log file. Once that is done, the layer is created by comparing the original (reference) configuration with the current configuration. The layer itself is created in export/export-layer/ with a name composed of your project build directory name and a timestamp, for example common_pc.Wed_Aug_22_102239_PDT_2007. A tar file of the layer is also created in export/export-layer/. The following items are captured by make export-layer:
dist/, packages/, and tools/ additions
pkglist changes
modlist changes
filesystem/fs/ modifications
fs-final modifications
fs-install modifications
config.sh modifications
kernel .config modifications
An Example Layer
When you create a layer with make export-layer, you create a directory structure that contains the changes that occurred between the time the project build directory was configured and the make export-layer command was issued.
If, for example, in the course of a project development you had modified a few system files, changed some kernel configuration parameters, and added and removed a few packages, when you created a layer the contents might look like the following:
common_pc.Sun_Sep_16_042116_PDT_2007
common_pc.Sun_Sep_16_042116_PDT_2007/conf_cmd.ref
common_pc.Sun_Sep_16_042116_PDT_2007/README
common_pc.Sun_Sep_16_042116_PDT_2007/templates
common_pc.Sun_Sep_16_042116_PDT_2007/templates/default
common_pc.Sun_Sep_16_042116_PDT_2007/templates/default/README
common_pc.Sun_Sep_16_042116_PDT_2007/templates/default/fs
common_pc.Sun_Sep_16_042116_PDT_2007/templates/default/fs/etc
common_pc.Sun_Sep_16_042116_PDT_2007/templates/default/fs/etc/rc.d
common_pc.Sun_Sep_16_042116_PDT_2007/templates/default/fs/etc/rc.d/rc.local
common_pc.Sun_Sep_16_042116_PDT_2007/templates/default/fs/etc/sysconfig
common_pc.Sun_Sep_16_042116_PDT_2007/templates/default/fs/etc/sysconfig/network
common_pc.Sun_Sep_16_042116_PDT_2007/templates/default/fs/root
common_pc.Sun_Sep_16_042116_PDT_2007/templates/default/fs/root/.bash_logout.hide
common_pc.Sun_Sep_16_042116_PDT_2007/templates/default/fs/root/.profile.hide
common_pc.Sun_Sep_16_042116_PDT_2007/templates/default/fs/fs-install-only
common_pc.Sun_Sep_16_042116_PDT_2007/templates/default/pkglist.add
common_pc.Sun_Sep_16_042116_PDT_2007/templates/default/pkglist.remove
common_pc.Sun_Sep_16_042116_PDT_2007/templates/default/modlist.add
common_pc.Sun_Sep_16_042116_PDT_2007/templates/default/modlist.remove
common_pc.Sun_Sep_16_042116_PDT_2007/templates/default/fs_final.sh
common_pc.Sun_Sep_16_042116_PDT_2007/templates/default/config.sh
common_pc.Sun_Sep_16_042116_PDT_2007/templates/kernel
common_pc.Sun_Sep_16_042116_PDT_2007/templates/kernel/knl-frag.cfg
common_pc.Sun_Sep_16_042116_PDT_2007/dist
common_pc.Sun_Sep_16_042116_PDT_2007/dist/testpkg3
common_pc.Sun_Sep_16_042116_PDT_2007/packages
common_pc.Sun_Sep_16_042116_PDT_2007/packages/testpkg1
common_pc.Sun_Sep_16_042116_PDT_2007/tools
common_pc.Sun_Sep_16_042116_PDT_2007/tools/testpkg2
You can untar the tar file for the layer anywhere that is accessible and reference it there, or copy it into your source management system. You, or others, could now reference this directory with the --with-layer option to the configure command to include your changes into a project. Refer to 6.5 Using Custom Layers, p.78 for examples of configuring layers into new projects. See 22.2 Adding SRPM Packages, p.256 for a use case for adding packages and then using make export-layer to create a layer that includes the new packages and associated changes.
The following provides a high-level example of manually creating a layer that adds packages and feature templates, producing a layer that adds kernel and userspace functionality and also modifies the file system.
Adding Packages
You add packages to a custom layer as follows. First, create a directory to serve as your custom layer, for example layers/new_stuff. Then add packages and dist directories to the custom layer. Within dist, include a subdirectory for each package, each containing its patches subdirectory, as in Figure 6-5.
Figure 6-5 Directory Structure: Adding a Package in a Custom Layer
layers/
    new_stuff/
        packages/
        dist/
            package_x/
                Makefile
                patches/
Within packages, add each package's tar file or SRPM package. Within dist/packagename, add the package's makefile. Add the patches to the patches subdirectory.
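The steps above can be sketched as follows; the package name and file names (package_x, its archive, and its patch) are placeholders:

```shell
# Create the custom layer skeleton for one package.
LAYER=$(mktemp -d)/new_stuff              # stands in for layers/new_stuff
mkdir -p "$LAYER/packages" "$LAYER/dist/package_x/patches"

# The package's source archive (or SRPM) goes in packages/ ...
touch "$LAYER/packages/package_x-1.0.tar.gz"          # placeholder archive
# ... its makefile goes in dist/package_x/ ...
touch "$LAYER/dist/package_x/Makefile"                # placeholder makefile
# ... and its patches go in dist/package_x/patches/.
touch "$LAYER/dist/package_x/patches/fix-build.patch" # placeholder patch
```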
Add a templates directory and then add board and feature subdirectories as shown in Figure 6-6.
Figure 6-6 Directory Structure: Adding a Package in a Custom Layer
layers/
    new_stuff/
        packages/
        dist/
            package_x/
                patches/
        templates/
            board/
                my_board/
                    pkglist.add, knl-frag.cfg
            feature/
                a_feature/
                    pkglist.add
                another/
Your feature directories might make use of various packages in your custom layer, and include kernel configuration settings to support your board. You must specify your layer to the configure command along with any templates you want to include, as described in the next section.
To configure a platform project with a custom layer, use the --with-layer option. By default, the configure command will look for the layer specified with the --with-layer option in installDir/wrlinux-3.0/layers/. If you want to include a custom layer that you have in a different location, specify its full path to the --with-layer option. To include multiple layers, separate them by commas as in --with-layer=layer1,layer2,layer3. For example, to configure the layer new_stuff into your project, you would enter the following:
$ configure ... --with-layer=/fullpath/layers/new_stuff ...
6 Custom Layers and Templates 6.6 Combining Custom Layers and Templates
Specify the full path to the layer. Specify multiple layers with a comma-separated list:
$ configure ... --with-layer=/fullpath/layers/new_stuff,/fullpath/layers/old_stuff
Your custom layers are processed first, giving them the highest priority. You can view the order in which the layers were processed for your configuration in the layers file in your project build directory: highest priority layers are listed first. For example:
$ configure ... --with-layer=/fullpath/layers/new_stuff,/fullpath/layers/old_stuff
Neither the project build directory's templates nor its layers file shows the local custom layer (your project build directory). When you add packages or kernel configuration fragments using the local custom layer, check the results in the pkglist and .config files to determine whether your work is being applied correctly.
The profile example given in Creating Custom Profiles, p.72 was based on a profile included in a template directory, but not in a layer. Now consider a somewhat more complex example that combines custom profiles and layers with some of the previous discussion concerning template processing (see 5.3 Understanding Templates, p.52). This example includes custom profiles, features, and a BSP.
NOTE: See the Wind River Linux BSP Developer's Guide for details on creating your own BSP templates. For this example, you could substitute the name of any supported board, for example, common_pc, for the custom BSP, my_board.

The following discussion is based on a custom layer called phones, where you have made profiles for a basic phone and a smart phone. You have also added custom features to your layer, and include or exclude them based on the profiles. Your directory structure might look something like the one shown in Figure 6-7.
Figure 6-7 Profile, Feature, and Board Templates Example
$HOME/
    layers/
        phones/
            templates/
                my_board/
                    include
                glibc_small/
                profile/
                    basicphone/
                        include
                    smartphone/
                        include
                feature/
                    lcd_display/
                        pkglist.add
                    wireless/
                        pkglist.add
                    touchscreen/
                        include
                        pkglist.add
The include file in the my_board template includes CPU and architecture templates so that when you specify your profile along with the board, you provide the necessary board, CPU, architecture, root file system, and kernel that configure requires. The pkglist.add files in the feature templates might include packages from your custom layer or from some other layer including wrll-wrlinux/. Similarly, the include file in the touchscreen feature template might include additional feature templates; in this example, it includes lcd_display.
The include file for the basicphone profile might look like this:
kernel/small
rootfs/glibc_small
feature/debug
feature/demo
feature/lcd_display
The include file for the smartphone profile might look like this:
kernel/small
rootfs/glibc_small
feature/debug
feature/demo
feature/wireless
feature/touchscreen
You just choose a different profile to configure the different phones. To configure the basic phone for your board, enter:
$ configure --enable-board=my_board --with-layer=$HOME/layers/phones \
      --with-profile=basicphone
The templates file in your project directory shows the order of template processing. Figure 6-8 illustrates where templates have included other templates for the smart phone profile example.
Figure 6-8 Template Processing with Profile Example (templates File)
Lowest Priority
kernel/standard
arch/ia32
multilib/x86_32
cpu/x86_32_i686
karch/i386
board/my_board
kernel/small
rootfs/glibc_small_fs
feature/busybox
rootfs/glibc_small
feature/debug
feature/demo
feature/wireless
feature/lcd_display
feature/touchscreen
profile/smartphone
default
default
default
Highest Priority
While Figure 6-8 may appear complicated, that is because it illustrates the recursive nature of template processing. The templates file itself (as well as the template_paths file) simply lists the results of the recursion, so that the last template in the list was processed last and has the highest priority. As you can see in Figure 6-8, the profile template is one of the last processed (included templates are processed first) and it can therefore override lower priority templates.
Note that your custom profiles are not limited to include files; like any other template, they may contain components such as pkglist.* files, and fs/ files and scripts as well. Typically, however, they do not include hardware configuration information.
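For example, a custom profile might carry a pkglist.add and fs/ content alongside its include file. A hedged sketch, reusing the small_plus name from earlier; the extra package and motd file are illustrative, and the include contents follow the format shown above:

```shell
# Hypothetical profile template with an include file, a package list
# addition, and a root file system overlay.
PROFILE=$(mktemp -d)/templates/profile/small_plus
mkdir -p "$PROFILE/fs/etc"
printf 'kernel/small\nrootfs/glibc_small\nfeature/demo\n' > "$PROFILE/include"
echo "busybox" > "$PROFILE/pkglist.add"    # illustrative package addition
echo "Welcome"  > "$PROFILE/fs/etc/motd"   # overlaid onto the runtime rootfs
```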
You can supply configure with your templates in a custom layer as long as you specify the layer as well. For example, the following configuration would include the lcd_display feature:
$ configure --enable-board=my_board \
      --enable-kernel=standard \
      --enable-rootfs=glibc_std \
      --with-layer=$HOME/layers/phones \
      --with-template=feature/lcd_display
The template feature/lcd_display will be found in the layer $HOME/layers/phones, as you can verify in your prjbuildDir/template_paths file after running configure. You can also specify templates that have the same name as other templates. For example, your BSP might be named common_pc instead of my_board, and your common_pc template would include the template board/common_pc. Refer to The Structure of Templates, p.69 for a discussion of the use of this functionality. Note that you can specify a mixture of custom and standard templates which come from custom and standard layers. Refer to 5. Layer and Template Processing for details on how priorities are determined and components processed.
7
Application Development
7.1 Introduction 83 7.2 Working with Sysroots 83 7.3 Adding Custom Applications to Platform Projects 87
7.1 Introduction
This chapter discusses how application developers use sysroots, which are provided by the platform developer, to build applications, and how applications can be incorporated into platform projects.
Workbench automatically finds sysroots located in the installDir/wrlinux-3.0/sysroots directory, or you may point the developer environment at an arbitrary alternate directory where you have located an exported sysroot. It is also possible to point the application developer environment to the unexported sysroot from an existing platform build. This sysroot is located in prjbuildDir/host-cross/arch on the development host, but note that it is not suitable for export to other hosts, for example, to a Windows application development environment. To produce an exported sysroot environment from a configured build directory, use make export-sysroot as described in the following section.
Exporting Sysroots
Wind River provides sysroots supporting application development for four different architectures in installDir/wrlinux-3.0/sysroots/. These contain the necessary build specs to run Workbench examples and may be sufficient to get started on application development, but you should export a sysroot based on the specific platform you configure and build. The exported sysroot can then be used by application developers on any supported host. To create a sysroot, run the make export-sysroot command in your prjbuildDir directory, for example, the following creates a sysroot/ directory in export/:
$ cd arm_versatile_926ejs/
$ make fs
$ make export-sysroot
The resulting sysroot, for this example, is export/sysroot/arm_versatile_926ejs-926ejs_glibc-std. You can now copy this directory (for example, by tarring and untarring it) to installDir/wrlinux-3.0/sysroots on a development host, or to any arbitrary location, for example /sysroots, that developers will use.
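One way to relocate the exported directory is a tar pipe. The sketch below simulates this with temporary directories standing in for export/sysroot and the destination; the sysroot name matches the example above:

```shell
# Simulate relocating an exported sysroot with a tar pipe.
SRC=$(mktemp -d)       # stands in for prjbuildDir/export/sysroot
DEST=$(mktemp -d)      # stands in for /sysroots on the development host
mkdir -p "$SRC/arm_versatile_926ejs-926ejs_glibc-std/x86-linux2"

# Copy the whole tree, preserving structure and permissions.
tar -C "$SRC" -cf - . | tar -C "$DEST" -xf -
```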
NOTE: If you are creating a sysroot for a multilib-capable target, see 7.2.3 sysroots and Multilibs, p.86 for additional information.
In the following example, the platform developer has created an exportable sysroot (see 7.2.1 Exporting Sysroots, p.84) for an arm_versatile_926ejs target and placed it in an arbitrary location, /sysroots/arm, on the development host. The application developer builds the supplied sample multithread program and directs the executable output to the exported target root file system.
Step 1: Set up your environment.

Because in this example you are not using Workbench, you must set up your environment properly.

1. Initialize the Wind River environment:
$ cd installDir
$ ./wrenv.sh -p wrlinux-3.0
2. Add the appropriate cross-build tools to your path. For example, if you are developing the application on the same host as the one with the platform install, add the path to the toolchain in your project build directory:
$ export PATH=prjbuildDir/host-cross/arm-wrs-linux-gnuabi/bin:$PATH
Step 2: Write your source code.

In this case, we will just copy the existing mthread example source to our application project:
$ cp installDir/wrlinux-3.0/samples/mthread/mthread.c .
Step 3: Build the application.

Specify the gcc wrapper from your sysroot and, for the mthread application, also specify the pthread library for the linker when building:
$ /sysroots/arm_versatile_926ejs-glibc_std/x86-linux2/arm-wrs-linux-gnueabi-armv5tel_vfp-glibc_std-gcc \
      -g -lpthread -o ../arm_versatile_926ejs/filesystem/fs/mthread.out mthread.c
Note that in the example command line shown, the output is placed in filesystem/fs of the platform project so that it will be included when the runtime file system is built. Alternatively, the application developer could inform the platform developer of the location of applications ready for inclusion in a platform build. Some ways platform developers might include applications in their projects are described in 7.3 Adding Custom Applications to Platform Projects, p.87.
Step 4: Test the program.
You can now build the file system, download the compressed file system to the target, and test the program. Alternatively, if you are running an emulation, you can skip the step of building the file system by placing the mthread build output in export/dist instead of filesystem/fs as shown in the previous step, for example:
$ /sysroots/arm_versatile_926ejs-glibc_std/x86-linux2/arm-wrs-linux-gnueabi-armv5tel_vfp-glibc_std-gcc \
      -g -lpthread -o ../arm_versatile_926ejs/export/dist/mthread.out mthread.c
Then start the emulator if it is not already running and execute the program:
root@localhost:/root> /mthread.out
Wind River supports multiple libraries on certain targets. With these multilib targets, it is possible, for example, to compile an application against both 32- and 64-bit libraries, and not just one or the other. In cases where a board supports multilibs, a reasonable default library has been chosen, but you may need a different library. For example, common_pc_64 targets may include the x86_64 or x86_32 CPU types, with x86_64 being the default. If you want to provide for development with the x86_32 CPU type on a common_pc_64 target, you need to take additional action to be sure the appropriate packages are included in the sysroot you export.
Default and Variant CPU Types
When you configure a multilib-capable target, the default CPU type packages are listed in the pkglist file as normal, for example, glibc. If you have configured a common_pc_64 target, glibc would be the 64-bit glibc, because that corresponds to the CPU default for that target. With a multilib-capable target, packages included for other (non-default) libraries are called variants, and variants are listed as package.variant in pkglist. For example, glibc.x86_32 is the name of the 32-bit variant of glibc when you are building the common_pc_64 platform. The glibc package is built in build/, as is normally the case. Packages for a variant are built in build-variant/. Continuing with the same example, the glibc.x86_32 package would be built in build-x86_32/.
Adding Application Development Support for Variant Packages
The default build includes the proper packages for the default CPU type, but not all packages. It would take considerably more space on the target, for example, to include all library versions even though many are not used. Therefore, to create the proper sysroot for use in an application development environment that supports variant versions of packages, you must specifically include the additional libraries and packages of the variant in your platform build. For example, you will need libgcc/glibc (or uclibc) for each of the variants you want to be able to use, and you'll often need more (ncurses, openssl, and so on) depending on the application you want to build using the variant. You can add the variant packages by including them with a template at configure time, or adding them to the build system with make -C build pkgname.addpkg after configuration.
For example, if you want to develop with the 32-bit version of vim on the common_pc_64 target, you would need to add vim.x86_32, ncurses.x86_32, libgcc.x86_32, and any other packages required, to pkglist. After you have added to the pkglist with a template or with make -C build pkgname.addpkg, a fragment of the pkglist might look like this:
glibc
libgcc
ncurses
vim
glibc.x86_32
libgcc.x86_32
ncurses.x86_32
vim.x86_32
Once you have added the files to pkglist, you can then build the platform and export the sysroot. It will include the variant packages you specified.
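A sketch of the template approach follows; the feature name lib32 is hypothetical, and the package names are the ones from the example above:

```shell
# Hypothetical feature template that pulls in 32-bit variant packages
# on a common_pc_64 target.
TDIR=$(mktemp -d)/templates/feature/lib32
mkdir -p "$TDIR"
printf 'glibc.x86_32\nlibgcc.x86_32\nncurses.x86_32\nvim.x86_32\n' \
    > "$TDIR/pkglist.add"

# Alternatively, after configuration, the same packages could be added
# from the project build directory with commands such as:
#   make -C build glibc.x86_32.addpkg
```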
NOTE: The platform configurations for multilib-supported targets include the necessary toolchain wrappers, RPM macros, variant-specific variables, build directories (as mentioned above), and so on. You only need to add the packages.
In this example, the source code is external to the project, but a wrapper package references it and builds it in the local project. This is an excellent solution when the source is under a configuration management system. The usermode-agent package is an example of this. The source is located in installDir/linux-2.x/usermode-agent/src, and the wrapper package makefile is located in installDir/wrlinux-3.0/layers/wrll-wrlinux/dist/wdbagent-ptrace/Makefile. The following makefile code shows the unpack rule, which copies the source from the external location.
wdbagent-ptrace.unpack:
	@$(ECHO) "Copying $(wdbagent-ptrace_CLEARCASE) ..."; \
	if test ! -d $(wdbagent-ptrace_CLEARCASE); then \
		$(ECHO) "Agent src not found in $(wdbagent-ptrace_CLEARCASE)"; \
		exit 1; \
	fi; \
	if test ! -d $(wdbagent-ptrace_BUILD); then \
		$(MKDIR) $(wdbagent-ptrace_BUILD) || exit 1; \
	fi; \
	d=$$(cd $(wdbagent-ptrace_CLEARCASE); $(ECHO) $$PWD); \
	$(CP) -r $$d/* $(wdbagent-ptrace_BUILD)
	@$(MAKE_STAMP)
Typically, you would place your dist/app_name/Makefile that contains your unpack rule in a shared layer directory.
For small or local applications, you may want to directly include the source in the layer, specifically within the dist/app_name/src subdirectory. The op_agent code in the analysis layer is an example of this. The source code is located in installDir/wrlinux-3.0/layers/wrll-analysis-1.0/dist/wr-opagent/src. The Makefile is located in installDir/wrlinux-3.0/layers/wrll-analysis-1.0/dist/wr-opagent/. The following makefile code shows the unpack rule for this application:
wr-opagent.config: wr-opagent.unpack
	@$(call echo_action,$@,nothing to do)

wr-opagent.unpack:
	@$(call echo_action,Unpacking,$*)
	$(MKDIR) -p $(wr-opagent_SRC)
	$(CP) -r `echo "$($*_PATCH_DIRS)/*" | sed -e "s/patches/src/"` $(wr-opagent_SRC)
	@$(MAKE_STAMP)
In this case, the source is kept in "open," unpacked form for easy development. Instead of, for example, untarring a tar archive, the build system directly copies the source tree into the build directory. There is a wrapper forming a "virtual" package for the purposes of the build system, but the application is not bundled into a package or archive because it is under active local development.
PART II
8
Changing Basic Linux Configuration Files
8.1 Introduction 91 8.2 Creating Basic Linux Configuration Files 91 8.3 Changing Preset Linux Configuration Files 92 8.4 Moving Changes to a Custom Layer 93 8.5 Moving Changes to a Custom Template 94 8.6 Tutorial: Configuring Robust Networking and NTP 94
8.1 Introduction
Most basic configuration files within every Linux system are within the /etc directory and its subdirectories. The run-time file system of each Wind River Linux system comes complete with a set of preconfigured files within the /etc and /root directories. As with kernel reconfiguration, you may make changes to these files within the build environment for testing. You may also backport them, when and if you want them to be permanent, to either the development environment or to your own template.
Figure 8-1

prjbuildDir/
    export/
        dist/
    filesystem/
        fs/
            etc/
                sysconfig/
                rc.d/
            root/
The contents of fs/ originate within templates in the development environment. During a make fs or a make build-all, the build system copies the contents of the fs directories in the templates to prjbuildDir/filesystem/fs. These files and directories are then copied to the complete run-time file system within prjbuildDir/export/dist. Finally, the build system compresses this file system to a single file within export/. For a typical NFS deployment, you would manually uncompress this file in the NFS export directory.
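The deployment step can be simulated as follows; the archive name and export path are placeholders, and temporary directories stand in for the project build directory and NFS export (a real deployment untars the compressed file from export/ into the NFS export directory):

```shell
WORK=$(mktemp -d)

# Stand-in for the compressed root file system produced in export/.
mkdir -p "$WORK/fs/etc"
echo "127.0.0.1 localhost" > "$WORK/fs/etc/hosts"
tar -C "$WORK/fs" -czf "$WORK/glibc_std-dist.tar.gz" .

# Uncompress it into the (placeholder) NFS export directory.
NFS_EXPORT="$WORK/exports/target"
mkdir -p "$NFS_EXPORT"
tar -C "$NFS_EXPORT" -xzf "$WORK/glibc_std-dist.tar.gz"
```

On a real system the extraction target would be the directory listed in /etc/exports, and the tar command would typically be run with root privileges.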
NOTE: You may need root privileges when performing this tar command, depending on the permission settings of the exported NFS directory.
You erase the existing file system within export/dist when you make a new one. However, any changes you made to files within prjbuildDir/filesystem/fs remain, and are copied over to the rebuilt file system. In this manner, changes to files within prjbuildDir/filesystem/fs migrate to each rebuild. For details on making changes to the runtime file system see C. File System Layout Configuration.
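The persistence of filesystem/fs changes can be sketched as follows; this simulates only the erase-and-copy step of a rebuild (a real make fs does much more), with a temporary directory standing in for prjbuildDir:

```shell
PRJ=$(mktemp -d)                          # stands in for prjbuildDir
mkdir -p "$PRJ/filesystem/fs/etc" "$PRJ/export/dist"
echo "my change" > "$PRJ/filesystem/fs/etc/issue"

# A rebuild erases export/dist and recreates it, then copies
# filesystem/fs over it, so local changes survive each rebuild:
rm -rf "$PRJ/export/dist"
mkdir -p "$PRJ/export/dist"
cp -r "$PRJ/filesystem/fs/." "$PRJ/export/dist/"
```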
8 Changing Basic Linux Configuration Files 8.4 Moving Changes to a Custom Layer
3. If they have migrated to export/dist/, then they have also migrated to the compressed file system archive file within export/.
NOTE: Although you may add and change files in fs/ within the project build directory, you may not add directories. Added directories will not migrate.
You keep the templates within the development environment pristine. You are not restricted to just files; you may add additional directories to the file system as well. You can create board, kernel, and file system-specific layers.
The directory structure of the layer will depend on how restrictive you wish the changes to be. For example, you may want:
- Your changes migrated to every Glibc-based file system: create the directory structure customlayer/templates/rootfs/glibc_fs/fs.
- Your changes migrated only to every Glibc CGL file system: create the directory structure customlayer/templates/rootfs/glibc_cgl/fs.
- Your changes restricted to the current project's board, CPU, and file system: create the directory structure customlayer/templates/board/boardname/rootfs/rootfsname/fs.
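For instance, the most restrictive of the three could be created like this; the board and rootfs names are placeholders, and a temporary directory stands in for the custom layer location:

```shell
# Hypothetical layer restricted to one board and one root file system.
CL=$(mktemp -d)/customlayer
mkdir -p "$CL/templates/board/my_board/rootfs/glibc_std/fs/etc"
echo "board-specific change" \
    > "$CL/templates/board/my_board/rootfs/glibc_std/fs/etc/issue"
```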
NOTE: Refer to 5. Layer and Template Processing for details on template processing.
The procedure has three steps:

1. Using the cp * -f -r command, copy the entire template structure (examples above), from the templates subdirectory within the development environment, to your custom layer.
2. Edit or add the configuration files and add any directories you wish, within the layer's fs directory.
3. Within the project build directory, run configure with the --with-layer= option.
Check to make sure the changes have migrated to filesystem/fs within the project build directory.
In order not to disturb the development environment, the tutorial uses a custom template to add the necessary files and directories to the run-time file system. The following assumes you have created a platform project and performed a make fs.
Step 1: Add the network host names to the target's hosts file.
First, use vi or another text editor to add the server's and target's hostnames and IP addresses to prjbuildDir/filesystem/fs/etc/hosts. An example addition to the hosts file is below:
192.168.10.1 server1.lab.org
192.168.10.2 target.lab.org
Step 2: Add the target's hostname to the network file.

In a similar fashion, add the target's hostname to filesystem/fs/etc/sysconfig/network. An example network file is below:
NETWORKING=yes
HOSTNAME=target
8 Changing Basic Linux Configuration Files 8.6 Tutorial: Configuring Robust Networking and NTP
Step 3:

Step 4: Copy NTP files from the host machine to their identical directories within fs.

Copy these files from the host machine to filesystem/fs in the project build directory. In the case of etc/ntp, all files within the directory should be copied over.
$ cp /etc/rc.d/init.d/ntpd etc/rc.d/init.d/ntpd
$ cp /etc/ntp.conf etc/ntp.conf
$ cp /var/lib/ntp/drift var/lib/ntp/drift
$ cp /etc/ntp/* etc/ntp/
Step 5:

An example ntp.conf is below:

# IP address of host (NTP server)
server 192.168.10.1
driftfile /var/lib/ntp/drift
server 127.127.1.1
fudge 127.127.1.1 stratum 10
Step 6: Within the target's etc/ntp directory, change the hostnames in two files.

Within etc/ntp, edit the ntpservers and step-tickers files, replacing their hostnames with the host's (NTP server's) hostname.
Step 7: Copy the timezone information to the target's etc/localtime file.
The target's timezone information must be copied to the target's etc/localtime file. As an example, if the timezone is Edmonton, Alberta, Canada, then the /usr/share/zoneinfo/America/Edmonton file on the host must be copied to the target's etc/localtime.
Step 8: Copy the entire fs directory to your custom template.
For example, you might make a directory /home/user/templates to hold your templates, and name this template networking. So you would now create a directory /home/user/templates/networking/fs:
$ cd prjbuildDir
$ cp -rp filesystem/fs /home/user/templates/networking/
Step 9: Rerun configure.
Step 10: Run make fs and check to make sure that your changes have propagated to export/dist.
For example, prjbuildDir/export/dist/etc/hosts should now contain the server and target addresses you added.
Step 11: Reboot the target.
Step 12:
Step 13:
Step 14:
The new file system, with more robust networking and the Network Time Protocol daemon, will be propagated from the custom template every time this run-time system is built with the template.
9
Configuring the Kernel
9.1 Introduction 97 9.2 Initial Creation of the Kernel Configuration File 97 9.3 Kernel Configuration Fragment Auditing 99 9.4 Reconfiguring and Rebuilding the Kernel 103
9.1 Introduction
You can reconfigure kernels using standard Linux command-line or GUI tools. You may want to start by making modifications to an existing BSP's configuration through simple additions to your build area first, and then move them to a custom template or layer when they prove successful. This chapter provides examples of how to perform kernel configurations in these ways. This chapter's examples are based on the SBC8560 board built with the standard kernel and Glibc file system in the project build directory sbc85x0/.
A kernel configuration is generated any time the linux.config rule is processed, for example when you perform a make linux.reconfig. The kernel configuration is not performed by the configure command, so you do not have to reconfigure your project just to update the kernel configuration when changing a fragment.

When you configure your project and make an initial selection of a platform and a BSP (board), you implicitly choose a subset of the various layers and feature templates that are available to be included in your build. Config files that are found in these layers and templates are collected together, and this concatenation of fragments forms the initial input to the Linux Kernel Configurator (LKC). The kernel config fragments are collected, starting from the generic and proceeding to the specific, to assemble platform- and board-specific kernel configuration options into a format that is suitable for the LKC.

This produces a link to an intermediate version of the .config file in prjbuildDir, with the name board_kernel-config-version. The intermediate version of the file is a flat file created by a concatenation of all the fragments that are used. It has a preamble listing all the fragments that were used and the order in which they were used. At this stage, only basic sanity checks on the config fragment inputs have been performed, for example filtering of duplicate settings.

LKC evaluates the input and applies dependency information (contained in Kconfig files in your prjbuildDir/build/linux/ subdirectories). The LKC then creates the kernel configuration file, .config, in prjbuildDir/linux-version-standard/ (for our example), which is the list of options used to build the kernel. The last instance of an option that is found overrides any earlier instance: duplicates are filtered out and the last instance of a parameter is its only instance in the top-level kernel configuration file.
Note that the top-level kernel configuration file is transient: any manual changes to it are ignored. Instead of editing this large file, you can create a kernel config fragment file in your project build directory. The kernel config fragment in your project build directory is processed last, so any options you set in it will override any settings of the same options in any other kernel configuration file fragments.
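A hedged sketch of such a fragment follows; the fragment file name is illustrative, a temporary directory stands in for the project build directory, and the options shown are ordinary kernel.org options used only as examples:

```shell
PRJ=$(mktemp -d)                          # stands in for prjbuildDir
cat > "$PRJ/my-kernel-frag.cfg" <<'EOF'
# Processed last, so these settings override the same options
# set in earlier fragments.
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
EOF

# The kernel configuration would then be regenerated in the real
# project build directory with:
#   make linux.reconfig
```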
NOTE: Specifying a particular setting in a config fragment does not automatically guarantee that the option appears in the final .config file. Wind River still uses the built-in part of the default kernel.org configuration (usually referred to as the LKC) to process the fragments and produce the final .config, and the final dependency check may discard or add options as required, for example, due to dependency reasons.

The config file that is used to generate a new kernel is prjbuildDir/linux-version-*/.config. It is created from default kernel.org option settings, plus the option settings from all the kernel configuration files in the distribution and build environments.
The intent of this on-the-fly audit of the fragment content and the generated .config file is to warn you when it looks like a BSP may be doing things it should not be doing. For example, filtering is performed to identify duplicate entries, and warnings are issued when options appear to be incorrect because they are unknown or are ignored for dependency reasons. Because there are many kernel options available and many kernel configuration fragments, the auditing mechanism provides summary output to the screen and collects detailed information relevant to kernel configuration fragment processing. The warnings are captured in files in the audit data directory prjbuildDir/build/linux/wrs/cfg/build_name/*. The following section provides more detail on auditing.
Kernel options are all sourced from Kconfig files placed in various directories of the kernel tree that correspond to the locations of the code that they enable or disable. This logical grouping has the effect of making each Kconfig's content either primarily hardware-specific (for example, options to enable specific drivers) or non-hardware-specific (for example, options to choose which file systems are available). Auditing is implemented by the two scripts generate_cfg and kconf_check located in layers/wrll-linux-version/tools/kern-tools/. The auditing takes place in two steps, since the input first needs to be collated and sanitized, and then the final output in the .config file from the LKC must be compared to the original input in order to produce warnings about dropped or changed settings. These scripts are responsible for assembling the fragments, filtering out duplicates, and auditing them for hardware and non-hardware content. The files of interest under the build/linux/wrs/ directory include the following:
hardware.cfg: Items listed here are explicitly considered hardware items, regardless of which Kconfig file they are found in.

hardware.kcf: The list of hardware Kconfig files.

non-hardware.cfg: Items listed here are explicitly considered non-hardware items, regardless of which Kconfig file they are found in.

non-hardware.kcf: The list of non-hardware Kconfig files.
By the end of this process, Wind River has sorted all the existing Kconfig files into hardware and non-hardware, and this forms the basis of the audit criteria.
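The classification idea can be illustrated with a small shell sketch. The file contents below are invented, and the real work is done by the generate_cfg and kconf_check scripts; this only shows the principle of checking a BSP's options against the explicit hardware list:

```shell
# hardware.cfg: options explicitly forced into the hardware category.
cat > hardware.cfg <<'EOF'
CONFIG_E1000
EOF
# specified.cfg: options a hypothetical BSP sets.
cat > specified.cfg <<'EOF'
CONFIG_E1000=y
CONFIG_TMPFS=y
EOF

# Classify each specified option against the explicit hardware list.
while IFS='=' read -r opt _; do
  if grep -qx "$opt" hardware.cfg; then
    echo "$opt: hardware"
  else
    echo "$opt: not in the explicit hardware list"
  fi
done < specified.cfg > classified.txt
cat classified.txt
```

An option such as CONFIG_TMPFS that falls outside the hardware lists is the kind of item the audit reports as "possibly non-hardware related" when it appears in a BSP fragment.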
Audit Reporting
The audit takes place at the linux configuration step and reports on the following:
Items in the BSP that don't look like they are really hardware related. Having a non-hardware item in a BSP is not treated as an error, since there may be applications where something like a memory-constrained BSP wants to turn off certain non-hardware items for size reasons alone.
Items in one fragment that are re-specified in another fragment, or even later in the same fragment. Again this is not treated as an error, since there are several use cases where an override is desired (for example, the customer-supplied fragment described below). Normally there should be no need to do this, but if someone does, the usual rule applies: the last set value takes precedence.
Hardware-related items that were requested in the BSP fragment(s) but not ultimately present in the final .config file. Items like this are of the highest concern. These items output a warning as well as a brief pause in display output to enhance visibility.
Invalid items that don't match any known available option. This is for any CONFIG_OPTION item in a fragment that is not actually found in any of the currently available Kconfig files. Usually this reflects a use of data from an older kernel configuration where an option has been replaced, renamed, or removed.
Example 9-1 provides comments on some sample kernel configuration auditing screen output.
Example 9-1 Commented Kernel Fragment Auditing Output

$ make -C build linux.reconfig
make: Entering directory `prjbuildDir/build'
There were 2 instances of config options redefined within a single fragment.
The full list can be found in your workspace at:
build/linux/wrs/cfg/build_name/fragment_duplicates.txt
Duplicate instances of options, whether across fragments or in the same fragment, will generate a warning. You can view the indicated fragment_duplicates.txt file to see the specific options.
There were 1 kernel config options redefined during processing this BSP. These config options are defined in more than one config fragment. The full list can be found in your workspace at: build/linux/wrs/cfg/build_name/redefinition.txt
This is much like the previous warning, only this time the duplicates that were detected occurred in different fragments. Whenever duplicate options are encountered, only the last instance is included in the final configuration file.
This BSP sets 3 invalid/obsolete kernel options. These config options are not offered anywhere within this kernel. The full list can be found in your workspace at: build/linux/wrs/cfg/build_name/invalid.cfg
You should look at the indicated invalid.cfg file to determine which options are not recognized. It may be that you are using obsolete options. A misspelling of an option name may also trigger this warning. (A misspelling that is a syntax error, for example COFNIG_OPTION=y, is ignored and unreported.)
This BSP sets 11 kernel options that are possibly non-hardware related. The full list can be found in your workspace at: build/linux/wrs/cfg/build_name/specified_non_hdw.cfg
The non-hardware options are meant to be in the domain of the platform, not the BSP. The provided BSP options are found to be non-hardware-related and so they are reported here.
WARNING: There were 1 hardware options requested that do not have a corresponding value present in the final ".config" file. This probably means you aren't getting the config you wanted. The full list can be found in your workspace at: build/linux/wrs/cfg/build_name/mismatch.cfg
View the indicated mismatch.cfg file for the option(s) causing this message. An example of a mismatch is a case where you have requested CONFIG_OPTION=y and you get the message Actual value set: "". In this case the option is not used because it is not valid for the input you provided. Another example is a case where you have an option CONFIG_OPTION=m, but you have not enabled modules. (In this case, LKC would provide CONFIG_OPTION=y, assuming that was a valid option.)
Contents of the Audit Data Directory
Audit data is stored in the prjbuildDir/build/linux/wrs/cfg/build_name/ directory. The contents of this directory are refreshed for every linux.config or linux.reconfig. Table 9-1 describes the contents of the files that appear in the audit data directory.
Table 9-1 Description of Files in Audit Data Directory

all.kcf: Alphabetical listing of all Kconfig files found in this kernel.

known_current.kcf: List of previously categorized Kconfig files present in the patched linux tree about to be used for compilation.

known.kcf: List of Kconfig files for which the build system already has information on whether they are classified as hardware or not.

non-hardware.kcf: List of Kconfig files known to contain non-hardware related items.

hardware.kcf: Kconfig files that are to be treated as containing hardware options.

unknown.kcf: List of Kconfig files present in the about-to-be-used linux tree that are not known by the build system to be either hardware or non-hardware.

all.cfg: Alphabetical listing of all the CONFIG_ items found in this kernel.

always_hardware.cfg: CONFIG_ items that are to be treated as always hardware, regardless of what Kconfig file they are in.

always_nonhardware.cfg: As above, but non-hardware.

avail_hardware.cfg: All the options from all the hardware-related Kconfig files, less those options found in always_nonhardware.cfg.

specified.cfg: List of the CONFIG_ items specified by the BSP.

specified_hdw.cfg: List of the CONFIG_ items specified by the BSP which are hardware (ideally this should be almost all of them).

specified_non_hdw.cfg: List of the CONFIG_ items specified by the BSP which are non-hardware (ideally this should be almost always empty).

fragment_duplicates.txt: Settings which are specified multiple times within a single fragment.

redefinition.txt: List of options that are set in one fragment and then re-set in another later on.

invalid.cfg: Configuration options specified in the BSP that don't match any known valid option, that is, the item isn't in any Kconfig file.

BSP-kernel_type-kernel_version: A concatenation of all the file fragments. The file of the same name in prjbuildDir is a symlink to this file.

config.log: The output of the LKC processing as it creates the final .config file.
The following example uses the console tool make linux.menuconfig in the prjbuildDir/build/ directory to reconfigure the SBC8560 kernel. You may also use the X window system tool make linux.xconfig or, if you are using Workbench, you can use the advanced features available with the Kernel Configuration tool in your platform project. Within a terminal window in sbc85x0/build, run make linux.menuconfig and navigate to the General Setup submenu as shown in Figure 9-1. Reconfigure the kernel to increase the printk ring buffer (LOG_BUF_SHIFT) to 16 by selecting the entry and then entering the value. Save your configuration, and exit.
Figure 9-1
To build the new kernel, within sbc85x0/build run make linux.rebuild. Do not run make dep or make kernelimage.
You add kernel configuration file fragments in *.cfg files, which may contain any number of kernel configuration options. You specify and control one or more *.cfg files in a .scc file.
NOTE: In previous versions, the config files were named knl-base.cfg or knl-kernel_version.cfg, but you can now assign arbitrary names with the use of SCC files (see 13.5 Kernel Patching with scc, p.180).
For example, suppose you wanted to disable KGDB options for certain product configurations. You can create a template that contains the necessary files and then include the template when you configure the project. In the following example, a custom template is located at /home/user/templates/features/no-kgdb. The features/no-kgdb template contains the standard linux subdirectory for kernel modifications, and contains two files, no-kgdb.cfg and no-kgdb.scc. The contents of the files are as follows:
no-kgdb.cfg:
# CONFIG_KGDB is not set
no-kgdb.scc:
kconf non-hardware no-kgdb.cfg
The contents of no-kgdb.scc say to include the kernel configuration fragment file no-kgdb.cfg, which sets non-hardware options. (The scc file is discussed in more detail in 13.5 Kernel Patching with scc, p.180.)
To configure your project with these options, add the following to your configure command line:
--with-template-dir=/home/user/templates --with-template=features/no-kgdb
After configure is run, you can see the following at the end of the prjbuildDir/templates file:
...
default
default
features/no-kgdb
To configure your kernel, run make -C build linux.config (or linux.reconfig). In this example, the KGDB options will be turned off even if the configuration otherwise turns them on, because your custom template, features/no-kgdb, is processed last. For example, your default configuration may include the features/kgdb template which enables these options, but your template will disable them. You can see the end result of your kernel configuration in your kernel config files. There is a link to a board-kernel-config-version file in your prjbuildDir, for example, common_pc-standard-config-version, that contains the settings of the kernel configuration options found during the configuration process.
NOTE: The syntax shown:

# CONFIG_KGDB is not set

is not a comment, despite the initial # symbol. The line as shown is the correct LKC syntax for turning off an option. Do not use, for example, CONFIG_KGDB=no, which is incorrect. Also note that a space is required between the # and the C.
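Because a CONFIG_OPTION=no line is silently wrong, a quick grep can catch it in a fragment before you build. This check is my own sketch, not part of the build system, and the fragment contents are invented:

```shell
# An invented fragment containing one correct "off" line and one mistake.
cat > frag.cfg <<'EOF'
# CONFIG_KGDB is not set
CONFIG_FOO=no
EOF

# LKC does not recognize CONFIG_*=no as "off", so flag any such lines.
suspect=$(grep -c '=no$' frag.cfg)
echo "suspect lines: $suspect"
```

Here the grep reports one suspect line (CONFIG_FOO=no); the "# CONFIG_KGDB is not set" line is the correct form and is not flagged.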
Another way to make changes to the kernel configuration is to add kernel configuration fragments in your prjbuildDir. You must then create an scc file in the project build directory to source them. For example, create a file in prjbuildDir called log_buf.cfg that holds the option or options you want to include in your kernel configuration. This file will be the last kernel configuration fragment processed, even after the config fragments in any custom templates you add. For example, to modify the same option you modified in the previous make linux.menuconfig example, you could do the following:

1. Create a prjbuildDir/log_buf.cfg file with the following contents:
CONFIG_LOG_BUF_SHIFT=17
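The scc file that sources this fragment (required, as noted above) could look like the following sketch, which follows the pattern of the no-kgdb example earlier; the file name log_buf.scc is my invention:

```
# prjbuildDir/log_buf.scc (hypothetical file name)
kconf non-hardware log_buf.cfg
```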
You can verify that the option has been set by finding the entry for it in the informational wrs_sbc85x0-standard-config-version file, for example:
$ grep CONFIG_LOG_BUF wrs_sbc85x0-standard-config-version CONFIG_LOG_BUF_SHIFT=17
10
Adding Packages
10.1 Introduction 107 10.2 Before Adding a Package 108 10.3 Adding a Package: rpmbuild with a Source RPM 109 10.4 Adding a Package: rpmbuild with a Classic Package 120 10.5 Adding a Package: the Classic Method 121 10.6 Removing a Package 121 10.7 Adding a Package to a Running Target 122
10.1 Introduction
Although Wind River Linux comes with a full suite of standard, small-footprint, and Carrier Grade Linux packages, you may add or remove packages as the need arises. There are two ways to add a package:
The rpmbuild method: This method uses rpmbuild and a spec file, in concert with a simple makefile and the Wind River Linux build system, to drive the cross-compilation and installation of package source code. Preferably, the source code will come packaged as a source RPM file (also called an SRPM file, ending in the suffix .src.rpm). SRPMs from other Linux distributions such as Fedora typically come complete with pre-written spec files, and often with distribution-specific patches. If the source code is not packaged as an SRPM, you can use ordinary source code, but in this case you will have to write your own spec file.
The classic method: This method uses a makefile and the Wind River Linux build system to drive the cross-compilation and installation of package source code. The source code typically comes packaged as a tar archive file.
The classic method often requires writing elaborate makefiles. The rpmbuild method uses simplified and largely boilerplate makefiles in combination with spec files. This results in easier and faster package integration, and easier package maintenance.
NOTE: This chapter gives general directions for both rpmbuild and classic build methods. For examples of adding specific packages using different methods, refer to 22. Examples of Adding Packages.
Following Wind River Linux design practice, you should use the local custom layer directories packages and dist within the project build directory during the development stage of adding a package. (See 6.4.1 Workflow and the Local Custom Layer, p.74 for more on the local custom layer.) Once they are set up, you may move the packages and files to a more permanent custom layer (see 6.4 Creating Custom Layers, p.73).
NOTE: The Wind River Linux build system shares basic similarities with RPM package management for systems that are not designed for embedded cross-development, so familiarity with those procedures is helpful in understanding this chapter. For detailed information on rpmbuild, spec files, source RPMs and other concepts discussed in this chapter see, for example, RPM Guide, available at http://fedora.redhat.com/docs/drafts/rpm-guide-en/index.html, and Maximum RPM at http://www.rpm.org/max-rpm/.
In the following discussion, as throughout this guide, installDir refers to /home/user/WindRiver and prjbuildDir refers to your project build directory.
Before downloading a new SRPM package, inspect its maintainer's web page, or any other source you can find, to make sure it will build on your host, and cross-build if necessary for your target. Determine your package's dependencies, that is, which packages are required by your package for it to build and function properly. Check to see if all dependencies are present within the target board's package list by checking the contents of the prjbuildDir/pkglist file. If a dependency is not included in the pkglist file, check to see if it is a standard Wind River Linux package by inspecting installDir/wrlinux-3.0/layers/wrll-wrlinux/packages/. If not, it must be added. All packages that are dependencies must be present, and listed in the pkglist file.
As of Wind River Linux 3.0, a more simplified version of the procedure described in this section is supported. The older way of adding an SRPM (see 10.3.2 Older Method of Adding SRPMs, p.114 for details) still works, but the new way saves some steps by copying the spec file from the SRPM and editing it directly, rather than creating an integration patch to patch it within the SRPM. The basic steps of the simplified procedure are:

1. Download the SRPM into the packages/ directory. (This procedure assumes you are doing this in your prjbuildDir or a custom layer.)

2. Place a Makefile in dist/package_name/. You can copy an existing makefile from a package in installDir/wrlinux-3.0/layers/wrll-wrlinux/dist/package_name.

3. Edit the makefile as described in Necessary Makefile Contents, p.112.

4. Unpack the SRPM to extract the spec file, and put the spec file in the dist/package_name/ directory.

5. Edit the spec file as described in Necessary spec File Changes, p.113, including references to any patches you may be adding.

6. If you are adding custom patches, place them in dist/package_name/patches and edit the spec file to reference them.
You do not need to make a patches.list file or integration patch, which were required in previous versions of Wind River Linux.
NOTE: With this new way of evaluating spec files in any layer, you can override the spec files of packages in the installation, including replacing or augmenting patches. The spec file in a dist/package directory of the same name overrides the spec file in a lower-level layer (for example, the installed SRPMs). This works in the same way as using templates of the same name to override lower-level templates, as described in Naming Your Templates, p.68.
You must set up the infrastructure that will include the necessary files and directories to build the package each time. Perform the following sequence of operations for each new third-party package you want to add:
Step 1: Create and configure your project build directory.
The file system you use determines which packages are included by default. You can look in pkglist in your project build directory to see which packages are configured into your project.
Step 2: Get the package you want to add.
Place the package you want to add in your local packages directory:
$ cp package packages/
This creates the directory structure to hold your Makefile and spec file. If you will be adding any custom patches to patch the SRPM, create a patches subdirectory as well:
$ mkdir dist/package/patches
Step 4: Create the package Makefile.
You need to create a new makefile or modify an existing one and place it in your newly-created infrastructure:
$ edit dist/package/Makefile
Use your preferred editor to create and edit the makefile. You can start with another Makefile you have copied from one of the package directories in installDir/wrlinux-3.0/layers/wrll-wrlinux/dist/, or use your editor to create one with the contents as described in Necessary Makefile Contents, p.112. With SRPMs, these are small makefiles, typically 10 lines or so.
Step 5: Add the package to the package list and to the build Makefiles.
$ make -C build pkgname.addpkg

This adds pkgname to the pkglist file, adds any known dependencies of pkgname to pkglist, and regenerates the prjbuildDir/build/Makefile.* files to include pkgname.
Step 6: Unpack the package to extract the spec file.
Unpack the package and copy the spec file to your dist/package/ directory:
$ make -C build package.unpack $ cp build/package-version/SPECS/package.spec dist/package/
Step 7: Edit the spec file.
Edit the prjbuildDir/dist/package/package.spec file as described in Necessary spec File Changes, p.113.
Step 8: Build the package.

$ make -C build package
Resolve any errors with the dist/package/Makefile or dist/package/package.spec files until the build succeeds.
Step 9: OptionalAdd patches.
If you are custom patching the package, add your patches to the dist/package/patches directory and reference them in your dist/package/package.spec file. Repeat the package build until it builds correctly with your custom patches. If you get errors that require other packages to be built, add and build those packages first. When your package builds without error, you can install it in the file system as described in Install the Package in the File System, p.111.
In some cases, you will find that your package needs one or more other packages when you try to include it in the file system. You will have to add those packages as well. When you are able to build the file system without error, your new package is included in the compressed file system (and in prjbuildDir/export/dist).
A convenient way to preserve the additions and other modifications you make to a platform project is to create a layer that captures the changes. You can then back up that layer somewhere and use it at any time, in combination with your original configure command, to create your new build environment. A simple way to create a layer that preserves the changes you make when you add packages is to use the make export-layer command in your project build directory:
$ cd prjbuildDir $ make export-layer
This creates a layer directory in prjbuildDir/export/export-layer/projectname.date and also an archive file of the layer prjbuildDir/export/export-layer/projectname.date.tar. Relocate the layer as desired and include it with your configure command (--with-layer=path_to_layer) to recreate your current configuration. Be sure that your added package(s) are included in a pkglist.add file in the layer, for example in templates/default/pkglist.add. For examples of adding SRPMs, refer to 22. Examples of Adding Packages.
The makefile for a package using the rpmbuild method is simple and largely boilerplate. You can usually copy a Makefile from an SRPM package in the distribution. In such cases you usually only need to change the package name, version number, and MD5 sum.
pkg_RPM_DEFAULT
Lists all of the produced binary packages that should be installed on the target file system (usually excludes development packages).

pkg_RPM_ALL
Lists all of the packages produced (does not inherit from any other list). This is used as a validation that the package is being produced properly. If this (and pkg_RPM_IGNORE) do not match what RPM tells the build system will be produced, a warning message is generated telling you that you should update your makefile.

pkg_TYPE=
For an SRPM package, this must be set to SRPM. For ordinary compressed source files, it must be set to spec.

pkg_VERSION=
The version number of the package.

pkg_UPSTREAM=
Necessary if the produced RPM name is different from pkg_NAME, or if more than one binary is produced.

pkg_DEPENDS=
Lists all of the development packages. These plus the pkg_RPM_DEFAULT list are installed into the sysroot for development purposes. This is only required if the package produces development RPMs, that is, binary RPMs that contain information that must be installed into the sysroot for other programs to build properly.

NOTE: The sysroot is populated by installing both pkg_RPM_DEFAULT and pkg_RPM_DEVEL.

pkg_RPM_IGNORE
In a few cases, the RPM program reports that it will generate a package that it doesn't actually generate. This variable is a way to capture those situations.
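Taken together, these variables might appear in a minimal makefile sketch like the following. The package name, version, and produced RPM names are all invented; in practice you copy a real Makefile from a package directory in the distribution and adjust it:

```
# Hypothetical dist/mypkg/Makefile fragment (illustrative values only)
pkg_NAME = mypkg
pkg_TYPE = SRPM
pkg_VERSION = 1.0
pkg_RPM_DEFAULT = mypkg
pkg_RPM_DEVEL = mypkg-devel
pkg_RPM_ALL = mypkg mypkg-devel
```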
With source RPMs, you will need to modify a packagename.spec file. Few spec files will need every change listed below, because the lines that need replacement or deletion will not be present. Some spec files will only require change 1 and, for your records, changes 7 and 8. (You can also use Lua scripting as described in Lua Scripting in Spec Files, p.114.)

1. Immediately after every %build and %install section header, add the RPM macro %configure_target.

2. Remove any install scripts (scriptlets), such as: %postun, %preun, %pretrans, %posttrans, %pre, %post, %triggerpostrun, %triggerrun, %triggerin, %trigger, and %verifyscript.

3. If chkconfig is used, replace it with the macro %{_chkconfig_sh initscript} at the end of %install.

4. If the package uses %ifarch, replace it with %if_arch. (%ifarch works on a CPU basis, but %if_arch works on a CPU family basis.)

5. Inspect any BuildRequires: and BuildPreReq: lines. If packages not supplied by Wind River Linux are listed, comment the lines out with #.

6. If the package's configure can't use the system-wide config.cache, override it by adding %define config_cache config_cache immediately after:

%build
%configure_target

7. If you desire, add a change indicator (such as -WR) to the Release line.

8. If you desire, add an entry to the changelog.
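As an illustration of the first change, the %build and %install sections of a hypothetical spec file would gain the macro as follows. The surrounding build and install commands are invented placeholders:

```
%build
%configure_target
./configure --prefix=/usr
make

%install
%configure_target
make DESTDIR=$RPM_BUILD_ROOT install
```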
Lua is a scripting language with an interpreter built into rpm. This allows you to write %pre and %post Lua scripts to be run at pre- and post-installation. (Note that bash scripts are not supported at installation time.) The wrs library is included in the Lua interpreter from Wind River. It consists of three functions: wrs.groupadd, wrs.useradd, and wrs.chkconfig. The following provides an example of a post-install section that creates a group and a user, both named named.
%post -p <lua> wrs.groupadd('-g 25 named') wrs.useradd('-c "Named" -u 25 -g named -s /sbin/nologin -r -d /var/named named')
Each function takes one argument, which is the string you would enter at the shell prompt if you were running the Linux command of the same name. Spec file macros are expanded within the string, so the following works as expected.
%pre -p <lua> wrs.groupadd('-g %{uid} -r %{gname}') wrs.useradd('-u %{uid} -r -s /sbin/nologin -d /var/lib/heartbeat/cores/hacluster -M -c "heartbeat user" -g %{gname} %{uname}')
As can be seen from the path names, when the Lua script executes, the "root" directory is the root of the target file system. The base, table, io, string, debug, loadlib, posix, rex, and rpm libraries are also built into the Lua interpreter. Their use, and general Lua programming, is not covered here.
Additional Information
Figure 10-1
Create Patch
Create Patch With Quilt or Manually
You must set up the infrastructure that will include your new patch and the necessary files and directories to build it each time. Perform the following sequence of operations for each new third-party patch you want to add:
Step 1: Create and configure your project build directory.
The file system you use determines which packages are included by default. You can look in pkglist in your project build directory to see which packages are configured into your project.
Step 2: Get the package you want to add.
Place the package you want to add in your local packages directory:
$ cp package packages/
This creates the directory structure to hold your Makefile and patches.
Step 4: Create the package Makefile.
You need to create a new makefile or modify an existing one and place it in your newly-created infrastructure:
$ edit dist/package/Makefile
Use your preferred editor to create and edit the makefile. You can start with another Makefile you have copied from one of the package directories in installDir/wrlinux-3.0/layers/wrll-wrlinux/dist/, or use your editor to create one with the contents as described in Necessary Makefile Contents, p.112. With SRPMs, these are small makefiles, typically 10 lines or so.
Step 5: Add the package to the package list.
$ make -C build pkgname.addpkg

This adds pkgname to the pkglist file, and also adds any known dependencies of pkgname to pkglist. At this point, you can create a patch so that the package will be properly built each time you build the file system.
You can use quilt to help you create and manage your patches, or you can create them manually by diffing your changes. Both methods are described here and it is a matter of personal preference which one you use. Use cases in 22.2 Adding SRPM Packages, p.256 present both methods. Figure 10-2 summarizes the two methods and the following sections provide details.
Figure 10-2
The following steps describe how to use quilt to create a patch that integrates the SRPM package into the cross-build system.
Step 1: Initialize your quilt environment.
You can initialize the following environment variables as shown before starting to use quilt, or just add them to your shell startup file, for example .bashrc or .cshrc.
export QUILT_PATCHES=wrlinux_quilt_patches
export QUILT_PC=.pc
export WRLINUX_USE_QUILT=yes
export PATH=$PATH:prjbuildDir/host-cross/bin
Step 2:
Unpack the package source and necessary files for creating the patch in prjbuildDir/build:
$ cd build $ make package.patch
Step 3:
You now have a subdirectory named with the full package name including version number and suffix. cd into it and start a new patch:
$ cd full_package_name $ quilt new package-wr-integration.patch
Step 4:
Use quilt to make the changes to the spec file that will be the changes that the patch applies:
$ quilt edit SPECS/package.spec
At a minimum, you must add the following line immediately after the %build and %install lines:
%configure_target
Step 5: Add the name of the patch to the patches.list file in dist/package.

For example:
$ cat ../../dist/thttpd/patches.list thttpd-wr-integration.patch
You can now build the package with the new patch as described in Build with the Patch, p.119.
Create the Patch Manually
An alternative to using quilt to create the patch is described in the following steps.
Step 1: Unpack the package source.

$ cd prjbuildDir/build
$ make package.unpack
Step 2:
Save your original package source directory so that you can perform a diff against it:
$ mv fullPackageName fullPackageName.ori
Step 3:
Unpack the source again, this time to get the source that you will modify:
$ make package.unpack
Step 4:
Make the changes to the spec file that will be the changes that the patch applies:
$ edit fullPackageName/SPECS/package.spec
At a minimum, you must add the following line immediately after the %build and %install lines:
%configure_target
Step 5: Create the patch with diff.

Perform a diff command, with the options -Nur, to create a patch. Redirect the diff output to create the patch in the patches directory:
$ diff -Nur fullPackageName.ori/SPECS/package.spec \ fullPackageName/SPECS/package.spec > \ ../dist/package/patches/package-wr-integration.patch
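The diff -Nur step can be tried out in isolation with a throwaway spec file. Everything below is invented scaffolding to show the mechanics; it is not a real package:

```shell
# Original and modified copies of a dummy spec file.
mkdir -p demo.ori/SPECS demo/SPECS
printf '%%build\nmake\n' > demo.ori/SPECS/demo.spec
printf '%%build\n%%configure_target\nmake\n' > demo/SPECS/demo.spec

# diff exits non-zero when the files differ, hence the || true.
diff -Nur demo.ori/SPECS/demo.spec demo/SPECS/demo.spec \
  > demo-wr-integration.patch || true
grep '^+%configure_target' demo-wr-integration.patch
```

The resulting unified diff contains the added %configure_target line prefixed with +, which is exactly the change the integration patch is meant to carry.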
Add the name of the patch to the patches.list file in dist/package. For example:
$ cat ../../dist/thttpd/patches.list thttpd-wr-integration.patch
You can now build the package with the new patch as described next.
You can now build the package and it will include the patch each time:
$ cd prjbuildDir/build
$ make package.distclean
$ make package
If you get errors, you need to repeat the patching cycle. When you edit the spec file, be sure to follow the directions in Necessary spec File Changes, p.113.
Before downloading a new package, inspect its maintainer's web page, or any other source you can find, to make sure it will build on your host, and cross-build if necessary for your target. Then ascertain its dependencies, and check if they are present within the target board's package list by checking the contents of the pkglist file. If a dependency is not included in the pkglist file, check to see if it is a standard Wind River Linux package by inspecting installDir/wrlinux-3.0/layers/wrll-wrlinux/packages/. If not, it must be added. All dependencies must be present and in the pkglist file.
Follow this sequence to add a classic package with the rpmbuild method:
1. Put the compressed source file in prjbuildDir/packages.
2. Create the Makefile and patch directories within prjbuildDir/dist/packagename.
3. Create the package's Makefile and enter the MD5 checksum.
4. Add the package to the pkglist file with make -C build pkgname.addpkg, and remove any generated makefiles.
5. Write the spec file. (See Necessary spec File Changes, p.113 for details.)
6. Try to build the package, testing for proper compilation, and adding makefile and source patches as needed.
7. Add the package to the file system.
8. Add the package's RPM to the development environment.
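Step 3 calls for the package's MD5 checksum, which md5sum generates. A sketch with a dummy archive (the file name is an illustrative placeholder, not a real package):

```shell
# Compute the MD5 checksum to record in the package's Makefile.
tmp=$(mktemp -d)
echo "placeholder for the real source archive" > "$tmp/package-1.0.tar.gz"
md5sum "$tmp/package-1.0.tar.gz" | awk '{print $1}'   # a 32-hex-digit checksum
```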
This method, in general terms, applies equally to any package you wish to add or upgrade using the rpmbuild method with a standard source archive file. For examples of adding classic packages with the rpmbuild method, refer to 10. Adding Packages.
Before downloading a new package, inspect its maintainer's web page, or any other source you can find, to make sure it will build on your host, and cross-build if necessary for your target. Then ascertain its dependencies, and check whether they are present in the target board's package list by checking the contents of the pkglist file. If a dependency is not included in the pkglist file, check to see if it is a standard Wind River Linux package by inspecting installDir/wrlinux-3.0/layers/wrll-wrlinux/packages/. If not, it must be added. All dependencies must be present and in the pkglist file.
Follow this sequence to add a classic package with the classic method:
1. Install the compressed source package into packages.
2. Create the makefile and patch directories within dist/packagename.
3. Create the package's Makefile and MD5 checksum.
4. Add the package to the pkglist file with make -C build pkgname.addpkg, and remove any generated makefiles.
5. Unpack and build the package, testing for proper compilation, adding patches as needed, and including the name of each patch within the patches.list file, in dist/packagename.
6. Add the package to the file system.
7. Add the package's RPM to the development environment.
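Step 5's bookkeeping is simply one patch name per line in dist/packagename/patches.list. A sketch mirroring the earlier thttpd example (all paths here are illustrative temp-directory stand-ins):

```shell
# Record each patch, one name per line, in the package's patches.list.
d=$(mktemp -d)
mkdir -p "$d/dist/thttpd/patches"
: > "$d/dist/thttpd/patches/thttpd-wr-integration.patch"
echo "thttpd-wr-integration.patch" >> "$d/dist/thttpd/patches.list"
cat "$d/dist/thttpd/patches.list"   # prints: thttpd-wr-integration.patch
```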
This method, in general terms, applies equally to any package you wish to add or upgrade using the classic method. For examples of adding classic packages with the classic method, refer to 22. Examples of Adding Packages, p.255.
Refer to 22.6 Adding an RPM Package to a Running Target, p.270 for an example of adding an RPM.
11
Configuring PREEMPT_RT
11.1 Introduction 123 11.2 Enabling Real Time 123 11.3 Application Programming Considerations for PREEMPT_RT 124 11.4 Configuring the Preemption Level 124 11.5 Interrupt Service Routine (ISR) Payload Execution Context 126 11.6 Run-time Scheduler Debug Instrumentation 128
11.1 Introduction
Wind River Linux provides a conditional real-time kernel profile, preempt_rt, for certain board and file system combinations. The RT patch series is currently maintained by Steven Rostedt (see http://rt.wiki.kernel.org/index.php/Main_Page). The default scheduler for preempt_rt is CFS, which is described in F. Control Groups (cgroups).
NOTE: Conditional real-time support is not available for all boards. For further information on validated boards, refer to the BSP-kernel-filesystem matrix available on Wind River Online Support. Wind River Linux also supports guaranteed real-time with the Real-Time Core product. For details on Real-Time Core, contact your Wind River service representative.
For example, to configure a common PC board with a standard file system and conditional real-time, enter:
$ configure --enable-board=common_pc \
    --enable-kernel=preempt_rt --enable-rootfs=glibc_std
Details on each option follow, presented in the order of least to most preemption.
Figure 11-1
The text kernel configuration entry is PREEMPT_NONE. This is the traditional Linux preemption model geared towards throughput. It will provide reasonable overall response latencies but there are no guarantees and occasional long delays are possible. This configuration will maximize the raw processing throughput of the kernel irrespective of scheduling latencies.
The text configuration entry is PREEMPT_VOLUNTARY. This configuration reduces the latency of the kernel by adding more explicit preemption points to the kernel code. The new preemption points break long non-preemptive kernel paths, minimizing rescheduling latency and providing faster application reactions, at the cost of slightly lower throughput. This offers faster reaction to interactive events by enabling a low priority process to voluntarily preempt itself during a system call. Applications run more smoothly even when the system is under load. A desktop system is a typical candidate for this configuration.
This configuration applies to embedded systems with latency requirements in the milliseconds range.
The text configuration entry is PREEMPT_DESKTOP. This configuration further reduces kernel latency by allowing all kernel code that is not executing in a critical section to be preemptible. This offers immediate reaction to events. A low priority process can be preempted involuntarily even during syscall execution. This is similar to PREEMPT_VOLUNTARY, but allows preemption anywhere outside of a critical (locked) code path. Applications run more smoothly even when the system is under load, at the cost of slightly lower throughput and a slight run-time overhead to kernel code. (According to profiles when this mode is selected, even during kernel-intense workloads the system is in an immediately preemptible state more than 50% of the time.)
This configuration applies to time-response critical embedded systems, with guaranteed latency requirements of 100 usecs or lower.

The text configuration entry is PREEMPT_RT. This configuration further reduces the kernel latency by replacing virtually every kernel spinlock with preemptible (blocking) mutexes, and allowing all but the most critical kernel code to be involuntarily preemptible. The remaining low-level, non-preemptible code paths are short and have a deterministic latency of a few tens of microseconds, depending on the hardware. This enables applications to run smoothly irrespective of system load, at the cost of lower throughput and run-time overhead to kernel code. Testing indicates that with this mode selected, a system can be in an immediately preemptible state more than 95% of the time, even during kernel-intense workloads.
11.5 Interrupt Service Routine (ISR) Payload Execution Context
Selecting PREEMPT_RT (complete preemption) automatically enables these configuration options. The migration of ISR payloads to task scheduled context is required for the locking (mutex) model. For other preemption models these configuration options are elective, and allow additional control of offloading interrupt processing from exception context to preemptive task context.
NOTE: Selection of PREEMPT_HARDIRQS and PREEMPT_SOFTIRQS, either directly or with selection of PREEMPT_RT, requires device drivers and other sources of hardware interrupts to comply with the changed rules in effect for this operational mode, specifically through the use of standard and published interrupt API primitives. Attempts to control CPU interrupt state through other means may violate assumptions in the code, cause assertions to be generated, or cause the kernel to panic. For this reason, boards which are known to function in this model are listed in the kernel feature matrix available at Wind River Online Support.
Thread Softirqs
The text configuration entry is: PREEMPT_SOFTIRQS. This option reduces the latency of the kernel by threading soft interrupts. This means that all softirqs will execute in the context of ksoftirqd. While this benefits latency, it can also reduce performance due to additional task context switching. The threading of softirqs can also be controlled using the /proc/sys/kernel/softirq_preemption run-time switch and the softirq-preempt=0/1 boot-time option.
NOTE: You will only see the *irq_preemption files if you have built a preempt-rt kernel but do not have CONFIG_PREEMPT_RT set.
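Because the softirq_preemption switch only exists under the conditions described in the note above, any script that flips it should probe for the file first. A guarded sketch (root privileges are assumed for the write; on kernels without the switch, the script simply reports that it is absent):

```shell
# Enable threaded softirqs only if this kernel exposes the run-time switch.
f=/proc/sys/kernel/softirq_preemption
if [ -w "$f" ]; then
    echo 1 > "$f"
    echo "softirq threading: $(cat "$f")"
else
    echo "softirq_preemption not available on this kernel"
fi
```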
Thread Hardirqs
The text configuration entry is: PREEMPT_HARDIRQS. This option reduces the latency of the kernel by threading hard irqs. This means that all (or selected) irqs will run in their own kernel thread context. While this helps latency, this feature can also reduce performance due to additional task context switching. The threading of hard irqs can also be controlled using the /proc/sys/kernel/hardirq_preemption run-time switch and the hardirq-preempt=0/1 boot-time option. Per-irq threading can be enabled and disabled using the /proc/irq/irqNumber/threaded run-time switch.
Preemptible RCU
The text configuration entry is: PREEMPT_RCU. This option reduces the latency of the kernel by making certain RCU sections preemptible. Normally RCU code is non-preemptible. If this option is selected, read-only RCU sections become preemptible. This helps latency, but may expose bugs due to now-naive assumptions about each RCU read-side critical section remaining on a given CPU through its execution.
The text configuration entry is CONFIG_DEBUG_PREEMPT. This option enables the kernel to detect preemption count underflows, track critical section entries, and emit debug assertions should an illegal sleep attempt occur. Unsafe use of smp_processor_id() is also detected.
The text configuration entry is CONFIG_WAKEUP_LATENCY_HIST. Logs all the wakeup latency timing to a histogram bucket, and factors out printk produced by wakeup latency timing.
The text configuration entry is CONFIG_PREEMPT_TRACER. Measures the time spent in preemption disabled critical sections. Time units are in microseconds. The default measurement method is a maximum search, which is disabled by default and can be started during run-time by entering:
# echo 1 > /proc/sys/kernel/trace_use_raw_cycles
# echo 1 > /proc/sys/kernel/mcount_enabled
# echo 1 > /proc/sys/kernel/trace_enabled
# echo 0 > /proc/sys/kernel/preempt_max_latency
Note that kernel size and overhead increase with this option enabled. This option and the IRQSOFF_TRACER timing option, below, can be used together or separately.
The text configuration entry is CONFIG_IRQSOFF_TRACER. Measures the time spent in interrupt disabled critical sections. Time units are in microseconds. The default measurement method is a maximum search, which is disabled by default and can be started during run-time using:
# echo 0 > /debugfs/tracing/tracing_max_latency
Note that kernel size and overhead increase with this option enabled. This option and the CONFIG_PREEMPT_TRACER option can be used together or separately. This is a default kernel option, and not specific to, or added by, the PREEMPT_RT patches.
When PREEMPT_RT is configured, most spinlocks and semaphores are converted into mutexes. There still exist true spin locks and older style semaphores. There are places in the kernel that pass the lock by pointer and typecast it back. This can circumvent the compiler conversions. This option will add a magic number to all converted locks and check to make sure the lock is appropriate for the function being used.
12
Configuring Scalable Features
12.1 Introduction 131 12.2 BusyBox 131 12.3 Static Link Option 133 12.4 Library Optimization Option 134 12.5 Reducing Kernel Boot Time 135 12.6 Analyzing and Optimizing Boot Time 138 12.7 Analyzing and Optimizing Runtime Footprint 152
12.1 Introduction
Many features that give Wind River Linux a small footprint, such as BusyBox, Static Link, and library optimization, are themselves scalable. This chapter continues directory conventions used in previous chapters: /home/user/WindRiver is referred to as installDir. The development environment consists primarily of the contents of installDir/wrlinux-3.0. The build environment is contained within the project build directory, which is under /home/user/workdir.
12.2 BusyBox
BusyBox merges tiny versions of standard Linux utilities into a single small executable. These utilities include a shell, compression utilities, a DHCP server, login utilities, archiving utilities like tar and rpm, core utilities like cat, df and ls, networking utilities like ping and tftp, system administration utilities like mount and more, and process utilities like free, ps, and kill.
These utilities have reduced functionality compared to their standard Linux counterparts, but they also have a much smaller footprint, and merging them into a single executable results in a smaller footprint still.
Configuring BusyBox
You may add or remove commands supported by the BusyBox executable in much the same way as you configure the Linux kernel. By removing commands you do not intend to use, you reduce the executable's size even further.
NOTE: It is not necessary to make the file system before configuring BusyBox. The initial make busybox.menuconfig command extracts the BusyBox source.

Within the build directory, enter:
$ make busybox.menuconfig
The BusyBox menuconfig program functions in exactly the same way as the kernel menuconfig. You can access help for each command, and discard or save your changes. After making your changes, run make busybox to rebuild BusyBox. Once you make busybox, you may check the busybox.links file within build/busybox-version, to confirm that your changes were made.
NOTE: If you are using a RAM or flash file system, you will have to remake it with the make boot-image command.
The following example shows how you can create a custom busybox configuration and save it to a layer. 1. Configure a project, for example:
$ configure --enable-kernel=small --enable-rootfs=glibc_small \
    --enable-board=arm_versatile_926ejs
2. Configure busybox:

$ make -C build busybox.menuconfig

Set and unset the options you want. The resulting configuration will be saved in a .config file.

3. Build busybox with your new configuration:
$ make -C build busybox
4. Deploy your kernel and file system on your hardware or in emulation. Once you have verified that you have the configuration you want, you can proceed to save it to a layer.

5. Save your configuration file and your new busybox .rpm files to a layer:
$ cp build/busybox-1.4.1/.config \
    ~/layers/busybox/templates/feature/my_busybox/busybox/config
Note that the file name in the layer is config, not .config.
$ cp export/RPMS/armv5tel_vfp/busybox-*.rpm \
    /home/user/layers/busybox/RPMS/glibc_small/armv5tel_vfp/
6. You can now use the layer to recreate the busybox configuration, for example:
$ mkdir new_project
$ cd new_project
$ configure --enable-kernel=small \
    --enable-rootfs=glibc_small \
    --enable-board=arm_versatile_926ejs \
    --with-layer=/home/user/layers/busybox/
Now, when you build your file system (make fs), it will build with your custom busybox configuration.
The more applications you add, the more you duplicate standard routines. At a certain point (the exact point will depend on your setup), static linking will take up more space than dynamic linking.
You must configure the project build directory with the staticlink option. An example configure command is:
$ configure \
    --enable-board=arm_versatile_926ejs \
    --enable-kernel=small \
    --enable-rootfs=uclibc_small+debug \
    --enable-bootimage=flash \
    --enable-scalable=staticlink
The next step is to perform a make build-all. Because static linking is only effective with very small numbers of executables, you should limit the number of applications included in the run-time system. An effective way of doing that is to use the PACKAGES_IN_FILESYSTEM environment variable when performing a make build-all. This environment variable allows you to compile only the applications you wish to install into the file system. The environment variable precedes the make build-all command. As an example, if you wished to install only the BusyBox application into your file system, your make build-all command (entered as usual within the project build directory), would be:
$ PACKAGES_IN_FILESYSTEM="setup filesystem busybox" make build-all
NOTE: The setup and filesystem applications are not optional; the linux application is also required if modules are enabled.
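The PACKAGES_IN_FILESYSTEM prefix relies on standard shell behavior: an assignment placed before a command exports the variable for that single command only. A quick illustration, with echo standing in for make build-all:

```shell
# The leading assignment is exported only for the one command that follows.
PACKAGES_IN_FILESYSTEM="setup filesystem busybox" \
    sh -c 'echo "building: $PACKAGES_IN_FILESYSTEM"'
# prints: building: setup filesystem busybox

# The calling shell itself is unaffected; prints: after: unset
echo "after: ${PACKAGES_IN_FILESYSTEM:-unset}"
```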
Configure your project build directory with the --enable-scalable=mklibs option. An example configure command is:
$ configure \
    --enable-board=arm_versatile_926ejs \
    --enable-kernel=small \
    --enable-rootfs=glibc_small+debug \
    --enable-bootimage=flash \
    --enable-scalable=mklibs
The next step is to perform a make build-all. The resulting libraries should be smaller than the ones you'd get without the --enable-scalable option.
This discussion of kernel boot-time primarily concerns steps 4 through 6, that is, from the time the kernel begins execution until the first application process (typically /sbin/init or /init) is started.
In general, with embedded devices you can take advantage of the fact that you are working with a fixed topology and so do not need to discover it each time you boot, and your application may need only limited services and resources from the variety that are available. The following discussion is by no means exhaustive, but it presents some significant sources of boot latency which may be most profitable for you to examine. The following are discussed in this section:
Kernel image decompression
Delay loop calibration
Resource initialization
Device driver probe delay
Bus enumeration
IP autoconfig
Console output
Kernel Image Decompression

If you can afford the space, providing an uncompressed kernel eliminates decompression time, whether that decompression is performed by the bootloader or the kernel itself. You might, for example, make an uncompressed image available in direct mapped memory to allow for execution in place (XIP).
Delay Loop Calibration
At boot time, the kernel computes the software delay loop advisor. This is a time-intensive operation that is unnecessary for a deployed, embedded application where the value is constant. Refer to for an example of how to remove the repeated calculations and determine the amount of time you have saved.
Resource Initialization
This is best addressed by removing resource generality unneeded by the embedded application. The number of pseudo TTYs, consoles, user consoles, RAM disks, and so on, should be minimized to reflect the actual resource need of the application. Some places to look at in your kernel configuration in addition to unnecessary drivers include:
Also note that device drivers that are required by your application but have lengthy initialization times may potentially be built as modules and loaded after boot at a less latency-critical time. For still more aggressive timing gains, initialization of VFS and other structure caches may be reduced from system-calculated defaults with the associated kernel boot parameters.
Device Driver Probe Delay
Because of the nature of an embedded system, device driver probes for unnecessary devices can be eliminated, and if probes are required for existing devices, the timeouts should be minimized based on only what is required for the target hardware. It may even be possible to take the more aggressive approach of dispensing with busy-wait probing for some devices altogether. You may also be able to thread device probe and enumeration operations to maximize concurrent execution times. Note that this is more experimental because the driver routines must be conducive to such threading and you will probably be required to modify the driver code. Examining additional ways to maximize parallel execution of device drivers may well be justified depending on how much latency such operations introduce into the boot time of your system.
Bus Enumeration
This presents a similar situation to device probing as discussed above. Note that for externally accessible buses enumeration is unavoidable, but for some applications it may be useful to defer it until after kernel boot by containing the enumeration functionality in kernel modules.
IP Autoconfig
In most cases this is just something you may want to watch for in the development environment, and only applies to deployed embedded applications that must get their root file system from NFS. Configuring network parameters using DHCP/BOOTP and then NFS to mount the file system can add seconds to the boot process. Providing static IP parameters at the boot prompt (ip=address) can help. If you must use an NFS file system, it may be possible to boot with an initial RAM disk and then transfer to the NFS root file system during application boot up. But the trade-off is that this requires the extra time it takes to load the RAM disk image into memory prior to kernel boot.
Console output
The kernel boot log that is sent to the console device through printk commands adds a significant contribution to boot latency. Due to the nature of printk, calling this function results in synchronous (unbuffered) data transmission that ties the boot process to the speed of the console device. This is most acute in the case of a UART serial device. Even at a rate of 115,200 bps, a single character transmission consumes approximately 87 us. A typical boot log of 6000 characters would add over 500 ms of latency. For deployed applications the majority of kernel messages may be suppressed with the quiet kernel command line flag or disabled by kernel configuration.
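The quoted numbers can be sanity-checked with a little arithmetic, assuming the usual 10 bits on the wire per character (start bit, 8 data bits, stop bit):

```shell
# UART latency check: 10 bits per character at 115200 bps.
awk 'BEGIN {
    us_per_char = 10 / 115200 * 1e6
    printf "per character: %.1f us\n", us_per_char          # prints: per character: 86.8 us
    printf "6000-char log: %.0f ms\n", 6000 * us_per_char / 1000   # prints: 6000-char log: 521 ms
}'
```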
early boot time: the time from when the kernel is launched to the time the init process (usually /sbin/init) is launched
late boot time: the time from when the init process is launched until the last start-up script is executed
You can collect data on both of these phases of boot time by using the bootlogger script as your init process. The script uses the Linux kernel's ftrace feature to capture profiling data from the Wind River Linux boot sequence.
NOTE: ftrace is documented in prjbuildDir/build/linux/Documentation/ftrace.txt.
The bootlogger script overrides the regular /sbin/init as the first process and copies the early boot time data in /debug/tracing/trace to /var/log/kernel-init.log and then configures ftrace to trace init processes. When the final init process is executed (/etc/rcS.d/S999stop-bootlogger), bootlogger copies the late boot time data to /var/log/post-kernel-init.log. The names and locations of these files are configurable in the target's /etc/bootlogger.conf file (prjbuildDir/export/dist/etc/bootlogger.conf). As a final step, bootlogger launches the regular init process.
NOTE: The bootlogger script is designed to be used in development and is not intended to be deployed in production systems.
To configure your platform project for boot logging, specify the feature/boottime template, for example:
$ configure --enable-board=common_pc --enable-kernel=small \
    --enable-rootfs=glibc_small \
    --with-template=feature/boottime
When you build your file system, you will have an /sbin/bootlogger script, an /etc/bootlogger.conf configuration file, and a stop-bootlogger script configured as the last init script to run.
Step 2: Configure your boot sequence to use bootlogger.
Configure your kernel boot command line to pass init=/sbin/bootlogger. This is typically done by passing a command to the bootloader, or as a compilable kernel option. If you are using QEMU to emulate your target, you can enter make config-target and then append init=/sbin/bootlogger to the TARGET0_QEMU_KERNEL_OPTS option, for example:
...
52: TARGET0_QEMU_BOOT_DEVICE=
53: TARGET0_QEMU_KERNEL_OPTS=clock=pit oprofile.timer=1
54: TARGET0_VIRT_UMA_START=yes
55: TARGET0_QEMU_OPTS=
56: TARGET0_VIRT_EXT_WINDOW=no
57: TARGET0_VIRT_EXT_CON_CMD=xterm -T Virtual-WRLinux -e
58: TARGET0_VIRT_CONSOLE_SLEEP=5
59: TARGET0_QEMU_HOSTNAME=
60: TARGET0_QEMU_USE_KQEMU=yes
61: TARGET0_VIRT_DEBUG_WAIT=no
62: TARGET0_VIRT_DEBUG_TIMEOUT_DEFAULT=40
Enter number to change (q quit)(s save): 53
New Value: other-options init=/sbin/bootlogger
Enter number to change (q quit)(s save): s
Enter number to change (q quit)(s save): q
Step 3:
Boot your target or emulation. When it has finished the complete boot sequence there will be boot logs for both the early and late phases of the boot process in /var/log on the target. The following sections describe how to use the data collected by ftrace and bootlogger.
$ host-cross/bin/ftrace-bootgraph.pl -p export/dist/var/log/kernel-init.log
Warning : Perl module SVG::TT::Graph::BarHorizontal is required for graphical output
ftrace-bootgraph.pl will fall back to text output
0.0045 sec  0.1348 %  pidmap_init
0.0060 sec  0.1818 %  init_rootfs
0.0033 sec  0.1006 %  select_idle_routine
0.0177 sec  0.5343 %  alternative_instructions
0.0034 sec  0.1039 %  restart_mce
0.0261 sec  0.7869 %  acpi_pic_sci_set_trigger
0.0051 sec  0.1526 %  kernel_init
0.0020 sec  0.0605 %  cpu_callback
...
0.0023 sec  0.0689 %  dm_mirror_init
0.0044 sec  0.1314 %  rpcauth_init_module
0.0043 sec  0.1312 %  pci_sysfs_init
1.5470 sec 46.7010 %  ic_bootp_recv
0.0059 sec  0.1779 %  ic_bootp_recv
0.0034 sec  0.1026 %  root_nfs_parse_addr
0.1890 sec  5.7044 %  Others
These 76 functions account for 94 percent of the time
Total time 3.4945
ftrace-bootgraph.pl finds which functions are taking most of the time. In the example shown, the remaining approximately six percent of the time is consumed by functions that did not individually take much time but would make up much of the listing; the script removes them to make the output more helpful. If you have the SVG::TT::Graph::BarHorizontal Perl module installed, you will get graphical output instead of text when you run the ftrace-bootgraph.pl script. Figure 12-2 illustrates some sample graphical output. The ftrace-bootgraph.pl script produces text output from the ftrace data such as the following:
0.0044 sec  rpcauth_init_module
0.0043 sec  pci_sysfs_init
1.5470 sec  ic_bootp_recv
0.0059 sec  ic_bootp_recv
0.0034 sec  root_nfs_parse_addr
For example, the last entry means the root_nfs_parse_addr kernel function took 0.0034 seconds, which was 0.1026% of the early boot time.
Figure 12-2 Example Partial Early Boot Time Graphical Output
How long is the late boot time phase?
What processes are consuming the CPU?
How much time is spent in the idle loop?
When the CPU is idle, why is it idle?
You can analyze the initialization of userspace for purposes of optimization with the ftchart script, which can produce graphic and text output based on bootlogger log files as described in the following sections.
ftchart helps you answer your questions about late boot time by reconstructing the init process tree from kernel ftrace data. This allows you to visualize and analyze it in a number of different output formats. Further, you can selectively expand or prune the tree to achieve the desired view; this is useful for drilling down deeper into the tree to understand where time is being spent, ignoring irrelevant details. The following section provides examples of the use of some of the ftchart output options. For details on these and all ftchart options, use the --help option:
$ prjbuildDir/host-cross/bin/ftchart --help
Suppose you wanted to find out where all of your CPU time is being spent during the post-kernel init phase. First, collect a log using bootlogger and then transfer the post-kernel-init.log file to a convenient location on your development host. You might use the following command:
$ ftchart -o tree -d 3 ./post-kernel-init.log
In this command, ftchart is supplied some output presentation options, and the name of a log file produced by bootlogger. The -o tree option says to output the data as a text tree and the -d 3 option limits the tree depth to 3 levels.
The display shows parent and child processes, with the child processes indented under their parent. For example, mingetty is a child of init. Leaf nodes are nodes that do not show any child processes under them, although they may have child processes that are just not displayed due to the supplied -d option. The CPU times of leaf nodes are the sums of the CPU time of that node and all of its undisplayed children, if any. From this output, it is clear that the init process and the idle process are consuming the biggest chunks of boot time. But this is still not sufficient to give a solid optimization target, or to identify a suspect package. In addition, the output shown contains much irrelevant detail. For example, there is little need to optimize processes that take up only tiny amounts of CPU time.
The -o cpu Option
To specifically identify optimization targets, the cpu summary output is more useful:
$ ftchart -o cpu ./post-kernel-init.log
Total Post-Kernel Boot Time: cpu time: 3.830s
<idle> : pid: 0 ppid: 0 cpu time: 1565.589ms (40.9%)
<other> cpu time: 2159.169ms (59.1%)
By default, the cpu output simply summarizes how much cpu time is spent in the idle process, and how much cpu time is spent in other processes. (The other category is not an actual process, it is the sum of all un-expanded processes.) The cpu option does not reveal any specific optimization targets but invites two big questions. One, of course, is what's going on in the other category? Another is why is there so much idle time? To address the first question, expand the interesting parts of the process tree using the -e option as described in the next section. An Example of Investigating Idle Time, p.145 discusses how to investigate the second question.
To uncover optimization targets, use the -e (expand) option. The argument to the -e option is a comma-separated list of expand paths. An expand path is a slash-separated list representing the lineage of an interesting tree element, much like a path in a file system directory tree. Suppose you are interested in expanding the bar process, which has parent process foo and a grandparent process with PID 0, the ancestor of all processes. You would supply the following expand path:
-e "foo/bar"
You could also use a wildcard to expand all child processes of foo as follows:
-e "foo/*"
Note that the root process is the scheduler or idle process (PID 0, the parent process of the userspace processes starting with PID 1). Similarly, if you wanted to expand all child processes of the root process, you could pass the following expand path:
-e "*"
Start by drilling down into the other category. Do this by expanding all children of the <idle> process as follows:
$ ftchart -o cpu -e "*" ./post-kernel-init.log
Total Post-Kernel Boot Time: cpu time: 3.830s
<idle> : pid: 0 ppid: 0 cpu time: 1565.589ms (40.9%)
init : pid: 1 ppid: 0 cpu time: 2159.169ms (56.4%)
ksoftirqd/0 : pid: 3 ppid: 0 cpu time: 9.692ms (0.3%)
rpciod/0 : pid: 932 ppid: 0 cpu time: 28.585ms (0.7%)
khelper : pid: 6 ppid: 0 cpu time: 54.127ms (1.4%)
polltester : pid: 9 ppid: 0 cpu time: 0.011ms (0.0%)
khubd : pid: 138 ppid: 0 cpu time: 0.621ms (0.0%)
pdflush : pid: 177 ppid: 0 cpu time: 0.008ms (0.0%)
nfsiod : pid: 227 ppid: 0 cpu time: 12.189ms (0.3%)
Now it is clear (as expected) that the init process is the busiest. Continue drilling down into the init process by iteratively changing the -e option:
$ ftchart -o cpu -e "init/*" ./post-kernel-init.log
Total Post-Kernel Boot Time: cpu time: 3.830s
<idle> : pid: 0 ppid: 0 cpu time: 1565.589ms (40.9%)
mingetty : pid: 2168 ppid: 1 cpu time: 1.793ms (0.0%)
rc : pid: 2066 ppid: 1 cpu time: 290.577ms (7.6%)
mingetty : pid: 2167 ppid: 1 cpu time: 1.975ms (0.1%)
polltester : pid: 952 ppid: 1 cpu time: 98.252ms (2.6%)
mingetty : pid: 2169 ppid: 1 cpu time: 2.472ms (0.1%)
mingetty : pid: 2170 ppid: 1 cpu time: 1.826ms (0.0%)
mingetty : pid: 2171 ppid: 1 cpu time: 2.466ms (0.1%)
mingetty : pid: 2172 ppid: 1 cpu time: 2.824ms (0.1%)
init : pid: 957 ppid: 1 cpu time: 1746.506ms (45.6%)
<other> cpu time: 2159.169ms (2.9%)
Drill deeper by building up the expand path. Specifically, choose the child process that consumes the most CPU time and add its name at the end of the expand path. Continue this process on the data from this example to arrive at the following expand path:
$ ftchart -o cpu -e "init/init/rc.sysinit/start_udev" ./post-kernel-init.log
Total Post-Kernel Boot Time: cpu time: 3.830s
<idle>     : pid: 0   ppid: 0   cpu time: 1565.589ms (40.9%)
start_udev : pid: 978 ppid: 958 cpu time: 1587.921ms (41.5%)
<other>                         cpu time: 2159.169ms (17.6%)
143
It is becoming clear that udev is going to be a good place to focus. At this point, determine if it is possible to eliminate the udev package. If the answer is yes, this 41.5% chunk of boot time can be eliminated. If the answer is no, use ftchart to dig still deeper. Inspect start_udev and all of its children:
$ ftchart -o cpu -e \
  "init/init/rc.sysinit/start_udev,init/init/rc.sysinit/start_udev/*" \
  ./post-kernel-init.log
Total Post-Kernel Boot Time: cpu time: 3.830s
<idle>      : pid: 0    ppid: 0   cpu time: 1565.589ms (40.9%)
start_udev  : pid: 978  ppid: 958 cpu time: 20.061ms (0.5%)
udevsettle  : pid: 1029 ppid: 978 cpu time: 2.321ms (0.1%)
udevcontrol : pid: 1976 ppid: 978 cpu time: 1.610ms (0.0%)
logger      : pid: 1977 ppid: 978 cpu time: 0.921ms (0.0%)
start_udev  : pid: 979  ppid: 978 cpu time: 1.098ms (0.0%)
start_udev  : pid: 983  ppid: 978 cpu time: 4.033ms (0.1%)
awk         : pid: 987  ppid: 978 cpu time: 1.110ms (0.0%)
fgrep       : pid: 988  ppid: 978 cpu time: 1.211ms (0.0%)
fgrep       : pid: 989  ppid: 978 cpu time: 0.813ms (0.0%)
mount       : pid: 990  ppid: 978 cpu time: 1.175ms (0.0%)
mkdir       : pid: 991  ppid: 978 cpu time: 1.332ms (0.0%)
mkdir       : pid: 992  ppid: 978 cpu time: 1.140ms (0.0%)
ln          : pid: 993  ppid: 978 cpu time: 0.862ms (0.0%)
ln          : pid: 994  ppid: 978 cpu time: 0.715ms (0.0%)
ln          : pid: 995  ppid: 978 cpu time: 0.707ms (0.0%)
ln          : pid: 996  ppid: 978 cpu time: 0.705ms (0.0%)
ln          : pid: 997  ppid: 978 cpu time: 0.702ms (0.0%)
ln          : pid: 998  ppid: 978 cpu time: 0.693ms (0.0%)
mkdir       : pid: 999  ppid: 978 cpu time: 1.173ms (0.0%)
start_udev  : pid: 1000 ppid: 978 cpu time: 332.547ms (8.7%)
cat         : pid: 1008 ppid: 978 cpu time: 1.047ms (0.0%)
pidof       : pid: 1009 ppid: 978 cpu time: 3.344ms (0.1%)
rm          : pid: 1010 ppid: 978 cpu time: 0.934ms (0.0%)
udevd       : pid: 1011 ppid: 978 cpu time: 1146.660ms (29.9%)
start_udev  : pid: 1013 ppid: 978 cpu time: 1.602ms (0.0%)
udevcontrol : pid: 1014 ppid: 978 cpu time: 1.934ms (0.1%)
udevtrigger : pid: 1015 ppid: 978 cpu time: 57.471ms (1.5%)
<other>                           cpu time: 2159.169ms (18.1%)
Clearly, there is lots of irrelevant detail here. Adjust the -e option to show only the two biggest chunks:
$ ftchart -o cpu -e "init/init/rc.sysinit/start_udev/start_udev,\
init/init/rc.sysinit/start_udev/udevd" ./post-kernel-init.log
Total Post-Kernel Boot Time: cpu time: 3.830s
<idle>     : pid: 0    ppid: 0   cpu time: 1565.589ms (40.9%)
start_udev : pid: 979  ppid: 978 cpu time: 1.098ms (0.0%)
start_udev : pid: 983  ppid: 978 cpu time: 4.033ms (0.1%)
start_udev : pid: 1000 ppid: 978 cpu time: 332.547ms (8.7%)
udevd      : pid: 1011 ppid: 978 cpu time: 1146.660ms (29.9%)
start_udev : pid: 1013 ppid: 978 cpu time: 1.602ms (0.0%)
<other>                          cpu time: 2159.169ms (20.4%)
Dig deeper and deeper into udevd to arrive at the following -e option and output:
$ ftchart -o cpu -e "init/init/rc.sysinit/start_udev/udevd/udevd,\
init/init/rc.sysinit/start_udev/udevd/udevd/*" ./post-kernel-init.log
Total Post-Kernel Boot Time: cpu time: 3.830s
<idle> : pid: 0    ppid: 0    cpu time: 1565.589ms (40.9%)
udevd  : pid: 1012 ppid: 1011 cpu time: 215.464ms (5.6%)
udevd  : pid: 2061 ppid: 1012 cpu time: 0.513ms (0.0%)
udevd  : pid: 2109 ppid: 1012 cpu time: 0.552ms (0.0%)
udevd  : pid: 2110 ppid: 1012 cpu time: 0.528ms (0.0%)
udevd  : pid: 2111 ppid: 1012 cpu time: 0.488ms (0.0%)
udevd  : pid: 2173 ppid: 1012 cpu time: 0.557ms (0.0%)
udevd  : pid: 2174 ppid: 1012 cpu time: 0.507ms (0.0%)
udevd  : pid: 2175 ppid: 1012 cpu time: 0.795ms (0.0%)
...
udevd  : pid: 2031 ppid: 1012 cpu time: 30.999ms (0.8%)
udevd  : pid: 2033 ppid: 1012 cpu time: 30.316ms (0.8%)
<other>                       cpu time: 2159.169ms (29.2%)
144
It is now clear what is going on in udev: apparently, the 29.9% of the boot time that is spent in udevd is spent spawning many processes, each of which does a tiny bit of work. This is the limit of what the ftchart tool can tell us. Now would be the time to dive into the udevd initialization code and understand why all of these processes are being spawned, and whether this code can be optimized. You could repeat the steps demonstrated above to investigate the 8.7% chunk that is consumed by start_udev and possibly identify another optimization target.
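Each drill-down step above amounts to scanning the cpu output for the child that consumes the most CPU time. If you repeat the process often, that selection can be scripted. The following sketch assumes the output format shown in the preceding examples; the here-document stands in for a real ftchart run:

```shell
# Find the busiest non-idle process in a chunk of ftchart cpu output.
# The here-document stands in for real `ftchart -o cpu ...` output.
busiest=$(awk -F'cpu time: ' '
    /pid:/ && !/<idle>/ {
        split($2, t, "ms")                       # t[1] = milliseconds
        if (t[1] + 0 > max) { max = t[1] + 0; name = $1 }
    }
    END { split(name, parts, " "); print parts[1] }
' <<'EOF'
<idle>      : pid: 0   ppid: 0 cpu time: 1565.589ms (40.9%)
init        : pid: 1   ppid: 0 cpu time: 2159.169ms (56.4%)
ksoftirqd/0 : pid: 3   ppid: 0 cpu time: 9.692ms (0.3%)
EOF
)
echo "$busiest"    # the name to append to the next -e expand path
```

The printed name is the component you would append to the expand path before the next ftchart invocation.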
Now consider idle time. Identifying opportunities to reduce idle time is more difficult than reducing CPU usage. The reason is that idle time does not have a single root cause. That is, any given idle interval will likely have many processes waiting for many resources. Choosing which process to optimize and how to optimize it without introducing idle time elsewhere is not exactly simple. Further, it is unlikely that there are long stretches of pure idle time that can be optimized away; instead, the idle time is probably made up of many small fragments. That said, two popular techniques for reducing idle time are eliminating unnecessary delays and exploiting opportunities for parallelism. The following example shows how ftchart can help identify opportunities for applying these two techniques.
Eliminating Unnecessary Delays
The first technique involves identifying processes that sleep electively. These are called lazy sleepers. In principle, if these delays can be shortened or eliminated, idle time can be reduced. Consider the following output. In this command, ftchart reports the lazy sleepers (-o lazy) and limits the output to lazy sleepers with more than 0.3 seconds of sleep time (-t 0.3):
$ ftchart -o lazy -t 0.3 ./post-kernel-init.log
Total Post-Kernel Boot Time: 3.803s
Total Post-Kernel Idle Time: 1.801s (47.37%)
<idle>(0) init(1) init(959) rc.sysinit(960) start_udev(977)
udevsettle : pid: 1026 ppid: 977
Total Sleep Time: 1153.067ms (10.90% CPU idle, 99.75% elective)
    Application requested delay: 1150.222ms (99.75%) (10.92% CPU idle)
    RPC operation: 2.845ms (0.25%) (0.00% CPU idle)
<idle>(0) init(1) init(959) rc.sysinit(960) start_udev(977) udevd(1010) udevd(1011) udevd(1866)
modprobe : pid: 1867 ppid: 1866
Total Sleep Time: 422.856ms (56.72% CPU idle, 97.70% elective)
    Kernel space requested delay: 413.143ms (97.70%) (58.06% CPU idle)
    RPC operation: 4.836ms (1.14%) (0.00% CPU idle)
    Page fault: 3.222ms (0.76%) (0.00% CPU idle)
    Closing a file: 1.633ms (0.39%) (0.00% CPU idle)
    Unknown reason: 0.022ms (0.01%) (0.00% CPU idle)
<idle>(0)
khubd : pid: 138 ppid: 0
Total Sleep Time: 2061.569ms (64.74% CPU idle, 11.40% elective)
    Waiting for USB hub events: 1805.415ms (87.57%) (63.23% CPU idle)
    Kernel space requested delay: 234.956ms (11.40%) (77.03% CPU idle)
    Waiting for urb from USB: 21.198ms (1.03%) (56.84% CPU idle)
145
In the first block of data, the first line after the totals shows the parents of the process of interest. This helps identify exactly which instance of a process is interesting. Next comes the name of the process itself, in this case udevsettle. Next, the Total Sleep Time for udevsettle is about 1.2 seconds. While this process is sleeping, the CPU is idle for 10.92% of the time. Also, 99.75% of the Total Sleep Time is elective. The lines that follow the Total Sleep Time are a breakdown of the sleep time by reason, in decreasing order. As you can see, the major reason for this delay is Application requested delay. Shortening the application-requested delay in udevsettle would allow operations that depend on udevsettle to proceed. In principle, these operations could use the idle time resulting in part from udevsettle's delay. Whether or not this strategy is realistic depends on the specific reason for that delay, and you would have to investigate this. Similar investigations could be applied to the other lazy sleepers.
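To make the idea concrete, the following is a small illustrative shell sketch (not taken from udevsettle itself) of the difference between a fixed application-requested delay and polling for the actual condition, which lets the waiter continue as soon as the condition holds:

```shell
# Lazy sleeper: always pays the full worst-case delay.
wait_fixed() {
    sleep 2
}

# Better: poll for the condition and stop as soon as it holds.
# $1 is a stand-in readiness flag (a file that appears when ready).
wait_poll() {
    i=0
    while [ ! -e "$1" ] && [ "$i" -lt 200 ]; do
        sleep 0.01        # short poll interval (GNU sleep accepts fractions)
        i=$((i + 1))
    done
}

flag=$(mktemp)            # the condition is already true in this demo
start=$(date +%s)
wait_poll "$flag"
elapsed=$(( $(date +%s) - start ))
rm -f "$flag"
echo "waited ${elapsed}s instead of a fixed 2s"
```

In a real boot script the readiness flag would be whatever condition the fixed delay was papering over, for example a device node appearing under /dev.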
Increasing Parallelism
Opportunities for increasing parallelism are best identified graphically. For this purpose, ftchart has the png output feature. This output feature plots a horizontal bar graph whose x axis is time and whose bars are the processes of interest.
NOTE: Larger versions of the png output shown in this section are available in the installation at installDir/wrlinux-3.0/layers/wrll-analysis-1.0/tools/ftchart/src/.
To generate a basic top-level view of activity, run ftchart with the png output option and no expand path:
$ ftchart -o png tests/post-kernel-init.log
The output appears in the current directory as ftchart.png. It shows various intervals of green, gray, and red for each process as follows:
146
When the idle process is green, the CPU is idle. By default, all of the processes are grouped into the other bar at the bottom of the graph. This bar is the output of all of the unexpanded processes overlaid. Naturally, it is cluttered with more output than can be understood by inspection. It's time to drill down into this other category and tease out the relevant data. Now turn your focus to the second half of the boot period, which contains plenty of idle time, and may present some opportunities for parallelism. To drill down one layer deeper, expand all of the top-level child processes as with the cpu output (-e "*").
$ ftchart -o png -e "*" tests/post-kernel-init.log
You can immediately identify a thick band of red in khubd and init near the 9.75 seconds mark. Also, this band of red is accompanied by a similar green band for the idle process. Drilling deeper would reveal that this is the same idle time attributed to udevsettle in the lazy output analysis above. Because you know that the idle time in this region is due to elective sleeping, move on further to the right. As expected, the interesting process to drill into is the init process. But first, note that all of the child processes for an expanded parent (caused by the -e * option) are overlaid with their parent; they do not go into the other category. This is why the view for the init process is somewhat cluttered. Drilling down into the init process will help clarify things:
$ ftchart -o png -e "init/*" tests/post-kernel-init.log
147
Figure 12-5
First, note that all of the unexpanded output has ended up in the other bar. By inspecting this bar, you can evaluate whether some important activity in the time interval of interest is being ignored. If so, you must tune the -e option. In this case, however, very little is happening in the other category during the interval of interest. Two observations emerge from this graph. First, the polltester process is spending almost all of its time in elective sleep. You can "prune" processes such as these from the analysis using the --prune (-p) option.
NOTE: polltester is a simple polling example and is not something that is interesting to optimize. It is used in these examples simply to demonstrate the prune option, and is not provided with Wind River Linux.
The next observation is that brief mingetty processes are not interesting, partially because they do not use the CPU, and partially because they are so small they are not good optimization targets. Eliminate these from the view and put them in the other category by setting a --threshold (-t) option. Applying these two revisions to the command line generates a cleaner picture:
$ ftchart -t 0.1 -o png -e "init/*" -p "init/polltester" \ tests/post-kernel-init.log
148
Figure 12-6
This view shows that much of what happens in the last half of the boot sequence happens in the rc process. It also shows that much of this time is spent idle. Perhaps, if the processes launched by the rc process are not all contending for the same resources, they can be launched in parallel. To identify these processes, tune the --expand (-e) option. You may want to use the tree output option described in Visualizing Late Boot Time, p.141 to help develop your expand option. Skipping some intermediate drilling steps, you can arrive at the following command. Tuning the --threshold (-t) option brings some interesting processes out of the other category and back into the expanded view:
$ ./ftchart -t 0.01 -o png -e "init,init/init,init/rc,init/rc/*" \ -p "init/polltester" tests/post-kernel-init.log
149
Figure 12-7
From this output, some details of the launch scripts for the various services are clear. Many of them, such as the sshd, xinetd, and sendmail scripts, appear to spend much time sleeping. Perhaps these scripts could be launched in parallel to eliminate some idle time. At this point, you would investigate whether this makes sense, or whether launching these processes must be deferred for some reason. If the processes can be launched in parallel, you could make that change, re-profile the boot process, and repeat this analysis to determine whether an improvement has been made. In these cases, an experiment is probably more revealing than more analysis. In short, try something and see what happens, because it can be hard to predict what the impact of a change may be. However, as an alternative to experimentation, you could also use ftchart to get information about why these processes sleep. This can be done using ftchart's idle output. This is textual output that summarizes the reasons why a process sleeps, much like the lazy output option. Here's an idle output example for the sshd launch script:
$ ftchart -t 0.1 -o idle -e "init/rc/S55sshd,init/rc/S55sshd/*" \
  tests/post-kernel-init.log
Total Post-Kernel Boot Time: 3.803s
Total Post-Kernel Idle Time: 1.801s (47.37%)
S55sshd : pid: 2115 ppid: 2067
Total Sleep Time: 299.046ms (77.40% CPU idle, 0.00% elective)
    Waiting for a process to die: 286.026ms (95.65%) (79.16% CPU idle)
    RPC operation: 5.791ms (1.94%) (3.26% CPU idle)
    Reading from a pipe: 5.320ms (1.78%) (74.29% CPU idle)
    Fork() system call: 1.001ms (0.33%) (0.00% CPU idle)
    Writing a page to disk: 0.908ms (0.30%) (98.13% CPU idle)
sshd : pid: 2119 ppid: 2115
Total Sleep Time: 259.961ms (75.58% CPU idle, 0.00% elective)
    Page fault: 141.668ms (54.50%) (91.26% CPU idle)
    Loading kernel module: 58.550ms (22.52%) (54.87% CPU idle)
    RPC operation: 45.756ms (17.60%) (46.81% CPU idle)
150
    13.935ms (5.36%) (97.96% CPU idle)
    0.052ms (0.02%) (0.00% CPU idle)
This reveals that the S55sshd launch script is mainly waiting for a process to die, which points to child processes as the fundamental reason why the process is sleeping. However, sshd, a child process of the launch script, seems to be waiting mainly on page faults, loading a kernel module, and performing RPC operations. Assuming that these cannot be easily optimized away, perhaps something else can be placed in parallel with them. Consider the sendmail launch process:
$ ftchart -t 0.05 -o idle -e "init/rc/S80sendmail,init/rc/S80sendmail/*" \
  tests/post-kernel-init.log
Total Post-Kernel Boot Time: 3.803s
Total Post-Kernel Idle Time: 1.801s (47.37%)
S80sendmail : pid: 2141 ppid: 2067
Total Sleep Time: 313.646ms (75.17% CPU idle, 0.00% elective)
    Waiting for a process to die: 295.782ms (94.30%) (78.44% CPU idle)
    RPC operation: 10.072ms (3.21%) (27.37% CPU idle)
    Unknown reason: 3.459ms (1.10%) (0.55% CPU idle)
    Fork() system call: 2.353ms (0.75%) (0.00% CPU idle)
    Writing a page to disk: 1.286ms (0.41%) (77.29% CPU idle)
    Writing data to TTY: 0.562ms (0.18%) (0.00% CPU idle)
    sigprocmask system call: 0.064ms (0.02%) (0.00% CPU idle)
    Page fault: 0.027ms (0.01%) (0.00% CPU idle)
    Sending TCP/IP data: 0.022ms (0.01%) (0.00% CPU idle)
    NFS operation: 0.013ms (0.00%) (0.00% CPU idle)
    Reading from a pipe: 0.006ms (0.00%) (0.00% CPU idle)
makemap : pid: 2142 ppid: 2141
Total Sleep Time: 125.242ms (84.28% CPU idle, 0.00% elective)
    Page fault: 92.589ms (73.93%) (88.83% CPU idle)
    RPC operation: 19.016ms (15.18%) (61.68% CPU idle)
    Writing a page to disk: 8.724ms (6.97%) (79.06% CPU idle)
    Unknown reason: 2.971ms (2.37%) (93.40% CPU idle)
    NFS operation: 1.942ms (1.55%) (97.73% CPU idle)
newaliases : pid: 2145 ppid: 2141
Total Sleep Time: 82.490ms (73.30% CPU idle, 0.00% elective)
    Page fault: 47.746ms (57.88%) (94.79% CPU idle)
    RPC operation: 25.467ms (30.87%) (24.88% CPU idle)
    Writing a page to disk: 4.436ms (5.38%) (97.88% CPU idle)
    Unknown reason: 2.915ms (3.53%) (96.26% CPU idle)
    NFS operation: 1.762ms (2.14%) (97.56% CPU idle)
    Sending data over socket: 0.164ms (0.20%) (0.00% CPU idle)
It seems that, as in the case of S55sshd, S80sendmail is mainly waiting for another process. The biggest chunk of this wait time is taken up by makemap and newaliases. These children in turn spend most of their sleep time waiting for page faults to be handled and for RPC operations. So, assuming that the sshd and sendmail processes do not have to be run serially for any reason, they could be launched in parallel. This may allow some page faults for sendmail to be handled while sshd loads its kernel module. On the other hand, both of these processes spend much of their time sleeping on page faults, so putting the processes in parallel may not reduce the total time spent waiting for page faults.
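The potential gain from parallel launch is easy to demonstrate in miniature. In this sketch, two stand-in "init scripts" that spend their time sleeping (as S55sshd and S80sendmail largely do) are run first serially and then in parallel using the shell's & and wait:

```shell
# A stand-in rc script that, like the real launch scripts, mostly sleeps.
svc() { sleep 2; }

t0=$(date +%s)
svc
svc                          # serial launch: roughly 4s total
serial=$(( $(date +%s) - t0 ))

t0=$(date +%s)
svc &
svc &                        # parallel launch: roughly 2s total
wait                         # block until both background jobs finish
parallel=$(( $(date +%s) - t0 ))

echo "serial=${serial}s parallel=${parallel}s"
```

The same & and wait pattern is what a parallelized rc script would use; whether it is safe depends on the dependencies between the services, which is exactly what the idle output helps you judge.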
151
Script Usage
To see the usage syntax of the sample script, use the --usage or --help option, for example:
$ cd prjbuildDir
$ scripts/rpm_query.sh --help
scripts/rpm_query.sh
    [-1|--rpm-sizes]
    [-2|--rpm-files]
    [-3|--smart-requires-provides]
    (optional) [--package] <package>
    [--usage]
To get a list of all packages with their package sizes, use the -1 or --rpm-sizes option, for example:
$ scripts/rpm_query.sh --rpm-sizes
Package : libgcc
Size : 48544
Package : setup
Size : 434210
Package : filesystem
Size : 0
Package : wrsv-ltt
Size : 1284722
...
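Because the --rpm-sizes output pairs each package name with its size, it pipes naturally into awk and sort to rank packages by footprint, which is handy when hunting for the biggest contributors. The following sketch uses a canned sample in place of a real run of the script:

```shell
# Rank packages by size, largest first. The here-document stands in
# for `scripts/rpm_query.sh --rpm-sizes` output.
ranked=$(awk '/^Package/ { name = $3 } /^Size/ { print $3, name }' <<'EOF' | sort -rn
Package : libgcc
Size : 48544
Package : setup
Size : 434210
Package : filesystem
Size : 0
Package : wrsv-ltt
Size : 1284722
EOF
)
echo "$ranked"
biggest=$(printf '%s\n' "$ranked" | head -1 | awk '{ print $2 }')
echo "largest package: $biggest"
```

Against a real project you would replace the here-document with a pipe from scripts/rpm_query.sh --rpm-sizes.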
152
You can get a list of the files contained in a package with -2 or --rpm-files, for example:
$ scripts/rpm_query.sh --rpm-files
Package : libgcc
List of files in rpm :
/lib/libgcc_s.so.1
Package : setup
List of files in rpm :
/etc/aliases
/etc/bashrc
/etc/csh.cshrc
/etc/csh.login
/etc/environment
/etc/exports
/etc/filesystems
/etc/fstab
/etc/group
/etc/gshadow
/etc/host.conf
/etc/hosts
/etc/hosts.allow
/etc/hosts.deny
/etc/inputrc
/etc/motd
/etc/mtab
/etc/passwd
/etc/printcap
/etc/profile
/etc/profile.d
/etc/protocols
/etc/securetty
/etc/services
/etc/shadow
/etc/shells
/usr/share/doc/setup-2.6.14
/usr/share/doc/setup-2.6.14/uidgid
/var/log/lastlog
...
153
The -3 or --smart-requires-provides option lists, for each capability a package provides, the packages that require it. The following is an excerpt of the output for the libgcc package:

    ...
    xerces-2.8.0-1_WR3.0zz@i686 (libgcc_s.so.1)
    xorg-x11-server-Xorg-1.3.0.0-24_WR3.0zz@i686 (libgcc_s.so.1)
libgcc_s.so.1(GCC_3.0) Required By:
    beecrypt-4.1.2-12_WR3.0zz@i686 (libgcc_s.so.1(GCC_3.0))
    db4-cxx-4.6.21-5_WR3.0zz@i686 (libgcc_s.so.1(GCC_3.0))
    fam-2.7.0-1_WR3.0zz@i686 (libgcc_s.so.1(GCC_3.0))
    libstdc++-4.3.2-1_WR3.0zz@i686 (libgcc_s.so.1(GCC_3.0))
    libusb-0.1.12-15_WR_3.0zz@i686 (libgcc_s.so.1(GCC_3.0))
    mesa-libGLU-7.0.1-5_WR3.0zz@i686 (libgcc_s.so.1(GCC_3.0))
    pcre-7.3-3_WR3.0zz@i686 (libgcc_s.so.1(GCC_3.0))
    xerces-2.8.0-1_WR3.0zz@i686 (libgcc_s.so.1(GCC_3.0))
    xorg-x11-server-Xorg-1.3.0.0-24_WR3.0zz@i686 (libgcc_s.so.1(GCC_3.0))
libgcc_s.so.1(GCC_3.3) Required By:
    libstdc++-4.3.2-1_WR3.0zz@i686 (libgcc_s.so.1(GCC_3.3))
libgcc_s.so.1(GCC_3.3.1)
libgcc_s.so.1(GCC_3.4)
libgcc_s.so.1(GCC_3.4.2)
libgcc_s.so.1(GCC_4.0.0)
libgcc_s.so.1(GCC_4.2.0) Required By:
    libstdc++-4.3.2-1_WR3.0zz@i686 (libgcc_s.so.1(GCC_4.2.0))
libgcc_s.so.1(GCC_4.3.0)
libgcc_s.so.1(GLIBC_2.0) Required By:
    libstdc++-4.3.2-1_WR3.0zz@i686 (libgcc_s.so.1(GLIBC_2.0))
    mysql-5.0.45-1_WR3.0zz@i686 (libgcc_s.so.1(GLIBC_2.0))
    xorg-x11-server-Xorg-1.3.0.0-24_WR3.0zz@i686 (libgcc_s.so.1(GLIBC_2.0))
When you export your footprint, the script copies pkglist and filesystem/changelist.xml from your project build directory to the save directory. The project build directory is the prjbuildDir you are in, or you can specify a project build directory with -p or --project. Specify the save directory with the -s or --save option.
154
The script also creates an XML file called export-footprint.xml in the save directory. This file stores the optional note and is used by the import to validate that the export was successful. When you import a footprint, pkglist.in and changelist.xml are copied from the save directory, specified by -s or --save, to the appropriate location in the project directory, specified by -p or --project (or the prjbuildDir you are in). The existence of the export-footprint.xml file is used to validate that a successful export previously occurred.
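Conceptually, the export step comes down to copying the two files and writing a marker XML into the save directory. The following sketch illustrates that behavior with stand-in directories; it is not the script's actual implementation, and the file contents are invented:

```shell
# Stand-ins for the real --project and --save directories.
prj=$(mktemp -d)
save=$(mktemp -d)
mkdir -p "$prj/filesystem"
echo "packages"   > "$prj/pkglist"
echo "<changes/>" > "$prj/filesystem/changelist.xml"

# Copy the footprint files, then drop the marker that a later
# import checks for before proceeding.
cp "$prj/pkglist" "$prj/filesystem/changelist.xml" "$save/"
cat > "$save/export-footprint.xml" <<EOF
<export-footprint>
  <note>optional note text</note>
</export-footprint>
EOF

ls "$save"
```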
155
156
13
Patch Management
13.1 Introduction 157 13.2 Patch Principles and Workflow 158 13.3 The Quilt Patching Model 160 13.4 git and the Kernel 165 13.5 Kernel Patching with scc 180
13.1 Introduction
This chapter introduces various patch management concepts to help in understanding the patch model used by Wind River Linux. For an example of how to use the Workbench patch manager GUI, refer to Wind River Workbench by Example, Linux Version.
Wind River Linux uses two open-source methods of patching code. LDAT, the Wind River Linux build system, uses the open-source quilt patching model. Wind River's use of the quilt command is discussed in 13.3 The Quilt Patching Model, p.160. The Wind River Linux kernel is now managed as a git tree, and patching makes use of associated tools such as git and guilt, as described in 13.4 git and the Kernel, p.165.
157
Wind River Linux keeps its source code pristine. Patches are applied only to project code, when a project is built. Patch lists are rigorously maintained.
Patch workflow for Wind River developers follows this pattern:
1. Product designers first decide on where (which template or layer) to insert the patch.
2. The individual developer configures a project for the specific product, specifying the relevant layer or template in the configure command.
3. The individual developer then works locally, developing new code and new patches to extend existing code.
4. The developer then validates the local work against the central code base before folding back changes and patches. The more general the layer in which the patch is placed, the greater the scope of testing required to justify the acceptance of these changes. Automated test tools and procedures for the individual contributor help in keeping the code base correct.
5. After successful validation, the developer checks in the changes.
Simple reject resolutions include resolving path names, fuzz factor, whitespace, and patch reverse situations. Some hunk rejects can be resolved by simple adjustments, including:
Leading Path Names: the leading path directory names in the patch may not match the directory names of your targets. By removing some or all of the patch's leading path names, you may then match the local environment.

Fuzz Factor: each hunk has a leading and following number of lines around changes to provide a validating context for the hunk. If these leading or following lines do not exactly match the target file, the so-called "fuzz factor" can be loosened from an exact match (0) to a looser match (> 0).

White Space: sometimes the only difference in the leading and following context lines is in the exact whitespace. The patch apply can be adjusted to ignore whitespace differences when attempting to apply the patch.

Patch Reversal: sometimes the patch file was created backwards, meaning that it reflects the differences from the new version to the original, instead of the normal direction of the original to the new version. Reversing the patch will fix this and allow the patch to apply.
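Two of these fixes, leading path names and patch reversal, can be demonstrated with GNU patch on a throwaway file; the fuzz factor and whitespace adjustments correspond to patch's --fuzz=N and -l (--ignore-whitespace) options. The following is a self-contained sketch with invented file names:

```shell
# Make a tiny patch whose headers carry leading path components that
# do not exist locally, then apply and reverse it.
work=$(mktemp -d)
cd "$work"
printf 'one\ntwo\nthree\n' > file.txt
sed 's/two/TWO/' file.txt > file.new
diff -u file.txt file.new > demo.patch || true   # diff exits 1 on differences
rm file.new
# Pretend the patch came from a tree rooted three directories up.
sed -i 's|file\.txt|a/sub/dir/file.txt|; s|file\.new|a/sub/dir/file.txt|' demo.patch

patch -p3 < demo.patch        # -p3 strips a/sub/dir/ so file.txt is found
grep TWO file.txt             # the hunk applied

patch -p3 -R < demo.patch     # -R applies the same patch in reverse
grep two file.txt             # back to the original contents
```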
158
If a patch almost, but not quite, applies, it can sometimes be fixed by adjusting the source target so that the context matches what the patch is looking for. After the patch is resolved, you can create a new, correct patch file based on the difference between the original target and the resolved target, and discard the original patch file. Alternatively, if the patch file must be maintained exactly as it was received, you can introduce an intermediate patch that takes the source to the adjusted state, allowing both the original source and the acquired patch to be preserved.
Preserving the Source File, Fixing the Patch
Alternatively, you can adjust the patch file itself. This is more complicated because it involves modifying the patch file using the patching syntax. This method is preferred if the patch file is unlikely to be externally updated, and thus a localized version is acceptable. It also removes the need for any intermediate patch, as described in the previous section, or the undesirable situation of a patch to a patch.
Placing Unresolved Rejects into Files
Some rejects require study and so cannot be immediately resolved using the above methods. You should be able to accept the patch hunks that apply cleanly, and preserve a copy of the hunks that do not. These reject hunks can be saved to a file for analysis.
Placing Unresolved Rejects into the Source (Inline)
Alternatively, you may wish to place the rejected hunks directly in the target source file, so that they can be seen within the context in which they do (or should) apply. This reduces the potential clutter of multiple reject files (which might otherwise be lost or forgotten).
Deploying Patches
Wind River suggests that custom patches be deployed within a custom template or layer, thereby leaving the development environment intact. For more information and examples, see chapters 9. Configuring the Kernel, and 10. Adding Packages.
159
During the patch phase, the SRPM package's source is patched by the Wind River Linux Quilt-based patch system. To patch a source file within the SRPM, you must do the following:
1. Create a new top patch file to hold the changes.
2. Save that patch file in the installation or layer.
3. Register the new patch file in the package's spec file.
The following procedure shows how to create a simple patch that modifies two source files in the mktemp package.
Configure a project as follows: 1. For this procedure, use a glibc_small file system:
$ cd prjbuildDir
$ configure --enable-kernel=standard \
  --enable-rootfs=glibc_small \
  --enable-board=common_pc
160
2. Set up your Wind River Linux environment for quilt (the following command lines assume an sh-style shell):
$ export QUILT_PATCHES=wrlinux_quilt_patches
By default, quilt assumes patches are in a subdirectory named patches, so this variable overrides the default and states that the patches subdirectory will be wrlinux_quilt_patches.
$ export QUILT_PC=.pc
This includes the path to quilt and other Wind River-supplied host tools. 3. Add the mktemp package (it is part of the installed development environment) and proceed as far as the patch phase of building the mktemp package:
$ make -C build mktemp.addpkg
$ make -C build mktemp.patch
Note that prjbuildDir/build/mktemp-version/ now contains the files and subdirectories with the unpacked and patched source.
Create a new multi-file patch on the top of quilt's patch stack with the following procedure.
1. Change directory to prjbuildDir/build/mktemp-version/:

$ cd build/mktemp-version

2. In the package's build directory, start a new patch with a descriptive name, for example:
$ quilt new mktemp-version-my_custom.patch
Patch mktemp-version-my_custom.patch is now on top
Your new patch is now one of two in quilt's patches directory (wrlinux_quilt_patches):
$ quilt series
patches_links/installDir/wrlinux-3.0/layers/wrll-wrlinux/dist/mktemp/patches/mktemp-wr-integration.patch
mktemp-version-my_custom.patch
(Refer to 10.3.2 Older Method of Adding SRPMs, p.114 for information about pkg-wr-integration.patch.) Your patch is on top, meaning changes you make now will apply to it:
$ quilt top
mktemp-version-my_custom.patch
161
3. Edit the files you want to include in your patch. In this example you make minor changes to the README file and a source file:
$ quilt edit BUILD/mktemp-version/README
$ quilt edit BUILD/mktemp-version/mktemp.c
NOTE: quilt edit file uses the editor set in your EDITOR environment variable, or vi if none is set. As an alternative to using quilt edit file, you can use quilt add file and then edit the file as you normally would.
For the purposes of this procedure, you could, for example, add some text to the README file and modify the Usage statement in the mktemp.c file. 4. Your current working patch now has two files:
$ quilt files
BUILD/mktemp-version/README
BUILD/mktemp-version/mktemp.c
5. Before you save the patch, confirm that your source changes work by building the package:
$ cd prjbuildDir
$ make -C build mktemp
You can, for example, look at the build/mktemp-version/BUILD/mktemp-version/mktemp executable to see that it contains your patched usage statement:
$ strings build/mktemp-version/BUILD/mktemp-version/mktemp | grep Usage
Usage: %s [-V] | [-dqtu] [-p prefix] [template] [my patch test message]
Repeat these steps until your changes build successfully. 6. Regenerate (refresh) the patch so that it includes your successful changes to the files:
$ cd build/mktemp-version
$ quilt refresh
Refreshed patch mktemp-version-my_custom.patch
1. Create a layer that you will use to store your patches and related files. For example, create a layer called mod_mktemp with the corresponding package directory (mktemp/) and patches/ directory under dist/:
$ mkdir -p $HOME/layers/mod_mktemp/dist/mktemp/patches/
2. Add a makefile to the layer. In this case you can just copy the existing one from the development environment:
$ cp installDir/wrlinux-3.0/layers/wrll-wrlinux/dist/mktemp/Makefile \ $HOME/layers/mod_mktemp/dist/mktemp/
3. Copy your new patch file into the layer's patches directory.
4. Edit the patch file so that paths will match the context of the patch when it is applied:
$ editor $HOME/layers/dist/my_mktemp/patches/mktemp-version-my_custom.patch
162
Remove the initial paths and add a suffix to the original file name for backup. The differences are shown in the following before and after excerpts:

Before:
--- a/BUILD/mktemp-1.5/README
+++ b/BUILD/mktemp-1.5/README
...
--- a/BUILD/mktemp-1.5/mktemp.c
+++ b/BUILD/mktemp-1.5/mktemp.c
After:
--- mktemp-1.5/README.orig
+++ mktemp-1.5/README
...
--- mktemp-1.5/mktemp.c.orig
+++ mktemp-1.5/mktemp.c
You now have a patch to apply to the package source, but you must modify the spec file to include it.
To inform the build system of your new patch, you must modify the package.spec file. This file registers the patches to be applied to the SRPM package. 1. Copy the spec file to your layer:
$ pwd
prjbuildDir/build/mktemp-version/
$ cp SPECS/mktemp.spec $HOME/layers/dist/my_mktemp/
2. List your patch(es) with a Patch number statement and then include them with a %patch macro. Wind River patches start with number 500, so if the spec file already includes Wind River patches you might start your patch numbers with the next available number, for example 504. Alternatively, you may want to start your own numbering sequence, say in the 600s. For this example add the following to the mktemp.spec file somewhere before the %prep section:
Patch600: mktemp-version-my_custom.patch
And add the following after the %prep line and before %build, so that the patch is applied during the prep stage of the spec file:
%patch600 -p1 -b .my_custom
In this typical example, the -p1 parameter means ignore the first directory name from each file name in the patch file, and the -b flag means generate a backup of the file before patching it. Also note that the %patch macro will automatically prefix the package name and version number and suffix the .patch extension so they are not included. 3. Add any other patches and sources that you created for this package to your layer:
$ cp SOURCES/your_patch_or_source_files $HOME/layers/dist/my_mktemp/patches/
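Taken together, the additions to the spec file might look like the following excerpt. This is a sketch: the Patch500 line and the %setup invocation stand in for whatever the package's spec file already contains, and existing patch numbers will vary by package:

```
# Existing Wind River patches start at 500; the custom patch is added
# with its own number and applied inside the %prep section.
Patch500: mktemp-wr-integration.patch
Patch600: mktemp-version-my_custom.patch

%prep
%setup -q
%patch500 -p1
%patch600 -p1 -b .my_custom
```

The -b .my_custom suffix is what later lets you diff the patched files against their backups to confirm the patch applied.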
163
Now that you have the spec file and patches in place, you can test your patch setup. 1. Create a new project that includes the layer you created:
$ configure --enable-kernel=standard --enable-rootfs=glibc_small \
  --enable-board=common_pc --with-layer=$HOME/layers/mod_mktemp/
2. Add the mktemp package to the project (for example, with make -C build mktemp.addpkg). Note that you could also make this step part of your layer with a pkglist.add file. 3. Specify the distclean target to the make command. This will start the package's build directory with a clean slate.
$ pwd
prjbuildDir
$ make -C build mktemp.distclean
NOTE: You do not have to use the package.distclean target the first time you perform this procedure, but you may iterate this procedure until the patch works correctly, and you should use the package.distclean target before each iteration.

4. Specify the patch target. This runs the spec file's prep stage, which applies the registered patches, including your new custom patch.
$ make -C build mktemp.patch
If there are errors, use the error messages to fix the corresponding patch file(s). For example, confirm the spelling of the registered and listed file names, the syntax in the spec file, the patch ordering, and the before/after file name entries in the source patch file. Repeat the package distclean and patch rules until all errors are resolved.

5. By stopping the package build at the patching stage, you can view your patched source to confirm that your patches were applied, for example:
$ editor build/mktemp-version/BUILD/mktemp-version/README
$ editor build/mktemp-version/BUILD/mktemp-version/mktemp.c
or
$ diff -Nur build/mktemp-version/BUILD/mktemp-version/README.my_custom \
    build/mktemp-version/BUILD/mktemp-version/README
You should see that the changes you made to the source files have been applied. 6. You can now finish building your package:
$ make -C build mktemp
Any project configured with your new layer will contain the patches for the mktemp package. To keep from rebuilding the package each time, you could build it once and then copy the mktemp* RPMs to your layer, for example:
$ make -C build mktemp
$ mkdir -p $HOME/layers/mod_mktemp/RPMS/glibc_small/i686/
$ cp prjbuildDir/export/RPMS/i686/mktemp* \
    $HOME/layers/mod_mktemp/RPMS/glibc_small/i686/
- Deliver the kernel in a git tree. In the previous organization, patches were spread across multiple directories, which provided no easy way to tell which directories had been incorporated or applied.
- Deliver Wind River additions on top of the base kernel.org tree in a seamless fashion. This lets you browse file history and see Wind River changes together with previous core kernel.org changes in a continuous fashion.
- Create a history-clean, branched, and tagged git repository for transparent access to the logically divided features that comprise the 3.0 kernel.
NOTE: A history-clean git repository is one in which features are introduced complete and without development history. Without this, you might see, for example, test patches being applied and then reverted, plus 50 patches fixing minor issues for the feature. The history-clean repository is a set of clearly defined chunks that introduce functionality.
- Use a single git repository to contain all the kernel types, features, and BSPs, using branches and tags to give clear boundaries to each.
- Leverage community best practices and workflow around git source management.
The git tree presents a uniform interface to modifications to the upstream kernel. Major features are tagged, branched, and presented in a clean manner. In other words, git integrates Wind River, partner, and other patch sources seamlessly with the kernel.org git history. Wind River Linux kernels are built directly from the previously constructed git repository by checking out the appropriate BSP branch and compiling the kernel. This means that there are no patch failures when using the constructed tree, and:
- Multiple board and kernel combinations are present in a single git tree, significantly streamlining the maintenance of multiple board installations.
- Differences between two BSPs are easily extracted by git, no longer requiring a recursive diff of two independent source trees.
- The kernel source and build directories are kept separate. When coupled with the integration of multiple board configurations, a single install can easily build and maintain multiple kernel variants.
- Because the kernel is built directly from a git repository, any end-user changes to the source files during development are automatically tracked by git and can be committed, exported, and saved with a git-based workflow.
The 3.0 kernel continues to leverage the advantages of having pristine source plus patches and blends it into a git repository. The 3.0 kernel uses git to provide patch and configuration sharing by using common branches as the base of more specific configurations. Using git in the 3.0 kernel means that the git workflow and tools can be leveraged to enhance development, seamlessly integrate with the external developer community, and employ distributed source management.
The high-level kernel build workflow with Wind River Linux version 3.0 is as follows:

1. Clone a fully patched, branched, and annotated git repository into a build directory.
2. Check out the BSP or kernel type branch.
3. Any additional patches and config files are detected and layered onto the appropriate leaf branches in the tree as additional commits. The core patches are not used because they have already been captured as commits, so the tree is often not patched at all.
4. Build the kernel.

These steps are normally performed automatically by the build system. See also The Kernel Lifecycle and Developer Workflow, p.170.
The kernel-cache
Unlike previous releases of Wind River Linux, the focus of the changes to the kernel is not the patches themselves but is instead the constructed git repository. Wind River maintains a repository that contains the patches and encodes the information required to construct the kernel git repository. The checkout of this internal repository that was used to create the pre-constructed git tree is provided as a reference. There should be no need for you to directly manipulate anything in it. The same repository contains all the configuration fragments for the Wind River Linux kernel. This repository is called the kernel-cache, in the sense that it is the
permanent store for the patches used to construct and re-construct the git repository. You do not directly manipulate the kernel-cache; it is captured in the Wind River 3.0 kernel tree itself, and direct use of it is completely optional. The kernel-cache is processed by the build system to generate a meta-series that describes the steps required to create a fully branched, tagged, and history-clean git repository.

While Wind River maintains a master kernel-cache, you can create multiple kernel-caches and use them to construct additional kernel trees. Using a custom kernel-cache allows patches to be shared and included between kernel-caches. This means that an add-on kernel-cache can reference the embedded Wind River cache and modify how it is used to construct a kernel tree. This is an optional, more complex use case for a power user who maintains several BSPs with a shared feature set.
The patches and other sources that are used to create the Wind River Linux source tree are grouped by functionality. These groupings translate to a tagged and branched git tree as shown in Figure 13-1.
Figure 13-1 The Wind River Linux Source Tree
[Figure 13-1 depicts the branch structure: wrs_base branches from kernel.org; standard branches from wrs_base and contains tagged features (a) and (b), with BSP branches bsp1-standard and bsp2-standard at its tip; cgl (and other kernel types) branch from standard, add features c, d, and e, and carry BSP branches bsp1-cgl, bsp2-cgl, and bsp3-cgl.]

The branches shown in Figure 13-1 are as follows:
- wrs_base: branches from kernel.org at a defined point, for example at version 2.6.27.15.
- standard: common kernel functionality for all boards is created in this branch.
- feature (a), (b): tagged and separated features in the standard branch (for example, lttng, yaffs2).
- bsp1,2-standard: BSP branches at the top of standard. Any board-specific changes are contained in these branches.
- cgl: enhanced kernel type that branches at the top of standard; therefore, it inherits standard's features and adds new features.
- bsp1-cgl...bsp3-cgl: BSP branches at the top of the cgl kernel. Although these are separate branches from the -standard BSPs, the same board-specific patches are used to construct the -standard and -cgl branches (and all cgl boards), so the BSPs are identical in board-specific functionality.

The tags represent the completion of feature additions for a given kernel type on the branch of that respective kernel type.
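The branch and tag layout described above can be mirrored with plain git commands in a toy repository. This is a hedged sketch, not the real construction process; the repository name and commit messages are illustrative.

```shell
# Build a toy repository that mirrors the branch layout of Figure 13-1.
git init -q kernel-demo
G="git -C kernel-demo -c user.name=demo -c user.email=demo@example.com"
$G commit -q --allow-empty -m "kernel.org baseline"
$G checkout -q -b wrs_base                   # branch point from kernel.org
$G checkout -q -b standard                   # common kernel functionality
$G commit -q --allow-empty -m "feature (a)"  # a feature on the standard branch
$G tag standard-features                     # tag marks feature completion
$G checkout -q -b bsp1-standard              # BSP leaf branch on top of standard
$G branch --list
```

Because bsp1-standard branches from the tip of standard, it inherits every feature commit below it, which is exactly the inheritance relationship the kernel tree relies on.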
Types of Commands
There are a few broad categories for the types of commands available:
Determine what has changed and look at the patches: git whatchanged, git branch, git checkout
Apply patches to the git tree: git fetch, git pull, git am, kgit import, kgit meta, git apply, git rebase, guilt push, guilt pop, guilt refresh
Send changes for upstream inclusion: git commit, kgit export, git request-pull, git format-patch, git send-email
Note that, in addition, there are some build targets that you can use to manipulate the meta series:
- linux.rescc: regenerate the meta-series used to construct a tree.
- linux.reconfig: reconfigure the kernel from the config file fragments.
Tools Overview
A specific set of tools can be used to work with the Wind River Linux 3.0 kernel. These are summarized in this section.
NOTE: You can add prjbuildDir/host-cross/bin to your path to use the tools. The tools are linked from the host tools layer, for example:
prjbuildDir/host-cross/bin/git -> /home/user/WindRiver/wrlinux-3.0/layers/wrll-host-tools/host-tools/bin/git
git
git and the many commands that compose the toolset are used to manage the low-level details of the kernel git tree. git is used in a standard manner and Wind River follows the best practices of the kernel community.
guilt
Use guilt to track the patches that created the kernel git tree. guilt is a community add-on to git and adds the ability to manage a series of patches directly in a git repository. guilt provides the ability to maintain git branches in a manner similar to quilt and raw patches. The use of guilt gives the ability to manipulate commits as units or building blocks and to keep them contained and refreshed without using git internals directly.
scc
The patch management system of Wind River Linux version 2.0 (called smudge) has evolved to meet the demands of creating and managing the kernel in a git repository. The engine that meets those requirements in Wind River Linux version 3.0 is the series config compiler, called scc.

scc unifies the information required to fully describe a kernel's features. It processes feature descriptions (.scc files) that contain patches, branching, tagging, and other manipulations. In its most basic use case, you can think of an scc file as equivalent to a patches.list file, or the series file of quilt.

scc works in a modular manner to compile each individual feature and link them into a script. When run, that script produces a meta-series that describes everything required to construct a git repository. The construction and generation of a meta-series is one phase in creating a kernel git repository. A secondary phase interprets the meta-series to build the tree. The kgit tools (described below) process the meta-series and construct the 3.0 kernel git repository.

If you are just using the pre-generated git tree as is, and only layering your own changes on top of it, you will likely never use the full functionality of scc that is deployed during a complete tree generation. See 13.5 Kernel Patching with scc, p.180 for more on scc.
The kern-tools are a set of scripts written by Wind River to create and manipulate the kernel git repository in a standard way. They provide the ability to import, export, and manage the commits that comprise the 3.0 kernel tree. The kern-tools are:
- kgit: dispatches to sub-kgit commands. Also used to identify the type of a source repository.
- kgit-clean: checks tree consistency and can optionally remove old branches.
- kgit-import: imports patches and features in many formats. Interfaces with git, guilt, and the Wind River Linux git tree structure.
- kgit-publish: takes a Wind River Linux kernel git tree and converts it into a tree that can be shared or used for build system integration.
- kgit-scc: wrapper around scc, used for .scc file searching and for tree construction. Not normally run manually; part of the kernel build system.
- kgit-checkpoint: converts the files that track the Wind River Linux kernel repository's internal structure into a commit. When the checkpoint is restored, the tree is available for development.
- kgit-config: saves and reads configuration values specific to the Wind River Linux kernel git repository.
- kgit-init: initializes the base of a Wind River Linux kernel git repository. Not normally run manually; part of the kernel build system.
- kgit-pull: wrapper around git pull.
- kgit-classify: manipulates the feature descriptions that are used to construct a Wind River Linux kernel git tree.
- kgit-export: exports configuration and patches from the Wind River Linux kernel git tree.
- kgit-meta: interprets a meta-series to construct a set of branches, patches, and tags in a Wind River kernel git repository.
- kgit-rebase: produces a rebase report that indicates which patches should be propagated between branches.
The Kernel Lifecycle and Developer Workflow Step 1: The Kernel Source
The majority of the patching of the 3.0 kernel is already completed for you, and only additions to the default Wind River patches are performed on the fly. This is due to the fact that the 3.0 kernel git repository is constructed by applying patches on top of the kernel.org base and effectively capturing the patches as git commits. The patches and configuration files that were used to create the repository are captured within the repository itself. They can be found in the kernel source tree under the wrs/patches directory and are maintained on a per-branch basis.
For example, to see a list of the patches that make up the standard branch, look under linux/wrs/patches/standard/links/path_to_patches. Note that this is not the best way to see what changes are in a particular branch of the constructed tree; git whatchanged and other git commands are more effective and are described below.

It is still useful to describe the processing required to create the constructed repository, since that leads to the branching strategy and is the same process required when adding new features to the kernel. As discussed in The kernel-cache, p.166, the source for the kernel git repository is called the kernel-cache. The kernel-cache is processed by scc to generate the meta-series used to construct the tree. The patches and configuration within the kernel-cache are organized in a similar manner to the organization of Wind River Linux 2.0, with the significant difference of integrated kernel configuration and patching.
NOTE: Although the kernel-cache can be found within the kernel tree itself, it is mainly informational and you should rarely (if ever) directly modify it.
Wind River organizes kernel modifications into logical categories to ease maintenance and visibility of changes. The actual on-disk organization is not important, but the representation as branches, tags, and commits in the constructed tree is important.
Construction of the git Repository
The Wind River kernel git repository is constructed by processing feature descriptions in a top-down manner. At the top of the tree are the leaf nodes, which represent features that do not have sub-branches and, when checked out, can be compiled into a valid kernel. In general, BSPs form the leaf nodes of the kernel tree. When constructing a tree, the leaf nodes are found, processed by scc, and used to construct the git repository. Leaf nodes are feature descriptions (in .scc files) that include sub-features and configuration data and are the entry point for .scc processing.

The high-level phases of tree construction are:

1. scc compiles the leaf nodes. This means that all included kernel features are compiled, kernel configuration data is logged, and scripts are created to represent each leaf node. Transforms, patches, and conditionals are processed and compiled into the final script for later execution.
2. The leaf node scripts are executed to produce a set of meta-series. Patch transforms and substitutions are performed at this point.
3. Each meta-series is interpreted by kgit-meta. At this point branches are created, patches are converted into git commits, and tags are applied to the tree. Kernel configuration data is copied into the kernel tree structure for later use.
4. The tree is checkpointed and published. Checkpointing captures the state of the tree, the patch-to-commit mappings, and anything else required to build or manipulate the tree. Publishing makes the tree available for use by the build system.
The result is the fully branched and tagged kernel git tree that you use. The layout and organization used to construct the base tree is not meant to be modified by you; it is meant to be extended, as described in the next step. The published, fully patched kernel git repository is placed as a bare clone in wrll-linux-2.6.27/git/default_kernel and is processed automatically by the kernel build system.
NOTE: A bare clone is one in which only the git repository data is present, not the actual files. So you will not see any source, although it is internally represented by git.

Step 2: The Kernel Tree Extension
At this point, any templates, profile additions, or other command-line specified features are processed and used to extend the kernel git tree. (Kernel tree extension is the equivalent of the version 2.0 kernel patching phase.) Tree extension is done by processing the kernel feature descriptions that were requested, comparing them against those that were used to construct the kernel git tree, and then applying any extensions to those features to the existing tree. The feature descriptions used to build the kernel tree are stored in linux/wrs/cfg/kernel-cache.

The first part of this phase creates a local clone of the default_kernel repository (mentioned in Step 1) in the local kernel source directory. In this phase, scc is invoked in a lightweight manner to re-compile the existing kernel feature descriptions and any add-on kernel features that have been passed with templates, profiles, or on the command line. Once the existing features and add-on features are linked into the executable, a new meta-series is generated. The new meta-series is processed to detect differences from the constructed tree.

Extensions are normally kernel configuration changes, or patches added to the end of the BSP branch. This matches standard git workflow, as git commits are added in chronological order. The content used to construct the Wind River common branches, such as wrs_base or standard, should not normally be modified. If it is modified, any sub-branches would have to be rebased to pick up the changes in the common branch.
NOTE: Patches (in the form of new commits) are layered on top of a parent commit.
This parent commit represents the state of the whole tree at the time of the new child commit. The context and content of a patch depends on the content of the associated parent files. If you rebase (that is, try to apply your patch to a different parent commit), then you will have to fix any context or content issues that may arise. The rebase is a manual process, but is detected during tree extension and reported to you. The reason this rebase is required is that any changes to the common branch will be after the branch point for the BSP node. If the BSP is to see the change, it must have its branch point updated to the new end-of-branch or new commit ID. At this point the tree is ready to build.
Step 3: Configuring and Building the Kernel
To configure and build a kernel, the proper BSP branch is checked out and used. The branches for valid BSPs follow the naming convention of bsp-kernel_type. Only valid board and kernel type combinations are captured in the constructed tree. This is done automatically by the kernel build system and nothing needs to be done by the developer. See 9. Configuring the Kernel for details on how the kernel is configured. Note that the kernel source and build directories are separated. The linux source is in linux/ while the build directory is linux-board-kernel-build/. This means that switching to a different board or kernel combination can be accommodated in a single build directory.
Step 4: Kernel Development
You can perform iterative development on the kernel tree once it has been configured. Wind River does not enforce a particular development style and any workflow may be adopted. Due to the tight integration with git, Wind River recommends kernel.org-style workflows. Development should be done on the BSP branch starting as follows:
$ git checkout bsp-kernel
To see how Wind River categorized the patches for a particular feature, use:
$ kgit classify ls
$ kgit classify cat feature_name
Step 5: Saving Patches
Be careful to export any changes you make to the kernel in the board build directory (prjbuildDir/build), because the kernel source directory in a board build is a clone of a master kernel repository and the entire board build is transient by nature. It will be lost, for example, with a make linux.distclean. There are many ways to ensure that development is not lost:
git format-patch Commit changes to the local tree and export them with git format-patch as follows:
$ git checkout branch
On the next tree construction you can manually apply the changes with git am, or you can add them to a custom kernel-cache or template to have them automatically applied.
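The commit-and-export cycle described above can be sketched with generic git commands. This is a hedged sketch: the repository name, file name, and output directory are illustrative, not part of the Wind River build layout.

```shell
# Sketch: commit a local change and export it as a mailbox-style patch file.
git init -q demo-tree
G="git -C demo-tree -c user.name=demo -c user.email=demo@example.com"
$G commit -q --allow-empty -m "baseline"
echo "fix" > demo-tree/driver.c      # an illustrative local source change
$G add driver.c
$G commit -q -m "demo: fix driver"
# Export the newest commit as a patch file into demo-tree/exported/:
$G format-patch -1 -o exported HEAD
ls demo-tree/exported/
```

The resulting file can later be re-applied with git am, or carried in a custom kernel-cache or template so it is applied automatically.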
git push Commit changes to the local tree and export them with git push as follows:
$ git push ssh://userid@upstream/repository mybranch:remote_branch
NOTE: If you rewrite history, or reconstruct or rebase git commits, and the results use the same branch names or tags, you have what is called a non-fast-forward situation. This means that the two commit trees do not share the same structure, and the destination must be rewritten to perform an update. Depending on the source repository, there can be problems performing pushes if the tree is constructed in a non-fast-forward manner.
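A non-fast-forward situation can be reproduced with plain git in a few commands. This is a hedged, self-contained sketch; the repository names are illustrative.

```shell
# Rewriting an already-pushed commit makes the next push of the same
# branch fail as a non-fast-forward update.
git init -q --bare shared.git
git clone -q shared.git work 2>/dev/null
G="git -C work -c user.name=demo -c user.email=demo@example.com"
$G commit -q --allow-empty -m "feature, as first published"
$G push -q origin HEAD                    # first publish succeeds
$G commit -q --amend --allow-empty -m "feature, history rewritten"
$G push -q origin HEAD 2>/dev/null \
  || echo "push rejected: non-fast-forward"
```

The second push is rejected because the amended commit does not descend from the commit the remote already has; the remote branch would have to be rewritten (forced) to accept it.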
kgit and export to a cache Commit changes to the local repository, classify them, and directly export them to a custom kernel cache.
$ kgit import -t treeish start ... end_commit
Note the start patch and end patch names and optionally classify the changes.
$ kgit export -p start ... end_patch dir
make Again, commit any changes, and ensure they are in the guilt series and classified.
$ make linux.export export_dir=path_to_layer
13.4.3 Examples
The following sections demonstrate some ways to use git and associated tools with Wind River Linux.
NOTE: The following examples assume that prjbuildDir/host-cross/bin/ is in your path.
scc has been designed to work in a top-down fashion to provide explicit control of kernel patching and configuration. It is best to understand the kernel implementation and use the provided tools, rather than bolting-on patches and configuration files from external templates.
That said, there is a way to add patches and configuration files from external templates while keeping the mechanism for explicit control.
Technique #1: Using a Template
1. Create a feature template directory in your layer, for example templates/feature/my_feature/linux.

2. In that directory, place your feature description, your patch, and configuration files (if required):
$ ls templates/feature/my_feature/linux
version.patch my_feature.scc my_feature.cfg
The .scc file describes the patches, configuration files, and where in the patch order the feature should be inserted:
patch version.patch kconf non-hardware my_feature.cfg
3. Configure your build with the new template by supplying the --with-template=features/my_feature option to the configure command line.

4. Build the kernel:
$ make linux
If you do not require a full template, you can place a .scc file at the top of the build (prjbuildDir), along with configuration files and patches. The build system will pick up the .scc file and add it to the patch list automatically.
Technique #2: kernel-cache with the BSP Name Duplicated
1. At the top of a layer, create a kernel cache. The build system recognizes any directory named kernel-*-cache as a kernel cache. For example:
$ cd my_layer
$ mkdir kernel-temp-cache
2. Create the .patch, .cfg, and .scc files in kernel-temp-cache/my_feat instead of in templates/feature/my_feature/linux as you did in Technique #1: Using a Template, p.175.

3. Configure the build with the feature added to the kernel type by passing --with-kernel=standard+my_feat/my_feature.scc to the configure command line.

4. Build the kernel:
$ make linux
5. If your feature name overrides the name of a similar feature in the core kernel-cache, you can re-use the original version by including it. This allows a BSP to be overridden in a kernel-cache while continuing to include the original BSP configuration and patches.
This is similar to Duplicating Other Template Names, p.69, except that it is done with .scc files instead of templates. You create a feature.scc and include in it the statement include feature.scc, where feature is the same feature name as one in the kernel-cache.
Technique #3: git
Then:
$ git-am patch
or
$ kgit-import -t patch patch
$ cd ..
$ make linux
Patch Management
The constructed kernel trees are composed of branches, each of which was constructed from a distinct and separate patch series. To determine which patches were used to construct a branch, do the following:
$ make linux.devprep
$ cd linux
$ git checkout branch   # for example, standard
$ guilt applied
Typically, it is better to just use the commits to look at what built a branch:
$ git whatchanged branch
NOTE: The patch can be exported with kgit export -p top outdir at this point.
$ guilt push -a
You have now re-written the history and changed the commit IDs for all the patches that make up the series. No dependent branch will see those changes, since they have branched off the old commit ID of the patch you just refreshed. To make changes visible to other branches, you must propagate the change:
$ git checkout child_branch $ guilt rebase parent_branch
This removes all patches and commits that are currently applied, creates a branch at the commit ID, and then re-applies all patches. Continue this up the chain to the leaf branch.
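The propagation described above can be illustrated with a plain-git equivalent of rebasing a child branch onto an updated parent. This is a hedged sketch; the repository, branch, and file names are illustrative, and real propagation in the kernel tree uses guilt as shown above.

```shell
# A parent branch moves forward; the child branch is replayed on the new tip.
git init -q tree
G="git -C tree -c user.name=demo -c user.email=demo@example.com"
echo base > tree/file.txt
$G add file.txt && $G commit -q -m "base"
$G branch parent                          # parent branch at the base commit
$G checkout -q -b child parent
echo c > tree/child.txt
$G add child.txt && $G commit -q -m "child patch"
$G checkout -q parent
echo p > tree/parent.txt
$G add parent.txt && $G commit -q -m "parent update"   # parent moves forward
$G rebase -q parent child                 # replay child on the new parent tip
$G log --oneline --reverse child
```

After the rebase, the child branch contains the parent's new commit followed by the replayed child commit, which is exactly the end state the guilt-based procedure produces for each branch up the chain.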
BSP Example
The following example illustrates the bootstrap of a BSP. Perform these steps before each of the following techniques:

1. Create the required board template files to configure a build with the new BSP, for example, mylayer/templates/board/my_bsp/config.sh. (See the Wind River Linux BSP Developer's Guide for more information on BSP files.)

2. Configure a build.

3. Clone the default_kernel tree:
$ make linux.unpack
$ make linux.devprep
Technique #1: git
Add a header:
$ guilt header -e
or
$ git apply patch
$ git add files
$ git commit -s
or
$ kgit import -t mbox mbox
$ kgit import -t dir path_to_directory_with_series
$ kgit import -t patch patch
Link the configuration to the patches, and add this to the category:
scc_leaf ktypes/standard my_bsp-standard kconf hardware my_bsp.cfg
Export:
$ kgit export -v -b my_bsp-standard -x links \
    -p all -c my_bsp path_to_layer
Test build:
$ cd ..
$ make linux TARGET_BOARD=my_bsp kprofile=my_bsp use_current_branch=t
Assuming the patches have been exported to the correct location, future builds will now find the board, apply the patches to the base tree, and make the relevant branches and structures. In addition, the special build options will no longer be required.
Technique #2: kernel-cache
1. Create a board template as in Technique #1: git, p.177.

2. Create a kernel-name-cache in a layer.

3. Manually create the directory to hold the .scc and .cfg files for the BSP (see Technique #1: git, p.177 for the example).

4. Add patches to the BSP directory, and add them to the .scc file with the patch directive.

5. Run the linux.patch target:
$ make linux.patch
Although this technique seems easier, it does not leverage the existing kernel.org workflow, and it requires patches to be applied and resolved in place and then exported before work continues. The first technique allows a BSP to be started on an existing tree and worked on in place.
Patch Merge
Multiple patches:
$ git am mbox
$ kgit import -t dir dir
If you use kgit import -t dir, you can use a patch resolution cycle such as the following to locate and resolve rejects:
$ wiggle --replace path_to_file path_to_reject
$ guilt refresh
(wiggle helps resolve patch failures by using word-wise comparisons; see prjbuildDir/host-cross/share/man/man1/wiggle.1.) Or use manual resolution:
$ git add files
$ git commit -s
or
$ git apply --reject .dotest/0001
$ git add files
$ git am --resolved
or
$ git am --continue
4. Export patches:
$ kgit export -p first_patch...last_patch dir
or
$ git format-patch last_commit^ -o dir
or
$ git push ..
You can also import changes with git pull, git fetch, rebase, and so on. In that case, use standard git practices for resolving conflicts, with merge commits recording the results.
Sharing a Kernel
Once a tree has been constructed, built, and the changes deemed acceptable, you can reproduce the build without exporting the patches or reconstructing the tree:
$ make linux
$ kgit publish -a linux linux.published
You can now place the output directory linux.published in the kernel layer (wrll-linux-2.6.27/git) as the default_kernel repository or you can push it to a remote server. Once pushed, subsequent calls to make linux check out the previously constructed branch and build the kernel.
NOTE: You can push changes directly to remote trees without publishing.
Unlike other packages in the build system, the kernel is not single-purpose or targeted at a particular piece of hardware. It must perform the same tasks and offer the same APIs across many architectures and different pieces of hardware.

The key to managing feature-based patching of the Linux kernel is to remove both the distributed control of the patches (subdirectory-based patches.list files) and the hand editing of patch files. Replacing these two characteristics with script-based patch list generation and a method to control and describe the desired patches with a top-down approach eases the management of kernel patching complexity. Additionally, a direct mapping between BSPs and profiles can easily be made, increasing maintainability. The scc script has been implemented to control the process of patch list generation and feature-based patching.
In the simplest example, scc files look very similar to the patches.list of earlier releases. One notable difference is that the metadata concerning the license, source, and reviewers of the patch is contained inside the patch itself, not in the scc file. This information can appear in the scc file, but only as a secondary source of information.
scc Facilities
- Top-down, feature-based control of patches, which allows a feature- and profile-based global view of functionality and compatibility to dictate which patches should be applied. It also allows feature- and architecture-specific patch context modifications to be created by each individual feature.
- Feature inheritance and shared patches: each feature may explicitly include other features and inherit their patches. Each feature can then modify the inherited patch list and substitute slightly different patches to work in its context. This allows the sharing and reuse of patches by changing only the minimum amount and context of existing feature patches.
- Allows upstream, feature-based patches to be logically grouped and used in many different patch stacks. This allows isolation and combination testing of features and allows a single set of patches to be used in multiple platforms.
- Modifications to a feature patch set are contained in the modifying top-level feature's directory, leaving the original patch in its pristine form. These are called patch context mods and can be architecture-, platform-, or feature-based. Patch context mods are identified by the name of the original patch on which they are based, plus a suffix of the feature name that required the modification of the original patch.
- Associates kernel configuration directly with the patches that comprise a kernel feature.
- Direct mapping of published kernel feature compatibility profiles to named patch stacks.
scc Files
scc files are small, sourced shell scripts. Not all shell features should be used in these scripts, and in particular no output should be generated, because the script is interpreted by the calling framework. You can use conditionals and any other shell commands, but you should be careful to use only basic, standard commands. A feature script may denote where it should be located in the link order. This is only used by scripts that are not being included by a parent or entry point script and that you wish to be executed. The available sections are INIT, MAIN, and FINAL. Denote the section names in a .scc file as follows:
# scc.section section_name
Any variable passed to scc with the -D option is available in individual feature scripts. To see what variables are available, locate the invocation of scc and search for defines.
dir: Changes the current working patch directory; subsequent calls to patch use this as their base directory.

patch: Outputs a patch to be included in the feature's patch set. Only the name of the patch is supplied; the path is calculated from the currently set patch directory.

patch_trigger: Indicates that an action should be triggered and performed on a patch. The syntax is:
patch_trigger condition action target_patch_name
arch: a comma-separated list of architectures, or all.

plat: a comma-separated platform list, or all.
exclude: Use only in exceptional situations where a patch cannot be applied for certain reasons (architecture or platform). When the trigger is satisfied, the patch will be removed from the patch list.

include: Use to include a patch only for a specific trigger. Like exclude, this should only be used when necessary. It takes one argument: the patch to include.

transform: Modifies the patches in the patch set based on a sed substitution format: /match/replace/. Multiple transforms can be applied in a single feature or across many features.

ctx_mod: Indicates that a base patch has context modifications due to different patch stacks using a common feature. The base patch is almost always the pristine upstream patch, and the ctx_mods are context changes to allow the patch to apply in multiple stacks. This takes one argument: the base_patch name to modify as it appears in the common feature. The ctx_mod patch is found in the directory of the feature adding the trigger and must have the name dictated by the condition indicated in the trigger. If platforms or architectures have been indicated in the conditional, the patch takes the form base_patch.archs, where archs is an underscore-separated list of architectures matching the comma-separated list used in the conditional. If all is the arch or plat trigger, the context patch takes the form base_patch.feature_with_the_trigger. A context patch should be version controlled, but not hand edited, and regenerated when required.
include: Indicates that a particular feature should be included and processed in order. An optional after parameter following feature_name indicates that the normal processing order should not be used and that the feature must instead be included after feature feature_name. Include paths are relative to the root of the directories passed with -I.
Note that changing the default order of large feature stacks by forcing a different order with after can require significant effort to rebase the patches of the features if they touch the same source files.
exclude: Indicates that a particular feature should not be included even if an include directive is found. The exclude must be issued before the include is processed.

set_kernel_version: Takes a new kernel version as its argument. This allows a feature to change the effective kernel version and allows other features to test this value with the KERNEL_VERSION variable.

check_board: Tests whether a particular board is being patched. This allows a feature to change the patches on a board-specific basis. Logical actions should be based on the return value $?. A 1 indicates that the current board matches the test value; a 0 means that a different board is being patched.
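Putting several of these commands together, a hypothetical feature script might look like the following sketch. All feature, board, and patch names here are illustrative, not taken from an actual layout:

```shell
# scc.section MAIN

# pull in a shared feature and inherit its patches
include features/myfeature/myfeature.scc

# patches local to this feature, relative to the patch directory set by dir
dir features/mybsp
patch 0001-mybsp-setup.patch

# board-specific adjustment; check_board sets $? to 1 on a match
check_board my_board
if [ $? -eq 1 ]; then
    patch 0002-my_board-quirk.patch
fi
```

Because the script is sourced by the calling framework, the conditional uses only basic shell constructs and generates no output of its own.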
The following presents some examples of the use of scc. Note that you can get detailed help with scc --help=scc.
Specifying a Leaf Node
This is a BSP branch with no child branches, hence is a leaf on the tree (with comments):
# these are optional, but allow standalone tree construction
define WRS_BOARD name
define WRS_KERNEL kern_type
define WRS_ARCH arch

scc_leaf ktypes/standard common_pc-standard
#             ^                ^
#             +-- parent       +-- branch name

include common_pc.scc
#  ^
#  +--- include features shared across all kernel types for this BSP
This file reflects common_pc-standard.scc, that is, the common_pc BSP with kernel type standard.
Specifying a Normal Node
Specifying Transforms
The following changes the order of pending includes: if the passed feature is detected, the first feature is included after it:
include features/rt/rt.scc after features/kgdb/kgdb
The following prevents the named feature from ever being included:
exclude features/dynamic_ftrace/dynamic_ftrace.scc
The following changes the named patches in the series into patch_name.patch.feature_name, where the substituted patch is in this directory:
patch_trigger arch:all ctx_mod dynamic_printk.patch
patch_trigger arch:all ctx_mod 0001-Implement-futex-macros-for-ARM.patch
or patch order within a feature, or feature order, then it will trigger an auto branch from the point of the last feature it shares in common with the pre-generated branch.
PART III
14
Simulated Deployment with QEMU
14.1 Introduction 187 14.2 Deployment 187 14.3 Configuration 189 14.4 QEMU Example: Deploying initramfs 193
14.1 Introduction
QEMU is a processor simulator for supported boards. (Refer to your Release Notes for a list of supported boards.) With QEMU simulated deployment, no actual target boards are required, and there are no networking preliminaries. QEMU is compatible with Workbench in both user mode and kernel mode. For the supported boards, QEMU deployment offers a suitable environment for application development and architecture-level validation. User-space and kernel binaries are compatible with the real hardware.
Internals
When started, QEMU runs in a pseudo-root environment and starts the NFS server with alternate RPC ports. The simulated target is given a hard-coded IP address of 10.0.2.15, and localhost is visible from the simulated target as 10.0.2.2.
14.2 Deployment
The Getting Started guide provides an example of how to deploy a QEMU target for user mode debugging. You can also use QEMU to perform kernel mode debugging (KGDB) of supported Wind River Linux targets as described in this section.
Once you have built a platform project for one of the QEMU-supported boards and then built the file system (make fs), you can start an instance of QEMU for that target. Note that after a make fs, the pre-built kernel is automatically copied to the project build directory's export subdirectory. The QEMU simulator loads and executes the kernel found within the export subdirectory, and NFS-mounts the export/dist subdirectory as its root file system.

The following example assumes you have built a platform project for one of the supported boards (the example uses the ARM Versatile AB-926EJS platform). When you have created the platform project, you can start QEMU from the command line, load the KGDB kernel module, and then connect the debugger from Workbench as shown in the following procedure. In this example, the KGDBOE agent was set to start up automatically when you boot the simulated target; otherwise you are required to start this agent manually.

1. Enter make start-target in your project build directory. For example:
$ cd /home/user/WindRiver/workdir/arm_versatile $ make start-target
2. If you have built a platform with a small file system, just press ENTER; otherwise provide the user name root and password root to log in.

3. At the root prompt, load the KGDB Ethernet (kgdboe) module as follows:
# modprobe kgdboe kgdboe=@/,@10.0.2.2/
Note that in the configuration information given above, the usual host ports are mapped to new port numbers so that you can access the features through the new port numbers. For example, KGDB is usually accessed at port 6443, but you used port 4445 when you connected in the previous procedure. Telnet port 23 has been mapped to port 4441, and ssh port 22 has been mapped to port 4440. You can access the running simulation through those ports with the appropriate tools. For example, from another terminal window on the same host, you could use ssh to log in to the running simulation with the following command:
$ ssh -p 4440 root@localhost
From Workbench
You can now use Workbench to connect the debugger to the QEMU target.

1. In Workbench, right-click in the Remote Systems view and select New > Connection, then expand the Wind River Linux folder and select Wind River Linux KGDB Connection. Click Next.
2. Select Linux KGDB via Ethernet and click Next.

3. For Remote Host Settings, enter the name localhost and change the Port to 4445. Click Next.

4. For Kernel image, browse to the location of your exported kernel image that contains symbols. This is the vmlinux-symbols file and is contained in the export/ subdirectory below your project directory. For example, your path might look something like this:
/home/user/workdir/arm_versatile/export/arm_versatile_926ejs-vmlinux-symbols-WR2-0ap_standard
Click OK and click Next twice until you are at the Object Path Mappings screen.

5. Click Add on the Object Path Mappings screen to add the path to your exported file system. Leave the target path blank and browse to the export/dist host path under your project build directory. For example, it might be:
/home/user/workdir/arm_versatile/export/dist
Click OK and then click Finish.

6. You now have a WRLinuxKGDB_localhost target connection in your Remote Systems view. Select it and click the green connection icon. After a few moments the connection is made. If you have identified the correct symbols file in step 4, the kgdb.c source should be displayed in the editor. Expand the debug context in the Debug View and you will see that System Context is Stopped. The terminal window where you launched QEMU will be frozen. Select the operating system in the Debug View and click the green Resume button to continue system processing. You can now continue to debug the QEMU target with Workbench. For more information on kernel mode debugging, refer to Wind River Workbench by Example, Linux Version.

7. To disconnect, click the red Disconnect icon in the Remote Systems view. You can stop the QEMU simulator by entering CTRL-A x in the terminal window.
14.3 Configuration
At a terminal, and within the project build directory, you may enter an interactive menu to change default QEMU configurations by entering:
$ make config-target
The menu, with its numbered default configuration values, looks similar to the following:
===QEMU and/or User NFS Configuration===
 1: TARGET_QEMU_BOOT_TYPE=usernfs
 2: NFS_EXPORT_DIR=/home/user/WindRiver/workspace/common_pc_prj
 3: NFS_MOUNTPROG=21111
 4: NFS_NFSPROG=11111
 5: NFS_PORT=3049
 6: TARGET_QEMU_BIN=qemu
 7: TARGET_QEMU_AUTO_IP=yes
 8: TARGET_QEMU_USE_STDIO=yes
 9: TARGET_QEMU_BOOT_CONSOLE=ttyS0
10: TARGET_QEMU_GRAPHICS=no
11: TARGET_QEMU_KEYBOARD=en-us
12: TARGET_QEMU_PROXY_PORT=4442
13: TARGET_QEMU_PROXY_LISTEN_PORT=4446
14: TARGET_QEMU_DEBUG_PORT=1234
15: TARGET_QEMU_AGENT_RPORT=udp:4444::17185
16: TARGET_QEMU_KGDB_RPORT=udp:4445::6443
17: TARGET_QEMU_TELNET_RPORT=tcp:4441::23
18: TARGET_QEMU_SSH_RPORT=tcp:4440::22
19: TARGET_QEMU_MEMSCOPE_RPORT=tcp:5698::5698
20: TARGET_QEMU_PROFILESCOPE_RPORT=tcp:5678::5678
21: TARGET_QEMU_KERNEL=bzImage
22: TARGET_QEMU_INITRD=
23: TARGET_QEMU_HARD_DISK=
24: TARGET_QEMU_CDROM=
25: TARGET_QEMU_BOOT_DEVICE=
26: TARGET_QEMU_KERNEL_OPTS=
27: TARGET_QEMU_OPTS=
Enter number to change (q quit)(s save):
There should not normally be a need to change these default configurations. The CTRL+A c command allows you to enter and exit the QEMU monitor, which provides commands from within the simulation. For example:
root@localhost:/root> CTRL+A c
(qemu) help
help|? [cmd] -- show the help
commit device|all -- commit changes to the disk images (if -snapshot is used) or backing files
info subcommand -- show various information about the system state
q|quit -- quit the emulator
.
.
.
(qemu) CTRL+A c
root@localhost:/root>
You can quit the simulation from the (qemu) prompt with quit, or from the simulator command prompt (root@localhost:/root>) with CTRL+A x.
Use make start-target TOPTS=option on the command line to pass various options when starting a simulation. Use -h to display the available options:
$ make start-target TOPTS="-h"
Usage ./scripts/config-target.pl [Options] <command>
Options:
  -c                  Use text console
  -gc                 Use graphics console
  -p                  Use telnet proxy as console
  -i #                Increment the remote port offsets by #, typically
                      used when starting more than one target
  -d                  Extra script debug output
  -w                  Wait until debugger attaches to QEMU
  -x                  Use an external console defined by
                      TARGET_VIRT_EXTERNAL_CONSOLE and go into the background
  -o                  Output the target start command which you could use
                      to start a debugger with
  -m #                Number of megs of RAM to use on the target
  -su                 Use "su -c" instead of "sudo" for root access
  -t                  Use tuntap
  -cd <iso_file>      Boot from CD (QEMU Only)
  -disk <disk_image>  Boot kernel with disk image
  -cow <cow_file>     COW file for (UML Only)
  -no-kqemu           Do not use the kqemu accelerator
Commands:
  start               Start target, NFS server and proxy (if needed)
  stop                Stop the target and NFS server
  nfs-start           Start the NFS server
  nfs-stop            Stop the NFS server
  net-start           Start the network server (TUN/TAP)
  net-stop            Stop the network server (TUN/TAP)
  kqemu-start         Load the KQEMU kernel module
  kqemu-stop          Unload the KQEMU kernel module
  allstop             Stop target, NFS server and proxy
  config              Display or change the default configuration
For example, if another QEMU session is running on your host, you can start a second QEMU session by choosing different ports. The -i option does this by automatically incrementing port numbers by the specified amount:
$ make start-target TOPTS="-i 2"
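The effect of the -i offset on the remapped host-side ports can be sketched as follows. The port numbers are the defaults shown in the make config-target menu above, and the arithmetic is simply port + offset:

```shell
# Compute the host-side ports a second QEMU instance would use with -i 2.
# Defaults (from the configuration menu): KGDB 4445, telnet 4441, ssh 4440.
offset=2
for entry in kgdb:4445 telnet:4441 ssh:4440; do
  name=${entry%%:*}    # text before the colon
  port=${entry##*:}    # text after the colon
  echo "$name -> $((port + offset))"
done
```

With an offset of 2, you would therefore connect the Workbench KGDB debugger to port 4447 and ssh to port 4442 to reach the second instance.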
(See 19. Stand-Alone Deployment to Disk for more on creating and booting .iso images.) You can also boot a hard disk image:
$ make start-target TOPTS="-disk Hard_Disk_Image"
TUN and TAP are virtual network kernel drivers used to implement network devices that are supported entirely in software, making them ideal for use with a QEMU deployment. TAP, short for network tap, simulates an Ethernet device and works with layer 2 packets such as Ethernet frames. TUN, short for network tunnel, simulates a network layer device. It works with layer 3 packets, such as IP packets. Once enabled, TAP creates a network bridge while TUN provides the routing. You can use TUN/TAP networking to configure a network on your host that connects to the QEMU target simulation. If you wish to connect two or more QEMU simulations for testing and debugging, TUN/TAP lets you specify each simulation's networking parameters.
191
Enabling TUN/TAP from Workbench

NOTE: Configuring TUN/TAP networking on the host requires root privileges. You can start the emulation as the root user, or start it as another user, in which case you will be prompted for the root password.

If you used Workbench to create the QEMU target connection, TUN/TAP is enabled by default. You can change the default settings when you create a new target connection, or from the Target Connection Properties dialog. The default settings include:
default setting is auto, but you may specify a number for the tap. For example, tap0, tap1, and so on.
TARGET_TAP_UID: The user ID name of the tap device. The default setting is auto.
making changes to TAP settings. The default setting is sudo, but su -c is also acceptable.
NOTE: You must configure the TUN/TAP interface once for each system boot.

To access these settings in the New Target Wizard:
1. In Workbench, select the New Connection button in the Remote Systems window to launch the New Connection Wizard.

2. Select the connection type. Since we are accessing TUN/TAP settings for a QEMU deployment, choose Wind River QEMU Connection, then click Next.

3. In the New Connection dialog, QEMU Simulator Configuration section, make changes as necessary to the default TUN/TAP settings.

4. Continue the wizard in accordance with your target connection requirements.
To access these settings from the Target Connection Properties dialog:

1. In the Remote Systems window, right-click the QEMU target connection you want to change, then click Properties.

2. In the Target Connection dialog, QEMU Simulator Configuration tab, make changes as necessary to the default TUN/TAP settings.

3. Click OK to save the settings.
NOTE: This command must be run as root, or with the -su option from the command line. If you are not logged in as root, sudo will automatically run and prompt you for the root password.
When your simulation is running, view the routing information on the simulation:
root@localhost:/root> route
Kernel IP routing table
Destination     Gateway         Flags  Metric  Ref  Use
192.168.200.0   *               U      0       0    0
default         192.168.200.1   UG     0       0    0
root@localhost:/root>
For example, the 192.0.2.0/24 IP block is assigned as a "test net" for use in documentation and example code. It is often used in conjunction with the domain names example.com or example.net in vendor and protocol documentation. Addresses within this block should not appear on the public Internet. Note that 192.168.200.1 is assigned to the host and 192.168.200.15 is assigned to the target. Network applications on the host, for example, may now access the target at 192.168.200.15.
14.4 QEMU Example: Deploying initramfs
To build and run initramfs, perform the following steps:

1. Configure a BSP. Since initramfs is designed for small file systems, use either glibc_small or uclibc_small to configure a BSP. Using glibc_std or glibc_cgl may increase the kernel size and possibly introduce boot issues as a result. Configure your project by specifying a board, kernel, and file system. For example, enter the following command to specify the ARM Versatile 926ejs board with a standard kernel and small file system:
$ installDir/wrlinux-3.0/wrlinux/configure \ --enable-board=arm_versatile_926ejs --enable-kernel=standard \ --enable-rootfs=glibc_small --enable-jobs=5
2. Build the kernel boot image with initramfs. From the project build directory, enter the following command on a single line:
$ make boot-image BOOTIMAGE_FSTYPE=initramfs BOOTIMAGE_TYPE=flash
This creates a bootable file system in the prjbuildDir/export/dist directory that includes initramfs in the kernel. The file system is in export/dist/, for example:
README* bin/ boot/ dev/ etc/ home/ lib/ media/ mnt/ opt/ proc/ root/ sbin/ selinux/ srv/ sys/ tmp/ usr/ var/
The prjbuildDir/export/arm_versatile_926ejs-initramfs file contains the initramfs-enabled kernel.

3. Run the initramfs-enabled kernel with QEMU. Since initramfs contains the file system, it is not necessary to identify a root file system for QEMU. Enter the following command from the project build directory, all on a single line, to boot the kernel using QEMU:
$ ./host-cross/bin/qemu-system-arm -nographic -k en-us \ -kernel ./export/arm_versatile_926ejs-initramfs -net user \ -net nic,macaddr=52:54:00:12:34:56 -M versatileab -nortclk \ -append "console=ttyAMA0,115200 ip=dhcp rw highres=off UMA=1"
Once the kernel boots, a shell displays in initramfs. For a list of built-in commands, type help.
To aid in your development process, it may be necessary to switch from an initramfs root file system to a hard disk root file system. The following procedure provides instructions for making this switch using QEMU. In this process, you create an ext2 file to emulate a hard disk for QEMU.

1. Configure and build a common_pc initramfs image using the following command, entered on a single line, from the project build directory:
$ installDir/wrlinux/configure --enable-board=common_pc \ --enable-kernel=standard --enable-rootfs=glibc_small --enable-jobs=5
You should substitute the path to your Wind River Linux install directory for the installDir in the example.
194
2. Create the initramfs boot image, using the following command, entered from the project build directory:
$ make boot-image BOOTIMAGE_FSTYPE=initramfs BOOTIMAGE_TYPE=flash
Run the initramfs-enabled kernel with QEMU. Enter the following command from the project build directory, all on a single line, to boot the kernel using QEMU:
$ ./host-cross/bin/qemu -nographic -k en-us \
  -kernel ./export/common_pc-initramfs -net user \
  -net nic,macaddr=52:54:00:12:34:56 -nortclk \
  -append "console=ttyS0,115200 ip=dhcp rw highres=off UMA=1"
Once the kernel boots, a shell displays in initramfs. For a list of built-in commands, type help. Execute the following command in the initramfs shell to see which root file system is mounted:
# mount
The following displays to indicate that the root file system resides in initramfs:
rootfs on / type rootfs (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,mode=600)
tmpfs on /dev/shm type tmpfs (rw)
Since the file system does not return any reference to a hard disk, for example /dev/sda, the file system resides in initramfs.

3. To transfer the initramfs session to a hard disk-emulated session, you must create an ext2 image file for emulating the hard disk in QEMU. To do this, perform the following steps:

a. Create the image file by entering the following command in the QEMU session terminal:
$ dd if=/dev/zero of=image.ext2 bs=20M count=1
b. Copy the file system to this file using the following commands:
$ /sbin/mkfs.ext2 -F image.ext2    # format the image; without a file system, the loop mount fails
$ mkdir tmp_root
$ sudo mount -t ext2 -o loop image.ext2 tmp_root
$ sudo tar jxvf export/common_pc-glibc_small-standard-dist.tar.bz2 \
  -C tmp_root/
$ sudo umount tmp_root
4. Rebuild the initramfs kernel to add to busybox the programs required to switch from the initramfs QEMU session to the hard disk one. These programs include switch_root and mdev. To aid in this process, we provide a sample busybox config file, initramfs_busybox.config. To add the programs to busybox, perform the following steps:

a. Copy installDir/wrlinux-3.0/samples/initramfs_busybox.config to the build/busybox-1.11.1/.config file using the following command:
$ cp installDir/wrlinux-3.0/samples/initramfs_busybox.config \
  build/busybox-version/.config
b. Run make once to process the new busybox configuration, then again to rebuild busybox with it, using the following commands:
$ make -C build busybox.oldconfig $ make -C build busybox.rebuild
c. Rebuild the kernel and file system using the following command:
$ make fs
This completes the kernel and file system rebuild necessary to add the required programs to busybox.

5. Create the init script to run the switch_root command from init. Note that init is the PID 1 process. Run the following command from the project build directory in a terminal, or use a text editor to create the init file and move it to the prjbuildDir/export/dist directory.
$ cat << EOF > export/dist/init
#!/bin/sh
mount -a
touch /etc/mdev.conf
mdev -s
mount /dev/sda /mnt
echo -e "\n switch initramfs to /dev/sda.........\n"
exec switch_root /mnt /sbin/init
EOF
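For the script to take effect, it must be executable, and the boot image must be regenerated so that the new init is packed into the initramfs. A plausible follow-up, assuming the same boot-image invocation used in step 2 of this procedure:

```shell
$ chmod +x export/dist/init
$ make boot-image BOOTIMAGE_FSTYPE=initramfs BOOTIMAGE_TYPE=flash
```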
6. 7.
The result creates a new initramfs kernel in the /export directory titled common_pc-bzImage-WR3.0zz_standard. You will use this kernel to run the new QEMU hard disk session.

8. Run the kernel to see how to switch from initramfs to the hard disk file system. We provide the ext2 file image.ext2 to QEMU to emulate the hard disk. Enter the following command from the project build directory to begin the QEMU session:
$ ./host-cross/bin/qemu -nographic -k en-us \ -kernel ./export/common_pc-bzImage-WR3.0zz_standard \ -net user -net nic,macaddr=52:54:00:12:34:56 \ -nortclk -append "console=ttyS0,115200 ip=dhcp \ rw highres=off UMA=1" -hda image.ext2
The QEMU session begins. Once the load process completes, the following message displays in the terminal:
------------------snip-------------------------------------------
Freeing unused kernel memory: 5820k freed
EXT2-fs warning: mounting unchecked fs, running e2fsck is recommended
switch initramfs to /dev/sda.........
init started: BusyBox v1.11.1 (2009-01-05 14:51:26 CST)
starting pid 931, tty '': '/etc/init.d/rcS'
Welcome to Wind River Linux
Please press Enter to activate this console.
starting pid 935, tty '': '-/bin/sh'
#
------------------snip--------------------------------------------
9. Once the shell is up and running, you can execute a mount command to verify that the root file system is located on the hard disk, for example:
# mount
rootfs on / type rootfs (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,mode=600)
tmpfs on /dev/shm type tmpfs (rw)
/dev/sda on / type ext2 (rw,errors=continue)  <------ on hard disk
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,mode=600)
tmpfs on /dev/shm type tmpfs (rw)
The sixth line from the top indicates /dev/sda on / type ext2, verifying that we have indeed loaded the new hard disk-based QEMU session. You can stop the QEMU simulator by entering CTRL-A x in the terminal window.
15
Network Server Configuration
15.1 Introduction 199 15.2 Configuring DHCP 201 15.3 Configuring TFTP 202 15.4 Configuring NFS 203
15.1 Introduction
When you deploy Wind River Linux on a networked board, the boot loader on the board gets a kernel and file system from the network. This requires a properly configured boot loader and network server setup. This chapter describes how to configure your network server(s) to supply the kernel and file system to your board through its network connection. It assumes you have built a file system and have either built a kernel or are using the default kernel provided when you built your platform project. Refer to 16. Deploying Your Board from a Network for a discussion of board boot loader configuration for network deployment.
If you are booting your target board over the network you will typically use the following resources in this order:

1. a boot loader: software on the board that you configure to access the network appropriately.

2. an IP configuration: you can configure an IP network address into your boot loader, or you may get your IP address from the network.

3. a kernel to boot: a network server provides a kernel for download.

4. a root file system to mount: the downloaded kernel mounts the root file system from the network.
See 16. Deploying Your Board from a Network for some examples of boot loader configuration for network deployments. Board-specific details for the boot loaders are provided in the README files in your prjbuildDir/READMES directory.
NOTE: Boot loader and network configuration are somewhat different for boards that use the PXE boot protocol. Refer to 17. Deploying Your Board with PXE for details on network booting with PXE.
The typical network deployment boot process described in this chapter uses network servers as follows:

1. The boot loader on the board gets its IP address, either locally or from the network. If from the network, a DHCP server supplies the IP address.

2. With its IP address, the boot loader connects to a TFTP server and downloads a compressed kernel file.

3. The boot loader uncompresses and boots the kernel, which takes control and then mounts its root file system from an NFS server on the network.
This chapter provides details on how to configure the DHCP, TFTP, and NFS servers. These servers and your development host may be physically one machine, or may be different machines (see Figure 15-1).
Figure 15-1 Embedded Development in a Networked Environment: One or More Machines Provide Services

[Figure: the development host and the DHCP, TFTP, and NFS servers share an Ethernet network with the target board; a serial line connects the development host to the target board.]
Different network servers provide different GUI and command-line tools for network service configuration. Configuration file specifics may also vary. This chapter can only make suggestions on how to configure the different services; refer to your server documentation for specifics on your host and services.
NOTE: You will typically need root (superuser) privileges when configuring network services.
You may want to map your target and server IP addresses to host names for ease of reference. For example, you could configure your server's /etc/hosts file to include both the target's and the server's host names and IP addresses. An example is:
192.168.10.1 server1.lab.org server1 192.168.10.2 target7.lab.org target7
To set the same information on the target, insert this information into prjbuildDir/filesystem/fs/etc/hosts before you build the file system. The resulting file system will include the hosts file when downloaded from the server.
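As a sketch, the same entries can be appended from the command line before the file system is built. FS_DIR below stands in for prjbuildDir/filesystem/fs and is an illustrative variable, not one the build system defines:

```shell
# Append host/target name mappings to the target's hosts file source
# before building the file system.
FS_DIR=${FS_DIR:-/tmp/fs-demo}    # stand-in for prjbuildDir/filesystem/fs
mkdir -p "$FS_DIR/etc"
cat >> "$FS_DIR/etc/hosts" << 'EOF'
192.168.10.1 server1.lab.org server1
192.168.10.2 target7.lab.org target7
EOF
grep -c 'lab\.org' "$FS_DIR/etc/hosts"    # count the entries just added
```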
15.2 Configuring DHCP

The DHCP configuration file is /etc/dhcpd.conf. A sample file is presented below. Example 15-1 is a basic example of this file for a DHCP server called server1.lab.org. The server's IP address is 192.168.10.1. The configuration file identifies server1.lab.org as the TFTP server, and the target is assigned a static IP address. In this example the DHCP server is the Internet Software Consortium's (ISC) DHCP, version 3.0.1. Refer to the documentation for your DHCP server for specific configuration file settings.
Example 15-1 The dhcpd.conf File
Notice that the target's static IP address is within the DHCP server's subnet, but outside the range of the dynamic IPs.
# Sample /etc/dhcpd.conf file
authoritative;
ddns-update-style ad-hoc;
default-lease-time 21600;
max-lease-time 21600;

option routers 192.168.10.1;
option subnet-mask 255.255.255.0;
option broadcast-address 192.168.10.255;
option domain-name "lab.org";
option domain-name-servers 192.168.10.1;

# Subnet and range of IP addresses for dynamic clients
subnet 192.168.10.0 netmask 255.255.255.0 {
    range 192.168.10.3 192.168.10.40;
}

host server1.lab.org {
    hardware ethernet XX:XX:XX:XX:XX:XX;
    fixed-address 192.168.10.1;
}
The DHCP server will not start without an empty leases file being created. If it has not been created already, enter the following within the /var/lib/dhcp directory:
# touch dhcpd.leases
This creates an empty file that can be used by the DHCP server. Alternatively, create an empty dhcpd.leases file with an editor.
After configuring the /etc/dhcpd.conf file and after creating the leases file, start the server using a GUI or command-line tool. For example, for Red Hat Linux, you could enter (as root):
# service dhcpd start
You may want to configure the DHCP service to start when the server boots.
15.3 Configuring TFTP

The default TFTP download directory is typically tftpboot. If a download directory for TFTP does not already exist, you must create it. Refer to your server documentation for the name of your TFTP download directory and for instructions if you want to change the default. For example, using the command line you could copy the kernel to the TFTP download directory as follows:
# cd prjbuildDir/export # cp -L *uImage* /tftpboot/uImage
This copies the kernel from your export directory to a file with the shorter name (for convenience) of uImage in the TFTP download directory. The -L option covers both cases: whether it is a prebuilt kernel, or a symlink to a kernel you have explicitly built.
For many Linux systems, the TFTP server is started automatically upon request by inetd or xinetd. The following provides some general instructions for enabling the TFTP server with xinetd. Refer to your system documentation for details on how to enable TFTP. With xinetd, the TFTP configuration file is /etc/xinetd.d/tftp. In Red Hat, TFTP is disabled by default. You can enable it by changing the line:

disable = yes

to:

disable = no
Alternatively, you can avoid a manual edit by using the setup program at the command line to enable the service. After enabling TFTP, remember to restart xinetd (for example, with the service command on Red Hat systems).
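After that change, a typical /etc/xinetd.d/tftp on a Red Hat-style system looks something like the following sketch; the server path and arguments vary by distribution, so check your own file rather than copying this verbatim:

```
service tftp
{
        disable         = no
        socket_type     = dgram
        protocol        = udp
        wait            = yes
        user            = root
        server          = /usr/sbin/in.tftpd
        server_args     = -s /tftpboot
}
```

The -s option serves files relative to the download directory, so clients request uImage rather than /tftpboot/uImage.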
You can export any directory you choose to the network with NFS. This section assumes you have created an export directory in your home directory, for example /home/user/export. Copy and uncompress the compressed run-time file system file to the NFS export directory. For example, you could use the command line as follows:
# cd /home/user/export
# tar -xjvpf prjbuildDir/export/*dist.tar.bz2
Configuring /etc/exports
The NFS configuration file is a plain-text file, /etc/exports. You must configure it to export the run-time file system to the target. For example, if your target has the IP address 192.168.10.2, the /etc/exports file might appear as shown in Example 15-2.
Example 15-2 An Example /etc/exports File
/home/user/export 192.168.10.2/255.255.255.0(rw,sync,no_subtree_check,no_root_squash)
Because the entry includes a netmask, this makes /home/user/export available for mounting by any machine on the 192.168.10.0/24 network; to restrict the export to the single target, specify 192.168.10.2 without the netmask. After changing the /etc/exports file, reload the exports with:
# exportfs -ra
Finally, restart NFS. On Red Hat Linux systems you may use the service command as follows:

# service nfs restart

Or use the appropriate GUI tool for your system.
16 Deploying Your Board from a Network
16.1 Introduction 205 16.2 Configuring a Serial Connection to the Board 206 16.3 Example Network Deployments with RedBoot 207 16.4 Example Ramdisk Deployment with U-Boot 213
16.1 Introduction
Wind River Linux supports network deployment with NFS, ramdisk, and three boot images suitable for flash RAM. Not all methods can be employed on all boards; refer to Wind River Online Support and your board's README file for specifics on your board. Refer to 15. Network Server Configuration for details on setting up NFS, DHCP, and TFTP network services. This chapter covers the following deployment methods:
JFFS2: the Journaling Flash File System, version 2.
CRAMFS: the Compressed ROM File System.
YAFFS: a file system designed specifically for NAND flash chips.
Ramdisk: the file system is downloaded to RAM and mounted as a ramdisk (/dev/ram0).
In addition, the platform supports stand-alone deployment with the kernel and a ramdisk, JFFS2, or CRAMFS image in flash memory. For details, see 18. Stand-Alone Deployment With Flash Devices.
This chapter assumes that RedBoot (or another suitable boot loader) has already been installed onto the target board.
NOTE: You must refer to the README for your target as the instructions are target-specific and this chapter can only provide examples. You can find the README file in installDir/wrlinux-3.0/layers/wrll-wrlinux/templates/board/boardname, or in prjbuildDir/READMES after running configure.
This chapter continues directory conventions used in previous chapters: /home/user/WindRiver is referred to as installDir. The development environment consists primarily of the contents of installDir/wrlinux-3.0. The build environment is contained within the project build directory, which is under /home/user/workdir. As a board example, the chapter uses the Freescale I.MX31 ADS (fsl_imx31ads) built within the project build directory prjbuildDir.
Within the Workbench Terminal view, click the Settings icon, and set the port and baud rate.
On the server, you must edit two configuration files within /etc/uucp/ to reflect your serial port and baud rate. Edit the port file to reflect your serial port's device name and baud rate, as in this example:
port serial0_38400
type direct
device /dev/ttyS0
speed 38400
hardflow false
You can find each board's serial port device name and baud rate in the board's README file.
16.3 Example Network Deployments with RedBoot
You can now open the serial terminal at any console with cu, for example as follows:
# cu S0@38400
Enter help at a RedBoot prompt for a list of the commands available to you. Configure RedBoot using fconfig to set your default TFTP host and interface options. The boot instructions in the following examples assume that eth0 has a valid address and a default TFTP server has been configured as described in 15.3 Configuring TFTP, p.202.
JFFS2 capability is included in the product in support of specific board releases that have drivers supporting NOR or NAND flash devices. The following examples use the Freescale I.MX31 ADS (fsl_imx31ads) board; refer to the README file for your board for board-specific instructions.
Booting JFFS2 Root File System (NOR)
With the NOR flash enabled, the fsl_imx31ads target supports JFFS2 as a root file system. 1. Configure your project, for example:
$ configure --enable-board=fsl_imx31ads --enable-kernel=small \ --enable-rootfs=glibc_small+debug --enable-bootimage=flash
2. 3.
You could, for example, put the options in a filename.cfg file in your project build directory, reference it in an SCC file in your project build directory, and then enter make -C build linux.reconfig in your project build directory. 4. 5. Generate the boot image:
$ make boot-image BOOTIMAGE_FSTYPE=jffs2
The fis list command will show the list of RedBoot partitions, for example:
RedBoot> fis list
... Read from 0x07ee0000-0x07eff000 at 0xa1fe0000: .
Name             FLASH addr   Mem addr     Length
RedBoot          0xA0000000   0xA0000000   0x00040000
kernel           0xA0100000   0x00100000   0x001A0000
root             0xA0300000   0x00100000   0x01220000
cramxipfs        0xA1520000   0x01008000   0x003A0000
jffs2            0xA18C0000   0x01008000   0x00700000
FIS directory    0xA1FE0000   0xA1FE0000   0x0001F000
RedBoot config   0xA1FFF000   0xA1FFF000   0x00001000
Counting from 0, the JFFS2 partition in this example is partition 4. 6. Load and execute the kernel that was configured with JFFS2 support:
RedBoot> load -r -b 0x01008000 -h 192.168.10.1 zImage
RedBoot> exec -c "console=ttymxc0,115200 root=/dev/mtdblock4 rootfstype=jffs2 rw ip=dhcp"
The following procedure shows how to use JFFS2 with NAND flash. 1. Configure your project, for example:
$ configure --enable-board=fsl_imx31ads --enable-kernel=small \ --enable-rootfs=glibc_small+debug --enable-bootimage=flash
2. 3.
You could, for example, put the options in a filename.cfg file in your project build directory, reference it in an SCC file in your project build directory, and then enter make -C build linux.reconfig in your project build directory. 4. Generate the boot image:
$ make boot-image BOOTIMAGE_FSTYPE=jffs2
then copy export/fsl_imx31ads-jffs2 to the /tmp directory of the NFS-exported root file system. 5. 6. Enable the NAND flash from the RedBoot prompt.
RedBoot> factive nand
Boot to an NFS root file system that includes the mtd-utils. The NAND flash is statically defined as 4 partitions (mtd6 through mtd9):
# cat /proc/mtd
dev:    size      erasesize  name
mtd0:  00040000  00020000  "RedBoot"
mtd1:  001a0000  00020000  "kernel"
mtd2:  01220000  00020000  "root"
mtd3:  003a0000  00020000  "cramxipfs"
mtd4:  0001f000  00008000  "FIS directory"
mtd5:  00001000  00008000  "RedBoot config"
mtd6:  00020000  00004000  "IPL-SPL"
mtd7:  00400000  00004000  "nand.kernel"
mtd8:  01600000  00004000  "nand.rootfs"
mtd9:  065e0000  00004000  "nand.userfs"
mtd0-5 are NOR flash partitions; mtd6-9 are the NAND partitions. 7. Erase the flash and then write the image to NAND partition 8:
# flash_eraseall /dev/mtd8
# nandwrite -p /dev/mtd8 /tmp/fsl_imx31ads-jffs2
8.
9.
Linear and standard CRAMFS root file systems are supported for specific board releases that have drivers supporting NOR flash devices. The following examples use the Freescale I.MX31 ADS (fsl_imx31ads) board; refer to the README file for your board for board-specific instructions. (Board README files are located in installDir/wrlinux-3.0/layers/wrll-wrlinux/templates/board/boardname/. The appropriate READMEs for your project are copied into the READMES/ subdirectory when you configure a new project.)
Booting Linear CRAMFS Root File System
With the NOR flash enabled, the fsl_imx31ads target supports Linear CRAMFS XIP, also referred to as Application XIP.
NOTE: The RedBoot bootloader does not support executing a kernel directly from flash (Kernel XIP).
1.
2. 3.
You could, for example, put the options in a filename.cfg file in your project build directory, reference it in an SCC file in your project build directory, and then enter make -C build linux.reconfig in your project build directory. 4. Generate the boot image:
$ make boot-image BOOTIMAGE_FSTYPE=cramxipfs
Copy the resulting zImage file (renamed simply zImage in this example) to your TFTP download directory. 5. From the RedBoot prompt on the target, enter the following:
RedBoot> load -r -b 0x01008000 -h 192.168.10.1 fsl_imx31ads-cramxipfs
RedBoot> fis create cramxipfs
The fis create command takes its arguments from the last loaded file.
RedBoot> fis list
The fis list command will show the list of RedBoot partitions. Note the FLASH address of the cramxipfs partition, shown in the FLASH addr column. 6. 7. Load the kernel that was configured with Linear CRAMFS support:
RedBoot> load -r -b 0x01008000 -h 192.168.10.1 zImage
Using the FLASH address reported by fis list, exec the kernel. In this example the FLASH address is 0xA1520000. Yours will likely be different.
RedBoot> exec -c "console=ttymxc0,115200 root=/dev/null rootfstype=cramfs rootflags=physaddr=0xA1520000 ip=dhcp"
With the NOR flash enabled, the fsl_imx31ads target supports standard CRAMFS as a root file system. 1. Configure your project, for example:
$ configure --enable-board=fsl_imx31ads --enable-kernel=small \
--enable-rootfs=glibc_small+debug --enable-bootimage=flash
2. 3.
You could, for example, put the options in a filename.cfg file in your project build directory, reference it in an SCC file in your project build directory, and then enter make -C build linux.reconfig in your project build directory. 4. Generate the boot image:
$ make boot-image BOOTIMAGE_FSTYPE=cramfs
Copy the resulting zImage file (renamed simply zImage in this example) to your TFTP download directory. 5. From the RedBoot prompt on the target, enter the following:
RedBoot> load -r -b 0x01008000 -h 192.168.10.1 fsl_imx31ads-cramfs
RedBoot> fis create cramfs
RedBoot> fis list
The fis list command will show the list of RedBoot partitions. For example:
RedBoot> fis list
... Read from 0x07ee0000-0x07eff000 at 0xa1fe0000: .
Name             FLASH addr   Mem addr     Length
RedBoot          0xA0000000   0xA0000000   0x00040000
kernel           0xA0100000   0x00100000   0x001A0000
root             0xA0300000   0x00100000   0x01220000
cramxipfs        0xA1520000   0x01008000   0x003A0000
cramfs           0xA18C0000   0x01008000   0x00700000
FIS directory    0xA1FE0000   0xA1FE0000   0x0001F000
RedBoot config   0xA1FFF000   0xA1FFF000   0x00001000
Counting from 0, the CRAMFS partition in this example is partition 4. 6. Load and execute the kernel that was configured with CRAMFS support:
RedBoot> load -r -b 0x01008000 -h 192.168.10.1 zImage
RedBoot> exec -c "console=ttymxc0,115200 root=/dev/mtdblock4 rootfstype=cramfs ip=dhcp"
YAFFS capability is included in the product in support of specific board releases that have drivers supporting NAND flash. The following example uses the Freescale I.MX31 ADS (fsl_imx31ads) board and YAFFS2; refer to the README file for your board for board-specific instructions.
The fsl_imx31ads has small block NAND and so will support YAFFS as shown in the following example. 1. Configure your project, for example:
$ configure --enable-board=fsl_imx31ads --enable-kernel=small \ --enable-rootfs=glibc_small+debug --enable-bootimage=flash
2. 3.
You could, for example, put the options in a filename.cfg file in your project build directory, reference it in an SCC file in your project build directory, and then enter make -C build linux.reconfig in your project build directory. 4. 5. Generate the boot image:
$ make boot-image BOOTIMAGE_FSTYPE=yaffs
Copy export/fsl_imx31ads-glibc_small-small-dist.tar.bz2 to an accessible location on the target's NFS root file system and boot the target. Make sure the mtd-utils are included in the NFS file system and that BusyBox includes the tar applet with -j support (see Configuring BusyBox, p.132). On the booted target:
# flash_eraseall /dev/mtd9
# mkdir /mnt/yaffs
# mount -t yaffs /dev/mtdblock9 /mnt/yaffs
6.
This will create an empty YAFFS file system. 7. Uncompress the file system onto the device:
# cd /mnt/yaffs
# tar jxvf /tmp/fsl_imx31ads-glibc_small-small-dist.tar.bz2 .
# cd /
# umount /mnt/yaffs
8.
Load and execute the kernel that was configured with YAFFS support:
RedBoot> factive nand
RedBoot> load -r -b 0x01008000 -h 192.168.10.1 zImage
RedBoot> exec -c "console=ttymxc0,115200 rootfstype=yaffs root=/dev/mtdblock9 rw ip=dhcp"
16.4 Example Ramdisk Deployment with U-Boot
1.
This configuration uses the uclibc_small file system to make it small enough for a RAM disk image. 2. 3. Build the file system:
$ make fs
You could, for example, put the options in a filename.cfg file in your project build directory, reference it in an SCC file in your project build directory, and then enter make -C build linux.reconfig in your project build directory. 4. Create initrd, the ramdisk image. Within your project build directory, enter:
$ make boot-image BOOTIMAGE_TYPE=flash BOOTIMAGE_RAM0SIZE=8192
Note that the ramdisk size depends on the size of the underlying file system image as well as on available RAM. For uclibc_small, 8 MB should be more than enough. (In Workbench you could create a custom build target for your preferred ramdisk size.) 5. Copy the resulting images within export to the TFTP directory:
$ cd export
$ cp *initrd.gz /tftpboot/initrd.gz
$ cp *uImage* /tftpboot/uImage
Configure U-Boot
Enter help at a U-Boot prompt to see the commands available to you. Use the setenv command to set environment variables, printenv to view them, and saveenv to save them. Set the U-Boot environment as follows:
bootdelay=5
baudrate=38400
bootfile=uImage
ipaddr=192.168.10.2
serverip=192.168.10.1
bootargs=root=/dev/ram0 rw console=ttyS0,115200n8 initrd=0x80600000,8M ramdisk_size=8192
stdin=serial
stdout=serial
stderr=serial
verify=n
Deployment
Perform the following steps at the U-Boot console: 1. 2. 3. Enter the following to load the initrd image into RAM:
# tftpboot 0x80600000 ti_omap2430sdp-initrd.gz
Note that the kernel must be loaded after the initrd; otherwise U-Boot will assume the kernel start address is 0x80600000 instead of 0x80000000. Press ENTER to activate the console. There is no root password.
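Putting the load order together, a typical console sequence might look like the following sketch. The kernel load address 0x80000000 is taken from the note above, and the initrd address matches the bootargs setting; confirm both in your board's README:

```
# tftpboot 0x80600000 ti_omap2430sdp-initrd.gz
# tftpboot 0x80000000 uImage
# bootm 0x80000000
```

Because bootargs already passes initrd=0x80600000,8M to the kernel, only the kernel address is given to bootm.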
17 Deploying Your Board with PXE
17.1 Introduction 215 17.2 Preparing the Downloaded Files 216 17.3 Configuring DHCP for PXE 217 17.4 Setting up and Booting the Target 218
17.1 Introduction
You can configure the Pre-boot Execution Environment (PXE) boot loader on most IA32 boards with Wind River Linux board support packages (BSPs). This chapter describes a typical development example of bringing up a board using PXE, TFTP, and NFS. For DHCP and PXE boot, three separate servers are required: a DHCP server, a TFTP server, and an NFS server.
The Syslinux package, which contains the PXELinux boot loader, is also required. The TFTP and PXELinux packages must be installed.
Process Overview
A PXE boot-enabled NIC supports the Bootstrap Protocol (BOOTP). This protocol, provided by a DHCP server, allows a diskless target to obtain its own IP address, the IP address and name of a server, and the name of the boot loader file on that server that it can download to boot. Booting the target follows these steps: 1. The target's PXE-enabled NIC broadcasts its MAC address, requesting an IP address from a BOOTP/DHCP server.
2. The DHCP/BOOTP server, configured with the MAC address of the target and other options, returns the target's IP address, along with the name of the TFTP server and the name of the PXELinux boot loader file, which resides on the TFTP server.
3. The target downloads, using TFTP, the PXELinux boot loader, which provides the name of the Linux kernel image to load.
4. The PXELinux boot loader downloads the kernel.
5. The target runs the kernel, which rediscovers its IP address from the DHCP server.
6. The DHCP server provides the location of the NFS root file system; the kernel mounts it and completes system initialization.
The PXELinux boot loader file is pxelinux.0. This file is part of the Syslinux package. Installing Syslinux installs pxelinux.0 into the /usr/lib/syslinux directory; it must be copied to the TFTP download directory, by default /tftpboot.
The PXELinux configuration file resides in the /tftpboot/pxelinux.cfg directory. There can be separate configuration files for separate targets. To enable this, a filename convention identifies a configuration file by its target's hardware type and MAC address, or by its IP address. The following example demonstrates how the PXE boot loader searches for the correct configuration file. The example assumes that the boot loader is looking for the configuration file for the scenario's target, target.lab.org, which has been assigned an IP address of 192.168.10.2, and which has an Ethernet card with a MAC address of 00-20-ED-6E-82-3D. First, the boot loader will look for a configuration file named after the target's MAC address, with the first two digits representing its ARP hardware type. This filename, all in lowercase, would be: 01-00-20-ed-6e-82-3d (Note the 01- preceding the MAC address.)
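The MAC-based filename can be derived mechanically; a small sketch for this scenario's example address:

```shell
# Build the MAC-based PXELinux config filename for the example target:
# ARP hardware type (01 = Ethernet), a dash, then the MAC address in
# lowercase with dashes instead of colons.
mac="00:20:ED:6E:82:3D"
cfgname="01-$(printf '%s' "$mac" | tr ':' '-' | tr 'A-F' 'a-f')"
echo "$cfgname"      # 01-00-20-ed-6e-82-3d
```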
17.3 Configuring DHCP for PXE
If that filename cannot be found in the /tftpboot/pxelinux.cfg directory, the boot loader will search for a file named after the target's IP address in hexadecimal. The filename for this example, all in uppercase, would be:
C0A80A02
Not finding that, the bootloader will search for files in the following order:
C0A80A0 C0A80A C0A80 C0A8 C0A C0 C
Finally, not finding any of these files, it will look for a file named default. In this scenario, the default filename is used. Both the file and its directory must be created:
# mkdir /tftpboot/pxelinux.cfg
# touch /tftpboot/pxelinux.cfg/default
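The hexadecimal names in the search sequence above can be reproduced from the IP address; a sketch for the example address 192.168.10.2:

```shell
# Each octet of the IP address becomes two uppercase hex digits; the
# fallback names drop one trailing digit at a time.
ip="192.168.10.2"
set -- $(printf '%s' "$ip" | tr '.' ' ')
hexname=$(printf '%02X%02X%02X%02X' "$1" "$2" "$3" "$4")
echo "$hexname"               # C0A80A02
name="$hexname"
while [ "${#name}" -gt 1 ]; do
    name="${name%?}"          # drop the last digit
    printf '%s ' "$name"
done
echo
```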
The configuration file is plain text. Example 17-1 is a configuration file for this scenario.
Example 17-1 The PXELinux Configuration File

default netboot
prompt 1
display pxeboot.msg
timeout 300

label netboot
kernel bzImage
append ip=dhcp root=/dev/nfs nfsroot=/home/nfs/export
As can be seen, PXELinux's configuration file is similar to the LILO configuration file. bzImage represents the kernel's actual filename. It has been given the label netboot, which is also the default kernel to load.
# Next two lines are PXE boot additions
allow booting;
allow bootp;

# Subnet and range of IP addresses for dynamic clients
subnet 192.168.10.0 netmask 255.255.255.0 {
    range 192.168.10.3 192.168.10.40;
}

host server1.lab.org {
    hardware ethernet XX:XX:XX:XX:XX:XX;
    fixed-address 192.168.10.1;
}

# Next section: PXE boot static IP for the target; an example MAC address
# (Ethernet address) is provided.
host target.lab.org {
    hardware ethernet 00:20:ED:6E:82:3D;
    fixed-address 192.168.10.2;
    next-server 192.168.10.1;
    filename "pxelinux.0";
    option root-path "192.168.10.1:/home/nfs/export";
}
In this case, dhcpd.conf has been configured to support BOOTP, and the PXE target is configured with a static IP address and supplied the following:
fixed-address is the static IP address assigned to the target.
next-server is the address of the TFTP server from which to download the boot file.
filename provides the name of the boot loader file in /tftpboot to download, in this case pxelinux.0.
option root-path provides the path on the NFS server for the exported root file system.
Setting up the target requires that network boot using PXE is enabled. This is generally done within the CMOS setup routine. Configure the boot parameters and sequence in your BIOS to enable the PXE boot loader and boot from it first (or only).
17.4 Setting up and Booting the Target
When your target boots you should see the target go through the following sequence:
1. broadcast MAC address and receive IP address
2. download the PXE boot loader and configuration file
3. download bzImage
4. boot bzImage
5. get IP address again
6. mount the NFS file system
If you cannot get through the first two steps in the sequence above, verify your dhcpd.conf file settings. If you cannot download the bzImage file, verify that your TFTP server is enabled and xinetd has been restarted. If your bzImage boots but cannot mount the file system, verify that the NFS daemon (nfsd) is running and that the target's root file system exists in /home/nfs/export.
18 Stand-Alone Deployment With Flash Devices
18.1 Introduction 221 18.2 Process Overview 222 18.3 Preliminaries 222 18.4 Setting up Hosts 222 18.5 Stand-alone Deployment with a Ramdisk 223 18.6 Stand-alone Deployment with JFFS2 224 18.7 Stand-alone Deployment with CRAMFS 225
18.1 Introduction
You can use Wind River Linux for stand-alone deployment of supported target boards by loading the kernel and its ramdisk or flash image into flash memory. In other words, after initial setup, it is no longer necessary to download either the kernel or the file system from the network. This chapter covers the three methods supported:
1. Ramdisk: the file system is mounted as a ramdisk (/dev/ram0).
2. JFFS2: the Journaling Flash File System, version 2.
3. CRAMFS: the Compressed ROM File System.
This chapter builds on 15. Network Server Configuration and frequently references material in that chapter. This chapter assumes that the boot loader has already been installed on the target board.

CAUTION: The ARM Versatile AB-926EJS will not correctly flash Wind River flash file systems with the U-Boot supplied by the manufacturer. The U-Boot must be upgraded to version 1.1.3.
This chapter continues directory conventions used in previous chapters: /home/user/WindRiver is referred to as installDir. The development environment consists primarily of the contents of installDir/wrlinux-3.0. The build environment is contained within the project build directory, which is under /home/user/workdir. As a board example, the chapter uses the ARM Versatile AB-926EJS, built within the project build directory arm_versatile.
18.3 Preliminaries
In the deployment examples in this chapter, it is assumed that you have already done the following:
created the kernel image and file system you wish to use
set up the bootloader environment for your file system
configured networking as described in 15. Network Server Configuration
This information can be inserted into the target's hosts file by editing the prjbuildDir/filesystem/fs/etc/hosts file before building your file system.
18.5 Stand-alone Deployment with a Ramdisk
First, load the ramdisk image (initrd) into flash using the following procedure. 1. Load the ramdisk into RAM. At the U-Boot console, enter:
# tftp 0 initrd.gz.uboot
NOTE: When tftp is done loading, it will give you the number of bytes transferred, and the hex equivalent. This is important information you will need in further steps, and later when booting the target. An example of the output is:
Bytes transferred = 4400918 (432716 hex)
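If you only noted one form of the byte count, the host shell can convert between decimal and hex with printf and arithmetic expansion, as this sketch shows:

```shell
# Reproduce tftp's hex figure from the decimal byte count, and back.
bytes=4400918
hex=$(printf '%X' "$bytes")
echo "$hex"                 # 432716
dec=$((0x432716))
echo "$dec"                 # 4400918
```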
2. 3. 4.
Next, load the kernel into flash, following these steps: 1. 2. Load the kernel into RAM. At the U-Boot console, enter:
# tftp 0 uImage
Make a note of the hex number of bytes transferred, in this example, 10F514. Unprotect just enough flash for the kernel image. Make sure the address you use does not interfere with the initrd image you have already loaded into flash:
# prot off 34060000 +10F514
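You can sanity-check the address range the prot off command must cover with shell arithmetic; a sketch using the numbers from this example:

```shell
# Start address plus the hex byte count reported by tftp gives the
# last flash address the unprotect range must reach.
start=0x34060000
size=0x10F514
end=$(printf '%X' $((start + size)))
echo "$end"                 # 3416F514
```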
3. 4.
Both the initrd image and the kernel are now loaded into non-volatile flash memory.
Booting the target is a two-stage procedure. 1. First, copy the ramdisk image from flash into RAM. At this stage you will need the flash address you copied the initrd to, as well as its size in hex:
# cp.b 36000000 800000 432716
2.
Next, use the bootm command to boot the kernel, with options indicating the kernels location in flash, and the initrds location in RAM:
# bootm 34060000 800000
For the U-Boot loader, load the kernel into flash following these steps: 1. Load the kernel into RAM. At the U-Boot console, enter:
# tftp 0 uImage
NOTE: When tftp is done loading, it will give you the number of bytes transferred, and the hex equivalent. This is important information which you will need in step 2. An example of the output is:
Bytes transferred = 1111316 (10F514 hex)
2.
Unprotect just enough flash for the kernel image. Make sure the address you use does not interfere with the JFFS2 image you have already loaded into flash:
# prot off 34060000 +10F514
3. 4.
Both your kernel and JFFS2 file system are now loaded into non-volatile flash memory. Boot the target with U-Boot as follows:
# bootm 34060000
It is not necessary to run a DHCP server with a target configured for stand-alone deployment with JFFS2. You may turn it off entirely, or just comment out the target's host declaration in the dhcpd.conf file. You may also simplify the U-Boot environment. The following is one example of an environment that would suffice:
baudrate=38400
bootfile=uImage
ethaddr=00:02:F7:00:10:39
18.7 Stand-alone Deployment with CRAMFS
bootargs=root=/dev/mtdblock1 rootfstype=jffs2 noinitrd mem=128M console=ttyAMA0 mtdparts=phys_mapped_flash:128K(u-boot),16M@0x2000000(jffs2),5012K@0x1980000(cramfs) ip=192.168.10.2:192.168.10.1:192.168.10.1:255.255.255.0
bootcmd=bootm 34060000
bootdelay=5
stdin=serial
stdout=serial
stderr=serial
verify=n
NOTE: The bootargs line, above, has wrapped; the three lines should be entered as one.
In this example, once the target is switched on it will bring up U-Boot, wait five seconds for your intervention, then automatically boot the Linux kernel and JFFS2 file system.
For U-Boot, load the kernel into flash following these steps: 1. Load the kernel into RAM. At the U-Boot console, enter:
# tftp 0 uImage
NOTE: When it is done loading, it will give you the number of bytes transferred, and the hex equivalent. This is important information which you will need in step 2. An example of the output is:
Bytes transferred = 1111316 (10F514 hex)
2.
Unprotect just enough flash for the kernel image. Make sure the address you use does not interfere with the CRAMFS image you have already loaded into flash:
# prot off 34060000 +10F514
3. 4.
Both your kernel and CRAMFS file system are now loaded into non-volatile flash memory. Boot with U-Boot as follows:
# bootm 34060000
It is not necessary to run a DHCP server at all with a target configured for stand-alone deployment with CRAMFS. You may turn it off entirely, or just comment out the target's host declaration in the dhcpd.conf file. The U-Boot environment may also be simplified. The following environment is one example of what would suffice:
baudrate=38400
bootfile=uImage
ethaddr=00:02:F7:00:10:39
bootargs=root=/dev/mtdblock2 noinitrd mem=128M console=ttyAMA0 mtdparts=phys_mapped_flash:128K(u-boot),16M@0x2000000(jffs2),5012K@0x1980000(cramfs) ip=192.168.10.2:192.168.10.1:192.168.10.1:255.255.255.0
bootcmd=bootm 34060000
bootdelay=5
stdin=serial
stdout=serial
stderr=serial
verify=n
NOTE: The bootargs line, above, has wrapped; the three lines should be entered as one.
In this example, once the target is switched on it will bring up U-Boot, wait five seconds for your intervention, then automatically boot the Linux kernel and CRAMFS file system.
19 Stand-Alone Deployment to Disk
19.1 Introduction 227 19.2 Server-Based Installation of Wind River Linux 227 19.3 Booting Standalone with LinuxLive 230 19.4 Creating ISO and USB Flash Drive Images 238
19.1 Introduction
This chapter describes two methods to install Wind River Linux on a server hard disk and then boot it. The first method, 19.2 Server-Based Installation of Wind River Linux, p.227, is based entirely on Wind River Linux and provides for a flexible configuration in which you can specify different Wind River Linux file systems for the installation and the boot. The second method, 19.3 Booting Standalone with LinuxLive, p.230, uses LinuxLive to boot the server, which you then configure before installing Wind River Linux. (See http://www.linux-live.org/ for details on LinuxLive.)
Using the Wind River Linux build system you can create an ISO image to burn to a CD or DVD, and then use that CD or DVD to boot up the target, format the local disk, and install the runtime on the disk. At that point, you can remove the CD or DVD and boot the target directly from the local disk. You can also test your build using QEMU as shown in the procedure in this section.
There are two ways to perform the configuration: either self-contained in a single build directory, or in two build directories, one for the runtime to install on the target and one for the install CD itself. Using two build directories allows you to boot the server with a different operating system than the one you will install on it. The related options for the configure command are:
--enable-bootimage=iso
--with-template=feature/installer
--with-installer-target-build=otherbuilddir
Use the first two options together to build the installer software and create an ISO image. Use the third option only if you are creating a separate build directory.
Using a Self-Contained Installation
In the self-contained installation, the build creates a /RPMS directory in the root file system, where it puts all the RPMs that will be used to install the runtime on the target. The difference between the build types is just a question of where those RPMs come from: either this build, or another build.
Using a Separate Installation Build Directory
The --with-installer-target-build option is how you specify where to pick up the RPMs to be used for the target. There is no building or even checking of the build directory; the build simply picks up whatever RPMs are in otherbuilddir/export/RPMS. So you first build everything in otherbuilddir, and then build in your project build directory. If you don't specify the --with-installer-target-build option, the build system will use whatever RPMs are in export/RPMS in your project build directory.
In the following procedure, you configure and build a self-contained server installation. To test the installation, you can use QEMU to create, configure, and boot the installation from a virtual disk as shown.
Use the following commands to configure and build a .iso image of the server installation:
$ configure --enable-board=install_x86 --enable-kernel=standard \ --enable-rootfs=glibc_small --enable-bootimage=iso \ --with-template=feature/installer $ make boot-image
In this example, you will use QEMU on the host, so you will boot the .iso image directly from the export/ directory. If you wanted to burn the image to CD/DVD-ROM, you could insert a CD/DVD-ROM and enter make boot-image-burn.
After building the file system and boot image, you can test it with the procedure in this section which uses QEMU to create, install to, and then boot from a virtual disk.
Step 1: Create the virtual disk.
Use the qemu-img host tool to create and size the virtual disk, placing it in an accessible location such as /tmp:
$ host-cross/bin/qemu-img create -f qcow hd0.vdisk 1000M
Step 2: Install to the virtual disk.
1.
Boot the .iso image you created in Configuring and Building the Server Install, p.228:
$ make start-target TOPTS=" -no-kernel -cd export/install_x86-boot.iso \ -disk hd0.vdisk -gc"
NOTE: Press CTRL-ALT at any time to exit from the boot window. Click in the window to return control to it.
2.
Press F1 when prompted and specify where you want to install the software. In this example, enter:
boot: linux-c
Press ENTER when prompted.
3. Choose the disk you want to format for the installation. In this example, accept the default hda by pressing ENTER.
4. Accept defaults by pressing ENTER or enter alternatives for the prompts that follow.
WARNING: You will be prompted when you are about to format the disk. If you enter Yes, you will lose any data on the disk. In this example you are only formatting the virtual disk you created, so this is not a concern, but take care when installing to a server's hard disk. Press ENTER to continue.

5. You can modify the package selection offered, or select N to keep the pre-selected packages. Then enter y to install the selected packages.
When the installation is complete, you would normally remove the installation media, such as a CD-ROM, from the server. Because you are using a virtual disk, simply close the QEMU window.
Step 3: Boot the installed disk.
Now boot from the disk that you installed Wind River Linux on in the previous step. In this example, the installation was performed on the virtual disk you created in Step 1. To boot from that virtual disk, enter:
$ make start-target TOPTS="-no-kernel -disk hd0.vdisk -gc"
Press ENTER, select the operating system you want, and press ENTER to boot.
Create a common PC platform project and build the file system, using Workbench or the command line. You can accept the default kernel or build a new one. This is the kernel and file system you will install on the hard disk of the target. You may optionally create a bootable CD-ROM as well.
NOTE: If you create a bootable CD-ROM in your platform project, you can also use it to transfer a file system and kernel to the target. If you use one of the other boot methods, you must transfer the file system and kernel to the target separately.

Step 2: Boot the target.
You can boot the target with the CD-ROM you created with your platform project, or you can use some other bootable CD-ROM such as the freely-available Gparted-LiveCD or Partition Magic. All of these allow you to boot the target and then partition and format the hard disk on the target.
Step 3: Partition, format, and mount the hard disk.

Use the bootable CD-ROM to partition, format, and mount the hard disk on the target.

WARNING: Any pre-existing data on the hard drive of the target will be lost when you perform this procedure.
Step 4: Copy the kernel and file system to the target.

To transfer the file system and kernel from the export/ directory in your platform project to the hard disk on your target, you can:
- copy the kernel and file system from the Wind River CD-ROM.
- transfer files with a portable drive such as a USB keychain drive.
- make a network connection to the development host and download the kernel and file system.
Step 5: Install the file system and kernel.

Uncompress the file system to the hard disk's root, and place the kernel in the hard disk's boot directory.
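As a safe simulation of this step, the following sketch uses a temporary directory in place of the real disk root (on the target, the destination would be e.g. /mnt/hda1); the archive and kernel names are fabricated stand-ins:

```shell
# Simulation of Step 5; a temp directory stands in for the mounted disk root.
set -e
WORK=$(mktemp -d)
DISKROOT="$WORK/hda1"                 # stands in for /mnt/hda1
mkdir -p "$DISKROOT" "$WORK/export" "$WORK/stage/etc"

# Fabricate a tiny "dist" archive and kernel in place of the real export/ files
echo "target" > "$WORK/stage/etc/hostname"
tar -C "$WORK/stage" -cjf "$WORK/export/common_pc-dist.tar.bz2" .
echo "fake-kernel" > "$WORK/export/bzImage"

# The step itself: uncompress the file system at the disk root,
# then place the kernel in the disk's boot directory
tar -C "$DISKROOT" -xjpf "$WORK"/export/*dist.tar.bz2
mkdir -p "$DISKROOT/boot"
cp "$WORK/export/bzImage" "$DISKROOT/boot/bzImage"
```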
Step 6: Configure your boot menu.
In this example the disk is formatted for a single operating system and you configure the boot menu to boot it.
Step 7: Boot the target.
Reboot the target (without the CD-ROM) to boot from hard disk.
Before you build your project, consider how you plan to proceed:
Are you going to create a CD-ROM in your platform project? You can create a self-sufficient CD-ROM and typically do not need to perform any additional kernel configuration.
Are you going to use a USB portable drive to transfer files between the two machines? Be sure you have configured in support for the file system used by the USB device. For example, if the device is formatted for the VFAT file system, add that support to the kernel.
Are you going to connect your development host and target by Ethernet? You may need to add kernel options to support the kind of Ethernet device your target uses. For example, if your target hardware uses the Real Tek 8190 Ethernet device, enable that support in the kernel.
You can use the Workbench Kernel Configuration tool, or make menuconfig from the command line, to add kernel configuration options.
NOTE: Even if you use the command line to create and build your platform projects, you can still take advantage of Workbench tools. Import your existing platform project directory into Workbench (under File > Import > Wind River Linux). You can then, for example, double-click on the Kernel Configuration icon in your project, and use that tool to manipulate kernel options.
NOTE: If you want to use the CD-ROM to transfer the file system and kernel, you must place them in the Wind River Linux development environment. You may not wish to disturb a pristine development environment, or may not have permission to write to it. If that is the case, use the USB or network methods to transfer the file system and kernel to the target.
The following procedure creates the kernel and file system for the hard disk on the target and, optionally, a bootable CD image.

1. Set up the build environment and run the configure script.

a. If you are going to create a bootable CD-ROM, you must enable an ISO image, for example:
$ configure --enable-board=common_pc \
  --enable-kernel=standard+squashfs \
  --enable-rootfs=glibc_std \
  --enable-bootimage=iso
Note that you must add the +squashfs argument to the kernel specification, and include the --enable-bootimage=iso option.
NOTE: If you are using Workbench, configure your platform project with the KERNEL: squashfs template and add the option --enable-bootimage=iso.
Note that adding the template feature/bootimage_iso alone does not accomplish this.

b. If you are going to transfer the file system and kernel using a USB or network device, you do not need to build the ISO image. You could use the following configure command:
$ configure --enable-board=common_pc \
  --enable-kernel=standard \
  --enable-rootfs=glibc_std
2. 3.
If you are not building the bootable CD-ROM, skip to 19.3.2 Preparing the Target's Hard Drive, p.233.

4. Copy the *.dist.tar.bz2 and *bzImage* files from your prjbuildDir/export directory to installDir/wrlinux-3.0/layers/wrll-host-tools/host-tools/lib/linux-live/cd-root.

5. Create the ISO image by running:
$ make boot-image
This command creates an ISO image within the export directory.

6. Wind River supports QEMU on the common PC platform, so you can test that your .iso image is bootable by booting it with QEMU as follows:
$ make start-target TOPTS=-cd prjbuildDir/export/common_pc-boot.iso
7. Burn the ISO image onto a CD using a CD authoring tool available in your host environment. For example, in the GNOME environment, right-click on the .iso file and select Write to Disk.
You can now boot a stand-alone PC from the .iso file that is on the CD-ROM.
1. Enter the fdisk command with the device name of the drive you are going to format. For example, at the console on the target, enter the following:
root@localhost:/root> fdisk /dev/hda
In this case, the hard drive on the target is device /dev/hda.

2. Examine your current partition table with the p command in fdisk:
Command (m for help): p

Disk /dev/hda: 30.0 GB, 30005821440 bytes
16 heads, 63 sectors/track, 58140 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes

   Device Boot      Start         End      Blocks   Id  System
WARNING: Any pre-existing data on the hard drive of the target will be lost when you perform this procedure.

3.
4.
5.
6.
Enter a number of cylinders for the size of your primary partition. Since you are creating only one main partition, give it the majority of the disk space; the remainder is used for swap space.
First cylinder (1-58140, default 1): ENTER
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-58140, default 58140): 50000
Command (m for help):
7.
8. Change the type of the second partition to swap space (type 82):
Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): 82
Changed system type of partition 2 to 82 (Linux swap / Solaris)
Command (m for help):
9.
10.
(You can ignore the messages about the loopback device during reboot.)
Step 2: Format and mount the hard drive and swap.
1. After rebooting, the hard drive is automatically mounted on /mnt/hda1.

2. Verify this with the df command as follows:
root@localhost:/root> df -h
Filesystem   Size  Used Avail Use% Mounted on
tmpfs        363M     0  363M   0% /dev/shm
/dev/hda1     24G  679M   22G   3% /mnt/hda1
root@localhost:/root>
3. The swap space is also active. Check it with the free command as follows:
root@localhost:/root> free -m
             total   used   free  shared  buffers  cached
Mem:           724     72    652       0       10      41
-/+ buffers/cache:     20    704
Swap:         4006      0   4006
root@localhost:/root>
4. Format the main Linux partition by first unmounting it, then formatting it, as follows:
root@localhost:/root> umount /mnt/hda1
root@localhost:/root> mkfs -t ext3 /dev/hda1
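The swap partition created earlier can be initialized and activated with mkswap and swapon; the following is a sketch that assumes the swap partition is /dev/hda2, as in the fdisk example above:

```shell
# Sketch, assuming /dev/hda2 is the swap partition (run as root on the target)
mkswap /dev/hda2     # write a swap signature to the partition
swapon /dev/hda2     # activate the swap space
swapon -s            # list active swap devices to verify
```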
5.
19.3.3 Placing the File System and Kernel on the Hard Disk
This section describes how you can use the Wind River CD-ROM, a USB disk, or a network connection to transfer the kernel and compressed file system to the target. Perform one of these procedures and then proceed to 19.3.4 Configuring Target System Files and Booting, p.237.
If you created a bootable CD-ROM that contains the file system and kernel for the target (see 19.3.1 Creating a Platform Project, p.231), you can now install the file system and place the kernel in the installed file system.

1. Change directory to the hard disk root (/mnt/hda1), then uncompress and extract the file system from the current RAM disk root directory:
root@localhost:/mnt/hda1> tar jxvpf /boot/*dist.tar.bz2
(Some permission or time-stamp errors may cause a concluding error message, which can be ignored.)

2. Copy the kernel from the current RAM disk to the hard disk's boot directory:
root@localhost:/mnt/hda1> cp /boot/*bzImage* /mnt/hda1/boot/bzImage
(Note that this shortens the name to bzImage for convenience.)

3. Configure system files as described in 19.3.4 Configuring Target System Files and Booting, p.237.
In the following example, you use a USB keychain disk that has been formatted with the VFAT file system to transfer files from the development host to the target. Note that you may need to perform these commands as the root user.

1. Insert a formatted USB memory device into a USB port on the development host.
2.
Verify the USB device is mounted. Many hosts will mount it automatically for you. If it is not mounted, mount it, for example:
# mount -t vfat /dev/sdc1 /media/KINGSTON
3.
Copy the kernel and compressed file system to the USB device. Drag and drop them using a GUI, or use the command line, for example:
# cd prjbuildDir/export
# cp *bzImage* /media/KINGSTON
# cp *dist.tar.bz2 /media/KINGSTON
4. Unmount the USB device through a GUI menu choice or on the command line:
# umount /media/KINGSTON
5. Insert the USB device in the target that is running from CD-ROM. You may see a message such as the following:
scsi 3:0:0:0: Direct-Access     Kingston DataTraveler 2.0 1.00 PQ: 0 ANSI: 2
SCSI device sda: 8089600 512-byte hdwr sectors (4142 MB)
sda: Write Protect is off
SCSI device sda: 8089600 512-byte hdwr sectors (4142 MB)
sda: Write Protect is off
 sda: sda1
sd 3:0:0:0: Attached scsi removable disk sda
sd 3:0:0:0: Attached scsi generic sg0 type 0
This message provides the device name, sda. If you booted with the device already inserted, look for similar lines in the dmesg output.

6. Make a mount point and mount the USB device on the target, for example:
root@localhost:/root> mkdir /mnt/usbdisk
root@localhost:/root> mount -t vfat /dev/sda1 /mnt/usbdisk
7. Be sure you are in the hard disk's root directory (for example, /mnt/hda1) and then uncompress and extract the file system:
root@localhost:/root> cd /mnt/hda1
root@localhost:/mnt/hda1> tar jxvpf /mnt/usbdisk/*dist.tar.bz2
8. Copy the kernel from the USB device to the hard disk's boot directory, shortening its name to bzImage for convenience.

After you have copied the kernel and file system from the USB device to /dev/hda1 on the target, you can configure system files as described in 19.3.4 Configuring Target System Files and Booting, p.237.
To make a network connection from the target to the host, you must use a boot device on the target that includes networking tools. The bootable CD-ROM you made in your platform project or other widely available bootable CD-ROMs or floppies offer networking support. This example assumes that the host and target are on the same subnet, and that the host will accept an sftp or ftp connection. First configure basic networking on the target, and then copy the necessary files from the host.
1. Configure your Ethernet connection with your target address, for example:
root@localhost:/root> ifconfig eth0 192.168.10.2
2. If you want to use host names instead of IP addresses, create a temporary /etc/hosts file for the target (temporary because it is in the RAM disk) with entries for the target and the host, for example:
127.0.0.1      localhost.localdomain    localhost
192.168.10.1   server1.lab.org          server1
192.168.10.2   target7.lab.org          target7
3. Change directory to the target's future root directory (currently /mnt/hda1) and use sftp or ftp to connect to the host, for example:
root@localhost:/root> cd /mnt/hda1 root@localhost:/mnt/hda1> sftp server1
4. Change to the export directory on the host where you created your target's ISO image. Download the compressed file system and the kernel file, for example:
sftp> mget *bzImage*
sftp> mget *dist.tar.bz2
sftp> quit
5. Unpack the file system in the mounted hard drive (/mnt/hda1), which will become the root directory on the target:
> tar -xvjpf *dist.tar.bz2
6. Move the kernel to the boot directory on the hard drive of the target:
root@localhost:/mnt/hda1> mv *bzImage* boot/bzImage
2. If you are using a non-Wind River bootable CD-ROM, it may not contain a suitable fstab file. If that is the case, edit /mnt/hda1/etc/fstab to look like this:
proc       /proc            proc        defaults                      0 0 # AutoUpdate
sysfs      /sys             sysfs       defaults                      0 0 # AutoUpdate
devpts     /dev/pts         devpts      defaults                      0 0 # AutoUpdate
relayfs    /mnt/relay       relayfs     defaults                      0 0 # AutoUpdate
tmpfs      /dev/shm         tmpfs       defaults                      0 0 # AutoUpdate
/dev/hdc   /mnt/hdc_cdrom   iso9660     noauto,users,exec             0 0 # AutoUpdate
/dev/hda1  /mnt/hda1        ext3        auto,users,suid,dev,exec      0 0 # AutoUpdate
/dev/hda2  swap             swap        defaults                      0 0 # AutoUpdate
/dev/fd0   /mnt/floppy      vfat,msdos  noauto,users,suid,dev,exec    0 0 # AutoUpdate
3. Create a new boot menu as follows:

a. Back up the default boot/grub/menu.lst file:
root@localhost:/root> cd /mnt/hda1/boot/grub
root@localhost:/mnt/hda1/boot/grub> mv menu.lst orig_menu.lst
The orig_menu.lst file contains useful instructions that can help you understand the menu entries. It also shows you how to set up a system for dual- or multi-booting if you want to configure target disks that way in the future. b. Using a text editor, create a new menu.lst file that contains the following:
default 0
timeout 5
title my Common PC
root (hd0,0)
kernel (hd0,0)/boot/bzImage root=/dev/hda1 fastboot
Save your file.

4. Install GRUB to the Master Boot Record (MBR).

a. Start GRUB by entering grub at the command line:
root@localhost:/mnt/hda1/boot/grub> grub
grub>
b. Set the root device, and then install GRUB to the MBR, with the following three commands:
grub> root (hd0,0)
grub> setup (hd0)
grub> quit
Reboot, removing the CD-ROM so that the target reboots from the hard disk. (Wait until the system has begun rebooting before you remove the CD-ROM.)
to your configure line. The common_pc is an example of a target that supports both boot options.

2. Enable the CONFIG_VFAT kernel option for VFAT file system support. You can use the linux.menuconfig build target or the Workbench Kernel Configuration tool to set the option, for example:
$ make -C build linux.menuconfig
Change the VFAT option in File systems > DOS/FAT/NT Filesystems to y or * (not M). After building the target, create the boot image:
$ make boot-image
19 Stand-Alone Deployment to Disk 19.4 Creating ISO and USB Flash Drive Images
target-name-boot.iso
target-name-usb.img
3. Use a CD writer to write the .iso image to CD-ROM. If you have root permissions, you can use the dd command to write the USB image to a flash drive.
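For example, writing the USB image with dd might look like the following sketch; /dev/sdX is a placeholder that you must replace with your flash drive's actual device name (writing to the wrong device destroys its contents):

```shell
# Sketch: write the generated USB image to a flash drive (run as root).
# /dev/sdX is a placeholder; confirm the device name with dmesg first.
dd if=export/target-name-usb.img of=/dev/sdX bs=1M
sync    # flush buffers before removing the drive
```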
NOTE: The USB image defaults to a size of 256 MB.
20
Deploying SELinux
20.1 Introduction
Configuring an SELinux platform project requires that you include the selinux feature template, but you must further configure the run-time system due to the nature of SELinux as described in this chapter. Not all configurations support SELinux. Consult your Wind River representative for more information.
Due to the nature of the SELinux toolchain, a few manual bootstrap issues need to be addressed before you can use a fully functional installation. These steps are described in Booting the Target and Loading the Policy, p.241. In addition, if you want to perform policy management while on the target, you must build a policy store as described in Building the Policy Store, p.242.
Because SELinux requires every file in the root file system to be set to a particular file context, and this can only be done at runtime, use the following procedure to make the modifications to the runtime.

1. Boot into a shell (supply the boot arguments root=/dev/sda rw init=/bin/bash selinux=1).

2. Mount essential file systems:
# mount -t proc none /proc
# mount -t sysfs none /sys
# mount -t selinuxfs none /selinux
3.
NOTE: Use a block size larger than the size of policy.23, because the policy must be loaded in a single write; otherwise it will not load.
4.
Restore file security contexts and synchronize RAM with the disk:
restorecon -v -R /
sync ; sync ; sync
At this point, you will be able to reboot (set init=/sbin/init) into a functional SELinux system.
NOTE: You must reboot in order to build the policy store as described next.
If you are unable to boot your system, it is usually because the root file system is not coming up in the right security context. You must follow the procedure exactly.
In order to manage the modules loaded in your policy, you need to create a policy store. Due to the nature of the SELinux toolchain, the policy store cannot be created at compile time, because the libraries use /etc/selinux as the SELinux root path, which in essence would require doing the entire build in a chroot. Since a chroot is not used for building SELinux, you need to create the policy store manually if you wish to manage the policy on the target. If no policy management support is needed on the target, this step is not required for a functional SELinux system.

In order to create a policy store, you must put all the module.pp files into a single policy.X file. Do this with semodule as follows:

1. Turn off enforcing mode:
# setenforce 0
Or
# echo 0 > /selinux/enforce
2.
NOTE: The example is for a csh user. Substitute bash or other shell commands if your shell is different.
CAUTION: This can take time and space. As an example, on a 2.4 GHz Pentium 4, the semodule command may take over 30 minutes and use over 500 MB of RAM.
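A hypothetical semodule invocation for combining the modules might look like the following; the policy directory path and the module file names are illustrative assumptions, not taken from this guide:

```shell
# Hypothetical sketch only: build the policy store from a base module plus
# additional module.pp files. The path and module names are assumed.
cd /etc/selinux/refpolicy-standard/policy   # assumed location of the .pp files
semodule -v -b base.pp -i module1.pp module2.pp
```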
3.

4. Relabel the file system (optional if you have not changed the policy):
$ restorecon -R /
Remember, building a policy store is only needed for policy management while on the target, for example when adding or removing a module using semodule. It may be more beneficial to do this sort of work at compile time from within the build/refpolicy/ directory instead.
PA R T I V
Use Cases
21 22 23 24 Building Run-times with RPM and Source ............................. 247 Examples of Adding Packages ............................................... 255 Using Custom Templates and Layers ..................................... 271 Kernel Use Cases ..................................................................... 283
21
Building Run-times with RPM and Source
21.1 Introduction 247 21.2 Tutorial One: RPM Build for Common PC 248 21.3 Tutorial Two: Source Build for Common PC 250 21.4 Tutorial Three: Building ISO Images and Partial Run-time Systems 251 21.5 Tutorial Four: RPM Build on ARM Versatile AB-926EJS 252 21.6 Tutorial Five: Source Build on ARM Versatile AB-926EJS 253 21.7 Tutorial Six: Building Ramdisk and Flash File Systems 254
21.1 Introduction
This chapter provides three step-by-step tutorials on building Wind River Linux run-time systems for the Common PC, a generic x86 board, and three step-by-step tutorials on building Wind River Linux run-time systems for the ARM Versatile AB-926EJS. Note that both of these boards can be simulated by QEMU. The tutorials for the Common PC cover:
- Using the RPM method to build a complete run-time system.
- Using the source method to build a complete run-time system, with tests enabled.
- Using the source method to build a complete run-time system, including an ISO image for hard disk deployment.
- Using the source method to build a root file system only.
- Using the source method to build a kernel only.
The tutorials for the ARM Versatile AB-926EJS cover:

- Using the RPM method to build a complete run-time system, with flash images enabled.
- Using the source method to build a complete run-time system, with flash images enabled.
- Using the RPM method to build a complete run-time system as a ramdisk (initrd) image, a JFFS2 flash file system, or a CRAMFS flash file system.
Step 2: Make a work directory.

Within /home/user, and as a regular user, make a work directory (in this example, workdir). This can hold any number of builds:
$ cd /home/user
$ mkdir workdir
Step 3: Make the project build directory.

Change directory to workdir and make the project build directory. This will hold the build and source files, and the run-time system itself. In this example it is named after its board; you may name it as you like:
$ cd workdir
$ mkdir common_pc
Step 4: Configure the project build directory.

Within workdir/common_pc/, run the configure script (configure) that resides in installDir/wrlinux-3.0/wrlinux/. The configure command options determine which kernel to use, and which root file system to build, for a specific board. In this case, a standard kernel and file system is configured for the Common PC board:
$ cd common_pc
$ configure \
  --enable-board=common_pc \
  --enable-kernel=standard \
  --enable-rootfs=glibc_std
Once the configure command has completed, you can review its output at any time in the configure.log file in the project build directory.
21 Building Run-times with RPM and Source 21.2 Tutorial One: RPM Build for Common PC
Step 5: Build the run-time file system.

Build the run-time file system within the project build directory. Build the file system from RPMs and include the default kernel:
$ make fs
Within a few minutes the build system creates a compressed run-time file system image within export/. The kernel is prebuilt and resides in installDir/wrlinux-3.0/layers/wrll-linux-version/boards/common_pc/standard/. The build system automatically copies it to your export/ subdirectory.
Step 6: Copy the pre-built kernel to the TFTP download directory.
Step 7: Extract the file system to the NFS export directory.

Use the tar command from the NFS export directory to extract and uncompress the run-time file system. The tar command's -x option instructs tar to extract. The -j option instructs it to uncompress with bzip2. The -v option instructs it to be verbose (this is not a necessary option). The -p option instructs it to preserve permissions, and the -f option identifies the following file as the archive to be extracted. To relieve the tedium of typing a long filename, the full name of the compressed run-time system file is abbreviated with a wildcard.
# cd /home/user/export
# tar -xjvpf /home/user/workdir/common_pc/export/*dist.tar.bz2
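The tar flags described above can be tried safely with a throwaway archive; the file and directory names below are made up for the demonstration:

```shell
# Runnable illustration of -x (extract), -j (bzip2), -v (verbose),
# -p (preserve permissions), -f (archive file), using a fabricated archive.
set -e
WORK=$(mktemp -d)
mkdir -p "$WORK/rootfs/bin"
printf '#!/bin/sh\necho hello\n' > "$WORK/rootfs/bin/hello"
chmod 755 "$WORK/rootfs/bin/hello"
tar -C "$WORK/rootfs" -cjf "$WORK/demo-dist.tar.bz2" .   # make a bzip2 archive

mkdir -p "$WORK/export"
cd "$WORK/export"
tar -xjvpf "$WORK"/demo-*.tar.bz2    # wildcard abbreviates the long name
ls -l bin/hello                      # the execute bit survives thanks to -p
```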
Your board can now use tftp to download the /tftpboot/bzImage kernel and NFS-mount the exported file system you have created.
Follow Step 1 to Step 3, in Tutorial One: RPM Build for Common PC, p.248, above.
Step 2: Configure the project build directory.
Within workdir/common_pc, run the configure script. The configure command options direct which kernel to build, and which root file system to build, for a specific board. In this case, a standard Linux kernel and file system is configured, for the Common PC board. The test suite option is included:
$ cd common_pc
$ configure \
  --enable-board=common_pc \
  --enable-kernel=standard \
  --enable-rootfs=glibc_std \
  --enable-test=yes
Step 3:
Step 4: Copy the kernel and the file system to their download and NFS export directories.
Both the kernel and a compressed run-time file system image are now in the workdir/common_pc/export directory. The kernel must be copied to the directory where TFTP is configured to download it to the target, and the file system image must be uncompressed to its NFS export directory. In this tutorial, the destination for the kernel is /tftpboot, and the destination for the file system is /home/user/export. In the command below, the kernel is both copied and renamed.
$ cd export
$ su
Password: (root password)
# cp *bzImage* /tftpboot/bzImage
# cd /home/user/export
# tar -xjvpf /home/user/workdir/common_pc/export/*dist.tar.bz2
21.4 Tutorial Three: Building ISO Images and Partial Run-time Systems
This tutorial builds an ISO bootable image, a kernel alone, and a file system alone (without kernel), for the Common PC board.
Building an ISO Image

Step 1: Make the necessary directories, install the RPM updates, and run configure.
Follow Step 1 to Step 3, in Tutorial One: RPM Build for Common PC, p.248, above.
Step 2: Configure the project build directory.
Within workdir/common_pc, run the configure script. In this example, the configure command options configure the build environment to enable the subsequent build of an ISO boot image:
$ configure \
  --enable-board=common_pc \
  --enable-kernel=standard \
  --enable-rootfs=glibc_std \
  --enable-bootimage=iso
Step 3:
Step 4:
This command creates an ISO image within the export directory. Burn the ISO image onto a CD.
Building a File System Only

Step 1: Make the necessary directories, install the RPM updates, and run configure.
Follow Step 1 to Step 3, in Tutorial One: RPM Build for Common PC, p.248, above.
Step 2: Configure the project build directory.
Within workdir/common_pc, run the configure script. In this example, the configure command options configure the build environment to build a file system only:
$ configure \
  --enable-cpu=x86_64 \
  --enable-rootfs=glibc_cgl
Step 3:
Building a Kernel Only

Step 1: Make the necessary directories, install the RPM updates, and run configure.
Follow Step 1 to Step 2, in Tutorial One: RPM Build for Common PC, p.248, above.
Step 2: Configure the project build directory.
Within workdir/common_pc, run the configure script. In this example, the configure command options configure the build environment to build a kernel only:
$ configure \
  --enable-board=common_pc \
  --enable-kernel=standard
Step 3:
Step 2: Make a work directory.

Within /home/user, and as a regular user, make a work directory (in this example, workdir). This can hold any number of builds:
$ cd /home/user
$ mkdir workdir
Step 3: Make the project build directory.

Change directory to workdir, and make the project build directory. This will hold the build and source files, and the run-time system itself. In this example it is named after the board; you may name it as you like:
$ cd workdir
$ mkdir arm_versatile
Configure the project build directory. Within workdir/arm_versatile, run the configure script (configure) that resides in installDir/wrlinux-3.0/wrlinux/.
In this example, the configure command enables the subsequent building of a variety of flash images.
$ cd arm_versatile
$ configure \
  --enable-board=arm_versatile_926ejs \
  --enable-kernel=small \
  --enable-rootfs=glibc_small \
  --enable-bootimage=flash
21 Building Run-times with RPM and Source 21.6 Tutorial Five: Source Build on ARM Versatile AB-926EJS
Step 4: Build the run-time file system.

Build the run-time file system within the project build directory. Note that once the project build directory has been configured, a message is displayed indicating that readme files are available in the local README directory. To perform the RPM build, enter:
$ make
Within a few minutes the build system will create a compressed run-time file system image within prjbuildDir/export. It also creates the file system for QEMU within prjbuildDir/export/dist/.
Step 5: Copy the kernel and the file system to their download and NFS export directories.
Both the kernel and a compressed run-time file system image are now in the workdir/arm_versatile/export directory. The kernel must be copied to the directory where TFTP is configured to download it to the target, and the file system image must be uncompressed to its NFS export directory. In this tutorial, the destination for the kernel is /tftpboot, and the destination for the file system is /home/user/export. In the command below, the kernel is both copied and renamed.
$ cd export
$ su
Password: (root password)
# cp *uImage* /tftpboot/uImage
# cd /home/user/export
# tar -xjvpf /home/user/workdir/arm_versatile/export/*dist.tar.bz2
Follow Step 1 to Step 3, Tutorial Four: RPM Build on ARM Versatile AB-926EJS, p.252, above.
Step 2: Build the complete run-time system.
Step 3: Copy the kernel and the file system to their download and NFS export directories.
Both the kernel and a compressed run-time file system image are now in the workdir/arm_versatile/export directory. The kernel must be copied to the directory where TFTP is configured to download it to the target, and the file system image must be uncompressed to its NFS export directory.
In this tutorial, the destination for the kernel is /tftpboot, and the destination for the file system is /home/user/export. In the command below, the kernel is both copied and renamed.
$ cd export
$ su
Password: (root password)
# cp *uImage* /tftpboot/uImage
# cd /home/user/export
# tar -xjvpf /home/user/workdir/arm_versatile/export/*dist.tar.bz2
Within the project build directory (in this example, /home/user/workdir/arm_versatile), enter:
$ make boot-image BOOTIMAGE_FSTYPE=initrd BOOTIMAGE_RAM0SIZE=200000
22
Examples of Adding Packages
22.1 Introduction 255 22.2 Adding SRPM Packages 256 22.3 Adding Spec Packages 259 22.4 Adding Classic Packages 261 22.5 Adding Packages with a GUI Tool 268 22.6 Adding an RPM Package to a Running Target 270
22.1 Introduction
You may want to add one or more packages to the set of packages automatically included in your project. To add packages to your platform, you should first check your Wind River Linux installation to see if the package(s) you want to add are already provided.
NOTE: You can view a list of the file system packages in your current project in prjbuildDir/pkglist.
If you simply want to replace an existing package with a different version, you can make use of the infrastructure already provided for the package. Follow the procedure in Adding mm, p.259 to replace an existing package. You can add three different types of packages to the Wind River Linux build system. Specify the type of package you want to add in the package_TYPE variable in the build system makefile for each package (dist/package/Makefile). Table 22-1 summarizes the three ways of adding packages.
Table 22-1 Three Ways to Add Packages

Package   package_TYPE          How to Add

SRPM      package_TYPE = SRPM
Spec                            Create a spec file for rpmbuild.
Classic   (see note a)          Patch the supplied package makefile and build.

a. Do not specify a value for package_TYPE when adding packages with a classic makefile.
The following sections provide examples of how to add packages to your platform project for each of the three types of packages. The examples assume you have already configured a platform project. If you have created a platform project with a small file system, you can still follow the procedures but may have to add additional packages that are required by the packages added in the examples.
Acquire the package from its location on the Web, a CD, or another computer. In this example, logwatch is available from http://download.fedora.redhat.com/pub/fedora/linux/releases/7/Fedora/source/SRPMS/. Place the package in the local custom layer's packages/ directory (prjbuildDir/packages/). Do not in any way uncompress or unpack the file.
Step 2: Create the Makefile and Patch Directories
Create a directory named after the package within the local custom layer's dist directory (prjbuildDir/dist), and create a patches subdirectory of that packageName directory. In this example, the package directory would be logwatch. The structure would be prjbuildDir/dist/logwatch/patches. A simple way to create this from prjbuildDir is:
$ mkdir -p dist/logwatch/patches
Step 3: Create the Makefile

Create the makefile within prjbuildDir/dist/logwatch. Refer to Necessary Makefile Contents, p.112 for details on the contents of the makefile. Calculate the md5sum for logwatch and replace the logwatch_MD5SUM value with it. To calculate the md5sum, run md5sum on the package:
$ md5sum packages/logwatch-*
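The command prints a 32-character hex digest followed by the file name; only the first field goes into the _MD5SUM variable. A standalone sketch of the same idea, using a hypothetical scratch file in place of the real SRPM:

```shell
# Hash a small file with known contents (a stand-in for the SRPM) and
# keep only the digest field, as you would for logwatch_MD5SUM.
printf 'hello\n' > /tmp/md5demo.pkg
sum=$(md5sum /tmp/md5demo.pkg | awk '{print $1}')
echo "$sum"
```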
Replace the logwatch_VERSION value with the correct version number. This is the string in the package name between logwatch- and .src.rpm, for example 7.3.4-6.fc7. Your makefile for logwatch will look something like this:
PACKAGES += logwatch
logwatch_TYPE = SRPM
logwatch_RPM_DEFAULT = logwatch
logwatch_RPM_ALL = logwatch logwatch-debuginfo
logwatch_MD5SUM = f17c0a1722a590406ce7a30b5e9b2ccb
logwatch_VERSION = 7.3.4-6.fc7
logwatch_ARCHIVE = logwatch-$(logwatch_VERSION).src.rpm
logwatch_UPSTREAM = http://download.fedora.redhat.com/pub/fedora/linux/releases/7/Fedora/source/SRPMS/$(logwatch_ARCHIVE)
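As a quick consistency check, the version embedded in the archive name can be recovered with shell parameter expansion; this is just an illustrative sketch, not something the build system requires:

```shell
# Strip the package-name prefix and the .src.rpm suffix to recover the
# version string used for logwatch_VERSION.
archive="logwatch-7.3.4-6.fc7.src.rpm"
version="${archive#logwatch-}"   # strip the leading "logwatch-"
version="${version%.src.rpm}"    # strip the trailing ".src.rpm"
echo "$version"
```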
Step 4: Add the Package to pkglist
Use the pkgname.addpkg make target to add the package and any known dependencies to pkglist:
$ make -C build logwatch.addpkg
This adds the package name without version number or suffix to prjbuildDir/pkglist, and regenerates your makefiles to include the package. If you specified any dependencies in the makefile, they will be included in pkglist if they are not already in pkglist.
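The effect on pkglist can be sketched as follows; the add_pkg helper is a hypothetical stand-in for what the addpkg target does to the file (the real target also regenerates your makefiles):

```shell
# Append a bare package name to a pkglist file only if it is not
# already listed; names carry no version number or suffix.
pkglist=/tmp/pkglist.demo
printf 'busybox\nglibc\n' > "$pkglist"
add_pkg() {
    grep -qx "$1" "$pkglist" || echo "$1" >> "$pkglist"
}
add_pkg logwatch   # new name: appended
add_pkg glibc      # already present: left alone
cat "$pkglist"
```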
Step 5: Unpack the Package
Run the patch rule and unpack and patch the SRPM within prjbuildDir/build:
$ make logwatch.unpack
Running the patch rule for the package will create the main build directory, prjbuildDir/build/logwatch-7.3.4-6.fc7 and unpack the SRPM into several subdirectories. The tar archive file and all the patches are placed within the SOURCES subdirectory. The unpacked sources will go into the BUILD/logwatch-7.3.4-6.fc7 subdirectory, and be patched. The spec file goes into the SPECS subdirectory.
Step 6: Copy and Edit the spec File
Copy the spec file prjbuildDir/build/logwatch-version/SPECS/logwatch.spec to prjbuildDir/dist/logwatch/ and edit the copied version in dist/logwatch. The first of the following changes is required; the others are optional:
Immediately after the %build and %install section headers, add the RPM macro, %configure_target.
If you desire, add a change indicator (such as -WR) to the Release line. If you desire, add an entry to the %changelog section.
(Refer to Necessary spec File Changes, p.113 for additional information on spec files and Lua Scripting in Spec Files, p.114 for information on pre- and post-install scripts.)
Step 7: Make the package.

Build the package (for example, with make -C build logwatch, following the pattern used elsewhere in this chapter). If it does not compile correctly, examine your spec file changes and rebuild until it does.
NOTE: If you are adding custom patches to the SRPM, place your patch(es) in prjbuildDir/dist/logwatch/patches/ and edit the prjbuildDir/dist/logwatch/logwatch.spec file to include them.

Step 8: Build the file system.
At this point, you can often include the added package in your file system successfully:
$ make fs
This particular example of logwatch, however, has been chosen because it will cause an error when building the file system:
../../wrlinux-2.0/wrlinux/scripts/rpmdeps.pl: Unresolved dependency mailx required by logwatch
This indicates that logwatch requires another package, mailx, for installation.
Step 9: Add mailx.
If a mailx package is already present in your installation, run make -C build mailx.addpkg to add it to pkglist, and then run make -C build mailx. Otherwise, you will need to find a mailx package and add it using the appropriate procedure for its package type. You can then perform the make, and logwatch will be installed in the file system.
Step 10: Create a layer to save your changes.
Create a layer that includes the changes you have made to your current project build directory:
$ cd prjbuildDir
$ make export-layer
Your layer will be created in prjbuildDir/export/export-layer/name.date. Your packages are included in a pkglist.add file in the new layer; in this example, they are in templates/default/pkglist.add. You can then re-create your current configuration at any time with your original configuration command (which can be found in conf_cmd.ref in the layer) and the additional --with-layer=path_to_layer configuration option.
Adding mm
In the following procedure, you either add or update the mm package, depending on your installation. If you already have mm installed, you can copy existing infrastructure files and edit them as described in the procedure. If your installation does not include mm, you can create the files as shown. This procedure describes how to add a package with the spec method, and it also shows how you can update (override) an installed package with a newer version.
Step 1: Get the package.
Get the latest version of the package that you can find on the Web or another source. At the time of this writing, mm-1.4.2.tar.gz was available. Place the package in prjbuildDir/packages/.
Step 2: Create the infrastructure.
If the mm package exists in your installation you can copy the dist infrastructure and contents to your local project build directory:
$ cp -r installDir/wrlinux-3.0/layers/wrll-wrlinux/dist/mm/ prjbuildDir/dist/
Step 3: Create or edit the makefile.

If you have copied the mm directories and files from your installation, edit dist/mm/Makefile for the correct MD5SUM, VERSION, ARCHIVE, and UPSTREAM settings. If you need to create the makefile from scratch, it should look something like this:
Example 22-1    Makefile for Spec File Package

PACKAGES += mm
mm_TYPE = spec
mm_NAME = mm
mm_RPM_DEFAULT = mm mm-test
mm_RPM_DEVEL = mm-devel
mm_RPM_ALL = mm mm-test mm-devel mm-debuginfo
mm_MD5SUM = bdb34c6c14071364c8f69062d2e8c82b
mm_VERSION = 1.4.2
mm_ARCHIVE = mm-1.4.2.tar.gz
mm_UPSTREAM = http://location/mm-1.4.2.tar.gz/$(mm_ARCHIVE)
mm_DEPENDS = glibc
Step 4: Create or edit the spec file.

If you have copied the mm directories and files from your installation, edit dist/mm/mm.spec for the correct version number and remove the following two lines, which patch the 1.4.0 version:
Patch500: mm-1.4.0-add-libtool-tag.patch
...
%patch500 -p1 -b .add-libtool-tag
If you need to create the spec file from scratch, it should look something like the one shown in Example 22-2. Note that you can often find spec files for your package on the Web that you can use to start with.
Example 22-2    Spec File for Spec File Package

Name: mm
Version: 1.4.2
Summary: A shared memory library.
Release: 1_WR%{?_wr_rel}
Group: System Environment/Libraries
URL: http://www.engelschall.com/sw/mm/
Source0: http://www.engelschall.com/sw/mm/mm-%{version}.tar.gz
# WRLinux patches
License: Apache Software License
BuildRoot: %{_tmppath}/%{name}-%{version}-root

%description
The MM library provides an abstraction layer which allows related processes
to easily share data using shared memory.

%package devel
Summary: Files needed for developing applications which use the MM library.
Group: Development/Libraries
Requires: %{name} = %{version}-%{release}

%description devel
The MM library provides an abstraction layer which allows related processes
to easily share data using shared memory. The mm-devel package contains
header files and static libraries for use when developing applications which
will use the MM library.

%prep
%setup -q

%build
%configure_target
export LD=""
export ac_cv_maxsegsize=67108864
%configure --with-shm=MMFILE \
    --with-headers="%{_host_cross_include_dir}"
make CC_FOR_BUILD="%{_host_cc_wrapper}" CFLAGS_FOR_BUILD="%{_host_cflags}" CFLAGS="${CFLAGS}"

%install
%configure_target
rm -rf $RPM_BUILD_ROOT
%makeinstall

%clean
rm -rf $RPM_BUILD_ROOT

%files
%defattr(-,root,root)
%doc LICENSE README PORTING THANKS
%attr(0755,root,root) %{_libdir}/*.so.*
%{_libdir}/*.so

%files devel
%defattr(-,root,root)
%{_bindir}/*
%{_includedir}/*
%{_libdir}/*.a
%{_libdir}/*.la
%{_mandir}/*/*

%changelog
* Comments here
Step 5: Reconfigure the project.

Go to your project build directory and reconfigure your project so that it includes the new package:
$ cd prjbuildDir
$ make reconfig
Step 6: Build the package.

You can now build mm, and the build will use the new package version:
$ make -C build mm.build
Note that when the build is finished, you have a build directory for the new version, not the old one, for example, prjbuildDir/build/mm-1.4.2/.
Adding links

The following example adds the links package using the classic Makefile method.

Step 1: Get the Package

Acquire the file from its location on the Web, a CD, or another computer. At the time of this writing, a links-1.00pre20.tar.gz file is available from http://artax.karlin.mff.cuni.cz/~mikulas/links/download/.
Put the package in the local custom layer's packages directory (prjbuildDir/packages/). Do not uncompress or unpack the file in any way.
Step 2: Create the Makefile and Patch Directories
Create a directory named after the package within the local custom layer's prjbuildDir/dist directory and create a patches subdirectory of the package_name directory. In this example, the package directory would be links, so you would have prjbuildDir/dist/links/patches/. A simple way to do this is:
$ cd prjbuildDir
$ mkdir -p dist/links/patches
Step 3: Create the Makefile

Create the makefile within prjbuildDir/dist/links. A simple way to do this is to copy an existing makefile for a classic package from the Wind River Linux distribution and modify it. For example, copy installDir/wrlinux-3.0/layers/wrll-wrlinux/dist/which/Makefile to prjbuildDir/dist/links. In the makefile, do the following:

1. Change all instances of which to links.
2. Replace the value of links_MD5SUM with the value you get from the following command:
$ md5sum prjbuildDir/packages/links*
3. Replace the value of links_VERSION with the string in the package name between the package name prefix (links-) and .tar.gz. For a package named links-1.00pre20.tar.gz, this would be 1.00pre20.
4. Replace the following with appropriate values:

   links_DESCRIPTION
   links_SUMMARY
   links_LICENSE
   links_UPSTREAM
   links_GROUP

If you can locate an RPM of the package at a site such as rpmseek.com, you may find all of the information you require there. The following is an example of a complete makefile for the links package:
PACKAGES += links
links_DESCRIPTION = Links is a text-based Web browser. Links does \
not display any images, but it does support tables and most other \
HTML tags. Links advantage over graphical browsers is its speed -- \
Links starts and exits quickly and swiftly displays webpages.
links_NAME = links
links_RPM_DEFAULT = links
links_RPM_ALL = links links-debuginfo
links_SUMMARY = A text-mode Web browser.
links_SUPPORTLVL = 3
links_GROUP = Applications/Networking/Internet
links_RUN_DEPS = glibc
links_MD5SUM = e05e4838920c14c9d683ff8b4730c164
links_VERSION = 1.00pre20
links_ARCHIVE = links-$(links_VERSION).tar.gz
links_UPSTREAM = http://artax.karlin.mff.cuni.cz/mikulas/$(links_ARCHIVE)
links_LICENSE = GPL
links_DEPENDS = glibc
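Because the build fails when links_ARCHIVE, after links_VERSION is expanded, does not exactly match the file name in packages/, it can be worth checking the expansion by hand. A sketch of that check, using a temporary directory as a stand-in for prjbuildDir:

```shell
# Simulate packages/ and confirm the expanded archive name matches the
# file that was placed there (names taken from the links example).
mkdir -p /tmp/pkgdemo/packages
: > /tmp/pkgdemo/packages/links-1.00pre20.tar.gz
links_VERSION="1.00pre20"
links_ARCHIVE="links-${links_VERSION}.tar.gz"
if [ -f "/tmp/pkgdemo/packages/$links_ARCHIVE" ]; then
    echo "archive name matches"
else
    echo "archive name does NOT match"
fi
```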
NOTE: Do not use any single-quotes (') or double-quotes (") in your comments, for example in pkg_DESCRIPTION or pkg_SUMMARY.

Step 4: Add the package to pkglist.
Use the pkgname.addpkg make target to add the package and any known dependencies to pkglist and reconfigure your build/Makefiles.*:
$ make -C build links.addpkg
Step 5: Test the package build.

Test your work to be sure the new package builds properly before building the file system.
1. Unpack the source archive. If the unpack step fails, it means that the build system cannot find your links-version.tar.gz file. Make sure that you have the correct version number and name specified in the makefile so that, when the full name is expanded, it matches the name of the tar.gz file in packages/.
2. You can now build the RPM package for installation.
Step 6: Build the file system.

When you have successfully built the RPM, the links package will be installed from the RPM when you build the file system.
NOTE: You could have skipped step 5 and proceeded immediately to building the file system (make fs), and your package source code would be unpacked and built during the file system build procedure. The advantage of first unpacking your source archive and building the RPM is that you do not have to wait for other parts of the file system to build before you are able to determine if you have added the package correctly.
Adding schedutils
The following example adds the schedutils package. The example requires several changes to the makefile the package comes with, because:

- The makefile variable CC must be changed to the appropriate toolchain.
- The package does not come with a configure script.
- The makefile installs under /usr while the Wind River build environment installs under wrlinux/usr.
- The list of binaries produced must be changed to those supported by the target architecture.
In the following example we build schedutils on the arm_versatile_926ejs. (Note that schedutils is now part of util-linux and is not usually installed separately any longer.) This example uses the importPackages.tcl script to set up the package build infrastructure as described in 22.5 Adding Packages with a GUI Tool, p.268. The following procedure assumes you have created a project directory and configured it for the arm_versatile_926ejs, for example:
$ configure --enable-board=arm_versatile_926ejs --enable-kernel=standard \ --enable-rootfs=glibc_std
Step 1: Import the package.

Initialize your environment and then start the importPackage.tcl script to download schedutils from the Web:
$ cd installDir
$ ./wrenv.sh -p wrlinux-3.0
$ cd prjbuildDir
$ wtxwish installDir/wrlinux-3.0/scripts/importPackage.tcl
NOTE: This should cause your path to include the installDir/workbench-version/foundation... path. Enter the following command to verify your path:
$ echo $PATH

If there is no foundation directory path in your path, you can do the following:
$ export PATH=$PATH:installDir/workbench-version/foundation/x86-linux2/bin
for bash, or
$ setenv PATH $PATH:installDir/workbench-version/foundation/x86-linux2/bin
for csh.

At the time of this writing, the package can be found at http://rlove.org/misc/schedutils-1.5.0.tar.gz. Select Wget, enter the URL, click Update, and click Go. When the tool has completed the import, you will have the package in packages/, the Makefile and patches/ directory in dist/schedutils/, and the build directory build/schedutils/.
Step 2: Edit the Makefile.

The importPackages.tcl script creates a Makefile in dist/schedutils/, filling in the settings that it can and pointing out additional entries that you need to edit. In particular, search for angle brackets (< and >), which indicate where you must supply values. For the schedutils Makefile, you must supply values for the following:
schedutils_UPSTREAM = <pkg_URL>/$(schedutils_ARCHIVE)
schedutils_DESCRIPTION = <Description of the package>
schedutils_SUMMARY = <RPM Summary of the package>
schedutils_MD5SUM =
Refer to the comments in the makefile for instructions on filling in these fields. After making the edits, your makefile (minus the comments) will look something like this:
PACKAGES += schedutils
schedutils_VERSION = 1.5.0
schedutils_ARCHIVE = schedutils-1.5.0.tar.gz
schedutils_UPSTREAM = http://rlove.org/misc/$(schedutils_ARCHIVE)
schedutils_LICENSE = GPL
schedutils_DEPENDS = glibc
schedutils_DESCRIPTION = schedutils is a set of utilities for retrieving and \
manipulating process scheduler-related attributes, such as real-time \
parameters and CPU affinity.
schedutils_NAME = schedutils
schedutils_SUMMARY = Linux utilities for manipulating scheduler attributes.
schedutils_RPM_DEFAULT = schedutils
schedutils_RPM_DEVEL =
schedutils_RPM_ALL = schedutils
schedutils_SUPPORTLVL = 3
schedutils_GROUP = System Environment/Base
schedutils_RUN_DEPS = glibc
schedutils_MD5SUM = bb8dc76dd896bc190d4b5347db86e12a
Step 3: Note the missing configure script.

As previously mentioned, schedutils does not come with configure. You will need to modify the makefile for this situation, as shown in the next step.
Step 4: Add configure to the makefile.
If you build schedutils now, it gets past the configure error and you come to the next errors:
install: cannot create regular file `/usr/local/bin/chrt': Permission denied
install: cannot create regular file `/usr/local/bin/ionice': Permission denied
install: cannot create regular file `/usr/local/bin/taskset': Permission denied
The reason for these errors is that the package you acquired from the Web, like most third-party packages you acquire, is not configured for building in a cross-development environment. It assumes you want to install the package on the host where you are building it. You must patch the supplied makefile (as described in Step 7) and edit dist/schedutils/Makefile to integrate the package build process into the Wind River Linux build environment as described in the next step.
Step 5: Edit the makefile for the build environment.
schedutils_MAKE_OPT =
    Values listed under this variable are passed to the makefile on the command line as make $(schedutils_MAKE_OPT). As a result, values in the supplied makefile such as:

        CFLAGS = -O2 -Wall -W -Wstrict-prototypes ${ANAL_WARN}

    are replaced by:

        CFLAGS=$(schedutils_TARGET_CFLAGS) -I$(HOST_CROSS_INCLUDE_DIR)
schedutils_INSTALL_OPT =
    Values listed under this variable are passed to make install as make $(schedutils_INSTALL_OPT). As a result, values in the supplied makefile such as:
PREFIX = /usr/local
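The mechanism behind both variables is make's rule that command-line variable assignments override assignments made inside a makefile. A minimal, self-contained demonstration of that rule (a throwaway makefile under /tmp, not the schedutils makefile itself):

```shell
# A scratch makefile whose CFLAGS is overridden from the command line,
# just as schedutils_MAKE_OPT overrides the supplied makefile's values.
mkdir -p /tmp/makedemo
printf 'CFLAGS = -O2 -Wall\nall:\n\t@echo $(CFLAGS)\n' > /tmp/makedemo/Makefile
out1=$(make -s -C /tmp/makedemo)                     # makefile value
out2=$(make -s -C /tmp/makedemo CFLAGS='-Os -pipe')  # command line wins
echo "default:  $out1"
echo "override: $out2"
```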
Step 6: Build the package.

In some cases, you would now be able to build your imported package without a problem:
$ make -C build schedutils.distclean
$ make -C build schedutils
In the case of the schedutils build for the arm_versatile_926ejs, however, you meet an additional error:
ionice.c:48:3: error: #error "Unsupported archiecture!"
The ionice program portion of schedutils is not supported for this architecture and must be removed from the build. To save time and space in this example, note that another schedutils program, taskset, is not needed and can also be removed as shown in the next step.
Step 7: Create a patch.

Create a patch instead of editing the makefile each time you perform a make pkg.unpack. Create the schedutils-1.5.0-cross-compiler.patch patch shown in Example 22-3 and put it in dist/schedutils/patches/. Then create a patches.list file in the same directory which contains only the name of the patch, for example:
$ cat dist/schedutils/patches/patches.list
schedutils-1.5.0-cross-compiler.patch

Example 22-3    Commented schedutils-1.5.0-cross-compiler.patch Patch

--- schedutils-1.5.0/Makefile	2005-07-29 13:32:57.000000000 -0700
+++ schedutils-1.5.0.build/Makefile	2007-10-26 09:38:02.000000000 -0700
@@ -21,15 +21,20 @@
 CFLAGS = -O2 -Wall -W -Wstrict-prototypes ${ANAL_WARN}
 
 INSTALLBIN= install
-INSTALLMAN= install --mode a=r
-INSTALLDOC= install --mode a=r
-INSTALLDOCDIR= install --directory
+# Replace hard coded install with INSTALLBIN variable that is modified in
+# the make command line as one of the variables listed in
+# $(schedutils_INSTALL_OPT).
+INSTALLMAN= $(INSTALLBIN) --mode a=r
+INSTALLDOC= $(INSTALLBIN) --mode a=r
+INSTALLDOCDIR= $(INSTALLBIN) --directory
 
 PROGS = chrt ionice taskset
 MANPAGES= chrt.1 taskset.1
 DOCS = AUTHORS ChangeLog COPYING INSTALL README
 
-all: chrt ionice taskset
+# Replaces hard-coded targets list with PROGS variable that is modified
+# in the make command line as one of the variables listed in
+# $(schedutils_MAKE_OPT) and
+# $(schedutils_INSTALL_OPT)
+all: $(PROGS)
 
 chrt: chrt.c
 	$(CC) $(CFLAGS) -DVERSION=\"$(ver)\" -o chrt chrt.c
@@ -51,13 +56,23 @@
 	-o -name '*.tmp' -o -size 0 \) \
 	-type f -print | xargs rm -rf
 
+# Fixes the installation so that instead of:
+#   install file1 file2 file3 /destination/directory
+# it does:
+#   install file1 /destination/directory
+# per each file in 'for' loop
 install: ${PROGS}
 	@echo Install binaries to: ${BINDIR}
 	@echo Install manpage to: ${MAN1DIR}
-	@${INSTALLBIN} ${PROGS} ${BINDIR}
-	@cd man/ && ${INSTALLMAN} ${MANPAGES} ${MAN1DIR}
+	${INSTALLBIN} -d ${BINDIR}
+	for fl in $(PROGS) ; do \
+		${INSTALLBIN} $$fl ${BINDIR}; \
+	done
+	${INSTALLMAN} -d ${MAN1DIR}
+	for fl in ${MANPAGES} ; do \
+		(cd man/ && ${INSTALLMAN} $$fl ${MAN1DIR}); \
+	done
 	@echo Done! Do 'make installdoc' if you wish to install the docs.
 
 installdoc: ${PROGS}
Step 8:
Change directory to your project build directory and start the tool for adding packages:
$ cd installDir
$ ./wrenv.sh -p wrlinux-3.0
$ cd prjbuildDir
$ wtxwish $WIND_BASE/scripts/importPackages.tcl &
Select Wget as shown if you are downloading the package from the Web.
Step 11:
You may, for example, want to download the thttpd-2.25b-16.fc9.src.rpm package from http://download.fedora.redhat.com/pub/fedora/linux/releases/7/Fedora/source/SRPMS, so enter the full URL with the package name, in this case http://download.fedora.redhat.com/pub/fedora/linux/releases/7/Fedora/source/SRPMS/thttpd-2.25b-16.fc9.src.rpm. Click Update and note that the package name and version fields are filled in.
Step 12:
Click Go to download the package. If the Verbose box is checked, you will be prompted to press ENTER twice as the script interactively displays its progress. Uncheck the Verbose box to avoid the interactive prompting.
Step 13: Complete the process manually.
The Done message in the importPackages.tcl screen indicates the process of importing the package is complete. You can click Close to end the script. At this point, you can see that the package name has been added to pkglist, the package is in packages/, and the dist/ infrastructure is in place.
From your preferred source for target RPMs, obtain the RPM packages that the man package depends on and which are not already part of the standard glibc run-time system:

info
groff

Copy them to the run-time system and install them on the running target with the rpm command:

> rpm -ivh info*rpm

All installed man pages can now be viewed with the man command.
23
Using Custom Templates and Layers
23.1 Introduction 271
23.2 Adding a Layer to a Platform Project 273
23.3 Adding Another Layer 274
23.4 Overriding Layer Contents with Another Layer 275
23.5 Patching a Host Tools Package 276
23.6 Configuring and Patching the Kernel 277
23.7 Using Feature Templates in Layers 279
23.8 Modifying a BSP 280
23.1 Introduction
Layers and templates are optional configuration techniques you may use with Wind River Linux projects. You might use templates, for example, to cause relatively small changes at the end of the configuration process. You would typically use layers to control larger configuration issues, perhaps reconfiguring and patching the kernel, modifying system files, and including one or more templates. Examples of cases where you may find that layers provide advantages are when:
- You plan to combine the work of different internal or external groups (many-to-one scenarios).
- You wish to share work with multiple projects or groups (one-to-many scenarios).
- You are making a step to the next kernel, release, or product version.
The following examples introduce some of the ways you may use layers and templates to do everything from adding a package, to building a product in various feature configurations, to modifying an existing board support package.
- 23.2 Adding a Layer to a Platform Project, p.273
- 23.3 Adding Another Layer, p.274
- 23.4 Overriding Layer Contents with Another Layer, p.275
- 23.5 Patching a Host Tools Package, p.276
- 23.6 Configuring and Patching the Kernel, p.277
- 23.7 Using Feature Templates in Layers, p.279
- 23.8 Modifying a BSP, p.280
The initial example applies a layer that simply adds an application. The initial configuration is then updated with a series of layers to show how layers can be used in combination. Examples that follow this patch the kernel and illustrate how to use a layer with feature templates to configure different product features in a hypothetical phone product line. A final example creates a custom BSP by modifying an existing BSP without altering the original BSP's contents. The examples use a QEMU-supported target, the arm_versatile_926ejs, to verify results.
hello_layer: this is the first layer you add. It adds a new target application.
firstmod: this layer modifies the first layer by patching the new target application.
secondmod: this layer overrides the patch in the previous layer.
qemumod: this layer modifies a host application.
kernelmod: this layer modifies the kernel with new configuration settings and a patch.
ft: this layer example uses feature templates to configure different feature sets of a phone.
bspmod: this layer shows how to modify an existing BSP to create a new BSP without altering the contents of the original BSP.
23.2 Adding a Layer to a Platform Project
You now have a standard glibc- and busybox-based file system configured as can be seen in the pkglist file:
$ cat pkglist
busybox
filesystem
glibc
libgcc
linux
setup
timezone
wrs_kernheaders
NOTE: If you use Workbench to configure your project, you will see many more packages when you view the pkglist file, because Workbench includes the additional debug and demo templates by default.

Step 2: Add a layer.
This time, create a platform project but add a layer to the existing, default configuration with the --with-layer argument. Include hello_layer from layers_and_templates:
$ configure --enable-board=arm_versatile_926ejs \ --enable-kernel=standard \ --enable-rootfs=glibc_small \ --with-layer=/full_path/layers_and_templates/hello_layer
The layer, hello_layer, has now been configured into the project. The new application, hello, has been added to pkglist by the layer's pkglist.add file (hello_layer/templates/default/pkglist.add):
$ cat pkglist
busybox
filesystem
glibc
hello
libgcc
linux
setup
timezone
wrs_kernheaders
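A layer's pkglist.add can be thought of as contributing extra names that are merged with the project's base package list; a hedged sketch of that merge, with file paths under /tmp standing in for the real project files:

```shell
# Union of the base pkglist and a layer's pkglist.add, shown sorted as
# in the cat output above.
printf 'busybox\nfilesystem\nglibc\nlibgcc\nlinux\nsetup\ntimezone\nwrs_kernheaders\n' > /tmp/pkglist.base
printf 'hello\n' > /tmp/pkglist.add
sort -u /tmp/pkglist.base /tmp/pkglist.add > /tmp/pkglist.merged
cat /tmp/pkglist.merged
```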
Step 3:
You can enter the following make command at this point to confirm that the source is copied into your build directory in preparation for the file system build:
$ make -C build hello.patch
Step 4:
Step 5: Test the application on the target.

Because this is a QEMU-supported target, you can run QEMU to quickly test that the new application is in place and works:
$ make start-target
...
# hello
hi there
# CTRL+A x
$
The hello application is there and working. You can see that it is in the root user's path. It is located in /bin, as specified in hello_layer/dist/hello/Makefile.
NOTE: The hello application added in this example does not use the standard tar archive or SRPM packaging scheme; rather, the source is already unpacked. This approach can be very useful in development for making changes to source (for example, in a source code control system), because those changes are applied immediately; no repackaging is required to make them available to the build system.
23.3 Adding Another Layer

Step 1: Add the new layer.

Add the new layer to the --with-layer argument, separating it with a comma as shown:
$ configure --enable-board=arm_versatile_926ejs \ --enable-kernel=standard \ --enable-rootfs=glibc_small \ --with-layer=/full_path/layers_and_templates/firstmod,/full_path/layers_and_templat es/hello_layer
Step 2: Apply the patch.

You can now apply the patch provided by the new layer:
$ make -C build hello.distclean $ make -C build hello.patch
If you examine build/hello_WRS/hello.c you can see that it is changed to print "bye there".
Step 3: Build and test the new file system.

If you now build and test the new file system, you will see that the hello application prints the message as modified by the patch in the firstmod layer.
NOTE: It is easy to back out of changes made by layers; if you do not want to include the changes from firstmod, simply configure your project without it.
23.4 Overriding Layer Contents with Another Layer

Step 1: Configure the project with all three layers.

Configure a project with the hello_layer, firstmod, and secondmod layers as follows:
$ configure --enable-board=arm_versatile_926ejs \ --enable-kernel=standard \ --enable-rootfs=glibc_small \ --with-layer=/full_path/layers_and_templates/secondmod,/full_path/layers_and_templa tes/firstmod,/full_path/layers_and_templates/hello_layer
Note the order in which the layers are specified: secondmod is listed first and then firstmod, so the default template in secondmod will override the default template in firstmod. In other words, layers are applied in reverse order, so that the first layers specified are the last layers applied. The last layers applied override layers applied earlier.
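The override rule can be sketched as a small simulation: apply layers in reverse of the order given on the configure line, so the change from the first-listed layer lands last and wins. The message strings are the ones used by the hello examples in this chapter; the loop itself is purely illustrative:

```shell
# Layers as listed on the configure command line, first-listed first.
layers="secondmod firstmod hello_layer"
# Reverse the list: the build applies the last-listed layer first.
reversed=""
for l in $layers; do reversed="$l $reversed"; done
# Apply each layer's change in turn; the final value is the winner.
msg=""
for l in $reversed; do
    case "$l" in
        hello_layer) msg="hi there" ;;
        firstmod)    msg="bye there" ;;
        secondmod)   msg="that's all folks" ;;
    esac
done
echo "$msg"
```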
Step 2: View the order of layer processing.
You can see the order that layers are processed in the prjbuildDir/layers file, where the first layers listed overlay the layers listed later:
$ cat prjbuildDir/layers
/full_path/layers_and_templates/secondmod
/full_path/layers_and_templates/firstmod
/full_path/layers_and_templates/hello_layer
...
The secondmod layer is able to modify the code in firstmod because it is applied later. The secondmod layer has a higher priority than the firstmod layer.
Step 3: Apply the patches.
If you examine build/hello_WRS/hello.c, you can see that it is changed to print "that's all folks".
Step 4: Examine the patch order.

quilt is used by the build system to manage the patches. You can use the quilt series command (or cat the contents of the prjbuildDir/build/hello-WRS/wrlinux_quilt_patches/series file) to see the order of patch processing:
$ alias quilt=$PWD/host-cross/bin/quilt
$ cd build/hello-WRS
$ quilt series
patches_links/full_path/layers_and_templates/firstmod/templates/default/hello/localchange.patch
patches_links/full_path/layers_and_templates/secondmod/templates/default/hello/localchange2.patch
$
(Note that to use quilt you must have prjbuildDir/host-cross/bin in your path, or specify the path to quilt on the command line.) If you had specified firstmod before secondmod on your configure command line, the order of patches in series would be reversed and the build system would attempt to apply the secondmod patch before the firstmod patch. This would fail when you entered the make -C build hello.patch command.
Step 5: Build and test the new file system.
If you now build and test the new file system, you will see that the hello application prints the message "that's all folks", which is contained in the secondmod layer.
23.5 Patching a Host Tools Package

The host tools packages always use the classic Makefile. See 10. Adding Packages for more on classic packages. Place any patches for the package source tree in tools/pkg/patches/pkg-what_is_done.patch, and list any patches in a patches.list file in tools/pkg/patches/patches.list.
See installDir/wrlinux-3.0/layers/wrll-host-tools/tools/ for examples using the host tool package infrastructure. This example uses the layer qemumod to provide a patch (templates/default/qemu/my_qemu.patch) for the qemu emulator host tool. To patch the tool, include the layer and rebuild the host tools as shown in the following example.
Step 1: Configure the project.

The following configure command includes the layer with the patch and enables the host tools to be rebuilt:
$ configure --enable-board=arm_versatile_926ejs \ --enable-kernel=standard --enable-rootfs=glibc_small \ --with-layer=/full_path/layers_and_templates/qemumod \ --enable-prebuilt-tools=no
Step 2: Rebuild qemu.
You can now rebuild qemu so that it will include the patch from the layer:
$ cd build-tools/
$ make qemu.rebuild
Step 3: Verify the patch.

You can use quilt or view the series file to see that the last patch applied was the one provided by the layer:
$ cd qemu-version/
$ quilt series
...
patches_links/full_path/layers_and_templates/qemumod/templates/default/qemu/my_qemu.patch
(Note that the patch itself is empty and does nothing; it is just used to demonstrate the way patches can be applied to update host tools with layers.)
23.6 Configuring and Patching the Kernel

Enabling CONFIG_BINFMT_AOUT
The default template in the kernelmod layer includes a binfmt.cfg file (templates/default/binfmt.cfg) that enables the CONFIG_BINFMT_AOUT setting:
CONFIG_BINFMT_AOUT=y
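Conceptually, the fragment's settings take precedence over the base kernel configuration for the options it names, while everything else is kept. A simplified sketch of such a merge (the build system's actual merge logic may differ; the files under /tmp are stand-ins):

```shell
# Fake base .config and layer fragment; the fragment's value for
# CONFIG_BINFMT_AOUT should win in the merged result.
printf 'CONFIG_BINFMT_AOUT=n\nCONFIG_SWAP=y\n' > /tmp/dot.config
printf 'CONFIG_BINFMT_AOUT=y\n' > /tmp/binfmt.cfg
# Keep the first occurrence of each option key (fragment listed first).
awk -F= '!seen[$1]++' /tmp/binfmt.cfg /tmp/dot.config > /tmp/merged.config
grep CONFIG_BINFMT_AOUT /tmp/merged.config
```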
The following patch (from templates/default/linux/2.6.x/mykernelpatch.patch) is applied, which will output a message at boot time:
---
 init/calibrate.c | 1 +
 1 file changed, 1 insertion(+)

--- a/init/calibrate.c
+++ b/init/calibrate.c
@@ -117,6 +117,7 @@ void __devinit calibrate_delay(void)
 	unsigned long ticks, loopbit;
 	int lps_precision = LPS_PREC;
+	printk("La-la-la\n");
 	if (preset_lpj) {
 		loops_per_jiffy = preset_lpj;
 		printk("Calibrating delay loop (skipped)... "
Step 2: Rebuild the kernel.

When you have configured the project to include the layer, rebuild the default standard kernel to include your custom modifications:
$ make -C build linux.rebuild
Step 3: Verify results.
The kernel .config file should now include the CONFIG_BINFMT_AOUT setting:
$ grep CONFIG_BINFMT_AOUT build/linux-*/.config
CONFIG_BINFMT_AOUT=y
Refer to 13. Patch Management for more on configuring and patching the kernel.
23.7 Using Feature Templates in Layers
Another use for feature templates is aggregation. You can create "master" templates that include other templates. For a use case, imagine that you have a cell phone platform and you would like to be able to configure the system for different phones, ranging from a base phone to a full-fledged feature phone. To do this, you could create different feature templates, each one including a different set of sub-features. Master templates could then combine these templates to create different configurations.

The ft layer included with this example contains multiple feature templates. Some of the templates contain include files; these are the master templates that include other templates. For example, basicphone and featurephone each contain include files that combine a different set of features:
$ cat ft/templates/feature/basicphone/include
feature/gprs
$ cat ft/templates/feature/featurephone/include
feature/edge
feature/camera
These templates each cause a different set of camera, edge, or gprs feature templates to be included in a configuration.
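On disk, such a master template is just a directory with an include file listing the sub-feature templates. The following is a minimal sketch, run in a scratch directory; the paths mirror the example ft layer but are illustrative:

```shell
# Sketch: laying out a master feature template on disk.
# Paths mirror the example ft layer; run in a throwaway directory.
work=$(mktemp -d)
mkdir -p "$work/ft/templates/feature/basicphone"
printf 'feature/gprs\n' > "$work/ft/templates/feature/basicphone/include"
cat "$work/ft/templates/feature/basicphone/include"
```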
Step 1: Configure a set of features.
For example, with the following command you would create a basicphone configuration:
$ configure --enable-board=arm_versatile_926ejs \
    --enable-kernel=standard --enable-rootfs=glibc_small \
    --with-layer=/full_path/layers_and_templates/ft \
    --with-template=feature/basicphone
Notice that you must specifically identify the template when it is not the default template in the layer.
Step 2: Verify your configuration.
The end of the template_paths file shows that you have included the new feature templates:
$ cat template_paths
...
/full_path/layers_and_templates/ft/templates/feature/gprs
/full_path/layers_and_templates/ft/templates/feature/basicphone
The feature/basicphone template uses an include file to include feature template gprs, so you see gprs listed here as well as basicphone.
Step 3: Configure a featurephone set of features.

Run the same configure command, this time specifying --with-template=feature/featurephone.

Step 4: Verify your configuration.

The end of the template_paths file shows that you have included the new feature templates:
/full_path/layers_and_templates/ft/templates/feature/gprs
/full_path/layers_and_templates/ft/templates/feature/edge
/full_path/layers_and_templates/ft/templates/feature/camera
/full_path/layers_and_templates/ft/templates/feature/featurephone
The featurephone template include file includes the edge and camera templates, and the edge template has an include file that includes gprs, so you see them all in template_paths.
Step 5: Specify multiple templates.
Another way you could combine features is to specify multiple templates with the --with-template argument, using a comma-separated list. For example, this configure command combines the basicphone and camera features:
$ configure --enable-board=arm_versatile_926ejs \
    --enable-kernel=standard --enable-rootfs=glibc_small \
    --with-layer=/full_path/layers_and_templates/ft \
    --with-template=feature/basicphone,feature/camera
...
$ cat template_paths
...
/full_path/layers_and_templates/ft/templates/feature/gprs
/full_path/layers_and_templates/ft/templates/feature/basicphone
/full_path/layers_and_templates/ft/templates/feature/camera
With more complicated scenarios, for example if the different camera features required kernel patches, additional files, and so on, you could create layers for each feature instead of just templates, combining them as described earlier to create the desired final configuration.
A layer that demonstrates this is the bspmod layer included in the example distribution. The include file in bspmod/templates/board/arm_versatile_926ejs contains an identically named board/arm_versatile_926ejs template that causes the build system to include the original board template from the base kernel layer. This ensures that the base support for the BSP is included. The audit.scc and audit.cfg files control one kernel config option for the purpose of demonstrating how the custom BSP template overrides the default BSP template. Follow this procedure to see how the base BSP gets configured:
Step 1: Configure the project.
Step 2: Build the kernel.

Step 3: Verify the results.

By default, for this BSP, the kernel config option CONFIG_AUDIT is not set. It is set, however, in the template just added. View the setting of the CONFIG_AUDIT option after adding the custom template and configuring the kernel:
$ grep CONFIG_AUDIT build/linux-*/.config
CONFIG_AUDIT=y
CONFIG_AUDIT_GENERIC=y
The BSP has been configured using the standard BSP, adding your custom BSP change.
24
Kernel Use Cases
24.1 Introduction 283
24.2 Adding a Feature to a Supported Kernel 283
24.3 Using KVM 285
24.4 Collecting Kernel Core Dumps with Kdump 290
24.1 Introduction
This chapter presents various examples of kernel development. Also see 9. Configuring the Kernel and 13. Patch Management for additional examples and explanations of kernel configuration and development.
4. Configure and build with and without the new kernel feature.
5. Optionally modify the template to always include the feature.

Step 1: Create a template with a linux subdirectory.
Following the standard conventions of the build system, create a template or a layer with a template that includes a linux subdirectory. For example, if your feature is called custom_log_lvl you would create a template such as:
templates/features/custom_log_lvl
Step 2: Create a kernel feature template file.

Create a kernel feature template file in the linux subdirectory. The name of the file will be the name of the kernel feature. By convention, the directory and the kernel template have the same name, but this is not a requirement. A kernel template file name follows the format filename.scc. For this example, create templates/features/custom_log_lvl/linux/custom_log_lvl.scc. When properly configured, a feature called custom_log_lvl will be available to the kernel patching subsystem (see Step 4).
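The directory and file naming above can be sketched as a couple of shell commands. The paths are illustrative, created under a scratch directory rather than a real project:

```shell
# Sketch: create the feature template skeleton described above.
# In practice this would live in your layer or project directory.
work=$(mktemp -d)
mkdir -p "$work/templates/features/custom_log_lvl/linux"
touch "$work/templates/features/custom_log_lvl/linux/custom_log_lvl.scc"
ls "$work/templates/features/custom_log_lvl/linux"
```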
Step 3: Add patches to the kernel feature template
The kernel patching subsystem offers a set of directives that are used in kernel features to control and manipulate which patches are applied to the kernel. In this example, the patch directive is used to add patches to the kernel patch queue. Put the following contents in templates/features/custom_log_lvl/linux/custom_log_lvl.scc:
patch add_new_log_lvl.patch
patch pr_debug_use_new_lvl.patch
Place the two patches shown in Example 24-1 and Example 24-2 in the template so that they are added to the kernel's patch queue when you want to configure the feature into the kernel.
Example 24-1 add_new_log_lvl.patch
b/include/linux/kernel.h | 1 +
1 file changed, 1 insertion(+)
--- a/include/linux/kernel.h.orig
+++ b/include/linux/kernel.h
@@ -50,6 +50,7 @@ extern const char linux_proc_banner[];
 #define KERN_NOTICE "<5> "	/* normal but significant condition */
 #define KERN_INFO "<6> "	/* informational */
 #define KERN_DEBUG "<7> "	/* debug-level messages */
+#define KERN_CUSTOM "<7> CUSTOM: "	/* custom-level messages */
 
 extern int console_printk[];

Example 24-2 pr_debug_use_new_lvl.patch
--- a/include/linux/kernel.h.orig
+++ b/include/linux/kernel.h
@@ -207,7 +207,7 @@ extern void dump_stack(void);
 #ifdef DEBUG
 /* If you are writing a driver, please use dev_dbg instead */
 #define pr_debug(fmt,arg...) \
-	printk(KERN_DEBUG fmt,##arg)
+	printk(KERN_CUSTOM fmt,##arg)
 #else
 static inline int __attribute__ ((format (printf, 1, 2))) pr_debug(const char * fmt, ...)
 {
Step 4: Configure and build with and without the new kernel feature.
To configure and build a kernel with the new feature applied, specify the new kernel feature template when you specify the kernel to the configure command with --enable-kernel=standard+custom_log_lvl. For example, a simple configure line for a common PC could be:
$ configure --enable-board=common_pc \
    --with-template-dir=PATH_TO/templates/features/custom_log_lvl \
    --enable-kernel=standard+custom_log_lvl \
    --enable-rootfs=glibc_std
When the configure command completes you can build the kernel and it will include the new feature. Simply remove the reference to the kernel feature template if you want to build the kernel without it:
$ configure --enable-board=common_pc \
    --with-template-dir=PATH_TO/templates/features/custom_log_lvl \
    --enable-kernel=standard \
    --enable-rootfs=glibc_std
Overview Of KVM
KVM (Kernel-based Virtual Machine) is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V). It consists of a loadable kernel module, kvm.ko, which provides the core virtualization infrastructure, and a processor-specific kernel module, kvm-intel.ko or kvm-amd.ko.
KVM has two parts: the KVM host side code, and the KVM guest BSP, which is used as a guest kernel to validate the KVM host.
The host you will use as the KVM host has certain requirements. The following assumes it is currently running Wind River Linux with the glibc_std file system. Before starting the following procedure, determine whether virtualization (VT) is supported on your host. This requires two steps:
1. Enter the following command:
$ egrep '(vmx|svm)' --color=always /proc/cpuinfo
2. Confirm that your BIOS has virtualization enabled (if required). This step simply requires you to confirm that the relevant options in your BIOS have the correct settings. Examples of options to investigate are POST Behavior > Virtualization and Performance > Virtualization.
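The flag check from step 1 can be wrapped in a small script. The following is a hedged sketch that inspects a cpuinfo-style string; the sample flags line is illustrative, and on a real host you would read /proc/cpuinfo instead:

```shell
# Sketch: detect hardware virtualization support from cpuinfo-style text.
# The sample flags string is illustrative; on a real host use /proc/cpuinfo.
cpuinfo_flags="flags : fpu vme de pse msr pae vmx ssse3"
if echo "$cpuinfo_flags" | grep -qE 'vmx|svm'; then
    result="VT supported"
else
    result="VT not supported"
fi
echo "$result"
```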
The KVM example is enabled with the following configure options:

--with-template=feature/kvm --enable-kernel=standard+features/kvm
To configure and build the example, use the following procedure:
1. Enter the following configure command:
$ configure \
    --enable-board=common_pc_64 \
    --enable-rootfs=glibc_std \
    --enable-jobs=4 \
    --enable-kernel=standard+features/kvm \
    --with-template=feature/kvm
2. When the build completes, you will find the KVM packages in prjbuildDir/export/RPMS/x86-64.
3. Use the following procedure to launch a KVM guest from the KVM host using a TAP network configuration.
KVM Host Preparation
In order to launch the KVM guest, you must first boot the KVM host. This section describes how to boot the board (common_pc_64) with the Wind River Linux standard kernel and glibc_std with KVM support. This example assumes a KVM host with a SATA hard disk used as the root device.
Deploy the Root File System
Deploy the root file system on a KVM host root device such as hard disk.
NOTE: You cannot use NFS as the KVM host root device when booting the KVM guest.

The following is a simple way to install the root file system contained in the tar archive, but it is based on two assumptions:
There is an idle partition on the KVM host hard disk, for example sda2.
The KVM host machine supports NIC boot.

1. Set the KVM host's boot sequence in the BIOS settings to onboard NIC, enabled as Onboard Devices > Integrated NIC > Enabled w/PXE.
2. Configure a pxeboot server for the KVM host. This can be, for example, the development host. See 17. Deploying Your Board with PXE for information on how to configure a PXE server.
3. Boot your KVM host.
4. Create the file system on the idle partition of the KVM host with mkfs, and then mount the partition.
5. Untar your root file system (common_pc_64-glibc-std.tar.bz2) on the idle partition.
6. Reboot the KVM host machine from the hard drive.
7. Untar common_pc_64-linux-modules-WR3.0zz_standard.tar.bz2 in "/".
Note that if you want to use the apache server in 24.3.3 Run apache or boa, p.289, use glibc_std as your KVM guest root file system.
2. Build the file system:
$ make fs
When this completes, there is a KVM guest kernel image and a root file system tar archive in your prjbuildDir/export directory.
3. Transfer the kernel and root file system to the proper path on the KVM host (these are used in the following steps).
4. Make an hda rootfs image (on the KVM host machine). In the directory of the KVM guest root file system deployed in Deploy the Root File System, p.287, run the following commands:
# modprobe loop
# make-kvm-guest-rootfs-img ./rootfs.img 768000 \
    prjbuildDir/export/common_pc_64-glibc_std-standard-dist.tar.bz2
rootfs.img is output as a glibc_small or glibc_std ext2 root file system image, either of which can be used as the KVM guest root file system.
Start the KVM guest (linux) from the KVM host (linux):
Use the following commands to launch a KVM guest kernel from the KVM host:
# modprobe kvm-intel    (or modprobe kvm-amd)
# qemu-system-x86_64 -nographic -net nic,model=i82557b \
    -net tap,script=/etc/qemu-ifup \
    -hda path_to/rootfs.img -kernel path_to/kernel \
    -append "root=/dev/hda rw console=ttyS0,115200 \
    ip=192.168.0.3::192.168.0.1:255.255.254.0"
Confirm that your IP and gateway configuration is correct before starting. The IP address of the KVM guest should be in the same subnet as the KVM host. Set your netmask appropriately. After the KVM guest has booted, the virtual machine can be accessed from your local network through a command such as ssh root@192.168.0.3.
Running apache

To use the apache server, you should have glibc_std as your KVM guest root file system. After you boot the KVM guest from the KVM host, the boa server is started by default, so you must kill it first and then start httpd, to avoid a conflict on port 80. Open the KVM guest IP address using a web browser from any local IP and you should see the following message:
It works
Running boa
You can also use boa, accessed at http://kvm-guest_ip_address/. The following message should display:
boa is running
Before quitting KVM with CTRL+A X, ensure that the KVM guest is cleanly shut down.
2. Enable Processor types and features > kexec system call (this sets CONFIG_KEXEC=y).
3. Enable Processor types and features > kernel crash dumps (this sets CONFIG_CRASH_DUMP=y).
Enable Processor types and features > Build a relocatable kernel (this sets CONFIG_RELOCATABLE=y).
For CGL kernels only (so that crash dumps are interpretable), disable Security options > Grsecurity > Grsecurity (so that CONFIG_GRKERNSEC is not set).
24 Kernel Use Cases 24.4 Collecting Kernel Core Dumps with Kdump
5. Copy the kernel images to the target. Two images are required: a vmlinux image with symbols, referenced by the crash analysis tool, and a bzImage, which serves as the capture dump kernel.
$ cp export/common_pc-vmlinux-symbols-WR3.0zz_standard export/dist/root/
$ cp export/common_pc-bzImage-WR3.0zz_standard export/dist/root/
6. Configure QEMU with a sufficient amount of RAM for this procedure, and pass a command-line argument to the kernel to reserve a buffer to hold the capture kernel:
$ make config-target
Set TARGET0_QEMU_MEM=256 for 256 MB. Add crashkernel=64M@16M to the end of TARGET0_QEMU_KERNEL_OPTS (for example, lock=pit oprofile.timer=1 crashkernel=64M@16M). This reserves 64 MB at the physical address 16 MB.
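The crashkernel=size@offset syntax can be pulled apart with ordinary shell parameter expansion; here is a quick sketch using the value from this procedure:

```shell
# Sketch: parsing the crashkernel=size@offset syntax with parameter expansion.
spec="crashkernel=64M@16M"
size=${spec#crashkernel=}   # strip the key, leaving 64M@16M
size=${size%@*}             # keep the part before @
offset=${spec#*@}           # keep the part after @
echo "reserve $size at physical address $offset"
```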
7. Boot the target.
8. Log in as the user root with password root, and load the capture-kernel image into the reserved buffer for execution upon a kernel panic:
# kexec -p common_pc-bzImage-WR3.0zz_standard \
    --args-linux \
    --append="$(cat /proc/cmdline | sed 's/ crashkernel=64M@16M//') noacpi maxcpus=1"
-p means this kernel should be loaded on panic; --args-linux denotes that the image is a Linux kernel; --append= specifies the command line arguments to pass to the capture kernel.
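The sed expression in the --append argument strips the crashkernel reservation from the current command line, so the capture kernel does not try to reserve another window. Its effect can be checked on the host; the sample command line below is illustrative:

```shell
# Sketch: the effect of the sed in the kexec --append argument.
# The sample command line is illustrative, not taken from a real target.
cmdline="root=/dev/hda rw console=ttyS0,115200 crashkernel=64M@16M"
stripped=$(echo "$cmdline" | sed 's/ crashkernel=64M@16M//')
echo "$stripped"
```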
In this case we pass the same arguments as for the primary kernel, but without reserving a window for another capture kernel. We also ensure the crash kernel boots with only one CPU and with ACPI disabled.
9. Trigger a kernel panic by loading a bad module, doing something nasty, or executing the following command:
# echo c > /proc/sysrq-trigger
Wait for the crash kernel to boot.
10. Log in again as the user root with password root and copy the core dump from the crashed kernel to permanent storage, for example:
# cp /proc/vmcore /root/vmcore.dump
11. Reboot the target using the standard kernel. The crash kernel boots with very little memory, and so may not be capable of being used to analyze the crash dump without the kernel's out-of-memory (OOM) killer killing the process.
Target:
# shutdown -h now
Host:
$ make start-target
Note that, alternatively, you could create a boot script to automatically copy the core to storage and reboot back into service.
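Such a boot script could look like the following hedged sketch. The function name and paths are assumptions, and the reboot is commented out so the sketch is safe to run:

```shell
# Hedged sketch of a capture-kernel boot script: if a crash dump is present,
# save it to permanent storage and return to service.
# Function name and default paths are assumptions, not Wind River tooling.
save_vmcore() {
    vmcore="${1:-/proc/vmcore}"
    dest="${2:-/root/vmcore.dump}"
    if [ -e "$vmcore" ]; then
        cp "$vmcore" "$dest" && echo "saved $dest"
        # reboot    # boot back into the standard kernel
    else
        echo "no crash dump present"
    fi
}
```

Invoked from an init script in the capture kernel's root file system, this would replace the manual copy and reboot in steps 10 and 11.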
Kexec may also be used to quickly reboot a target, bypassing the system firmware. To do this:
1. Boot the target as normal. You do not need to supply a crashkernel command line argument.
2. Load a kernel to reboot into:
root@localhost:/root> kexec -l common_pc-bzImage-WR3.0zz_standard \
    --args-linux \
    --append="$(cat /proc/cmdline)"
3. Reboot into the new kernel. Note that this does nothing graceful to prepare userspace to go down.
root@localhost:/root> kexec -e
A window of memory must be reserved to hold the capture kernel and a small amount of bookkeeping data (less than 1 MB). For most applications a 64 MB buffer is sufficient, as specified with crashkernel=64M@16M. Once booted with such a command line argument, that memory is no longer available for use by the system. It is not possible to kexec on panic to a new kernel from the context of a capture kernel. The capture kernel is potentially booted with very little memory and is not recommended for use in SMP mode. Therefore, Wind River recommends that the system return to the standard kernel after the crash dump is collected.
PA R T V
Appendixes
A Open Source Documentation ............................ 295
B Common make Command Targets ................... 299
C File System Layout Configuration .................... 303
D KGDB Debugging and the Command Line ...... 309
E Connecting with TIPC ........................................ 313
F Control Groups (cgroups) .................................. 321
G Build Variables .................................................... 325
H Cavium Simple Executive Integration and Debugging ... 331
Glossary .............................................................. 359
A
Open Source Documentation
A.1 Introduction 295 A.2 Carrier Grade Linux 295 A.3 Networking 296 A.4 Security 296 A.5 Linux Development 296
A.1 Introduction
This chapter includes URL links to open source networking, security, and Linux development documentation that is relevant to Wind River Linux. Such documentation is available from various sources. The main source used here is the Linux Documentation Project.

Open source documentation, while valuable, must always be scrutinized for relevance. It is sometimes written specifically for a certain Linux distribution (which may not always be obvious), and sometimes even for a specific version. It is often out-of-date. It is a good idea to complement, where possible, the resources below with resources that may exist from vendors, mailing lists, and from the maintainers themselves.
A.3 Networking
Some of these documents are very general and others very specific. Note that the first two documents are very comprehensive, and include a good deal of information on specific protocols.
The Linux Networking Overview HOWTO (www.tldp.org/HOWTO/Networking-Overview-HOWTO.html)
The Linux Networking HOWTO, previously the Net-3 Howto (www.tldp.org/HOWTO/NET3-4-HOWTO.html)
The PPP HOWTO (www.tldp.org/HOWTO/PPP-HOWTO/index.html)
ADSL Bandwidth Management HOWTO (www.tldp.org/HOWTO/ADSL-Bandwidth-Management-HOWTO/index.html)
Traffic Control HOWTO (www.tldp.org/HOWTO/Traffic-Control-HOWTO/)
Netfilter/Iptables HOWTO, which includes a good deal of documentation on packet filtering, NAT, and tutorials (www.netfilter.org/documentation)
VPN HOWTO (www.tldp.org/HOWTO/VPN-HOWTO/index.html)
A.4 Security
This section includes documents on Netfilter, Iptables, SSL and SSH.
Netfilter/Iptables HOWTO, which includes a good deal of documentation on packet filtering, NAT, and tutorials (www.netfilter.org/documentation)
SSL Certificates HOWTO (http://www.tldp.org/HOWTO/SSL-Certificates-HOWTO/index.html)
OpenSSH, the home page for the OpenSSH project, with links to documentation and download sites for all the programs included in the OpenSSH suite (www.openssh.com)
A.5 Linux Development

Building and Installing Software Packages for Linux (www.tldp.org/HOWTO/Software-Building-HOWTO.html)
Program Library HOWTO (www.tldp.org/HOWTO/Program-Library-HOWTO/index.html)
Linux Loadable Kernel Module HOWTO (www.tldp.org/HOWTO/Module-HOWTO/index.html)
Linux Parallel Processing HOWTO (www.tldp.org/HOWTO/Parallel-Processing-HOWTO.html)
Secure Programming for Linux HOWTO (www.tldp.org/HOWTO/Secure-Programs-HOWTO/index.html)
RPM HOWTO (www.tldp.org/HOWTO/RPM-HOWTO/index.html)
An RPM guide from Red Hat (http://fedora.redhat.com/docs/drafts/rpm-guide-en/index.html)
Additional useful information on RPM (http://www.rpm.org/max-rpm/)
B
Common make Command Targets
B.1 Introduction
Table B-1 describes common make commands performed in the project build directory along with their Workbench equivalents.
Table B-1 Command Line and Workbench Build Options
Description
make fs
fs
Build a new file system from RPMs where available, use source otherwise. No need to specify clean because export/dist and the file system image file are automatically cleaned. Force a build of everything (file system and kernel) from source. Remove the project_prj contents and folder. Run the clean rule for each package in the prjbuildDir/build directory. Re-process templates and layers. Recreates list files and makefiles but does not support changes to config.sh (which require a new configuration).
make build-all
build-all delete
kernel-clean
kernel_build
kernel_rebuild
Clean, then build Linux kernel. The export/ directory is updated with the:
boot kernel kernel symbol file tar file that contains the kernel modules (which also include debug information).
linux.rebuild only rebuilds objects that are required by dependencies. make -C build linux.config make -C build linux.reconfig kernel_config Extract and patch kernel source for kernel configuration Regenerates the kernel configuration by reassembling the config fragments. kernel_menuconfig Extract and patch kernel source and launch menu-based tool for kernel configuration. Extract and patch kernel source and launch X Window tool for kernel configuration. Wind River Workbench tool for kernel configuration. Generates a board's DTB file needed to boot many PowerPC targets. Consult the BSP README for additional information and the proper DTS Base Name to use. Note that this command requires that you have already built the kernel with make -C build linux or make fs. Include the analysis tools (formerly called ScopeTools) in the file system. Build specific host tool tool.
kernel_xconfig
make -C build pkg_name.unpack make -C build pkg_name.prepatch make -C build pkg_name.patch make -C build pkg_name.postpatch make -C build pkg_name.compile
For package build targets using Workbench, click the User Space Configuration tool, select the package you want to build, and select the Targets tab. You can then click the appropriate button for the package build target you want.
Copy package source into the build area and apply patches.
This will only do the compile. If you just specify pkg_name (with no .compile suffix), the toplevel dependency of .sysroot will trigger and the build system will compile the package, generate an RPM, and install it to the sysroot.
make -C build pkg_name.install make -C build pkg_name.clean make -C build pkg_name.distclean Clean the package pkg_name. Clean the package and the package patch list. This deletes the existing build directory of the package as well as .stamp files. Build the specific package pkg_name. Build the specific package pkg_name for the specified alternate CPU. if not recognized by the build system as a target, anything is passed into the package itself and run there. This would be like running make -C \ build/package-<version> <anything> Create the exportable prjbuildDir/export/host-tools.tar.bz2 archive.
make host-tools
Add a package and any packages it is known to require, and reconfigure the makefiles as appropriate. Add a package for the specified CPU and any packages that it requires. Clean, then build a package.
Wind River Workbench tool to add, remove, patch packages. Menu-based tool to configure busybox. Extracts the changes to a project into a layer in the export/ directory, which can then be shared with other projects, and added to source control. Creates a sysroot/ directory in the export/ directory, which can be used for providing build specs in Workbench. Create an exportable toolchain that can be used in combination with an exported sysroot for a portable application environment. Starts a GUI applet that assists the developer in adding external packages to a project. Reboot target with latest kernel and file system. Start a QEMU simulation.
make export-sysroot
export-sysroot
make export-toolchain
export-toolchain
import-package
deploy
C
File System Layout Configuration
C.1 Introduction 303 C.2 changelist.xml Commands 304 C.3 The fs_final.sh Script 308
C.1 Introduction
The file system layout feature has been designed to allow you to view the contents of the export/dist file system in Workbench as it will be generated by the development system. The following sections explain how to use scripts and XML to add custom files and directories to that file system or to RPMs. See Wind River Workbench User's Guide (Linux Version) for using the File System Configuration Layout tool in Workbench to do the following:
Examine file meta properties.
Add files and directories to the file system.
View parent packages and remove packages.
Add devices to /dev and change their ownership.
The filesystem/changelist.xml file is an XML file that is managed by Workbench but can be edited or modified by editors or command line tools. The script wrlinux/scripts/fs_changelist.lua processes this file immediately before the optional finalization script fs_final.sh (see C.3 The fs_final.sh Script, p.308). The result is the export/dist file system image, which is created as follows:
1. All packages are exported into export/dist.
2. fs_install.sh is processed into an RPM, and exported into export/dist.
3. The files in filesystem/fs are copied into export/dist.
4. The changelist.xml file is processed on top of export/dist.
5. Finally, your optional fs_final.sh is processed, as the last word.
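Because later steps are processed on top of earlier ones, a file delivered by filesystem/fs overrides the same file delivered by a package. The following is a toy sketch of that overlay behavior; the directory names are illustrative, not the real build tree:

```shell
# Toy sketch of the export/dist assembly order: later overlays win.
work=$(mktemp -d)
mkdir -p "$work/pkg" "$work/fs" "$work/dist"
echo "from packages" > "$work/pkg/issue"
echo "from filesystem/fs" > "$work/fs/issue"
cp "$work/pkg/"* "$work/dist/"   # step 1: packages exported first
cp "$work/fs/"* "$work/dist/"    # step 3: filesystem/fs copied on top
cat "$work/dist/issue"
```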
Add a directory and then remove subsets.
Remove a directory and add back in subsets.
Apply unique attributes to any subset of an added directory tree.
All the listed fields in the following are required for their respective action, unless otherwise noted.
General Attributes
action=delfile name=name Name of the file, directory, pipe, symlink, or device to delete.
Example
<cl action="delfile" name="/usr/share/f_foo0" />
action=addfile name=filename Name of the file added to the target file system. umode=permissions The permissions of the target file, in octal (as with chmod).
Optional Fields
source=full_path The name and path of the file on the host file system. If present, the source file to be copied into the target file system. If not present, then this entry is used to modify the permissions an existing file. size=size Where size is the pre-calculated size of the file used if the source field is present, saving a size lookup by the tools that process this file (Workbench or command line tools). uid=username The user name, in text or in numeric form (as with chown). gid=groupname The group name, in text or in numeric form (as with chgrp).
Examples
<cl action="addfile" name="/usr/share/f_foo1" source="/tmp/layout/f_foo1" />
<cl action="addfile" name="/usr/share/f_foo2" source="/tmp/layout/f_foo1" umode="777" />
<cl action="addfile" name="/usr/share/f_foo3" source="/tmp/layout/f_foo1" size="10" />
<cl action="addfile" name="/usr/share/f_foo4" source="/tmp/layout/f_foo1" uid="user1" gid="group1" />
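Note that umode takes octal permissions exactly as chmod does. A quick sketch of the equivalence, with an illustrative file name in a scratch directory:

```shell
# Sketch: umode="777" applies the same octal mode as chmod 777.
work=$(mktemp -d)
touch "$work/f_foo2"
chmod 777 "$work/f_foo2"      # equivalent of umode="777"
stat -c %a "$work/f_foo2"
```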
action=adddir name=dirname Name of the directory added to the target file system.
Optional Fields
source=name Name and path of the directory on the host file system. If present, the permissions of the source directory are used to create the directory on the target file system. If not present, then this entry is used to create a new empty directory. umode=permissions The user/group/other permissions of the directory, in octal, if the source field is not present, to modify or override the permissions of an existing directory. uid=username The user name, in text or in numeric form (as with chown). gid=groupname The group name, in text or in numeric form (as with chgrp).
Examples
<cl action="adddir" name="/usr/share/f_dir1" source="/tmp/layout/f_dir1" />
<cl action="adddir" name="/usr/share/f_dir2" source="/tmp/layout/f_dir1" umode="777" />
<cl action="adddir" name="/usr/share/f_dir3" source="/tmp/layout/f_dir1" size="10" />
<cl action="adddir" name="/usr/share/f_dir4" source="/tmp/layout/f_dir1" uid="user2" gid="group2" />
Notes
If the source field is not present, then a new empty directory is created on the target file system. If the source field is present, then the source directory name and attributes are copied from that source location. This command will not copy the contents of the source directory. Each file or sub-directory is expected to be iterated explicitly with the respective file or directory add directive.
action=addsymlink name=name Name of the symlink file added to the target file system. target=target Name of the target within the target file system. umode=permissions The user/group/other permissions of the target symlink, in octal.
Optional Fields
uid=username The user name, in text or in numeric form (as with chown). gid=groupname The group name, in text or in numeric form (as with chgrp).
Examples
<cl action="addsymlink" name="/usr/share/f_sym1" target="/usr/share/f_foo1" />
<cl action="addsymlink" name="/usr/share/f_sym2" target="/usr/share/f_foo1" umode="777" />
<cl action="addsymlink" name="/usr/share/f_sym3" target="/usr/share/f_foo1" size="10" />
<cl action="addsymlink" name="/usr/share/f_sym4" target="/usr/share/f_foo1" uid="user3" gid="group3" />
action=addbdev or addcdev (block or char) name=name Name of the directory added to the target file system umode=permissions The user/group/other permissions of the target device, in octal major=major_number The major number for this device. minor=minor_number The minor number for this device.
Optional Fields
uid=username The user name, in text or in numeric form (as with chown). gid=groupname The group name, in text or in numeric form (as with chgrp).
Examples
<cl action="addbdev" name="/usr/share/f_bdev1" major="3" minor="4" />
<cl action="addbdev" name="/usr/share/f_bdev2" major="3" minor="4" umode="777" />
<cl action="addbdev" name="/usr/share/f_bdev3" major="3" minor="4" size="10" />
<cl action="addbdev" name="/usr/share/f_bdev4" major="3" minor="4" uid="user4" gid="group4" />
<cl action="addcdev" name="/usr/share/f_cdev1" major="1" minor="2" />
<cl action="addcdev" name="/usr/share/f_cdev2" major="1" minor="2" umode="777" />
<cl action="addcdev" name="/usr/share/f_cdev3" major="1" minor="2" size="10" />
<cl action="addcdev" name="/usr/share/f_cdev4" major="1" minor="2" uid="user4" gid="group4" />
action=addpipe name=name Name of the directory added to the target file system. umode=permissions The user/group/other permissions of the target pipe, in octal.
Optional Fields
uid=username The user name, in text or in numeric form (as with chown). gid=groupname The group name, in text or in numeric form (as with chgrp).
Examples
<cl action="addpipe" name="/dev/f_pipe1" />
<cl action="addpipe" name="/dev/f_pipe2" umode="777" />
<cl action="addpipe" name="/dev/f_pipe3" size="10" />
<cl action="addpipe" name="/dev/f_pipe4" uid="user4" gid="group4" />
D
KGDB Debugging and the Command Line
D.1 Introduction 309 D.2 Debugging with KGDB from the Command Line 309 D.3 KGDB Debugging Using the Serial Console (KGDBOC) 312
D.1 Introduction
This appendix presents some notes on KGDB debugging using gdb from the command line. Refer to Wind River Workbench by Example, Linux Version for details on using the Workbench debugger with Wind River Linux. You may find it useful to make a KGDB connection from the command line using gdb for several reasons:
You are more familiar with gdb for particular types of debugging.
You wish to automate some KGDB tests.
You are having problems with your KGDB connection from Workbench.
Locate your cross-compiled version of gdb. You can find one in the project's host-cross directory, where the gdb binary name has a prefix for its cross-compile architecture, for example, this one for an ARM project:
./host-cross/arm-wrs-linux-gnueabi/x86-linux2/arm-wrs-linux-gnueabi-gdb
NOTE: If the host and target architectures are the same, you can use the host's gdb.
Run the cross-compiled version of gdb on your vmlinux. You will see various banners when it starts.
$ ./host-cross/arm-wrs-linux-gnueabi/x86-linux2/arm-wrs-linux-gnueabi-gdb \ export/*vmlinux-symbols*
For some boards, you need to assert the architecture for gdb. For the 8560, for example, it is necessary to specify:
(gdb) set architecture powerpc:common
NOTE: Without this setting, gdb will continually respond with errors such as Program received signal SIGTRAP, Trace/breakpoint trap. 0x00000000 in ?? () and other errors.
In the gdb session, connect to the target. Port 6443 is reserved for KGDB communication (replace targetIP with the IP address of your target):

(gdb) target remote udp:targetIP:6443
You will see various warnings and exceptions that you can ignore. If, however, gdb informs you that the connection was not made, review your configuration, command syntax, and the IP addresses used. Enter the where command, and note the output:
(gdb) where
You should see a backtrace stack of some depth. If you see only one or two entries, or a ??, then you are observing an error. Enter the info registers command, and note the output:
(gdb) info registers
You should see the list of registers. Examine this list. If, for example, the program counter (the last entry) is zero or otherwise unreasonable, then you are observing an error. Set a breakpoint on do_fork:
(gdb) break do_fork
Note that the target resumes normal operation. Now press CTRL+C to get the gdb prompt back, and have it wait for the next breakpoint.
CTRL+C (gdb)
From here, you can press CTRL+C to send a break, set breakpoints, view the stack, view variables, and so on. You may wish to build the kernel with CONFIG_DEBUG_INFO=y if you want more debugging info.
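Since automating KGDB tests is one of the motivations listed above, the manual session can be replayed with gdb's batch mode. The following sketch generates a gdb command file mirroring the steps above; the target IP, vmlinux path, and cross-gdb name are placeholders for your own values, and the actual gdb invocation is left commented out.

```shell
#!/bin/sh
# Sketch: generate a gdb command file that replays the manual KGDB
# session above, then run it in batch mode. TARGET_IP, VMLINUX, and
# GDB are placeholders -- substitute your own values.
TARGET_IP=${TARGET_IP:-192.168.1.100}
VMLINUX=${VMLINUX:-export/vmlinux-symbols}
GDB=${GDB:-arm-wrs-linux-gnueabi-gdb}

cat > kgdb-session.gdb <<EOF
target remote udp:${TARGET_IP}:6443
where
info registers
break do_fork
EOF

# -batch runs the commands and exits; capture the output for your tests:
# ${GDB} -batch -x kgdb-session.gdb ${VMLINUX}
echo "wrote kgdb-session.gdb for ${GDB}"
```

The command file can be versioned alongside your test scripts and regenerated per target.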
CAUTION: If you quit gdb without first disconnecting from the target, you may have to reboot the target before you can reconnect. In fact, you may also lose Telnet and other communication, especially if the target was stopped at a breakpoint.
The ARCH parameter is required when the host's architecture does not match the target's architecture; it is optional when they do match. Go to the Kernel hacking menu item using the down-arrow key, and press ENTER. Then go to the Compile the kernel with debug info menu item and type y to enable it. Press TAB to move the bottom menu to Exit and press ENTER; press TAB to Exit again and press ENTER again. You will be prompted: Do you wish to save your new kernel configuration? The menu should be on the Yes selection; press ENTER to save the configuration. Rebuild the kernel. This resets the stamp files for the kernel and rebuilds it so that the new configuration is applied.
$ cd ..
$ make linux.rebuild
NOTE: Using the build target make linux is not sufficient, because this command will not reset the stamp files and the configuration changes will not be applied.

You will now have a new kernel and vmlinux symbol table file in the export directory. Remember these files for Workbench and command line testing.
8250 (most common targets)
pl011 (ARM Versatile)
CPM (various 82xx, 83xx, 85xx)
MPSC (ppmc280, ATCAf101)
Target Preparation
To use KGDBOC you must specify the device assigned to the console. You can find this in the console= argument in your target's boot line. You can also view the boot line at runtime with the command cat /proc/cmdline. For example, on an ARM Versatile 926EJS target, console=ttyAMA0. On a common PC target, console=ttyS0. Load the kernel module, supplying the appropriate port, for example:
# modprobe kgdboc kgdboc=ttyS0
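The console device named in the boot line can also be extracted programmatically, which is handy when scripting target setup. This sketch parses a cmdline string; the sample value is illustrative, and on a live target you would feed it the contents of /proc/cmdline instead.

```shell
# Sketch: pull the console device out of a kernel boot line, as seen
# from 'cat /proc/cmdline'. The sample cmdline below is an illustration;
# substitute the real boot line from your target.
get_console_dev() {
    # Take the first console=... entry and strip any ,baud suffix.
    echo "$1" | tr ' ' '\n' | sed -n 's/^console=//p' | head -n1 | cut -d, -f1
}

CMDLINE="root=/dev/nfs rw console=ttyS0,115200 ip=dhcp"
DEV=$(get_console_dev "$CMDLINE")
echo "kgdboc device: $DEV"
# On the target, you would then load the module with this device:
# modprobe kgdboc kgdboc=$DEV
```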
Host Preparation
On your development host, run the agent-proxy from your project build directory:
$ ./host-cross/bin/agent-proxy arguments
For example, if you are using a terminal server (128.224.50.30 on port 2011), the command would be:
$ agent-proxy 2222^2223 128.224.50.30 2011
If you are connecting directly to a serial device rather than to a terminal server, replace the terminal server address and port with your serial port device and baud rate to the target (for example, /dev/ttyS0,115200).
NOTE: This program turns your host into a mini terminal server.
After agent-proxy has properly connected to the target, the console port is now multiplexed into a pass-through console and a debug port, which will automatically send the SYSRQ sequence. You can use the Workbench terminal view or a Telnet program to connect to the target console as follows:
$ telnet localhost 2222
For the KGDB connection with Workbench, specify a terminal server connection to TCP port 2223. If you use gdb to connect to KGDB, use the following command to connect:
$ target remote localhost:2223
NOTE: When the KGDB connection is active you will see the raw KGDB data appear on the pass-through console connection.
E
Connecting with TIPC
E.1 Introduction 313 E.2 Configuring TIPC Targets 314 E.3 Configuring a TIPC Proxy 315 E.4 Configuring Your Workbench Host 316 E.5 Using usermode-agent with TIPC 317
E.1 Introduction
This chapter describes how to configure Linux TIPC targets and your Workbench host to support debugging. For detailed information about TIPC, see the official TIPC project Web site at http://tipc.sourceforge.net/. The transparent inter-process communication (TIPC) infrastructure is designed for inter-node (cluster) communication. Targets located in a TIPC cluster may not have access to standard communication links or may not be able to communicate with hosts not located on the TIPC network. Because of this, host tools used for development may not be able to access those targets and debug them without special tools. To solve this communication problem between the TIPC target and TCP/IP hosts, Wind River provides the wrproxy process, which acts as a gateway between the host and the target. A basic diagram of a Workbench host configured to debug a TIPC target is shown in Figure E-1.
Figure E-1 Workbench Host, Proxy, and TIPC Target
The Workbench host communicates using UDP, the TIPC target communicates using TIPC, and the proxy translates between them.
Note that the functions of the three network hosts shown in Figure E-1 may be combined in different ways, for example, the wrproxy and usermode-agent may both reside on a single target. You may even configure your Workbench host to support all functions if you want to test your debug capabilities in native mode before configuring external TIPC targets. The following sections describe how to configure TIPC targets, configure a proxy, and configure your Workbench host to support debugging over TIPC.
(Your actual command will differ if your network device is not eth0 or if you chose an address different from 1.1.1.)
3. The output should display your current TIPC address, for example, 1.1.1.
You can also use the -p port option to specify a different TCP port number for wrproxy to listen to (default 0x4444), the -V option for verbose mode, or the -h option to get command help.
NOTE: If you specify a port other than the default port for the proxy, then you must specify the same port when configuring the target server as described in E.4 Configuring Your Workbench Host, p.316.
Figure E-2 illustrates a configuration in which the proxy agent runs on the same host as Workbench. Figure E-3 illustrates a configuration in which the proxy agent runs on one of the nodes in a cluster. Another example might be a separate host that runs wrproxy, between the targets in the cluster and the Workbench host.
Figure E-2 (diagram)

Figure E-3 Cluster with TIPC Interconnections (diagram: the Workbench host runs tgtsvr and connects over UDP to a TIPC target running wrproxy, which reaches the other TIPC targets in the cluster over TIPC)
E.4 Configuring Your Workbench Host, p.316 describes how to configure the target server on the Workbench host to connect to the proxy agent and reach the TIPC target that you want to connect to.
For example, to connect to a target with a TIPC address of 1.1.8 using a proxy with the IP address 192.168.1.5, use the following command:
$ tgtsvr -B wdbproxy -tipc -tgt 1.1.8 192.168.1.5
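When the proxy address or target TIPC address changes between test runs, the tgtsvr invocation can be parameterized in a small wrapper. This sketch only composes and echoes the command, using the example values from the text (TIPC address 1.1.8, proxy 192.168.1.5); the actual launch is left commented out.

```shell
# Sketch: compose the tgtsvr command line for a TIPC target behind
# wrproxy. TIPC_ADDR and PROXY_IP default to the example values from
# the text; override them in the environment for your own setup.
TIPC_ADDR=${TIPC_ADDR:-1.1.8}
PROXY_IP=${PROXY_IP:-192.168.1.5}
TGTSVR_CMD="tgtsvr -B wdbproxy -tipc -tgt ${TIPC_ADDR} ${PROXY_IP}"
echo "$TGTSVR_CMD"
# eval "$TGTSVR_CMD"   # run it once wrproxy is up on the proxy host
```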
Additional Information
Parameter: Value

targetTipcAddress: The TIPC address of the target with the TIPC network stack. For example: 1.1.8.

tipcPortType: The TIPC port type to use in connecting to the WDB target agent. The default port type for the connection is 70. You should accept the default port type unless it is already in use.

tipcPortInstance: The TIPC port instance to use in connecting to the WDB target agent. The default port instance for the connection is 71. You should accept the default port instance unless it is already in use.

wdbProxyIpAddress|name: The IP address or DNS name of the target with WDB Agent Proxy.
Note that if you change the default TIPC port configuration, you must also change the default TIPC port for the usermode-agent as described in E.5 Using usermode-agent with TIPC, p.317. Alternatively, you can use the Workbench GUI to configure the host. Select wdbproxy as the backend when you create a new connection in the Remote Systems view and then fill in the fields with the values you would supply as command line arguments. The command line that is created at the bottom of the GUI should be similar to the example shown in this section.
This option allows you to select an alternate listening port for the usermode agent. Two network connection types are supported:
UDP: this is the default connection type. If you do not specify a particular type of network connection, UDP is used.
If you do not want to use the default UDP port (0x4321), you can choose and set the one you want using this option. The port number can be entered in either decimal or hexadecimal format. To set the port number using the hexadecimal format, use the 0x%x format where %x represents the port number in hexadecimal base. For example, to launch the usermode agent using UDP and port 6677:
$ usermode-agent -p 6677
or
$ usermode-agent -p 0x1A15
TIPC: this is the TIPC network connection. If you do not want to use the default TIPC port type (70) and TIPC port instance (71), then you can choose and set the ones you want using this option. The port numbers can be entered in either decimal or hexadecimal format. To set the port numbers using the hexadecimal format, use the 0x%x format, where %x represents the port number in hexadecimal base. To launch the usermode agent using TIPC and port type 1234, port instance 55:
$ usermode-agent -p 1234:55
or
$ usermode-agent -p 0x4D2:0x37
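The decimal and hexadecimal forms above name the same port values; printf can be used to check the conversion before passing either form to -p. A small sketch:

```shell
# Sketch: verify that a decimal port and its 0x-prefixed hexadecimal
# form refer to the same value before passing either to -p.
to_hex() { printf '0x%X\n' "$1"; }
to_dec() { printf '%d\n' "$1"; }   # printf accepts 0x-prefixed input

to_hex 6677      # 0x1A15, as in the UDP example above
to_dec 0x4D2     # 1234, the TIPC port type from the example
```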
Communication Option
The communication option allows you to specify which kind of connection will be used between the target server and the usermode agent. The comm option is:
-comm serial | tipc
If the serial option is set, you can also specify the serial link device to use rather than the default (/dev/ttyAMA1), and the baud rate for the serial link (115200 is the default). To set a different device for the serial link connection, use the -dev flag with the -comm serial option. For the baud rate, use the -baud option combined with the -comm serial option.
Example
To launch the usermode agent using serial link connection and serial device /dev/ttyS0:
$ usermode-agent -comm serial -dev /dev/ttyS0
Example
To launch the usermode agent using serial link connection with default serial device and baud speed of 19200:
$ usermode-agent -comm serial -baud 19200
Example
To launch the usermode agent using serial link connection with serial device /dev/ttyS0 and baud speed of 19200:
$ usermode-agent -comm serial -dev /dev/ttyS0 -baud 19200
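The three serial examples above differ only in which defaults they override, so the invocation can be composed in one helper. This sketch applies the documented defaults (/dev/ttyAMA1, 115200 baud) when no overrides are given and echoes the command rather than executing it.

```shell
# Sketch: compose a usermode-agent serial invocation, applying the
# documented defaults (/dev/ttyAMA1 device, 115200 baud) when no
# overrides are supplied. The command is echoed, not executed.
agent_serial_cmd() {
    dev=${1:-/dev/ttyAMA1}
    baud=${2:-115200}
    echo "usermode-agent -comm serial -dev $dev -baud $baud"
}

agent_serial_cmd                     # all defaults
agent_serial_cmd /dev/ttyS0 19200    # overrides, as in the examples above
```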
If the tipc option is set then you can also specify the port type (default is 70) and port instance (default is 71) of the TIPC connection. To set a different port type or instance, use the flag -tipcpt or -tipcpi, in either decimal or hexadecimal format.
Examples
To launch the usermode agent using TIPC network connection with default port type and default port instance:
$ usermode-agent -comm tipc
To launch the usermode agent using TIPC network connection with specific port type 123 and specific port instance 456:
$ usermode-agent -comm tipc -tipcpt 123 -tipcpi 456
Daemon mode
The -daemon option makes the usermode agent become a daemon after all initialization functions are completed. Output messages, if any, are still reported on the device from which the process was started.
Environment Inheritance
The -inherit-env option makes all child processes inherit the environment from the parent. Since the usermode agent is the parent of all the debugged processes, the processes inherit the shell environment from which the usermode agent was launched.
No Thread Support (Linux Thread Model Only)
The -no-threads option allows you to use the usermode agent on a kernel using the Linux threading model even if the libpthread library is stripped. The usermode agent uses the libpthread library to detect thread creation, destruction, and so on. On a kernel using the Linux threading model, if libpthread is stripped, multithread debugging is not reliable; so, by default, the usermode agent exits unless this option is set, to ensure a reliable debug scenario. This option has no effect if your kernel uses the NPTL threading model.
Other Options
The -v option displays version information about the usermode agent, that is, build and release information. The -V option sets the usermode agent to run in verbose mode; this is useful for seeing the listening information: port number, listening connection type, and the target server connection to this usermode agent. The -help or -h option displays all the possible startup options for the usermode agent.
F
Control Groups (cgroups)
F.1 Introduction 321 F.2 CPUSETS 322 F.3 cgroups 323
F.1 Introduction
The basic functionality discussed here is based on CPUSETS, which allow you to restrict tasks to specific CPUs and specific memory nodes. The restrictive sets to which tasks are assigned are the CPUSETS. cgroups build on CPUSET functionality to provide generic cgroups, which are a means of grouping processes, and resource groups, to provide further control of generic cgroups. Wind River Linux supports the mainline cgroup controllers and adds four additional controllers.
dm-ioband: An I/O bandwidth controller implemented as a device-mapper driver. Several jobs using the same physical device have to share the bandwidth of the device. dm-ioband gives bandwidth to each job according to its weight, and each job can set its own value.

bio_tracking: Adds block I/O tracking to dm-ioband.

net_traffic_controller: A resource controller you can use to schedule and shape traffic belonging to the task(s) in a particular cgroup. The implementation consists of two parts:

A resource controller (cgroup_tc) that is used to associate packets from a particular task belonging to a cgroup with a traffic control class ID (tc_classid). This tc_classid is propagated to all sockets created by tasks in the cgroup and will be used for classifying packets at the link layer.

A new traffic control classifier (cls_cgroup) that can classify packets based on the tc_classid field in the socket to specific destination classes.
memrlimit: Implements a virtual address space controller using cgroups. Address space control is provided along the same lines as RLIMIT_AS control, which is available via getrlimit(2)/setrlimit(2). The interface for controlling address space is provided through rlimit.limit_in_bytes.
The following presents an overview and some simple examples of CPUSETS and cgroups. For detailed information, refer to the files cpusets.txt and cgroups.txt in prjbuildDir/build/linux/Documentation/. Additional discussions are available online, as for example at http://kerneltrap.org/node/8059.
F.2 CPUSETS
CPUSETS provide the base-level infrastructure that enables the dynamic creation and destruction of resource partitions within a system. A given CPUSET may describe zero or more individual CPUs and zero or more memory nodes, and each set may contain zero or more tasks. All tasks within each set are treated according to the normal system resource control mechanisms but are subject to the limitations of the CPUSET, not the full system. One of the key design goals of CPUSETS is for large systems running many processes to be able to adapt to varying job loads over time without impacting the responsiveness of applications running on the system. This allows key classes of jobs to be given preferential treatment while leaving lower priority tasks to share the remaining resources, as well as dynamically adjusting the resources available to all classes.

The following example shows how to start a new job that is to be contained within a CPUSET. Perform the following sequence of commands to create a CPUSET named Charlie, containing CPUs 2 and 3 and Memory Node 1, and then start a subshell in that CPUSET.

1. Create a directory that will serve as the CPUSET:

# mkdir /dev/cpuset

2. Create the new CPUSET with mkdir's and write's (or echo's, as in this example) in the /dev/cpuset virtual file system:

# cd /dev/cpuset
# mkdir Charlie
# cd Charlie
# /bin/echo 2-3 > cpus
# /bin/echo 1 > mems
# /bin/echo $$ > tasks
# sh

3. The subshell sh is now running in CPUSET Charlie. The following command should display /Charlie:

# cat /proc/self/cpuset

4. Start a task that will be the "founding father" of the new job.

5. Attach that task to the new cpuset by writing its PID to the /dev/cpuset tasks file for that cpuset.

6. fork, exec, or clone the job tasks from this founding father task.
F.3 cgroups
To start a new job that is to be contained within a cgroup, using the cpuset cgroup subsystem, the steps are:

1. Create a directory that will serve as the cgroup mount point:

# mkdir /dev/cgroup

2. Mount the cgroup virtual file system there (the example below mounts the cpuset subsystem).

3. Create the new cgroup with mkdir's and write's (or echo's) in the /dev/cgroup virtual file system.

4. Start a task that will be the "founding father" of the new job.

5. Attach that task to the new cgroup by writing its PID to the /dev/cgroup tasks file for that cgroup.

6. Fork, exec, or clone the job tasks from this founding father task.
For example, the following sequence of commands will setup a cgroup named Charlie, containing just CPUs 2 and 3, and Memory Node 1, and then start a subshell sh in that cgroup:
# mount -t cgroup -ocpuset cpuset /dev/cgroup
# cd /dev/cgroup
# mkdir Charlie
# cd Charlie
# /bin/echo 2-3 > cpuset.cpus
# /bin/echo 1 > cpuset.mems
# /bin/echo $$ > tasks
# sh
# The subshell 'sh' is now running in cgroup Charlie
# The next line should display '/Charlie'
# cat /proc/self/cgroup
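The command sequence above can also be wrapped in a small script. This sketch takes the cgroup root as a parameter, so the file writes can be rehearsed against a scratch directory; on a real target you would pass /dev/cgroup with the cgroup file system mounted as shown.

```shell
# Sketch: create a cpuset-style cgroup and assign CPUs, a memory node,
# and a task PID to it. ROOT defaults to /dev/cgroup; pass another
# directory to rehearse the steps without a mounted cgroup filesystem.
make_cgroup() {
    root=${1:-/dev/cgroup}; name=$2; cpus=$3; mems=$4; pid=$5
    mkdir -p "$root/$name"
    /bin/echo "$cpus" > "$root/$name/cpuset.cpus"
    /bin/echo "$mems" > "$root/$name/cpuset.mems"
    /bin/echo "$pid"  > "$root/$name/tasks"
}

# Example (mirrors the Charlie cgroup above):
# make_cgroup /dev/cgroup Charlie 2-3 1 $$
```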
G
Build Variables
G.1 Introduction
The list and description of config.sh build variables shown in Table G-1 is provided for informational purposes only; you would not typically change config.sh files directly. These are constructed and inherited during the configure process from the templates. Note that many of the items are also copied into the config.properties file, which is used to initialize Workbench with its project information, and a few of the fields are also copied into the toolchain wrappers. Therefore, even if you modify config.sh, your modifications may not be carried forward to other components using the fields.
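As an illustration of the shape of these files, a hypothetical config.sh fragment for a Power PC 32-bit/64-bit configuration might look like the following. The values are illustrative only, not taken from a real template; the VARIANT-prefix naming follows the convention described in Table G-1.

```shell
# Hypothetical config.sh fragment -- illustrative values only.
TARGET_TOOLCHAIN_ARCH="powerpc"
AVAILABLE_CPU_VARIANTS="ppc ppc64"

# Each VARIANT prefix below is one of the AVAILABLE_CPU_VARIANTS:
ppc_TARGET_ARCH="powerpc"
ppc_TARGET_USERSPACE_BITS="32"
ppc64_TARGET_ARCH="powerpc64"
ppc64_TARGET_USERSPACE_BITS="64"
```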
Table G-1 Build Variables and Description
Variable
Description
BANNER: Informational message printed when configure completes. Can be used in any template.

TARGET_TOOLCHAIN_ARCH: Specifies the generic toolchain architecture: arm, i586, mips, powerpc. Must match the toolchain. Generally specified in the templates/arch/... item. Only set in an arch template.

AVAILABLE_CPU_VARIANTS: These are all of the available CPU variants for a configuration. For example, in a Power PC 32-bit/64-bit install, both ppc and ppc64 would be listed. A value from this variable is substituted for the VARIANT prefix in the following variables.

The following items should be prefixed with the VARIANT name as specified in AVAILABLE_CPU_VARIANTS. VARIANT is replaced with the specific variant; for example, VARIANT_TARGET_ARCH=powerpc becomes ppc_TARGET_ARCH=powerpc.

VARIANT_COMPATIBLE_CPU_VARIANT: Specifies all of the CPU variants that are compatible with the specific variant. For example, ppc is compatible with ppc_750.

VARIANT_TARGET_ARCH: The architecture used by GNU configure to specify that variant.
VARIANT_TARGET_COMMON_CFLAGS: CFLAGS that are beneficial to pass to an application but not required to optimize for a multilib. Equivalent of CFLAGS=... in the environment or in a makefile.

VARIANT_TARGET_CPU_VARIANT: Name of a variant. Also used as the RPM architecture.

VARIANT_TARGET_ENDIAN: BIG or LITTLE.

VARIANT_TARGET_FUNDAMENTAL_ASFLAGS: Flags to be passed to the assembler when using the toolchain wrapper to assemble with a given userspace. These are hidden from applications.

VARIANT_TARGET_FUNDAMENTAL_CFLAGS: Flags to be passed to the compiler when using the toolchain wrapper to compile for a given userspace. These are hidden from applications.

VARIANT_TARGET_FUNDAMENTAL_LDFLAGS: Flags to be passed to the linker when using the toolchain wrapper. These are hidden.

VARIANT_TARGET_LIB_DIR: The name of the library directory for the ABI: lib, lib32, lib64.

VARIANT_TARGET_OS: linux-gnu or linux-gnueabi.

VARIANT_TARGET_RPM_PREFER_COLOR: The preferred color when installing RPM packages to the architecture. (Color is RPM terminology for a bitmask used in resolving conflicts. If RPM is going to install two files and they have conflicting md5sum or sha1, it uses the color to decide if it can resolve the conflict. Two files of color 0 cause a conflict and the install fails. Otherwise, the system's "preferred" color takes precedence for the install. If the file is outside of the permitted colors, then again it is an error if it causes a conflict.)

VARIANT_TARGET_RPM_TRANSACTION_COLOR: The colors that are allowed when installing RPM packages to that architecture. A bitmask of the above. For example, on a 32-bit system, generally 1. On a 64/32-bit system, 3. On a mips64 system, 7.

VARIANT_TARGET_SYSROOT_DIR: The internal gcc directory prefix to get to the sysroot information.

VARIANT_TARGET_USERSPACE_BITS: Bitsize of a word, 32 or 64.
BSP-Specific Items
BOOTIMAGE_JFFS2_ARGS: For targets that support JFFS2 booting, these values will be passed when creating the JFFS2 image. Endianness (-b/-l), erase block size (-e), and image padding (-p) are commonly passed.

KERNEL_FEATURES: Features to be implicitly patched into the kernel independent of the configure line.

LINUX_BOOT_IMAGE: Name of the image used to boot the board, used to create the export default image symlink.

TARGET_TOOLS_SUBDIRS: Additional host tools that should be built to support this board.

Other BSP-specific rows in this table describe: the BSP name as recognized by the build system; the list of images created by the kernel build (mainly used for compatibility reasons); which platform(s) a particular board supports (internal Wind River use only); the list of kernels supported by a particular board; and the list of root file systems supported by a particular board.
QEMU-related variables. Refer to the release notes for details on QEMU-supported targets. Enter make config-target in prjbuildDir for additional information.
TARGET_QEMU_BIN: The QEMU host tool binary to use, if this BSP can be simulated by QEMU.

TARGET_QEMU_BOOT_CONSOLE: The console port the target uses. This is BSP-specific. For example, for common_pc it is ttyS0, and for the arm_versatile_926ejs it is ttyAMA0.

TARGET_QEMU_ENET_MODEL: Some BSPs such as the common_pc and common_pc_64 use a different Ethernet type. This parameter can be used to select a different Ethernet type to override the default that is hard-coded in the QEMU host binary.
TARGET_QEMU_KERNEL: The "short" name of the boot image to search for in the export directory inside the BUILD_DIR. For common_pc it would be set to bzImage, and for the arm_versatile_926ejs it would be set to zImage. The specific image that is used is based on the boot loader that is hard-coded into the QEMU binary. This image is different than the boot image the real target might use in some cases. If you specify a full path to a binary kernel image, it will not search the export directory and will instead use the image you specified.

TARGET_QEMU_KERNEL_OPTS: These are any extra options you might want to pass to the kernel boot line to override the defaults.

TARGET_QEMU_OPTS: These are any additional options you need to pass to the QEMU binary to get it to run correctly. In the case of the ARM Versatile and MTI Malta boards, the -M argument is passed so that the QEMU host binary will be configured with the correct simulation model, since each host binary supports multiple simulation models within the same architecture.

TARGET_LIBC: Value should be glibc or uclibc. No value means glibc is assumed.

TARGET_LIBC_CFLAGS: Additional flag to add to the fundamental cflags (in the toolchain wrapper) for the libc being used. Normally this is blank except for the uclibc case, where it is -muclibc. This is hidden from the application space.

TARGET_ROOTFS_CFLAGS: An additional CFLAG that needs to be used when a feature or rootfs is specified. Again hidden from the application space.

TARGET_ROOTFS: Name of the ROOTFS configured.
Generic Optimizations
TARGET_COPT_LEVEL, TARGET_COMMON_COPT, TARGET_COMMON_CXXOPT: These are all optional optimizations that override defaults in configure. Generally you use these if you want to change the optimizations for -Os and not -O2. See the glibc_small rootfs for an example.
multilib templates are designed to match the multilibs as defined by the compiler and libc's. The cpu templates are expected to include a multilib template and either use it "as-is" or augment it with additional optimizations; only multilib templates are allowed to specify TARGET_FUNDAMENTAL_* flags, and everything else is expected to be inherited from multilib templates. All of the items in the multilib/cpu templates should be prefixed with the variant name. The following items are required to be prefixed with a variant:

TARGET_COMMON_CFLAGS
TARGET_CPU_VARIANT
TARGET_ARCH
TARGET_OS
TARGET_FUNDAMENTAL_CFLAGS
TARGET_FUNDAMENTAL_ASFLAGS
TARGET_FUNDAMENTAL_LDFLAGS
TARGET_SYSROOT_DIR
TARGET_LIB_DIR
TARGET_USERSPACE_BITS
TARGET_ENDIAN
TARGET_RPM_TRANSACTION_COLOR
TARGET_RPM_PREFER_COLOR
COMPATIBLE_CPU_VARIANTS

TARGET_ROOTFS: only specify in a ROOTFS template.

TARGET_COPT_LEVEL, TARGET_COMMON_COPT, TARGET_COMMON_CXXOPT: specify in either a ROOTFS or board template; do not specify in a CPU or multilib template.
The best way to determine what to do in a custom template is to use wrll-wrlinux as an example, together with the information provided here, to create custom templates.
H
Cavium Simple Executive Integration and Debugging
H.1 Introduction 331 H.2 Preparing the Host 334 H.3 Configuring and Building from the Command Line 335 H.4 Running Simple Executive Applications 337 H.5 Simple Executive Layer Technical Notes 339 H.6 Configuring and Building with Workbench 341 H.7 Configuring the Kernel with Workbench 345 H.8 Debugging from the Command Line 347 H.9 Setting Up the Target 348 H.10 Setting up the Host 350 H.11 Debugging Caveats 351 H.12 Debugging with Workbench 353 H.13 Known Issues, Limitations, and Tips 357
H.1 Introduction
This document describes the release of the Simple Executive (Simple Exec) support for Wind River Linux.
Sections H.1 through H.7 address basic elements, installation, configuration, build, and integration with Wind River Workbench. Sections H.8 through H.11 explain debugging using the command line interface. Section H.12 describes debugging using Wind River Workbench. Section H.13 discusses known limitations and tips.
The Octeon SDK from Cavium Networks contains the Simple Executive library source. It also contains demo example source and Makefile fragments. The latter portion requires license agreements to redistribute, so it must be installed in order to build applications. Get the SDK directly from Cavium. In a directory where you have plenty of space, extract the Cavium 1.8.0 SDK archives, for example into /opt/octeon-sdk-1.8.0. Although the SDK archive is an .rpm file, there is no need to install the RPM using the rpm utility. Convert the .rpm to a cpio archive and extract the cpio archive as follows:
$ cd /opt/octeon-sdk-1.8.0 $ rpm2cpio /path/to/OCTEON_SDK-1.8.0-275.i386.rpm | cpio -div
This is a reference kernel implementation from Cavium, containing Simple Executive kernel module source code compatible with the WRLinux Octeon kernel. Get the RPM directly from Cavium. In a directory where you have plenty of space, extract the Cavium Linux 1.8.0 RPM, for example into /opt/octeon-sdk-1.8.0. Typically the Linux RPM is installed in the same directory structure as the SDK RPM. Although the SDK archive is an .rpm file, there is no need to install the RPM using the rpm utility. Convert the .rpm to a cpio archive and extract the cpio archive as follows:
$ cd /opt/octeon-sdk-1.8.0 $ rpm2cpio /path/to/OCTEON_LINUX-1.8.0-275.i386.rpm | cpio -div
The wrlinux-3.0 tree contains a layer incorporating support for building standalone and Linux usermode Simple Exec applications. This new Simple Executive layer can then be found here: $WIND_HOME/wrlinux-3.0/layers/wrll-cavium-simple_exec
The wrwb-3.0x_pp-cavium.zip patch, available from Wind River, extends Workbench with dialogue and debugger framework support for Cavium's extended GDB debug. Extract the zip archive in the $WIND_HOME directory, which is the installation directory for Workbench 3.x:
$ cd $WIND_HOME $ unzip /path/to/wrwb-3.0x_pp-cavium.zip
This will place a .jar file (the implementation) into workbench-3.1/wrwb/wrworkbench/eclipse/plugins, and .properties and .xml files into /eclipse/features/com.windriver.ide.debug.octeon_1.0.0.
This adds the original basic Simple Executive applications, plus the mips64_octeon CPU_VARIANTS for the minimal package set, including only one example application package, crypto_proprietary.
libstdcxx simple_exec_open simple_exec_proprietary crypto_proprietary glibc.mips64_octeon libgcc.mips64_octeon libstdcxx.mips64_octeon wrs_kernheaders.mips64_octeon simple_exec_open.mips64_octeon simple_exec_proprietary.mips64_octeon crypto_proprietary.mips64_octeon
This template includes packages that are needed for both n32 and 64-bit usermode builds. If you are only interested in n32 builds, the rootfs can be made smaller by eliminating the .mips64_octeon packages. This could be done by either editing the template, or editing the pkglist file after the project is configured.
--with-template=feature/se_demo_all
This adds the full SE application list, including the mips64_octeon CPU_VARIANTS for the minimal package set.
libstdcxx simple_exec_open simple_exec_proprietary crypto_proprietary application_args_proprietary hello_proprietary linux_filter_proprietary low_latency_mem_proprietary mailbox_proprietary
named_block_proprietary queue_proprietary traffic_gen_proprietary uart_proprietary glibc.mips64_octeon libgcc.mips64_octeon libstdcxx.mips64_octeon wrs_kernheaders.mips64_octeon simple_exec_open.mips64_octeon simple_exec_proprietary.mips64_octeon crypto_proprietary.mips64_octeon application_args_proprietary.mips64_octeon hello_proprietary.mips64_octeon linux_filter_proprietary.mips64_octeon low_latency_mem_proprietary.mips64_octeon mailbox_proprietary.mips64_octeon named_block_proprietary.mips64_octeon queue_proprietary.mips64_octeon traffic_gen_proprietary.mips64_octeon uart_proprietary.mips64_octeon
This template includes packages that are needed for both n32 and 64-bit usermode builds. If you are only interested in n32 builds, the rootfs can be made smaller by eliminating the .mips64_octeon packages. This could be done by either editing the template, or editing the pkglist file after the project is configured.
2. Install wrlinux-3.0 and Workbench 3.1, if you have not already done so. For information on installing the product, see the following documents:
   Wind River Product Installation and Licensing Administrator's Guide
   Wind River Product Installation and Licensing Developer's Guide
3. Install the available Workbench 3.x Simple Executive Debug Integration patch. See Workbench 3.x Simple Executive Debug Integration Patch, p.333.
H Cavium Simple Executive Integration and Debugging

H.3 Configuring and Building from the Command Line
Cavium SDK html-based documentation can be found by opening your browser at this location: $SDK_ROOT/docs/html/index.html
The wrlinux package man and info pages from the file system packages can be found at this location: $WIND_HOME/wrlinux-3.0/docs
Workbench has online documentation. From Workbench, select Help > Help Contents to see the index. For example, information about Wind River Linux Platform Projects can be found under Wind River Documentation > Guides > Operating System > Wind River Linux Platforms User's Guide 3.0. Also, the Linux user's guide and online versions of the package man and info pages can be found under Wind River Documentation > References > Operating System > Wind River Linux Operating System Reference.
Details about the cav_ebt5800 BSP can be found in this file: <project_dir>/READMES/4-README-cav_ebt5800
This configuration example adds in the Simple Executive layer and arranges for the crypto_proprietary sample application to be built.
In the project configuration example in H.3.1 Configuring your Project, p.335, above, we used the feature/se_demo_basic template to set up the package list. Here is the set of recommended layer and template selections.
Simple Executive layer not included (no Simple Executive application support):
(no additional configure options)

Simple Executive layer included, with one Simple Executive sample application:
--with-layer=wrll-cavium-simple_exec --with-template=feature/se_demo_basic

Simple Executive layer included, with the full set of Simple Executive sample applications:
--with-layer=wrll-cavium-simple_exec --with-template=feature/se_demo_all
There can be only one --with-template option on the configure command line. If you need to include another template option, add it to the end of the --with-template= parameter, separated with a comma.
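For example, a configure line that combines the se_demo_basic template with an additional template might look like the following. The configure path, board, and rootfs options shown here are illustrative of the usual project-creation syntax, and feature/mytemplate is a hypothetical second template:

```
$ <installDir>/wrlinux-3.0/wrlinux/configure \
      --enable-board=cav_ebt5800 \
      --enable-rootfs=glibc_std \
      --with-layer=wrll-cavium-simple_exec \
      --with-template=feature/se_demo_basic,feature/mytemplate
```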
You can also manually edit the package list. This allows you to select Simple Executive applications other than those provided by the se_demo_basic and se_demo_all templates. While initially developing Simple Executive applications, you may also wish to avoid building some packages, to reduce the build time. Note that a rootfs created in this way may or may not boot successfully. You will also need to ensure that all inter-package dependencies are resolved by including all of the prerequisite packages. When preparing to actually boot a target, you should either use a prebuilt rootfs, or build with the full list of packages.
There are currently no package sources distributed with this layer (packages/*.tgz) because of Cavium's closed licenses, so octeon-sdk-1.8.0 and octeon-linux-1.8.0 must be present for the source to be extracted from them. The SDK_ROOT= command-line addition shown above allows the build system to create packages in your <project_dir>/packages from the Octeon SDK.
H.4 Running Simple Executive Applications
Once the packages are extracted from the SDK into packages/*.tgz, it is no longer necessary to identify SDK_ROOT on the make command line. In fact, you can copy the new tar archives back to the layer, so that other local projects also do not need the SDK_ROOT value. For example:
$ cd <project_dir>/packages
$ cp * <installDir>/wrlinux-3.0/layers/wrll-cavium-simple_exec/packages
or
$ OCTEON_TARGET=linux_64 make -C build-mips64_octeon <packagename>.rpm
linux_n32 - usermode n32 binary
linux_64 - usermode n64 binary
cvmx_n32 - standalone n32 binary
cvmx_64 - standalone n64 binary
OCTEON_CN58XX
OCTEON_CN38XX
The complete list of valid OCTEON_MODEL values is available at <project_dir>/host-cross/mips-wrs-linux-gnu/sysroot/usr/include/simple_exec_open/octeon-models.txt. The default is to build for OCTEON_MODEL=OCTEON_CN58XX. Demo examples must be compiled for the correct OCTEON_MODEL. This can be overridden on the command line, just like OCTEON_TARGET. For example:
$ make OCTEON_TARGET=linux_n32 OCTEON_MODEL=OCTEON_CN58XX
The coremask identifies which processor cores to run the application on, for example: core 0 = 0x01, core 1 = 0x02, core 2 = 0x04, and so on. These can be combined to run the application on multiple cores by adding the masks for the needed cores. No application is started until an application is loaded on core 0 (coremask = 0x01).

You can use numcores and skipcores instead of coremask. numcores specifies how many cores to use, and skipcores specifies which core to start on. For example, numcores=3 skipcores=4 is equivalent to coremask=0x0070.

It is normally acceptable, and often expected, to run the same application on multiple cores. The crypto_proprietary application demonstrates sharing a serial console through the use of a spinlock to prevent garbled output.
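The numcores/skipcores rule can be checked with a small shell helper (hypothetical, for illustration only) that derives the equivalent coremask: numcores set bits, shifted left by skipcores.

```shell
# Derive the coremask equivalent of numcores/skipcores:
# ((1 << numcores) - 1) gives numcores set bits; shift them up by skipcores.
coremask() {
    numcores=$1
    skipcores=$2
    printf '0x%04x\n' $(( ((1 << numcores) - 1) << skipcores ))
}

coremask 1 0    # core 0 only -> 0x0001
coremask 3 4    # cores 4-6   -> 0x0070
```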
H.5 Simple Executive Layer Technical Notes
The Simple Executive layer provides wrapper Makefiles that will prepare the SDK applications into the expected tarball format, plus provide the additional build rules to support the various OCTEON_TARGET and OCTEON_MODEL values. These provided wrapper Makefiles can be found here:
$ ls $WIND_HOME/wrlinux-3.0/layers/wrll-cavium-simple_exec/dist
application_args_proprietary  crypto_proprietary         hello_proprietary
intercept_proprietary         linux_filter_proprietary   low_latency_mem_proprietary
mailbox_proprietary           named_block_proprietary    queue_proprietary
simple_exec_open              simple_exec_proprietary    traffic_gen_proprietary
uart_proprietary
You can use these Makefiles as templates for including additional applications. With the provided Makefiles you can see the actions to support automatically extracting content from the SDK, as well as the build and dependency information required by the wrlinux build system. Here is a quick overview of the content of these Makefiles.
PACKAGES+=<application>: instructs the build system to add this application package to the file system's package list.
<application>_SUMMARY and so forth: these values define the package for the wrlinux build system.
<application>.check: this build rule tests whether the package's tarball is already present; if not, it calls the rule to extract it from the SDK.
ifndef OCTEON_TARGET and OCTEON_MODEL: these tests ensure that default values for these variables are present.
<application>.compile: this build rule compiles the application for the set of expected OCTEON_TARGET values.
<application>.install: this build rule installs the application's target files in the prjbuildDir/TARGET_INSTALL directory.
<application>.extract: this custom build rule extracts the respective application from the SDK and forms the tarball.
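As a sketch only, a wrapper Makefile for a new application might contain entries like the following. The application name myapp_proprietary is hypothetical, and the rule bodies are simplified stand-ins for what the real wrapper Makefiles in the dist directory do:

```make
# Hypothetical wrapper fragment for an application called "myapp_proprietary".
PACKAGES += myapp_proprietary

myapp_proprietary_SUMMARY = Example Simple Executive application

# Use the existing tarball if present, otherwise extract it from the SDK.
myapp_proprietary.check:
	test -f packages/myapp_proprietary.tgz || $(MAKE) myapp_proprietary.extract
```

Consult the provided wrapper Makefiles for the exact variable and rule names required by the wrlinux build system.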
The following descriptions apply to some of the support files you will find in the respective application dist directories.
wrlinux.mk: the support Makefile wrapper used to provide the needed environment for building Simple Executive applications.
makelinks: used once (when the layer is installed) to create the symbolic links in $OCTEON_ROOT/host/bin that provide access to the toolchain executables.
mips-wrs-linux-gnu-wrapper.sh: a wrapper that translates the names and argument lists of toolchain executables from their Cavium prefixes (mips64-octeon-linux-gnu-) into their Wind River equivalents (mips-wrs-linux-gnu-).
mips64octeon-wrs-elf-wrapper.sh: a wrapper that translates the names and argument lists of toolchain executables from their Cavium prefixes (mipsisa64-octeon-elf-) into their Wind River equivalents (mips64octeon-wrs-elf-).
octeon-app-init.h: required (and #include'd) by most Simple Executive applications.
application.mk, common.mk and common-config.mk: Makefile fragments extracted from Cavium's SDK that provide much of the environment in which Simple Executive applications are built.
H.5.2
The kernel source tree includes a copy of the Simple Executive source files that are used when the wrll-cavium-simple_exec layer is not configured into the project. With the layer configured into the build, the Simple Executive source files located in the sysroot are used instead. This allows the BSP to be built without the need to download the SDK. The Ethernet driver requires the use of the Simple Executive library, whether wrll-cavium-simple_exec is included or not.

When running Linux n32 usermode Simple Executive applications, it is necessary to configure the kernel with CONFIG_CAVIUM_RESERVE32=512 (or larger, in multiples of 512) to provide a shared memory communication region.

The intercept_proprietary package is an example of a kernel loadable module built with the proprietary license version of Simple Executive; it stores its output in the target rootfs as /intercept-example.ko. Refer to the documentation in Cavium's SDK for details on the usage of this application.
H.6 Configuring and Building with Workbench
The crypto_proprietary and crypto_proprietary.mips64_octeon packages, along with the other *_proprietary packages, build RPMs for the linux_n32 and linux_64 types of crypto demo examples, installing the files in /bin/crypto-linux_n32* and ...-linux_64* in the target rootfs. They also compile the standalone images and place symlinks to them in export/<board>-crypto-cvmx_n32* and crypto* (by Cavium convention, there is no extension like -cvmx_n32 for 64-bit non-Linux load images). The *_proprietary packages are so named because they extract proprietary-licensed files for the package source, and use the simple_exec_proprietary files instead of the _open ones. The _open packages extract open-licensed files.
csh users:
$ setenv SDK_ROOT /opt/octeon-sdk-1.8.0
csh users:
$ setenv OCTEON_MODEL OCTEON_CN38XX
5. Add a feature template to set up your initial package list, for example: feature/se_demo_basic
Figure H-1
OCTEON_TARGET=linux_n32: usermode n32 binary
OCTEON_TARGET=linux_64: usermode n64 binary
OCTEON_TARGET=cvmx_n32: standalone n32 binary
OCTEON_TARGET=cvmx_64: standalone n64 binary
4. Save the changes with File > Save.
5. Click the Targets tab, and select the build (or rebuild) button.
NOTE: You can also use this feature to force a value for OCTEON_MODEL, as described in H.3.4 Specifying Build Types, p.337.
Figure H-3 Overriding the OCTEON_TARGET value for a package
H.7 Configuring the Kernel with Workbench
Figure H-4
4. For this example, double-click the item Memory to reserve for user process shared region (MB) [CAVIUM_RESERVE32]. Observe that the kernel option tree automatically opens to this entry, as reflected in Figure H-5.
Machine selection > Allow User Space to access hardware IO directly [CAVIUM_OCTEON_USER] = Y > Memory to reserve for user process shared region (MB) [CAVIUM_RESERVE32] = 512
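Assuming the standard CONFIG_ prefixes for the Kconfig symbols shown above, the corresponding entries in the generated .config file would look like this:

```
CONFIG_CAVIUM_OCTEON_USER=y
CONFIG_CAVIUM_RESERVE32=512
```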
5. If you change any values, their icons display an asterisk. Select File > Save to save your changes to the .config file, and the asterisks disappear.
H.8 Debugging from the Command Line
Figure H-5
H.8.1 Overview
Cavium Networks' Octeon family of multi-core processors gives developers the option of developing multiprocessor code in either a symmetric or an asymmetric manner. Often, a system will dedicate a few cores to run an SMP operating system, such as Wind River Linux, while using one or more additional cores to run other code. This other code can be another (perhaps even another SMP) operating system, but from a performance standpoint it is often beneficial to dedicate a core to a single user-defined function. This function can operate free from the overhead of an operating system, so it does not need to contend with other tasks for the use of the processor core. These functions may be referred to as standalone applications.

While sophisticated debugging capabilities are readily available for the Linux environment, the debugging facilities for non-Linux standalone applications are very minimal, and have no built-in debugging hooks. Using GDB with these applications requires the presence of a stub program to implement the GDB remote packet protocol. This
stub may be embedded in the standalone application during the development process and then removed for production.

Debugging standalone applications using this process raises two fundamental issues. First, including the debugging stub in the standalone application increases its size and complexity, and may lead to some uncertainty that the production code functions the same way as the debug code. Second, a debugging stub implemented in this way may need to be re-developed to suit the environment of each standalone application, requiring a repeated expenditure of time and programming resources.

Cavium's approach is to localize the debugging stub and provide a minimal standardized interface to it, making GDB automatically available for debugging standalone applications. Cavium accomplished this by placing the debugging stub in the same monitor program that is used to load all programs onto the target system, u-boot.

The low-level communication between GDB and Simple Executive standalone applications is implemented using a customized version of the GDB packet protocol. The customization includes extensions for multicore debug control. On the host end is a version of GDB that understands this customized protocol. On the target end is a stub embedded into the u-boot bootloader. The connection between the two is an RS-232 serial link.
H.8.2 Prerequisites
First install, configure and build your Wind River Linux Simple Executive project using the instructions in H.3 Configuring and Building from the Command Line, p.335, and H.6 Configuring and Building with Workbench, p.341.
H.9 Setting Up the Target
standalone application without the debugger. Issue the following commands at the u-boot prompt:
Octeon EBT5800# dhcp
Octeon EBT5800# tftp $(loadaddr) /crypto-cvmx_n32
Octeon EBT5800# bootoct $(loadaddr) numcores=1
The first command configures the network interface. The second command downloads the standalone application into an address specified by the u-boot environment variable loadaddr. For recent versions of u-boot this is normally 0x20000000. By definition, tftp provides a view of a subtree of the file system of a remote machine. /crypto-cvmx_n32 is an application image prepared by building the WRLinux Simple Executive project. It must be copied from <project_dir>/export to the tftp server's tftproot directory. The third command actually runs the program that was loaded in the second command. To allow loading and simultaneous start of multiple standalone programs, u-boot prevents the code on any of the cores from running until the code running on core 0 is started. At that point, all loaded programs are simultaneously started.
Assigning the debug parameter a value is optional, and controls the serial port used for debugging:
debug: Debug (second) serial port
debug=0: Console (first) serial port
debug=1: Debug (second) serial port
debug=2: Third serial port (if available)
Once a bootoct command that includes core 0 has been issued, u-boot starts the standalone application, but halts it at a breakpoint, essentially at the entry point of the code. The debugging stub then waits for the remote GDB to connect.
The correct binary executable of GDB can be confirmed by reference to the first line of the messages printed when GDB is started, currently:
GNU gdb (Wind River Linux Sourcery G++ 4.3-85) 6.8.50.20080821-cvs

GDB will issue a startup message and a command prompt:

(Core#0-gdb)
If you are connecting your debug host directly to the target's debug port using for example /dev/ttyS1, then it is sufficient to connect the debug session with:
(Core#0-gdb) target octeon /dev/ttyS1
Where the target is stopped can be determined with the where command:
(Core#0-gdb) where #0 0x10006b6c in __octeon_trigger_debug_exception () #1 0x10006cd8 in __octeon_app_init () #2 0x100001bc in __start () (Core#0-gdb)
At this point, GDB can be used normally. Typically there is no need to single step through the code from the initial breakpoint at
__octeon_trigger_debug_exception. In most cases, one may insert a breakpoint at main and continue to that breakpoint.
This will result in all cores being stopped in __octeon_trigger_debug_exception. Next, enter the commands:
(Core#0-gdb) set step-all 1
(Core#0-gdb) b main
(Core#0-gdb) c
(Core#0-gdb) set step-all 0
At this point, all cores will be stopped at the entry point of the application main function. The GDB prompt identifies the core that is currently being debugged. To select any particular core, use the set focus <n> command, where <n> specifies the core of interest. While the focus is on one core, and with step-all turned off, each step taken in the debugger is isolated to the core that currently holds the focus. See Cavium's documentation for more information on the Cavium-specific multi-core debugger commands:
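For example, an illustrative session fragment switching the focus to core 2 and stepping only that core might look like this (the prompt changes to reflect the focused core, as described above):

```
(Core#0-gdb) set step-all 0
(Core#0-gdb) set focus 2
(Core#2-gdb) where
(Core#2-gdb) step
```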
While it is possible to debug a single application running across multiple cores, it is not possible to debug two applications, or an application and a Linux image, or any other combination that involves multiple address maps, namespaces, and control models. The bootloader debug stubs and GDB are designed around a single application debug context.
In the Machine Selection kernel configuration menu, there is a selection for the Octeon watchdog driver, CONFIG_CAVIUM_WATCHDOG. Make sure that this selection is disabled. In the Kernel Hacking kernel configuration menu, there is a selection for Remote GDB debugging using the Cavium Networks Multicore GDB (CONFIG_CAVIUM_GDB), which must be enabled.
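In the generated .config file, the two settings described above would appear as follows (assuming standard Kconfig output format):

```
# CONFIG_CAVIUM_WATCHDOG is not set
CONFIG_CAVIUM_GDB=y
```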
Loading and booting each image proceeds much like the examples above and as described in Cavium Network's SDK documentation. Keep the following hints in mind:
Specify a load address for the Simple Executive application that does not conflict with Linux' memory usage. For example, if your kernel's size is less than 8 MB, then loading Linux at 0x2000000 would allow for loading the Simple Executive application at 0x20800000.
Specify a coremask (or skipcores/numcores) to start (bootoct or bootoctlinux) the first image, so that the u-boot command line is still available to start the second one.
No need to recompile the default kernel to debug it.
The ability to load and unload KGDB as a module.
Provides KGDB-over-console functionality.
Provides a much faster single step when debugging with Workbench.
H.12 Debugging with Workbench
Disadvantages include:
Scheduling is more disrupted, as KGDB serializes tasks onto one core.
KGDB has some limitations with kernel tasklets.
KGDB kernels take away control of the debug interrupts from the bootloader and debug stubs.
KGDB is not able to debug standalone (non-Linux) Simple Executive applications.
H.12.1 Prerequisites
Requires a built Cavium Simple Executive application as a Workbench project. See H.3 Configuring and Building from the Command Line, p.335, and H.6 Configuring and Building with Workbench, p.341, for additional information.
Figure H-6
A new project has been created in your workspace. Select the project in the Project Explorer pane for the next step.
Figure H-7
Figure H-8
The currently focused core is displayed as Thread[core#] in the Debug view. For example, if core 0 is active, it is displayed as Thread[0]. In addition to the usual run-control commands, such as step over, step into, and so on, three additional context menu items are available specific to the Cavium Simple Executive Application debugger:
Select Active Cores
This menu item is used to select the set of active cores. This corresponds to the GDB command set active-cores.
Figure H-9 Select Active Cores
H.13 Known Issues, Limitations, and Tips
This corresponds to the GDB command set focus, which selects the currently focused core.
Figure H-10 Select Focus Core
This is a toggle item which controls the GDB step-all flag. When the item appears with a check-mark in the menu, the step-all flag is on, otherwise it is off.
The export-sysroot build target feature can be used to export the project's sysroot for use on other hosts.
The export-toolchain build target feature can be used to export the sysroot's companion Simple Executive toolchain.
There has not been any testing with busybox, which is the core part of the glibc_small and uclibc_small file systems.
Due to an erratum in the CN58XX Pass 1 silicon, the low_latency_mem application fails on these parts. The application works correctly on CN38XX and CN58XX Pass 1.1 (or later) silicon.
Avoid single-stepping (particularly in assembly mode with the si instruction) into any code that deals with atomic operations on memory. Note particularly that this precludes stepping into spinlocks. This is true of any debugged code, kernel or standalone images.
When setting a breakpoint in the Linux kernel at do_fork, then continuing from the breakpoint, the debugger immediately hits the breakpoint again. Note that do_fork is reached via a system call, which is restarted as a result of a signal received during the processing of the breakpoint.
To work around this, each time you continue from the breakpoint, use a temporary breakpoint, which is removed as soon as it is hit. This allows the kernel to continue, rather than stopping again.
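A sketch of this workaround using GDB's standard tbreak command, which sets a breakpoint that is automatically deleted the first time it is hit:

```
(Core#0-gdb) tbreak do_fork
(Core#0-gdb) c
```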
Single-stepping the Linux kernel does not work very well when interrupts are occurring, such as from the clock, network, and serial port(s). A single step is likely to unexpectedly take you into the start of an interrupt service routine.
When debugging the Linux kernel running on multiple cores (symmetric multiprocessing) as a standalone application, you need to set step-all 1. Otherwise, you may see BUG: soft lockup detected on CPU#0!....
The bootloader has only one debug context. It is not possible to debug more than one Simple Executive application at a time, and it is not possible to debug both a Simple Executive application and the Linux kernel at the same time.
The Simple Executive GDB (and as a result, Workbench) is not able to load, run, start, or restart standalone applications or the Linux kernel. You must manually load everything to be debugged via the bootloader before connecting the debugger to the target.
It is suggested to load and start applications from higher-numbered cores to lower-numbered cores. Once core 0 has been started, it is not possible to load and start additional applications.
Specify a load address for the Simple Executive application that does not conflict with Linux' memory usage. For example, if your kernel's size is less than 8 MB, then loading Linux at 0x2000000 will allow for loading the Simple Executive application at 0x20800000.
If the network is congested and/or slow (for example, debugging over a WAN link), Workbench may raise a dialog box that displays: target is not responding (time out). However, it has connected, and the target may be debugged.
I
Glossary
board
A model of target hardware; see also target. Several different configurations of a board may each be considered a separate target.
board support package (BSP)
The files needed to allow a particular board to be used as a target by Wind River Linux. Within the context of the Wind River Linux build system, a BSP is a template which can be applied to a project.
BSP directory
The directory containing a particular BSP. This directory is found in the templates/board subdirectory of the layer containing the BSP.
config files
Also kernel config files or config fragments. The *.cfg files that are combined and audited to produce the final .config kernel configuration file.
build directory
The directory named build in a project, where build tasks such as patching and compilation are actually performed.
git
Revision control system used with the Wind River Linux kernel and, in general, by the Linux kernel community.
host toolchain
Used on the development host to build the host tools and other software. This toolchain is provided by the development host operating system, for example by Red Hat or Ubuntu. This is a different toolchain from the cross-development toolchain.
host tools
Used on the development host to perform functions that are part of the build process, but other than the toolchain compiling functions. They are built by Wind River or by the user.
kernel configuration fragment

A file containing kernel configuration instructions to be combined into a complete kernel configuration file. Each fragment generally controls a related set of features. For instance, the kernel configuration fragment for a BSP specifies CPU, architecture, and driver options needed to run on the target.
kernel directory
The directory containing a kernel source tree. Usually a subdirectory of the build directory.
kernel-cache
A Wind River-maintained repository that contains patches, kernel config files, and the information required to construct the kernel git repository.
kernel image
The file containing a compiled kernel, in the format used by a boot loader to load the kernel into memory.
kernel layer
The layer containing the standard Wind River Linux kernel tree and patches. Found in installDir/layers/wrll-linux-version, where version is the revision of the mainline kernel in use, such as 2.6.27.
layer
A collection of packages and templates for use with the Wind River Linux build system.
layer directory
A sequence that contains the set of steps required to create a fully branched, tagged and history-clean git repository.
package
A collection of software and files for installation on a target's root file system. The term package is used generically for both source builds and binary distributions. Examples include the ncurses library, or the busybox shell and utilities.
patch file
A file containing modifications to make to source code, conventionally in a format understood by the historic patch utility. Patches for use with Wind River Linux should be in unified diff format.
patch list
project
A working directory containing configuration files used by the build system to produce runnable code for a particular target. A project may also be called a Workbench project or a build project. You may use Workbench, the command line, or a combination of the two when working on projects. A project is assembled by combining templates.
project directory
The directory containing a project. Created by the configure script, or the Wind River Workbench configuration tool.
pseudo
pseudo is Wind River's replacement for fakeroot that allows the Wind River build system to install files into the target root file system without having to actually set the UID to root. pseudo intercepts the system calls having to do with root privileges on file operations. It creates the regular and special files and directories, but maintains a small database of what the settings would be if you had actually been root; this includes setuid and setgid file permissions, device file class/major/minor numbers, and uid and gid ownership. This is how the build system can create a root file system tar file that has actual device files and root ownership: pseudo carries all the information from one program to the other.
readme file
A file, usually named README, describing a BSP or other files. Sometimes referred to as a readme, rather than a readme file.
smudge file
A file containing patch application instructions. Unlike a patch list, a smudge file can apply patches selectively. Each smudge file must have a unique name.
target
A piece of hardware or simulated hardware on which software needs to be run. Typically, software is built and configured for a particular target before being installed. A target generally refers to a specific configuration of a board.
template
A collection of configurations, settings, and patches used to modify the kernel or file system built for a target. Templates are combined to create a project.
toolchain
The compiler and other tools used on the development host to compile the software that will run on the target. Also called cross-development toolchain to distinguish it from the host toolchain.
unified diff format
The preferred format for patches used with Wind River Linux. Unified diff format is the format produced by diff -u. Unified diffs are easier to read than ed-style diffs, and more compact than context diffs.
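For example, a quick way to see the format, using throwaway files (diff exits non-zero when the files differ, so the || true keeps a scripted run going):

```shell
printf 'one\ntwo\nthree\n' > old.txt
printf 'one\nTWO\nthree\n' > new.txt
# -u selects unified diff format: ---/+++ file headers and @@ hunk markers,
# with removed lines prefixed by - and added lines prefixed by +.
diff -u old.txt new.txt > example.patch || true
cat example.patch
```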
upstream
To, or in the direction of, the original developer or the maintainer of an open source project.
Index
Symbols
.cfg files 23, 104
A
adding packages
  general 107
  makefile 121
  spec file 120
  SRPM 109, 256
  with layers 273
analysis layer 21
application
  adding to platform project 14
  developer 7
  development with sysroots 14, 84
audit data directory 101
auditing 97
C
cavium simple executive 331
CGL 9
checksum meta data 45
checksums 45
classic packages 107
configuration files, templates 23
conditional real-time 123
config file fragments 104
config.log 34
config.sh
  build variables 325
  file 23
configuration 32
configure
  examples 37
  options 35, 40
  script 19, 33
  template 71
  with layers 78
configuring with profiles 36
consumer_premise_equipment profile 25
conventions in document text 6
core layer (wrll-wrlinux) 22
core layer templates 24
creation.log 34
CRITICAL_IRQSOFF_TIMING 128
CRITICAL_PREEMPT_TIMING 128
cross-development tools 10
custom layers 35, 73
custom templates 67, 71
B
board
  documentation 11
  README files 11
  supported 11
  templates, installed 27
boot-time 138
boot-time, early 139
boot-time, late 141
BSP
  creation 177
  modification example 280
  templates 27
build
  environment 34
  methods 43
  subdirectories 34
  system (LDAT) 19
D
debug file system 39
DEBUG_PREEMPT 128
debugging small file systems 39
default template 70
demo file system 39
deploying a project 16
design, build system 31
developer types 7
development environment 17, 18
development workflow 13
directory structure, installed 18
document conventions 6
documentation, Wind River Linux 4
  tree 172
  using 168
glibc_cgl file system 26
glibc_small file system 26, 39
glibc_std file system 26
guaranteed real-time 123
guilt 169
H
higher layer 49
host requirements 32
host tool patching 276
host tools layer 22
E
ECGL 9
--enable-ldat-checksum 45
epne profile 25
exporting layer 75
exporting sysroots 84
export-layer target 75
extra templates 28
I
importing packages 268
importPackages.tcl 268
include files 23, 56, 69
industrial_equipment profile 25
init boot 141
initramfs 193
installation 11
installed layers 21
installed software organization 17
iso images, creating 238
F
feature matrix (kernel) 8
feature templates 28
feature templates in layers 279
file system
  construction 62
  layout 303
  modification 91
  types 26, 27, 39
file system/fs directory 62, 91
filesystem types 26
footprint 152
fs directory 62, 91
fs directory files 23
fs target 43, 44
ftrace 138
K
Kconfig files 98 kern_tools 170 kernel and file system components 8 config files 104 config fragments 98 config options 104 configuration 98 configuration (Workbench) 103 feature matrix 8 feature profiles 8 fragment audits 97 layer 22 layer templates 29 lifecycle 170 patching 174, 180 preemption 123 profiles 8 reconfiguration 103 source tree 167 tree 172 types 8 workflow 170 kernel-cache 166
G
gdb 309 git commands 168 general 169 leaf nodes 171 overview 165 repository 171
364
Index
139
mobile_mulitmedia_device profile 25 modifying target file system 91 modlist files 23 multilibs 34, 86
L
layer contents 51 defined 20 local custom 74 search list 50 structure 74 layers 20 and templates 20 and templates, relationship 49 creating 74 custom 73 directory 20, 21 examples 272 exporting 75 file (in prjbuildDir) 51 higher and lower 49 in development environment 21 installed 21 manual creation 77 overview 50 processing order 79 LDAT 31 ldat directory 19 LDAT_FORCE_CLEAN environment variable 45 LDAT_LAYER_PATH environment variable 51 leaf nodes 171, 183 Linux Distribution Assembly Tool (LDAT) 19, 31 Linux Kernel Configurator (LKC) 98 linux.menuconfig target 103 LKC 98 local custom layer 74 local layer 35 login 188 lower layer 49 lpne profile 26
O
online support 11 optimizing boot time 138 optional inclusion of templates 59 options, kernel 104 overriding layers 275
P
package adding classic archive package 121 adding classic with rpmbuild 120 adding RPM 122, 270 adding SRPM 109 adding with importPackages.tcl 268 checksums 45 preparing to add 108 rebuilding and checksums 45 removing 121 password 188 patch management 176 merge 179 patching a host tool 276 SRPMs 160 the kernel 174 with quilt 160 pkglist files 23 platform developer 7 platform project configuration 32 pne profile 25 Pre-boot Execution Environment (PXE) boot loader 215 pre-defined profiles 25 PREEMPT_DESKTOP 125 PREEMPT_HARDIRQS 124 PREEMPT_NONE 125 PREEMPT_RCU 127 PREEMPT_RT 124, 126 preempt_rt 9, 123 PREEMPT_SOFTIRQS 124, 127 PREEMPT_VOLUNTARY 125 preempt-rt 123 prjbuildDir as layer 74 processing *list.* files 60 file fragments 60 include files 56 template components 60
M
make build-all 46 export-layer 74, 75 export-sysroot 84 fs 44 linux.menuconfig 103 menuconfig 103 targets 299 xconfig 103 man pages 270 menuconfig of kernel options merge patches 179
103
365
templates 54 profiles configuration 36 custom 72 general 25 pre-defined 25 templates 25 project build directory 34 project deployment 16 PXE boot process overview 215 PXELinux boot loader file 216
starting Workbench 18 startWorkbensh.sh 18 supported boards 11 Syslinux 215 sysroots 83, 84, 86, 113 sysroots directory 19
T
target TIPC 314 target configuration files 91 target file system 62 template components 60 configuration files 23 defined 20 include files 56 names 68 processing 54 search list 53 search order 52 structure 69 templates 20 and layers 20 custom 67 in the development environment 23 installed 23, 24 kernel layer 29 of the same name 58 overview 52 processing order 71 toolchain 28 test templates 28 TFTP configuration file 203 TFTP download directory 207 tgtsvr command (TIPC) 316 TIPC kernel module 314 overview 313 proxy 315 targets 314 toolchain layer 22 layer templates 28 templates 28 toolslist files 23
Q
QEMU 193 description 187 IP addresses 187 KGDB debugging with Workbench terminating 190 quilt 160
188
R
ram disk size, increasing 213 README file 11 readme files 23 real-time 123 real-time support 123 reference manual pages 270 required-*.txt files 32 requirements, host 32 rootfs templates, installed 26 RPM build 44 RPM build (fs) 43 rtcore kernel 9
S
scc 169, 174, 180 files 181 overview 181 searching layers 50 searching templates 52 selinux 241 server installation 227 size of runtime footprint 152 small kernel 9 source build (build-all) 43 source build method 46 spec file 107 SRPM package example 256 SRPM packages 107 standalone server installation 227 standard kernel 8
U
uclibc_small file system 27, 39 usb image creation 238 usermode-agent reference page 317
366
Index
V
variables, build 325
W
WAKEUP_LATENCY_HIST 128 Wind River Linux, overview 6 Wind River Online Support 11 Wind River Workbench 18 --with-layer 78 --with-template 71 Workbench directories 18 workflow 13 workflow, kernel build 166 wrlinux directory 19 wrlinux-3.0 directory 19 wrll-analysis-version layer 21 wrll-host-tools layer 22 wrll-linux layer 22 wrll-linux-version layer 22 wrll-toolchain-version layer 22 wrll-wrlinux templates 24
X
xinetd 203
367