
Writing Device Drivers for LynxOS
LynxOS Release 5.0

DOC-0818-01
Product names mentioned in Writing Device Drivers for LynxOS are trademarks of their respective manufacturers and
are used here only for identification purposes.

Copyright ©1987 - 2009, LynuxWorks, Inc. All rights reserved.


U.S. Patents 5,469,571; 5,594,903; 6,075,939; 7,047,521

Printed in the United States of America.

All rights reserved. No part of Writing Device Drivers for LynxOS may be reproduced, stored in a retrieval system, or
transmitted, in any form or by any means, electronic, mechanical, photographic, magnetic, or otherwise, without the
prior written permission of LynuxWorks, Inc.

LynuxWorks, Inc. makes no representations, express or implied, with respect to this documentation or the software it
describes, including (with no limitation) any implied warranties of utility or fitness for any particular purpose; all such
warranties are expressly disclaimed. Neither LynuxWorks, Inc., nor its distributors, nor its dealers shall be liable for any
indirect, incidental, or consequential damages under any circumstances.

(The exclusion of implied warranties may not apply in all cases under some statutes, and thus the above exclusion may
not apply. This warranty provides the purchaser with specific legal rights. There may be other purchaser rights which
vary from state to state within the United States of America.)
Contents

PREFACE ................................................................................................................ III


For More Information .....................................................................................iii
Typographical Conventions ............................................................................ iv
Special Notes ................................................................................................... v
Technical Support ............................................................................................ v
How to Submit a Support Request ........................................................... v
Where to Submit a Support Request ....................................................... vi

CHAPTER 1 DRIVER BASICS ......................................................................................... 1


Driver Types .................................................................................................... 1
Character Driver ....................................................................................... 2
Block Driver ............................................................................................. 2
Device Classifications ..................................................................................... 3
Referencing Devices and Drivers .................................................................... 3
Drivers ...................................................................................................... 3
Devices ..................................................................................................... 4
Application Access to Devices and Drivers ............................................. 5
Mapping Device Names to Driver Names ............................................... 6

CHAPTER 2 DRIVER STRUCTURE .................................................................................. 7


Device Information Structure .......................................................................... 7
Device Information Definition ................................................................. 7
Device Information Declaration ............................................................... 9
Statics Structure ............................................................................................. 10
Entry Points ................................................................................................... 11


install Entry Point ................................................................................... 11


open Entry Point ..................................................................................... 14
close Entry Point .................................................................................... 15
read Entry Point ...................................................................................... 16
write Entry Point .................................................................................... 17
select Entry Point ................................................................................... 18
ioctl Entry Point ..................................................................................... 20
uninstall Entry Point ............................................................................... 22
strategy Entry Point ................................................................................ 23

CHAPTER 3 MEMORY MANAGEMENT ......................................................................... 25


LynxOS Virtual Memory Model ................................................................... 25
LynxOS Address Types ................................................................................. 26
User Virtual ............................................................................................ 26
Kernel Virtual ......................................................................................... 26
Kernel Direct Mapped ............................................................................ 26
Physical .................................................................................................. 27
Allocating Memory ....................................................................................... 27
The sysbrk(), sysfree() Technique .......................................................... 27
The alloc_cmem(), free_cmem() Technique .......................................... 28
The get1page(), free1page() Technique ................................................. 28
The get_safedma_page(), free_safedma_page() Technique ................... 29
The alloc_safedma_contig(), free_safedma_contig() Technique ........... 29
DMA Transfers .............................................................................................. 30
Address Translation ....................................................................................... 31

CHAPTER 4 SYNCHRONIZATION .................................................................................. 35


Introduction ................................................................................................... 35
Event Synchronization ........................................................................... 35
Critical Code Regions ............................................................................ 36
Resource Pool Management ................................................................... 38
Memory Access Ordering ...................................................................... 38
Synchronization Mechanisms ........................................................................ 39
Atomic Operations ................................................................................. 40
Kernel Semaphores ................................................................................ 43
Kernel Spin Locks .................................................................................. 47
Disabling Interrupts, the Single CPU Case ............................................ 50
Disabling Preemption, the Single CPU Case ......................................... 51
Disabling Interrupts and Preemption in SMP Case,
the Big Kernel Lock ............................................................... 52
Critical Code Regions .................................................................................... 53
Using Kernel Semaphores for Mutual Exclusion ................................... 53
Using Spin Locks for Mutual Exclusion ................................................ 54
Using Interrupt Disabling for Mutual Exclusion .................................... 54
Using Preemption Disabling for Mutual Exclusion ............................... 55
Event Synchronization ................................................................................... 56
Handling Signals .................................................................................... 56
Using sreset with Event Synchronization Semaphores .......................... 57
Caution when Using sreset ..................................................................... 58
Resource Pool Management .......................................................................... 59
Combining Synchronization Mechanisms ..................................................... 60
Manipulating a Free List ........................................................................ 60
Signal Handling and Real-Time Response ............................................. 63
Nesting Critical Regions ........................................................................ 64
Memory Barriers ............................................................................................ 65
Memory Ordering and Its Impact on Synchronization ........................... 65
Memory Barrier API .............................................................................. 67
Memory Barriers in Device Drivers ....................................................... 69

CHAPTER 5 ACCESSING HARDWARE........................................................................... 71


General Tasks ................................................................................................ 71
Synchronization ...................................................................................... 71
Handling Bus Errors ............................................................................... 71
Probing for Devices ................................................................................ 72
Memory Mapped I/O Access Ordering .................................................. 72
Device Access ................................................................................................ 72
Device Resource Manager (DRM) ......................................................... 73

CHAPTER 6 ELEMENTARY DEVICE DRIVERS ................................................................ 75


Character Device Drivers .............................................................................. 75
Device Information Definition ............................................................... 75
Device Information Declaration ............................................................. 76
Declaration for ioctl ................................................................................ 77
Hardware Declaration ............................................................................. 79
Driver Source Code ................................................................................ 79
Statics Structure ...................................................................................... 80
install Routine ......................................................................................... 80


uninstall Routine .................................................................................... 82


open Routine .......................................................................................... 83
close Routine .......................................................................................... 83
Auxiliary Routines ................................................................................. 83
read Routine ........................................................................................... 92
write Routine .......................................................................................... 92
select Routine ......................................................................................... 93
ioctl Routine ........................................................................................... 93
Dynamic Installation .............................................................................. 93

CHAPTER 7 INTERRUPT HANDLING ............................................................................. 95


LynxOS Interrupt Handlers ........................................................................... 95
General Format .............................................................................................. 96
Use of Queues ................................................................................................ 96
Typical Structure of Interrupt-based Driver .................................................. 97
Example ......................................................................................................... 97
tty Manager ............................................................................................ 98
Device Information Definition ............................................................... 99
Device Information Declaration ........................................................... 100
tty Manager Statics Structure ............................................................... 101
Statics Structure .................................................................................... 102
install Entry Point ................................................................................. 103
uninstall Entry Point ............................................................................. 105
open Entry Point ................................................................................... 105
close Entry Point .................................................................................. 106
Interrupt Handler .................................................................................. 107
write Entry Point .................................................................................. 110
read Entry Point .................................................................................... 110
ioctl Entry Point ................................................................................... 110
The ttyseth Procedure ........................................................................... 111

CHAPTER 8 KERNEL THREADS AND PRIORITY TRACKING ........................................... 113


Device Drivers in LynxOS .......................................................................... 113
Interrupt Latency .................................................................................. 114
Interrupt Dispatch Time ....................................................................... 114
Driver Response Time .......................................................................... 114
Task Response Time ............................................................................ 114
Task Completion Time ......................................................................... 115
Real-Time Response .................................................................................... 115



Kernel Threads ............................................................................................ 116
Creating Kernel Threads ...................................................................... 116
Structure of a Kernel Thread ....................................................................... 117
Exclusive Access .................................................................................. 117
Multiple Access .................................................................................... 119
Priority Tracking .......................................................................................... 121
User and Kernel Priorities .................................................................... 123
Exclusive Access .................................................................................. 124
Multiple Access .................................................................................... 125
Non-atomic Requests ........................................................................... 127
Controlling Interrupts .................................................................................. 128
Example ....................................................................................................... 129
Statics Structure .................................................................................... 130
tty Driver Auxiliary Routines and Macros ........................................... 130
The install Entry Point .......................................................................... 131
The Thread Function ............................................................................ 132
The Interrupt Routine ........................................................................... 134

CHAPTER 9 INSTALLATION AND DEBUGGING ............................................................ 135


Static Versus Dynamic Installation ............................................................. 135
Static Installation .................................................................................. 135
Dynamic Installation ............................................................................ 136
Static Installation Procedure ........................................................................ 136
Driver Source Code .............................................................................. 138
Device and Driver Configuration File .................................................. 138
Configuration File: config.tbl ............................................................... 138
Rebuilding the Kernel .................................................................................. 139
Dynamic Installation Procedure .................................................................. 140
Driver Source Code .............................................................................. 140
Driver Installation ................................................................................. 142
Device Information Definition and Declaration ................................... 143
Device Installation ................................................................................ 143
Node Creation ....................................................................................... 143
Device and Driver Uninstallation ......................................................... 144
Common Error Messages During Dynamic Installation ...................... 144
Debugging ................................................................................................... 145
Hints for Device Driver Debugging ..................................................... 146
Additional Notes .......................................................................................... 146


CHAPTER 10 DEVICE RESOURCE MANAGER................................................................ 149


DRM Concepts ............................................................................................ 149
Device Tree .......................................................................................... 149
DRM Components ................................................................................ 151
DRM Nodes .......................................................................................... 152
DRM Node States ................................................................................. 153
DRM Initialization ............................................................................... 155
DRM Service Routines ................................................................................ 155
Interface Specification .......................................................................... 157
Using DRM Facilities from Device Drivers ................................................ 159
Device Identification ............................................................................ 159
Device Interrupt Management .............................................................. 160
Device Address Space Management .................................................... 161
Device IO ............................................................................................. 162
DRM Tree Traversal ............................................................................ 162
Device Insertion/Removal .................................................................... 163
Advanced Topics ......................................................................................... 164
Adding a New Bus Layer ..................................................................... 164
PCI Bus Layer ............................................................................................. 170
Services ................................................................................................ 170
Resources ............................................................................................. 170
Plug-in Resource Allocator .................................................................. 172
Example Driver ........................................................................................... 175
Kernel Data Structures ................................................................................ 181
struct ether_header ............................................................................... 182
struct arpcom ........................................................................................ 182
struct sockaddr ...................................................................................... 183
struct sockaddr_in ................................................................................ 183
struct in_addr ........................................................................................ 183
struct ifnet ............................................................................................. 184
struct mbuf ............................................................................................ 185
Statics Structure ........................................................................................... 190
Packet Queues ............................................................................................. 190
Driver Entry Points ...................................................................................... 191
install Entry Point ........................................................................................ 191
Finding the Interface Name .................................................................. 191
Initializing the Ethernet Address .......................................................... 192
Initializing the ifnet Structure .............................................................. 192



Packet Output .............................................................................................. 193
ether_output Function ........................................................................... 193
Kernel Thread Processing .................................................................... 195
Packet Input ................................................................................................. 195
Determining Packet Type ..................................................................... 196
Copying Data to mbufs ......................................................................... 196
Enqueueing Packet ............................................................................... 197
Statistics Counters ................................................................................ 197
ioctl Entry Point ........................................................................................... 197
SIOCSIFADDR .................................................................................... 198
SIOCSIFFLAGS .................................................................................. 198
watchdog Entry Point .................................................................................. 199
reset Entry Point .......................................................................................... 199
Kernel Thread .............................................................................................. 200
Priority Tracking .......................................................................................... 200
Driver Configuration File ............................................................................ 201
IP Multicasting Support ............................................................................... 201
ether_multi Structure ............................................................................ 201
Berkeley Packet Filter (BPF) Support ......................................................... 203
Alternate Queueing ...................................................................................... 204
Converting Existing Drivers for ALTQ Support ......................................... 205
Introduction ................................................................................................. 209
LynxOS Device Driver Interfaces ............................................................... 210
General ................................................................................................. 211
Error Return Codes ............................................................................... 211
Device Driver to Hardware Interfaces .................................................. 212
LynxOS Specific Device Driver Organization ..................................... 212
Device Driver Service Calls ........................................................................ 226
MM_BLOCK Device Driver API ........................................................ 231
Memory Management Functions .......................................................... 232
Priority Tracking Functions .................................................................. 233
Semaphore Functions ........................................................................... 236
File System Driver Service Calls ......................................................... 240
TTY Manager Service Calls ................................................................. 243
Kernel Objects ............................................................................................. 245
File System Kernel Objects .................................................................. 247
Definitions ................................................................................................... 248


INDEX ............................................................................................................ 251



Preface

The Writing Device Drivers for LynxOS guide describes the fundamental concepts of
device drivers under LynxOS. The following chapters provide numerous examples
to help you write customized drivers for a wide variety of devices.
The information in this document revision applies to LynxOS version 5.0.
Throughout this document, the operating system is referred to simply as LynxOS.

For More Information


For information on the features of LynxOS, refer to the following printed and
online documentation.
• LynxOS Release Notes
This printed document contains details on the features and late-breaking
information about the current release.
• LynxOS Installation Guide
This manual details the installation instructions and configurations of
LynxOS.
• LynxOS User’s Guide
This guide provides information about basic system administration and
kernel configuration for LynxOS.


Typographical Conventions
The typefaces used in this manual, summarized below, emphasize important
concepts. All references to filenames and commands are case-sensitive and should
be typed accurately.

Kind of Text                                    Examples

Body text; italicized for emphasis, new         Refer to the LynxOS User’s Guide.
terms, and book titles

Environment variables, filenames,               ls
functions, methods, options, parameter          -l
names, pathnames, commands, and                 myprog.c
computer data                                   /dev/null

Commands that need to be highlighted            login: myname
within body text, or commands that must         # cd /usr/home
be typed as is by the user are bolded.

Text that represents a variable, such as a      cat <filename>
filename or a value that must be entered        mv <file1> <file2>
by the user, is italicized.

Blocks of text that appear on the display       Loading file /tftpboot/shell.kdi
screen after entering instructions              into 0x4000
or commands                                     .....................
                                                File loaded. Size is 1314816
                                                Copyright 2000 - 2006
                                                LynuxWorks, Inc.
                                                All rights reserved.

                                                LynxOS (x86) created Mon Feb 20
                                                17:50:22 GMT 2006
                                                user name:

Keyboard options, button names, and             Enter, Ctrl-C
menu sequences


Special Notes
The following notations highlight any key points and cautionary notes that may
appear in this manual.

NOTE: These callouts note important or useful points in the text.

CAUTION! Used for situations that present minor hazards that may interfere with
or threaten equipment/performance.

Technical Support
LynuxWorks Support handles support requests from current support subscribers.
For questions regarding LynuxWorks products or evaluation CDs, or to become a
support subscriber, our knowledgeable sales staff will be pleased to help you
(http://www.lynuxworks.com/corporate/contact/sales.php3).

How to Submit a Support Request


When you are ready to submit a support request, please include all the following
information:
• First name
• Last name
• Your job title
• Phone number
• Fax number
• E-mail address
• Company name
• Address


• City, state, ZIP


• Country
• LynxOS or BlueCat Linux version you are using
• Target platform (for example, PowerPC or x86)
• Board Support Package (BSP)
• Current patch revision level
• Development host OS version
• Description of problem you are experiencing

Where to Submit a Support Request

By E-mail:

Support, Europe                               tech_europe@lnxw.com
Support, worldwide except Europe              support@lnxw.com
Training and courses                          USA: training-usa@lnxw.com
                                              Europe: training-europe@lnxw.com

By Phone:

Training and courses                          USA: +1 408-979-4353
                                              Europe: +33 1 30 85 06 00
Support, Europe                               +33 1 30 85 93 96
(from our Paris, France office)
Support, worldwide except Europe and Japan    +1 800-327-5969 or
(from our San José, CA, USA headquarters)     +1 408-979-3940
Support, Japan                                +81 33 449 3131


By Fax:

Support, Europe                               +33 1 30 85 06 06
(from our Paris, France office)
Support, worldwide except Europe and Japan    +1 408-979-3945
(from our San José, CA, USA headquarters)
Support, Japan                                +81 22 449 3803



CHAPTER 1 Driver Basics

This chapter describes the LynxOS driver types, major and minor numbers, and
how to reference devices and drivers.

Driver Types
A device driver can be defined as an interface between the operating system and
the hardware device. It is a piece of software designed to hide the idiosyncrasies of
a particular device from the kernel. In LynxOS, device drivers are actually a part of
the kernel and have complete and unrestricted access to operating system
structures. Thus, it is the developer’s responsibility to restrict the access of a device
driver to only the necessary structures.
The mechanism of device driver access by the operating system follows a
predefined format. LynxOS supports two types of drivers:
• Character driver
• Block driver
Driver type selection is carried out according to physical device requirements. For
example, if the kernel buffer cache need not be used, then it makes sense to use a
character driver as opposed to a block driver.
The major differences between a character driver and a block driver are described
in the next sections.


Character Driver
A character driver has the following characteristics:
• A character driver has the read() and write() entry points and no
strategy() entry point.

• A character driver can exist on its own and operates in raw mode only.
The following physical devices are usually handled by a character driver:
• Serial port driver
• Parallel port driver
• Analog-to-digital card driver
• Network card driver (TCP/IP portion of the driver)

Block Driver
A block driver is the combination of a character and a block driver. Both sets of
entry points refer to the same statics structure, but each set is accessed through a
different major number. If the character entry points are used, the block driver is
operating in raw mode. If the block driver entry points are used, the block driver is
operating in block mode.

NOTE: The major difference between raw mode and block mode is that raw mode
is not cached, while block mode is cached.

A block driver has the following characteristics:


• A block driver has a strategy() entry and no read() or write()
entry points.
• A block driver is tied in to the kernel disk cache. All requests using the
strategy() routine will go through the disk cache. As a result, only a
block driver can support a file system.
The following physical devices are usually handled by a block driver:
• Disk drivers including the following devices:
- SCSI
- IDE
- ramdisk
- Flash Memory

Device Classifications
LynxOS identifies devices using major and minor numbers. A major device
corresponds to the set of entry points of a driver and a set of resources. A minor
number is used to divide the functionality of the major number. The major number
and minor number take up 24 bits. The minor number uses the lower 8 bits and the
major number the next 16 bits. The upper 8 bits are reserved for block size.
For example, assume the major number x corresponds to an Adaptec 2940 SCSI
card. All SCSI devices controlled by that card will have major number x because
they use the same interrupt vector and I/O ports. The minor number is used to
select the SCSI device and partition. The minor number uses 4 bits for the SCSI
device and 4 bits to represent the partition. This gives a maximum of 16 SCSI
devices and 16 partitions per major number.

Figure 1-1: Minor Number for the Adaptec 2940 Driver
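As a hedged illustration of this encoding, the sketch below shows how such a driver
might split the 8-bit minor number into its SCSI device and partition fields. The macro
names and the placement of the partition in the upper 4 bits are assumptions made for
this example only (they mirror the ram disk driver shown in Chapter 2); each driver
chooses its own interpretation of the minor number.

/* Hypothetical decoding of the 8-bit minor number described above.
   Assumed layout: lower 4 bits select the SCSI device, upper 4 bits
   select the partition. */
#define SCSI_ID_MASK     0x0f
#define PARTITION_SHIFT  4

static void decode_minor(int minor_number, int *scsi_id, int *partition)
{
    *scsi_id   = minor_number & SCSI_ID_MASK;
    *partition = (minor_number >> PARTITION_SHIFT) & 0x0f;
}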

Another example is a timer card driver. It has a major number of y. Because it has
no extra functionality, it has a minor number of 0. Therefore, the timer card driver
can be accessed by using only the major number (major number + 0 = major
number). The major number selects the appropriate entry points to use, and the
minor number selects different capabilities within each major number.

Referencing Devices and Drivers


Drivers and devices in LynxOS can be referenced by their identification numbers.

Drivers
Drivers are referenced using a unique driver identification number. This number is
assigned automatically during kernel configuration. Drivers supporting raw
(character) and block interfaces have separate driver identification numbers for
each interface. The drivers command displays the drivers currently installed in
the system and their unique driver identification numbers.
Table 1-1 shows a sample output of the drivers command.

Table 1-1: Sample Output of the drivers Command

ID   Type    Major Devices   Start   Size   Name

0    char    1               0       0      null
1    char    1               0       0      mem
2    char    1               0       0      ctrl driver
3    char    1               0       0      SIM1542 RAW SCSI
4    block   1               0       0      SIM1542 BLK SCSI
5    char    1               0       0      kdconsole
6    char    2               0       0      serial

Devices
Each device is identified by a pair of major/minor numbers. LynxOS automatically
assigns the major numbers during kernel generation. Character and block
interfaces for the same device are indicated by different major numbers.
To view major devices installed on the system, use the devices command.
The ID column of Table 1-2 gives the major number of the device.

Table 1-2: Sample Output of the devices Command

ID   Type    Driver   Use Count   Start      Size   Name

0    char    0        2           0          0      null device
1    char    1        1           0          0      memory
2    char    2        0           0          0      ctrl dev
3    char    3        1           db0d8a70   0      SIM1542 RAW SCSI
4    char    5        9           db0d8fd8   0      kdconsole
5    char    6        0           db0dc260   0      com 1
6    char    6        0           db0dce40   0      com 2
1    block   4        2           db0d8a70   0      SIM1542 SCSI

Minor devices are identified by the minor device number. These numbers may be
used to indicate devices with different attributes. Minor device numbers are
necessary only if there are multiple minor devices per major device. The meaning
of the minor device number is selected and interpreted only by the device driver.
The kernel does not attach a special meaning to the minor number. For example,
different device drivers will use the minor device number in different ways: device
type, SCSI target ID (for example, a SCSI disk controller driver), or a partition (for
example, an IDE disk controller driver).

Application Access to Devices and Drivers


Like UNIX, LynxOS is designed so that devices and drivers appear as files in the
file system. Applications can access devices and drivers using the special device
files. These files usually reside in the /dev directory (although they can be put
anywhere) and are viewable, like other files, through the ls -l command. The
device special files are named the same way as regular files and are identified by
the device type (character [c] or block [b]) in the first character of the first column
of the listing. Special device files have a file size of 0; however, they do occupy an
inode and take up directory space for their name.
The size column of an ls -l listing of such files shows the device’s major and
minor numbers, respectively (see Table 1-3). Special device files are created with
the mknod utility.

Table 1-3: Sample ls -l Output of the /dev Directory

Permissions   Links   Owner   Major, Minor   Modification Date   Device Filename

crw-rw-rw-    1       root    0,0            Mar 29 01:57        null
crw-r--r--    1       root    1,0            Mar 29 01:52        mem
crw-rw-rw-    1       root    2,0            Mar 29 01:52        tty
crw-------    1       root    3,0            Mar 29 01:52        rsd1542.0
crw-------    1       root    3,16           Mar 29 01:52        rsd1542.0a
crw--w--w-    1       chris   4,0            Mar 29 01:58        atc0
crw--w--w-    1       root    4,1            Mar 29 01:57        atc1
crw-rw-rw-    1       root    5,0            Mar 29 01:52        com1
crw-rw-rw-    1       root    6,0            Mar 29 01:52        com2
brw-------    1       root    1,0            Mar 29 01:52        sd1542.0
brw-------    1       root    1,16           Mar 29 01:52        sd1542.0a
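As a brief, hedged illustration, a device node like those listed above can also be
created programmatically with the POSIX mknod() call. The device name, mode, and
numbers below are assumptions for the example; the dev_t value is built from the
encoding described in “Device Classifications” (minor number in the lower 8 bits),
although a given system may instead provide a makedev() macro for this purpose.

#include <sys/types.h>
#include <sys/stat.h>

/* Hypothetical example: create a character special file equivalent to
   the com1 entry above (major 5, minor 0).  The (major << 8) | minor
   encoding follows the layout described earlier in this chapter. */
int make_com1_node(void)
{
    dev_t dev = (5 << 8) | 0;
    return mknod("/dev/com1", S_IFCHR | 0666, dev);
}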

Mapping Device Names to Driver Names


The following method can be used to map a device name to a driver:
• Use the ls -l command on the /dev directory to obtain the listing of
all the device names in the system. Determine the major and minor
numbers associated with the name of the device. For example, in the
above table, the device, com1, would be a character device with a major
device number of 5 and a minor device number of 0.
• Use the devices command to get a listing of all the devices in the
system. The value in the ID column corresponds to the major device
number obtained above. If there is more than one entry with the same ID,
the device type (character or block) eliminates any ambiguity involved.
After locating the entry for the driver in question, look in the third
column with the heading Driver. This is the driver ID. For example, in
Table 1-2 on page 4, com1 has the driver ID of 6.
• Use the drivers command to get a listing of all the drivers in the
system; with the driver ID obtained in the above step, obtain the name of
the driver. For com1, the driver name is serial, which is the driver
with ID 6 in Table 1-1 on page 4.



CHAPTER 2 Driver Structure

This chapter describes device driver structures and entry points.


Throughout this chapter, the LynxOS ram disk device driver is used to give
illustrative examples to the ideas described herein. In order to save space and for
the sake of making the explanation transparent, some simplifications have,
however, been made in this chapter. The reader should refer to the source code of
the ram disk driver for the complete version of the driver code.

Device Information Structure


Each device driver consists of entry points, other routines, and various data
structures. Entry points are functions within the driver that are standard methods of
calling the driver. The important data structures are as follows:
• The device information structure
• The statics structure
The device information structure provides the means for users to configure
parameters for each major device.

Device Information Definition


The device information definition contains the device information necessary to
install a major device. This includes information that may vary between instances
of a physical device if the hardware is replicated in the system. Examples of
information that may change include the following items:
• Physical addresses
• Interrupt vectors
• Available resources
The device information definition may also contain information about
configuration defaults. There is one device information definition for each
instantiation of the driver.
The implementation of the device information definition is done through a C
structure, which consolidates the information specific to a particular major device.
The structure is called <dev>info, where <dev> corresponds to the device
name.
For example, a ram disk device driver has a device information definition named
rdinfo, which is given below. The C structure definition is put in a file typically
named <dev>info.h, where <dev> is a prefix that corresponds to a particular
device. The header file may also contain definitions of device-specific constants as
well as various convenient macro definitions. The device information definition
files are located in /sys/dheaders.
In the example in this chapter, the header file rdinfo.h contains the following
data:
struct idinfo {
    int bytes;                    /* Number of bytes for the ramdisk */
    int physical_memory_address;  /* Physical start address of ramdisk */
};

/*
    The rdinfo file contains information for up to 16 ramdisks. A
    ramdisk is selected through the usual minor number convention for
    the filesystem. A minor number is 8 bits, with the lower 4 bits
    representing the device ID and the upper 4 bits representing the
    partition. Each ramdisk ID can be assigned a unique size.

    If a non-zero physical_memory_address value is assigned, the
    ramdisk ID will point to that physical address for size bytes,
    and size will be rounded down to the nearest unit of 512 bytes.

    If physical_memory_address is assigned the value of 0, size will
    be rounded down to the nearest unit of 4096 bytes. In the event
    size is < 4096, the allocation will fail with errno set to EINVAL.
*/

struct rdinfo {
    struct idinfo idinfo[16];
};


Device Information Declaration


The device information declaration is an instance of the device information
definition. The data is passed to the appropriate driver upon installation. Each
device information declaration must have a corresponding device information
definition.
The device information declaration can be implemented by declaring a C structure
in the device information definition file. The device information declaration is
named <dev>info<n>, where <dev> is the appropriate device name and <n>
is a device sequence number, starting at 0. The declaration is put in a C source file
named <dev>info.c, where <dev> stands for the device name. The device
information declaration files are located in /sys/devices.
The following code segment shows an example of a device information declaration
for the ram disk.
#include "rdinfo.h"

/*
NOTE:
If a romable image is booted the corresponding ramdisk will
automatically be assigned to ramdisk ID 0 for compatibility.

Format of entries:
bytes, physical_memory_location

Setting physical_memory_location to zero instructs the ramdisk driver
to allocate bytes worth of memory.

Setting physical_memory_location to a non-zero value instructs
the ramdisk driver to set up a virtual mapping starting at
physical_memory_location for bytes.

*/

struct rdinfo rdinfo0 = {


{
{ 0, 0 }, /* Ramdisk 0 */
{ 0, 0 }, /* Ramdisk 1 */
{ 0, 0 }, /* Ramdisk 2 */
{ 0, 0 }, /* Ramdisk 3 */
{ 0, 0 }, /* Ramdisk 4 */
{ 0, 0 }, /* Ramdisk 5 */
{ 0, 0 }, /* Ramdisk 6 */
{ 0, 0 }, /* Ramdisk 7 */
{ 0, 0 }, /* Ramdisk 8 */
{ 0, 0 }, /* Ramdisk 9 */
{ 0, 0 }, /* Ramdisk 10 */
{ 0, 0 }, /* Ramdisk 11 */
{ 0, 0 }, /* Ramdisk 12 */
{ 0, 0 }, /* Ramdisk 13 */
{ 0, 0 }, /* Ramdisk 14 */
{ 0, 0 } /* Ramdisk 15 */
}
};


struct rdinfo rdinfo1 = {


{
{ 0, 0 }, /* Ramdisk 0 */
{ 0, 0 }, /* Ramdisk 1 */
{ 0, 0 }, /* Ramdisk 2 */
{ 0, 0 }, /* Ramdisk 3 */
{ 0, 0 }, /* Ramdisk 4 */
{ 0, 0 }, /* Ramdisk 5 */
{ 0, 0 }, /* Ramdisk 6 */
{ 0, 0 }, /* Ramdisk 7 */
{ 0, 0 }, /* Ramdisk 8 */
{ 0, 0 }, /* Ramdisk 9 */
{ 0, 0 }, /* Ramdisk 10 */
{ 0, 0 }, /* Ramdisk 11 */
{ 0, 0 }, /* Ramdisk 12 */
{ 0, 0 }, /* Ramdisk 13 */
{ 0, 0 }, /* Ramdisk 14 */
{ 0, 0 }, /* Ramdisk 15 */
}
};

Statics Structure
The routines within the driver must operate on the same set of information for each
instance of the driver. In LynxOS, this information is known as the driver statics
structure. The install entry point returns a pointer to the statics structure, and the
remaining entry points are passed that pointer by the kernel. If
the statics structure is not declared globally, the driver routines can be used
with more than one statics structure simultaneously. The following is an example
of a statics structure:
struct rdstatics
{
    ksem_t openclose_sem;
    struct driveinfo driveinfo[MAXDEVS];
    ksem_t scratch_buffer_sem;   /* Protects the scratch buffer */
    void *scratch_buffer;        /* Points to a 4k page */
    int romkernel_supplied_fs;   /* rom kernel wants its memory to be
                                    setup as a filesystem */
};

The format of the statics structure for most drivers can be defined in any way the
user wishes. However, for some drivers, namely TTY and network drivers, there
are certain rules that must be followed.


Entry Points
A driver consists of a number of entry points. These routines provide a defined
interface with the rest of the kernel. The kernel calls the appropriate entry point
when a user application makes a system call. LynxOS dictates a predefined list of
entry points, though a driver does not need to have all the entry points. It is
recommended that empty routines for all unused entry points be provided.
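For instance, a device that needs no per-open initialization can provide an open
routine that simply returns OK. The sketch below is illustrative only: the mydev name
and statics type are hypothetical, and the parameter list follows the open entry point
calling convention used later in this chapter, with OK and struct file taken from the
LynxOS driver headers.

#include <file.h>

struct mydevstatics;    /* hypothetical driver statics structure */

/* Hypothetical empty entry point for a driver "mydev"; any unused
   entry point can follow the same pattern and simply return OK. */
int mydevopen(struct mydevstatics *statics, int devno, struct file *f)
{
    return OK;
}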

install Entry Point


Before a device can be used, a driver must allocate and initialize the necessary data
structures and initialize the hardware if it is present. This is the responsibility of the
install entry point in a LynxOS driver.

The install entry point is called multiple times, once for each of the driver’s
major devices configured in the system. The install entry point is passed the
address of a user-defined device information structure containing the user-
configured parameters for the major device. Each invocation of the install
entry point must receive a different address. Otherwise, different major numbers
will correspond to the same device information structure, which almost always
creates device/driver conflicts and represents a bug.

Character Drivers
As an example of the character driver install entry point, the rrdinstall()
routine from the LynxOS ram disk driver is used here:
/* ------------------------ Raw entry points -------------------------- */
void *rrdinstall(struct rdinfo *info)
{
struct rdstatics *statics;
struct super_block_32bitfs *super_block;
struct driveinfo *di;
int i;
int end;

if ((sizeof(struct rdstatics) > PAGESIZE)) {


/* Sanity checking */
return (void *)SYSERR;
}
if (!(statics = (struct rdstatics *)get1page())) {
goto errorout3;
}
bzero((char *)statics, PAGESIZE);
if (!(statics->scratch_buffer = get1page())) {
goto errorout2;
}
bzero((char *)statics->scratch_buffer, PAGESIZE);


for (i = 0; i < MAXDEVS; i++)


pi_init(&statics->driveinfo[i].access_sem);
/*
Check to see if this driver is installed as part
of a rom kernel. If romable_kernel is set, point the
ramdisk to the memory set aside for it.
This memory that is set aside is virtually
contiguous and must be accessed with different routines.
*/
statics->romkernel_supplied_fs = 0;
/* When a kdi ramdisk is present, ID 0 points to where it
is in memory
*/
if (!rk_rootfs_configured && ROMkp->lmexe.rootfs) {
rk_rootfs_configured = 1;
/* Only do this the first time! */
di = &statics->driveinfo[0];
di->dskmem = (unsigned char *)((ROMkp->rk_ramdisk));
if (di->dskmem) {
/*
A rom kernel is present, and has
memory allocated for a filesystem,
the filesystem will be setup on ramdisk ID 0.
If no memory is allocated for the rom
kernel filesystem then ramdisk ID 0 will
not be claimed.
*/
statics->romkernel_supplied_fs = 1;
super_block = (struct super_block_32bitfs *)
(di->dskmem + 512);
di->blocks = normal_long(super_block->
num_blocks);
di->bytes = di->blocks * 512;
di->flags |= ALLOCATED_MEMORY | DO_NOT_FREE |
VIRTUALLY_CONTIGUOUS;
di->rw = rw_vc;
}
}
/* Allocate the memory for the rest of the ramdisk IDs. */

for (i = (statics->romkernel_supplied_fs) ? 1 : 0; i < MAXDEVS;


i++) {
if (info->idinfo[i].bytes) {
if (info->idinfo[i].physical_memory_address) {
if (setup_physical_memory(&statics->
driveinfo[i], info->
idinfo[i].physical_memory_address,
info->idinfo[i].bytes) == SYSERR)
{

/*
Had an error so this
device will not be setup.
Keep processing entries.
*/
}
}
else { /* Just assign memory to the ID */
di = &statics->driveinfo[i];
if (get_memory(di, info->
idinfo[i].bytes & ~(PAGESIZE-1), 0) ==
SYSERR)
{


goto errorout1;
}
}
}

}
pi_init(&statics->openclose_sem);

return statics;
errorout1:
/* Free all the memory allocated from the ramdisk IDs. */
end = (statics->romkernel_supplied_fs) ? 1 : 0;
for (; i >= end; i--) {
free_memory(&statics->driveinfo[i]);
}
errorout2:
free1page((char *)statics);
errorout3:
seterr(ENOMEM);
return (void *)SYSERR;
}

In this example, an error in memory allocation for the driver statics structure leads
to setting the errno variable to ENOMEM and returning the SYSERR constant.
Otherwise, the fields of the driver statics structure are initialized in a manner
specific to the ram disk.
Commonly, a device driver will use hardware-specific internal functions to verify
the physical presence of the hardware device prior to any further initialization
steps, such as allocating memory for the driver statics structure. Refer to Chapter 7,
“Interrupt Handling,” for an example of a device driver implemented in this
way.

Block Drivers
The following code segment gives the general structure of the install entry
point of a block device driver.
char *devinstall(struct devinfo *info, struct statics *status)
{
    /* check for existence of device */
    ...
    /* do initialization of the devices */
    /* set interrupt vectors */
    return (char *) status;
}

The install entry point allocates memory for the statics structure, then returns a
pointer to the statics structure. If an error occurs during the install, the install
entry point must free all resources used and return SYSERR.
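As a hedged sketch only (the mydev names and placeholder structures are hypothetical,
and the sysbrk()/sysfree() allocation technique is described in Chapter 3, “Memory
Management”), an install entry point that allocates its statics structure dynamically
and reports failure might be organized as follows:

struct mydevinfo { int placeholder; };      /* hypothetical device information */
struct mydevstatics { int placeholder; };   /* hypothetical driver statics */

/* Hypothetical install entry point for a driver "mydev".  It allocates
   the statics structure with sysbrk() and returns SYSERR, with errno
   set to ENOMEM, if the allocation fails. */
char *mydevinstall(struct mydevinfo *info)
{
    struct mydevstatics *s;

    s = (struct mydevstatics *)sysbrk((long)sizeof(struct mydevstatics));
    if (!s) {
        seterr(ENOMEM);
        return (char *)SYSERR;
    }
    /* probe the hardware and set interrupt vectors here */
    return (char *)s;
}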


open Entry Point


The open entry point performs the initialization generic to a minor device. For
every minor device accessed by the application program, the open entry point of
the driver is accessed. Thus, if synchronization is required between minor devices
(of the same major device), the open entry point handles it. Every open system
call on a device managed by a driver results in the invocation of the open entry
point.
Note that the open entry point is not reentrant, though it is preemptible. Only one
user task can be executing the entry point code at a time for a particular device.
Therefore, synchronization between tasks is not necessary in this entry point.
The general structure of the open entry point is shown in the following section.

Character and Block Drivers


This chapter uses the rrdopen() function to illustrate the structure of a typical
open() entry point as well as to highlight some standard methods used for
implementing LynxOS device drivers.
#include <file.h>
#include <io.h>

int rrdopen(struct rdstatics *statics, int dev, struct file *f)


{
return _rdopen(statics, f, RAW_ENTRY_POINT);
}

The LynxOS ram disk driver does not itself make use of the major and minor device
numbers in this routine; however, the basic framework is as follows. The second
parameter (dev in this example) encodes the major and minor device numbers. To
obtain the major device number, use the macro major(); for the minor device
number, use the macro minor().
The statics parameter is the address of the statics structure returned by the
install entry point and passed to the device driver entry point by the operating
system kernel. The parameter f is the file structure defined in file.h.
The rrdopen() routine given in the above example calls an internal function,
_rdopen(), which provides the actual implementation of the open functionality.
Refer to “open Routine” on page 81 for the description of the _rdopen() routine.
The open entry point returns either OK or SYSERR, depending on whether or not
the device was successfully opened.


close Entry Point


The close entry point is invoked only on the “last close” of a device. This means
that if multiple tasks have opened the same device node, the close entry point is
invoked only when the last of these tasks closes it. Memory allocated in the
open routine is typically deallocated in the close entry point. As with the open
entry point, the close entry point is not reentrant.

Character Drivers
#include <file.h>
#include <io.h>

int rrdclose(struct rdstatics *statics, struct file *f)


{
return _rdclose(statics, f->dev, RAW_ENTRY_POINT);
}

...

int _rdclose(struct rdstatics *statics, int dev, int raw)


{
struct driveinfo *di;
int partition;

di = &statics->driveinfo[dev & ID_MASK];


partition = (dev & PARTITION_MASK) >> 4;
swait(&statics->openclose_sem, SEM_SIGIGNORE);
swait(&di->access_sem, SEM_SIGIGNORE);
di->partitions[partition].open &= raw ? ~1 : ~2;
if (di->flags & FREE_MEMORY_ON_FINAL_CLOSE) {
ssignal(&di->access_sem);
free_memory(di);
}
else
ssignal(&di->access_sem);
ssignal(&statics->openclose_sem);
return OK;
}

The close() entry point in the LynxOS ram disk driver calls the _rdclose()
function.
_rdclose() conditionally deallocates memory resources. For an example of
more sophisticated logic implemented in the close() entry point of a LynxOS
device driver, refer to Chapter 7, “Interrupt Handling.”
The close entry point returns either OK or SYSERR, depending on whether or
not the device was successfully closed.


Block Drivers
#include <file.h>
#include <io.h>

int devclose(struct statics *s, int devno)


{
/* perform any deallocation done previously in open */
return (OK);
}

read Entry Point


The read entry point copies data from the device into a buffer of the calling task.
The structure of the read entry point in the LynxOS ram disk device driver
appears as follows:

Character Drivers Only


int rrdread(struct rdstatics *statics, struct file *f,
char *buff, int count)
{
return rdrw(statics, f, buff, count, B_READ);
}

...

int rdrw(struct rdstatics *statics, struct file *f, char *buff,
         int count, int iotype)
{
    struct driveinfo *di;
    int partition;

    partition = (f->dev & PARTITION_MASK) >> 4;
    di = &statics->driveinfo[f->dev & ID_MASK];
    /*
       f->position and count will be no more than 0x7fffffff each
       if this check fails to find an error. When added below
       they cannot overflow an unsigned int.
    */
    if (f->position > di->partitions[partition].bytes)
    {
        seterr(EINVAL); /* Out of range request */
        return SYSERR;
    }
    if (f->position == di->bytes) {
        if (iotype == B_READ)
            return 0; /* at EOF, return EOF POSIX */
        else {
            seterr(ENOSPC); /* at EOF, and trying to write */
            return SYSERR; /* return ENOSPC, POSIX */
        }
    }
    /*
       Unsigned int overflow is not possible due to checking above.
    */
    if (f->position + count > di->partitions[partition].bytes)
    {
        count = di->partitions[partition].bytes - f->position;
    }
    di->rw(di, buff, (int)f->position, count, iotype);
    /*
       rw_vc
       rw_vd
    */
    f->position += count; /* update position */
    return count;
}

The statics parameter is the address of the statics structure passed to the read()
entry point by the operating system kernel. The f parameter is the file structure.
The parameters buff and count refer to the address and size, in bytes, of the
buffer in which the data should be placed. This entry point should return the
number of bytes actually read or SYSERR.
The rrdread() routine given in the above example calls an internal function
rdrw(), the latter providing the actual implementation of the read functionality.
Another example with a more complicated read() entry point can be found in
Chapter 7, “Interrupt Handling.”

write Entry Point


The write entry point copies data from a buffer in the calling task to the device. The
code snippet below is a fragment of the write entry point in the LynxOS ram disk
device driver. It is given here to demonstrate the general structure of a write()
entry point in a LynxOS character device driver.

Character Drivers Only


int rrdwrite(struct rdstatics *statics, struct file *f,
char *buff, int count)
{
return rdrw(statics, f, buff, count, !B_READ);
}

...

int rdrw(struct rdstatics *statics, struct file *f, char *buff,
         int count, int iotype)
{
    struct driveinfo *di;
    int partition;

    partition = (f->dev & PARTITION_MASK) >> 4;
    di = &statics->driveinfo[f->dev & ID_MASK];
    /*
       f->position and count will be no more than 0x7fffffff each
       if this check fails to find an error. When added below
       they cannot overflow an unsigned int.
    */
    if (f->position > di->partitions[partition].bytes)
    {
        seterr(EINVAL); /* Out of range request */
        return SYSERR;
    }
    if (f->position == di->bytes) {
        if (iotype == B_READ)
            return 0; /* at EOF, return EOF POSIX */
        else {
            seterr(ENOSPC); /* at EOF, and trying to write */
            return SYSERR; /* return ENOSPC, POSIX */
        }
    }
    /*
       Unsigned int overflow is not possible due to checking above.
    */
    if (f->position + count > di->partitions[partition].bytes)
    {
        count = di->partitions[partition].bytes - f->position;
    }
    di->rw(di, buff, (int)f->position, count, iotype);
    /*
       rw_vc
       rw_vd
    */
    f->position += count; /* update position */
    return count;
}

The statics parameter is the address of the statics structure passed by the
kernel. The f parameter is the file structure. The parameters buff and count
refer to the address and size in bytes of the buffer containing the data to be written
to the device. This entry point should return the number of bytes actually written or
SYSERR.
The rrdwrite() routine given in the above example calls an internal function
rdrw(), the latter providing the actual implementation of the write functionality.

select Entry Point


The LynxOS ram disk device driver does not implement the select entry point,
so an abstract entry point is shown here instead of an example from a specific
device driver.
The select entry point supports I/O polling or multiplexing. The select entry
point structure looks like this:

Character Drivers
int select (struct statics *s, struct file *f, int which, struct sel *ffs)
{
switch (which)
{
case SREAD:
ffs->iosem = &s->n_data;
ffs->sel_sem = &s->rsel_sem;
break;
case SWRITE:
ffs->iosem = &s->n_spacefree;
ffs->sel_sem = &s->wsel_sem;
break;
case SEXCEPT:
ffs->iosem = &s->error;
ffs->sel_sem = &s->esel_sem;
break;
}
return (OK);
}

The which field is either SREAD, SWRITE, or SEXCEPT, indicating that the
select entry point is monitoring a read, write, or exception condition. The
select entry point returns OK or SYSERR.

In addition to the select entry point code, a number of fields in the statics
structure are required to support the select system call:
struct statics
{
...
ksem_t *rsel_sem; /* sem pointer for select read */
ksem_t *wsel_sem; /* sem pointer for select write */
ksem_t *esel_sem; /* sem pointer for select exception */
int n_spacefree; /* space available for write */
int n_data; /* data available for read */
int error; /* error condition */
};

The iosem field in the sel structure is a pointer to a flag (the name iosem is
misleading; it is not a semaphore) that indicates whether the condition being polled
by the user task is true or not. The sel_sem field is a pointer to a semaphore that
the driver signals at the appropriate time (see below). The value of the semaphore
itself is managed by the kernel and should never be modified by the driver. A
driver must always set the iosem and sel_sem fields in the select entry
point.
A driver that supports select must also test and signal the select semaphores at the
appropriate points in the driver, usually the interrupt handler or kernel thread. This
should be done when data becomes available for reading, when space is available
for writing, or when an error condition is detected. Access to the semaphore
pointer fields must be protected with a disable()/restore() region (refer to Chapter
4, "Synchronization") to guard against the select system call changing those fields
in the meantime:
/* data input */
s->n_data++;

disable(ps);
if (s->rsel_sem)
ssignal (s->rsel_sem);
restore(ps);

/* data output */

s->n_spacefree++;
disable(ps);
if (s->wsel_sem)
ssignal (s->wsel_sem);
restore(ps);

/* errors, exceptions */

if (error_found)
{
s->error++;
disable(ps);
if (s->esel_sem)
ssignal (s->esel_sem);
restore(ps);
}

ioctl Entry Point


The ioctl entry point is called when the ioctl system call is invoked for a
particular device. This entry point is used to set certain parameters in the device or
obtain information about the state of the device.

Character Drivers
The following example is a shortened version of the rrdioctl() function,
which implements the ioctl() entry point in the LynxOS ram disk device
driver, provided here to show what an ioctl() entry point looks like:
int rrdioctl(struct rdstatics *statics, struct file *f, int function,
long *data)
{
return rdioctl(statics, f->dev, function, data);
}

...

int rdioctl(struct rdstatics *statics, int dev_num, int function,
            long *data)
{
    struct driveinfo *di;
    struct disk_geometry *dg;
    struct rdinformation *ramdisk_info;
    struct rw_absolute_block *rwab;
    struct memory_pointer *mp;
    int i;

    di = &statics->driveinfo[dev_num & ID_MASK];

    switch(function) {
    /* return # of 512 byte blocks for device */
    case BLCK_CAPACITY:
        ...
        break;
    case BLCK_SIZE:
        ...
        break;
    case DISK_GEOMETRY: /* return ramdisk geometry data */
        ...
        break;
    /* Assign memory to a ramdisk ID */
    case ALLOCATE_MEMORY:
        ...
        break;
    case FREE_MEMORY: /* Take memory from a ramdisk ID */
        ...
        break;
    /* Return contents of the statics structure */
    case INFORMATION:
        ...
        break;
    case BLCK_READ: /* 64 bit access routines for mkpart */
    /* These will go away once 64 bit fs is done */
    case BLCK_WRITE:
        ...
        break;
    /* The kernel uses this to obtain the physical
       address of the resident text in a kdi or file
       system image.
    */
    /* Translate offset to physical address */
    case BLCK_PHYS_ADDR:
        ...
        break;
    default:
        pseterr(EINVAL);
        return SYSERR;
    }
    return OK;
}

Block Drivers
Presented below is a skeleton of the ioctl() entry point for an abstract block
device driver in LynxOS.
int devioctl(
    struct statics *s,
    int devno,
    int command,
    char *arg
)
{
    /* depending on the command copy relevant
       information to or from the arg structure */
}

The driver defines the meaning of command and arg except for FIOPRIO and
FIOASYNC, which are predefined and used by LynxOS to communicate with the
drivers. If the arg field is to be used as a memory pointer, it should first be
checked for validity with either rbounds or wbounds. The RS232_ioctl()
function, for instance, does this if the RS232_SET_PAR_ENABL command is
issued by the user space program making the ioctl() system call.
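As an illustration, an ioctl command that copies status information back to a
user buffer might validate arg before using it. The following fragment is only
a sketch: the EXDEV_GET_STATUS command and the status field are hypothetical,
and it assumes the common convention that wbounds() returns the number of
writable bytes at the given user address:

case EXDEV_GET_STATUS:
    if (wbounds((long)arg) < sizeof(struct exdev_status)) {
        pseterr(EFAULT);   /* user buffer is not writable or is too small */
        return SYSERR;
    }
    /* user memory is directly addressable from the driver */
    *(struct exdev_status *)arg = s->status;
    break;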
The kernel uses FIOPRIO to signal the change of a task priority to the driver that
is doing priority tracking. FIOASYNC is invoked when a task invokes the
fcntl() system call on an open file, setting or changing the value of the
FNDELAY or FASYNC flag. The kernel might change the priority of an I/O task in
the case of Priority Inheritance to elevate the priority of a task that has
locked a resource that another higher priority task is blocked on (see “Kernel
Mutexes (Priority Inheritance Semaphores)” on page 43 in Chapter 4).
The ioctl() entry point should return OK in the case of successful ioctl
operation or SYSERR if either an unknown ioctl command was issued or if
some other error condition was met during the ioctl.

uninstall Entry Point


The uninstall entry point is called when a major device is uninstalled
dynamically from the system (via the devinstall command or
cdv_uninstall and bdv_uninstall system calls). It is not called by a
normal reboot of the system. Interrupt vectors set in the install routine and
dynamically-allocated data are freed in the uninstall entry point. The entry
point should also perform any necessary actions to put the hardware device into an
inactive state. Once the interrupt handler is detached, any further interrupts from
the device would cause the system to crash. The format of the uninstall entry
point is shown below.

Character Drivers
The LynxOS ram disk device driver deallocates memory resources and returns OK
from the rrduninstall() function, which implements the uninstall entry point in
this particular driver:
int rrduninstall(struct rdstatics *statics)
{
    int i;

    for (i = 0; i < MAXDEVS; i++) {
        free_memory(&statics->driveinfo[i]);
    }
    free1page(statics);
    return OK;
}

Block Drivers
This is the general structure of the uninstall entry point of a block device
driver:
void devuninstall(struct devinfo *info)
{
...
/* clear interrupt vectors */
...
...
}

As a rule, if the install entry point has allocated some system resources (such
as memory for the driver statics structure), these resources are deallocated in the
uninstall routine. The sysfree() call is used for this purpose. See the
Driver Service Calls man page for additional information. In addition, such
things as waiting for the hardware to become quiescent and optional hardware state
clean-up may be carried out by the uninstall entry point of a device driver. Refer to
the example in Chapter 7, “Interrupt Handling.”
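For a driver whose install entry point allocated its statics structure with
sysbrk(), the corresponding uninstall might look like the sketch below; the
name exuninstall is hypothetical, and the interrupt detach step is only
indicated by a comment because it depends on how the handler was attached:

int exuninstall(struct exstatics *statics)
{
    /* put the hardware into an inactive state and detach
       the interrupt handler that was attached in install */

    sysfree((char *)statics, sizeof(struct exstatics));
    return OK;
}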

strategy Entry Point


The strategy entry point is valid only for block devices. Instead of a read and
write entry point, block device drivers have the strategy entry point that
handles both read and write. The format of the strategy entry routine is as
follows:
#include <disk.h>

devstrategy(struct statics *status, struct buf_entry *bp)
{
    /* do read or write depending on whether a read
       or write operation is specified in the buf_entry */
}

The structure buf_entry is defined in disk.h and is reproduced below.


struct buf_entry
{
struct buf_entry *b_forw; /* double link list for owner device */
struct buf_entry *b_back;
struct buf_entry *av_forw; /* free list pointers*/
struct buf_entry *av_back; /*can be used by driver*/
char *memblk; /* data block */
int w_count;
int rw_count;
ksem_t b_rwsem; /* block read/write semaphore*/
int b_error; /* disk error */
ksem_t b_sem; /* block semaphore */
int b_status; /* block status */
int b_device; /* block device number */
long b_number; /* block number */
};

The size of the block request is returned by the macro BLKSIZE(b_device).
BLKSIZE takes as its argument the block device number, b_device.

Every block to be read or written has an associated buffer entry. The encoded
major and minor device numbers are stored in b_device. The memory source or
destination is denoted by memblk. The source or destination number of the block
on the device is b_number. The b_status is used to indicate the status and type
of transfer. If the value B_READ is set, the transfer is a read; otherwise, the
transfer is a write.
The transfer status is indicated by B_DONE or B_ERROR. The b_error field is
set to nonzero if the transfer fails. The b_rwsem semaphore is signaled once the
transfer completes. av_forw points to the next buffer or NULL if the end of the
list has been reached. w_count is available to the driver for its own purposes. For
example, LynxOS drivers use w_count for priority tracking.
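The following sketch shows how a strategy routine might use these fields. The
transfer helper do_transfer() is hypothetical and stands for whatever
device-specific code actually moves the data:

devstrategy(struct statics *s, struct buf_entry *bp)
{
    int read = (bp->b_status & B_READ) != 0;
    int size = BLKSIZE(bp->b_device);      /* size of the block request */

    if (do_transfer(s, bp->memblk, bp->b_number, size, read) != OK) {
        bp->b_error = EIO;                 /* record the failure */
        bp->b_status |= B_ERROR;
    } else {
        bp->b_status |= B_DONE;
    }
    ssignal(&bp->b_rwsem);                 /* transfer complete */
}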



CHAPTER 3 Memory Management

This chapter describes the LynxOS memory model, supported address types, how
to allocate memory, DMA transfers, memory locking, address translation, and how
to access user space from interrupt handlers and kernel threads.

LynxOS Virtual Memory Model


LynxOS uses a virtual memory architecture. All memory addresses generated by
the CPU are translated by the hardware Memory Management Unit (MMU). Each
user task has its own protected virtual address space that prevents tasks from
inadvertently (or maliciously) interfering with each other. The kernel, which
includes device drivers, and the currently executing user task exist within the same
virtual address space—the user task is mapped into the lower part, the kernel into
the upper part. Only the part of the virtual map occupied by the user task is
remapped during a context switch.
Applications cannot access the part of the address space occupied by the kernel and
its data structures. The constant OSBASE defines the upper limit of the user
accessible space. Addresses above this limit are accessible only by the kernel.
Kernel code, on the other hand, has access to the entire virtual address space. This
greatly facilitates the passing of data between drivers and user tasks. A device
driver, as part of the kernel, can read or write a user address as a direct memory
reference without the need to use special functions. However, this also means that
it is the user’s responsibility as a developer to restrict the access of his or her device
driver to only the necessary structures.

LynxOS Address Types


A LynxOS driver supports several different address types. These address types
include the following:
• User virtual
• Kernel virtual
• Kernel direct mapped
• Physical
In addition, the device driver may have to deal with device addresses such as I/O
port addresses, PCI addresses, VME addresses, and so on.

User Virtual
These addresses are passed to the driver from a user application—typically
addresses of buffers or data structures used to transfer data or information between
the application and a device or the driver. They are valid only when the user task
that passed the address is the current task.

Kernel Virtual
These are the addresses of kernel functions and variables that can be used by a
driver. The mapping of kernel virtual addresses is permanent so they are valid from
anywhere within a driver. They are not accessible from an application.

Kernel Direct Mapped


This is a region of the kernel virtual space that is directly mapped to physical
memory (that is, contiguous virtual addresses map to contiguous physical
addresses). The base of this region is defined by the constant PHYSBASE, which
maps to the start of RAM. The size of the region is platform-dependent.

NOTE: Memory that exists on devices or nonsystem buses (for example, PCI and
VME) is not accessible via PHYSBASE.

Physical
Physical memory is the nontranslated address for memory. A driver will need to set
up pointers to physical memory for DMA controllers because they bypass the
MMU.

Allocating Memory
The following techniques can be used to allocate and free memory:
• sysbrk(), sysfree()
• alloc_cmem(), free_cmem()
• get1page(), free1page()
• get_safedma_page(), free_safedma_page()
• alloc_safedma_contig(), free_safedma_contig()
Each of these techniques allocates memory; however, each has its own
advantages and disadvantages.
The memory allocated using any of these service calls is from the current
partition’s kernel heap pool. The freed memory goes into the current partition’s
kernel heap pool.
Therefore, special care should be taken to invoke these calls from the appropriate
partition contexts.

The sysbrk(), sysfree() Technique


This technique has the following advantages and disadvantages:
• It is not guaranteed to be physically contiguous.
• It is virtually contiguous.
• It does not return memory to the global memory pool, but can be reused
by future sysbrk().
• It returns a value that is aligned on an 8-byte boundary.
• It fails requests only when no memory is available.
#include <sysbrk.h>

...

void *s;
if (!(s = sysbrk(100)))
    return SYSERR;
sysfree(s, 100);

The alloc_cmem(), free_cmem() Technique


This technique has the following advantages and disadvantages:
• It is physically contiguous.
• It is virtually contiguous.
• It can return memory to the global memory pool.
• It returns a value that is aligned on a 4k boundary.
• It fails requests if the physical contiguous memory hole is not large
enough to satisfy the requests.
#include <getmem.h>

...
void *s;
if (!(s = alloc_cmem(100)))
return SYSERR;
free_cmem(s, 100);

The get1page(), free1page() Technique


This technique has the following advantages and disadvantages:
• It is physically and virtually contiguous for 4k.
• It can return memory to the global memory pool.
• It fails requests when memory is unavailable.
• It returns a value that is aligned on a 4k boundary.
• Its memory usage is more complicated than the other two methods.
#include <getmem.h>

...
void *s;
if (!(s = get1page()))
    return SYSERR;
free1page(s);

The get_safedma_page(), free_safedma_page() Technique


This technique provides the following features:
• It allocates a single page within the first 16MB of physical memory
(0x00000000-0x00FFFFFF).
• It is based on reserving some pages in the first 16MB of memory at
system startup and then managing them as a safeDMA memory pool.
• The number of pages for this safedma memory pool has to be specified in
uparam.h (macro ISA_SAFE_DMA_PAGES).
• The memory obtained is physically and virtually contiguous.
• It can return memory to the safedma memory pool.
• It fails requests when memory is unavailable.
• It returns a value that is aligned on a 4k boundary.
#include <getmem.h>

...
char *s;
if (!(s = get_safedma_page()))
    return SYSERR;
free_safedma_page(s);

The alloc_safedma_contig(), free_safedma_contig() Technique
This technique provides the following features:
• It allocates multiple pages within the first 16MB of physical memory
(0x00000000-0x00FFFFFF).
• It is based on reserving some pages in the first 16MB of memory at
system startup and then managing them as a safeDMA memory pool.
• The number of pages for this safedma memory pool has to be specified in
uparam.h (macro ISA_SAFE_DMA_PAGES).
• The memory obtained is physically and virtually contiguous.
• It can return memory to the safedma memory pool.
• It fails requests when memory is unavailable.
• It fails requests if the physical contiguous memory hole is not large
enough to satisfy the requests.

• It returns a value that is aligned on a 4k boundary.


#include <getmem.h>
...
char *s;
if (!(s = alloc_safedma_contig(100)))
return SYSERR;
free_safedma_contig(s, 100);

If the amount of memory freed does not correspond to the amount of memory
allocated, a kernel crash will be the most likely result.
The memory allocated by a driver that uses the sysbrk() function is
permanently claimed by the OS. The various memory routines get memory from
the global memory pool. Some routines don’t give the memory back when the
corresponding free function is called. The kernel sysbrk() is such a routine.
Two other routines, alloc_cmem() and get1page() give back the memory to
the global memory pool when the free_cmem() and free1page() are called,
respectively. Most of the time, the memory usage by the install entry is
negligible. There are, however, occasions when vast quantities of memory are
needed (ram disk memory) and claimed with sysbrk(). When the driver is
uninstalled or closed, the memory will not appear in the global memory pool. The
memory will be available to the kernel or another driver that calls sysbrk() at a
later time. The sysbrk() routine looks through the OS memory pool for
memory. If it cannot find enough memory to fill the request, it will transfer
memory from the global memory pool to the OS memory pool.

DMA Transfers
The fact that LynxOS uses the CPU’s MMU makes the programming of DMA
transfers slightly more complicated. All addresses generated by the CPU are
treated as virtual and are converted to physical addresses by the MMU. Memory
that is contiguous in virtual space can be mapped to noncontiguous physical pages.
DMA devices, however, typically work with physical addresses. Therefore, a
driver must convert virtual addresses to their corresponding physical addresses
before passing them to a DMA controller.
The conversion is more complicated for PCI DMA devices, which work with so-
called “bus” addresses. Bus addresses are very much like CPU physical addresses,
but represent a PCI device’s view of system memory. How PCI bus addresses map
to CPU physical addresses depends on the PCI hardware. Luckily, for most
platforms a bus address of data in RAM is equal to its physical address. An
example of a system design in which this is not the case is the PowerPC Reference
Platform (PReP).

Address Translation
A virtual address can be converted to its physical addresses using get_phys().
#include <mem_map.h>
physaddr = (get_phys (virtaddr) - PHYSBASE + drambase);

The address returned is, in fact, a kernel direct mapped address. To convert this
address to its physical address, the constant PHYSBASE must be subtracted and
drambase added. The variable drambase contains the physical address of the
start of RAM. On most platforms, this will be 0, but for maximum portability, it
should be used in the calculation. A driver should never modify the value of
drambase!

The returned address is valid only up to the next page boundary as contiguous
virtual addresses do not necessarily map to contiguous physical addresses. To
convert virtual address ranges that cross page boundaries, mmchain should be
used.
#include <mem.h>
#include <mem_map.h>

struct dmachain array[NCHAIN];

mmchain (array, virtaddr, nbytes);


for (i = 0; i < nsegments; i++) {
physaddr = array[i].address - PHYSBASE + drambase;
length = array[i].count;
...
}

The virtual memory region is defined by its address (virtaddr) and size
(nbytes). mmchain() fills in the fields in the dmachain array with the
physical addresses and sizes of the physical memory segments that make up the
specified virtual address range. The first and last segments in the list may be less
than a whole page in size, as illustrated in Figure 3-1.


Figure 3-1: Mapping Virtual to Physical Memory

It is the responsibility of the driver to ensure that the dmachain array is large
enough to hold all the segments that make up the specified address range. The
maximum number of segments can be calculated as follows:
nsegments = (nbytes + PAGESIZE - 1) / PAGESIZE + 1;

mmchain() returns the actual number of segments.


Both get_phys() and mmchain() translate addresses in the context of the
current process. In the case where no physical memory is mapped to a virtual
address, both get_phys() and mmchain() set the converted address to 0.

mem_lock() is, therefore, normally used before the address conversion routine to
ensure that there is a valid mapping. To translate addresses of a different process,
mmchainjob() can be used. This takes an additional argument, job, which
identifies the pid of the target process.
mmchainjob (job, array, virtaddr, nbytes);
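Whichever routine is used, the result of the conversion should be checked
before it is handed to the hardware. A defensive use of get_phys() might look
like this sketch (the choice of error handling is an assumption):

paddr = get_phys(virtaddr);
if (paddr == 0) {
    /* no physical memory is mapped at this virtual address */
    pseterr(EFAULT);
    return SYSERR;
}
physaddr = paddr - PHYSBASE + drambase;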

To help with address translation for the PCI bus, LynxOS BSPs provide a
vtobus() function that takes a virtual address in RAM or PCI memory and
returns the corresponding PCI bus address for the system’s default PCI host bridge:
#include <bsp.h>
busaddr = vtobus(virtaddr);



CHAPTER 4 Synchronization

This chapter describes synchronization issues and the synchronization methods
that can be used when writing a device driver.

Introduction
Synchronization is probably the most important aspect of a device driver. It is also
the most intricate and, therefore, the most difficult to master. It requires careful
thinking and attention, and is often the cause of the most frustrating bugs which
take hours or days to locate but only seconds to fix. This aspect of a driver can
make or break not only the driver but the whole system. Poorly designed
synchronization, while it may not crash the system, can nevertheless have a
disastrous effect on the system’s real-time performance.
For users new to writing device drivers or unfamiliar with preemptive kernels, the
next sections review problems that give rise to the need for synchronization. This
discussion is not specific to LynxOS, so if already familiar with the subject, see
“Synchronization Mechanisms” on page 39.

Event Synchronization
The most frequent type of synchronization found in device drivers is the need to wait
for a device to complete a data transfer or other operation. Another typical event is
waiting for a resource, such as a buffer, to become free.
The simplest technique is to use polling, but polling can waste CPU time
unnecessarily. However, in some cases, it is unavoidable.
Polling can be used in the following cases:
• Short waits of a few microseconds, where the overhead of setting up an
interrupt is not worthwhile.
• Hardware does not have an interrupt capability, forcing the use of polling.
The disadvantages of polling are as follows:
• CPU time is wasted.
• When the CPU is switched to a higher priority process, the event the
polling was waiting for could occur and be missed.
The use of interrupts overcomes these drawbacks, but a driver needs a way to wait
for an interrupt to occur without using CPU cycles. Kernel semaphores are the
mechanism used to allow a driver to be blocked and to be woken up when the event
occurs, as illustrated in the Figure 4-1.

Figure 4-1: Event Synchronization
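In practice, the driver entry point starts the I/O and then waits on a kernel
semaphore, and the interrupt handler signals that semaphore. The fragment below
is only a sketch of this pattern (swait(), ssignal(), and the flag argument are
described later in this chapter; the statics fields and the start_transfer()
and read_status() helpers are hypothetical):

/* in the read or write entry point */
start_transfer(s);                    /* start the device operation */
swait(&s->io_done, SEM_SIGIGNORE);    /* block until the interrupt arrives */

/* in the interrupt handler */
s->last_status = read_status(s);      /* capture the device status */
ssignal(&s->io_done);                 /* wake up the waiting task */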

Critical Code Regions


The LynxOS kernel, being designed for real-time, is completely preemptive,
including the device drivers. This means that one task can preempt another, even if
the preempted task happens to be in the middle of a system call or well into a
driver.
Consider the following code, which is an attempt to provide exclusive access to a
device:
1 if (device_free == TRUE) {
2     device_free = FALSE;
3     ... /* use device */
4     device_free = TRUE;
5 }

Imagine a system where two tasks are using the device. The first task executes line
1 of the above code and finds the device to be free. But, before being able to
execute line 2, it is preempted by task 2, which then executes the same code. The
second task also finds the device free (the first task has not executed line 2 yet) and
so proceeds to use it. While task 2 is using the device, it is preempted by task 1
(maybe because its time ran out). Task 1 continues where it left off and proceeds to
access the device as well. So, both tasks are accessing the device concurrently, just
the situation that the code was supposed to prevent.
This type of situation is known as a race condition (that is, the result of the
computation depends on the timing of task execution). A race condition is a bug.
Concurrent accesses to a device may well lead to a system crash. The example also
serves to illustrate the insidious nature of this type of problem. The probability that
a preemption occurs between lines 1 and 2 is obviously quite low. So, the problem
will occur very infrequently, and can be extremely difficult to track down or
reproduce systematically.
Code that accesses shared resources, such as a device, is known as a critical code
region. The mechanism used to synchronize access to the code is often referred to
as a mutex as it provides mutual exclusion.
In order to avoid race conditions, a driver must protect the critical regions of code.
There are a number of mechanisms for protecting critical code regions that will be
discussed in the following sections.
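For example, the device_free flag above can be replaced by a counting kernel
semaphore initialized to 1 (kernel semaphores are described later in this
chapter); the test and the update then collapse into a single, safe acquire and
release. A sketch:

swait(&s->device_sem, SEM_SIGIGNORE);   /* blocks if the device is in use */
... /* use device */
ssignal(&s->device_sem);                /* mark the device free again */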
When considering synchronization, the interrupt handler must also be taken into
account. It, too, can preempt the current task at any moment and may access static
data within the driver. But, because an interrupt handler is not scheduled in the
same way as other tasks, the mechanisms used to synchronize with it are usually
different from those used to synchronize between user tasks.
Figure 4-2 summarizes the problems of synchronizing critical code regions in a
multitasking kernel.


Figure 4-2: Problems in Synchronizing Critical Code Regions in a Multitasking Kernel

Resource Pool Management


A third type of synchronization involves the dynamic management of a shared pool
of resources, such as a set of data buffers. A counting semaphore is used to
atomically remove items from and return items to the central pool. The remove
operation may block if no resources are currently available and the return operation
wakes up any tasks waiting for resources.
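For instance, a pool of NBUFS buffers can be guarded by a counting semaphore
initialized to NBUFS. In the sketch below, the get_buf() and put_buf() list
operations are hypothetical and must themselves be protected by one of the
mutual exclusion mechanisms described in this chapter:

ksem_init(&s->pool_sem, NBUFS);        /* one count per free buffer */

/* remove a buffer, blocking while none are free */
swait(&s->pool_sem, SEM_SIGIGNORE);
buf = get_buf(s);

/* return a buffer, waking up one waiting task if any */
put_buf(s, buf);
ssignal(&s->pool_sem);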

Memory Access Ordering


Synchronization of shared resource access requires memory access instructions to
execute and be visible to all CPUs in the system in a particular order. With modern
compilers and CPUs, the instructions may not be executed in the program order,
that is as they appear in the source code, because both compilers and CPUs reorder
accesses internally during compilation/execution to achieve maximum
performance. LynxOS provides a set of calls to enforce the desired memory access
ordering, called memory barriers.

Synchronization Mechanisms
There are a number of synchronization mechanisms that can be used in a
LynxOS device driver:
• Atomic operations
• Kernel semaphores
• Kernel spin locks
• Disabling interrupts on the calling CPU
• Disabling preemption on the calling CPU
• The Big Kernel Lock (only in a multiprocessing environment)
Broadly speaking, we can place the mechanisms on a scale of selectivity, which
refers to the level of impact the use of the mechanism has on other activity in the
system. The more selective a mechanism is, the better. Kernel semaphores are the
most selective and only affect tasks that are using the driver. There is no impact on
other nonrelated tasks or activities in the system.
In a single CPU case, disabling preemption is obviously less selective as it will
have an impact on all tasks running in the system, regardless of whether they are
using the driver. Higher priority tasks will be prevented from executing until
preemption is reenabled. However, the system will continue to handle interrupts.
Disabling interrupts is the least selective method. Both task preemption and
interrupt handling are disabled so the only activity that can occur in the system is
the execution of the driver that disabled interrupts.
SMP (Symmetric Multi-Processing) complicates the picture a bit. In a multi-
processing environment, disabling preemption or interrupts on the calling CPU
does not prevent tasks running on other CPUs from accessing the shared resource.
Therefore, these operations are usually combined with grabbing or releasing either
a spin lock or the Big Kernel Lock (the BKL).
Spin locks require preemption or interrupts to be disabled on the calling CPU while
the lock is being held, and hence come in two flavors: preemption disabling and
interrupt disabling. While the lock is being held by one CPU, the other CPUs,
however, may continue to execute other tasks as long as they don't attempt to grab
the same lock. Thus, in a single CPU case, spin locks are as selective as disabling
preemption/interrupts. In a multi-processor case, it does not make sense to compare
them, because disabling preemption/interrupts does not create critical code regions.
Strictly speaking, spin locks are less selective than kernel semaphores, because
preemption or interrupts are necessarily disabled in spin lock critical code regions.

In reality, however, kernel semaphores also employ interrupt disabling internally
for the duration of their API calls, and thus may very well be less preferable than
spin locks for short critical sections.
The Big Kernel Lock is another way to augment the disabling of
preemption/interrupts so that critical code regions can be created. It is hidden from
the user inside the legacy calls that disable interrupts and preemption, and is only
used, if the system has more than one CPU. The BKL is a special kind of spin lock,
which only exists in one instance per the entire system. Only one task may own the
BKL at any time. Therefore, grabbing the BKL in addition to disabling preemption
or interrupts prevents any tasks running on other CPUs from doing the same until it
is released. Since many other unrelated tasks in the system use the BKL and will be
affected if a device driver thread grabs it, this option is less selective and less
preferable than spin locks.
Atomic operations stand apart in this classification. They represent a hardware-
assisted uninterruptible operation on one memory location. On some platforms,
these operations may cause delays for other CPUs accessing RAM, either all of it
or just a certain range of addresses around the atomically accessed address. It
does not make much sense to count this impact against the selectivity, though,
because other synchronization mechanisms, such as spin locks and semaphores, are
likely to be implemented using atomic operations and inherit the impact.
Which mechanism to use depends on a number of factors. The different
synchronization mechanisms and the situations in which each should be used will
be discussed in detail below. First, let’s describe the mechanisms themselves.

Atomic Operations
LynxOS provides a number of operations that can be atomically performed on an
integer value located in RAM. Atomically means the operation appears to all CPUs
and threads in the system as a single uninterruptible memory access, even if it
consists of multiple memory accesses, provided that only atomic operations are
used to access the memory location. Atomic operations are the fastest
synchronization method available, as they let the user avoid heavier
synchronization primitives such as semaphores and spin locks when it's only a
single integer value that needs access synchronization. The atomic operations API
is defined in the atomic_ops.h header file. Atomic integer operations are
available both to the kernel and user applications.
All atomic operations work on a variable of a platform-dependent type atomic_t
(called an atomic variable). The integer value contained in atomic_t has the
type atomic_value_t; it is at least 32 bits wide and signed. The atomic_t type
is intentionally defined so that variables of this type cannot take part in C language
integer expressions per se, only by means of the atomic operations API. Atomic
variables must only be accessed via the API, otherwise the atomicity of the API
calls is not guaranteed.
atomic_init(av, v) call initializes an atomic variable av and sets it to the
specified value v. All atomic variables must be initialized before they can be used.
This is the only atomic variable operation not guaranteed to be atomic. Another
way to initialize an atomic variable is to assign it an initializing value
ATOMIC_INITIALIZER(v) at the variable definition, where v is its initial value:

#include <atomic_ops.h>

atomic_t av1 = ATOMIC_INITIALIZER(0);
atomic_t av2;

atomic_init(av2, 0);

The operations on atomic variables are described in Table 4-1.

Table 4-1: The Operations on Atomic Variables

atomic_set(av, v)            Sets an atomic variable to the specified value.

atomic_get(av)               Returns the current value of an atomic variable.

atomic_add(av, v)            Adds the specified value to an atomic variable,
                             and returns its new value.

atomic_sub(av, v)            Subtracts the specified value from an atomic
                             variable, and returns its new value.

atomic_inc(av)               Increments the value of an atomic variable, and
                             returns its new value.

atomic_dec(av)               Decrements the value of an atomic variable, and
                             returns its new value.

atomic_xchg(av, v)           Sets the value of an atomic variable av to v, and
                             returns its old value.

atomic_cmpxchg(av, v1, v2)   Takes two values, v1 and v2. If the atomic
                             variable value is equal to v1, it is set to v2.
                             The old atomic variable value is returned in any
                             case.

atomic_or(av, v)             Does a bitwise OR of the value of an atomic
                             variable with the specified bit mask and stores
                             the result in the atomic variable. No value is
                             returned.

atomic_and(av, v)            Does a bitwise AND of the value of an atomic
                             variable with the specified bit mask and stores
                             the result in the atomic variable. No value is
                             returned.

atomic_xor(av, v)            Does a bitwise XOR of the value of an atomic
                             variable with the specified bit mask and stores
                             the result in the atomic variable. No value is
                             returned.

atomic_get_or(av, v)         Does a bitwise OR of the value of an atomic
                             variable with the specified bit mask and stores
                             the result in the atomic variable. The old value
                             is returned. This function is slower than
                             atomic_or() (which does not return the old value)
                             on some hardware.

atomic_get_and(av, v)        Does a bitwise AND of the value of an atomic
                             variable with the specified bit mask and stores
                             the result in the atomic variable. The old value
                             is returned. This function is slower than
                             atomic_and() (which does not return the old value)
                             on some hardware.

Atomic operations usually take a significantly longer time than their non-atomic
counterparts. Atomic variables must be located in regular cached memory. Do not
place them in uncached memory or above PHYSBASE. It is not recommended to
place multiple frequently accessed atomic variables into a memory region
corresponding to a single cache line, as this may cause performance degradation on
some architectures.
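A typical use is a statistics counter that the interrupt handler updates and an
entry point reads, with no additional locking. The following sketch uses only
the calls shown above; the counter name is arbitrary:

#include <atomic_ops.h>

atomic_t rx_packets = ATOMIC_INITIALIZER(0);

/* in the interrupt handler */
atomic_inc(rx_packets);

/* in an entry point, for example ioctl */
*(atomic_value_t *)data = atomic_get(rx_packets);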

It is worth noting that the atomic_cmpxchg() operation can be used to construct
virtually any atomic operation on an atomic variable, as follows:
atomic_value_t val = atomic_get(av), old;
while ((old = atomic_cmpxchg(av, val, any_operation(val))) != val)
    val = old;

However, this is likely to be slower than using an existing atomic operation API
call.

Kernel Semaphores

General Characteristics
These are the most versatile and widely used driver synchronization mechanisms.
A kernel semaphore is a variable of type ksem_t that is declared by the driver.
All semaphores must be visible in all contexts. This means that the memory for a
semaphore must not be allocated on the stack. The kernel semaphore API is
defined in the swait.h kernel header file.
Kernel semaphores come in two flavors: counting semaphores and kernel mutexes
(alternatively known as priority inheritance semaphores).

Counting Kernel Semaphores


Counting semaphores have an integer value associated with them and can be
initialized to a nonnegative value as follows:
#include <swait.h>
...
ksem_init (&sem, value);

A counting kernel semaphore value may not be greater than KSEM_CNT_MAX.


Another way of initializing counting semaphores that are global objects is using
the KSEM_INITIALIZER() macro:
ksem_t sem = KSEM_INITIALIZER(value);

A semaphore is acquired using the swait() function.


swait (&sem, flag);

Note that the address of the semaphore is passed to swait(). The flag
argument is explained below.

If the semaphore value is greater than zero, it is simply decremented and the task
continues. If the semaphore value is less than or equal to zero, the task blocks and
is put on the semaphore’s wait queue. Tasks on this queue are kept in priority order.
A semaphore is signaled using the ssignal() function, which like swait(),
takes the semaphore address as argument.
ssignal (&sem);

If there are tasks waiting on the semaphore’s queue, the highest priority task is
woken up. Otherwise, the semaphore value is incremented.
Counting kernel semaphores have state. The semaphore’s value remembers how
many times the semaphore has been waited on or signaled. This is important for
event synchronization. If an event occurs but there are no tasks waiting for that
event, the fact that the event occurred is not forgotten.
Counting kernel semaphores are not owned by a particular task. Any task can
signal a semaphore, not just the task that acquired it. This is necessary to allow
kernel semaphores to be used as an event synchronization mechanism but requires
care when the semaphore is used for mutual exclusion.
Semaphores are the slowest mutual exclusion mechanism available to the driver;
however, they incur the penalty only if blocking or waking is involved. If the
semaphore is free, the execution time for swait() is very fast. Similarly,
ssignal() incurs significant overhead only if there are tasks blocked on the
semaphore.
The flag argument to the swait() function allows a task to specify how
signals are handled while it is blocked on a semaphore. If the task does not block,
this argument is not used. There are three possibilities for flag, specified using
symbolic constants defined in kernel.h:
SEM_SIGIGNORE Signals have no effect on the blocked task. Any
signals sent to the task while it is waiting on the
semaphore remain pending and will be delivered at
some future time.
SEM_SIGRETRY Signals are delivered to the task. If the task’s signal
handler returns, the task automatically waits again
on the semaphore. Signal delivery is transparent to
the driver as the swait() function does not
indicate whether any signals were delivered.
SEM_SIGABORT If a signal is sent to the task while it is blocked on the
semaphore, the swait() is aborted. The task is
woken up, and swait() returns a nonzero value.
The signal remains pending.

Other Counting Kernel Semaphore Functions


There are a number of other functions used to manipulate counting kernel
semaphores.
ssignaln() Used to signal a semaphore n times. This is
equivalent to calling ssignal() n times.
sreset() Resets the semaphore value to 0 and wakes up all
tasks that are waiting on the semaphore.
scount() Returns the semaphore value. A negative count
indicates the number of tasks that are waiting for the
semaphore. This function is used rarely in a driver
but if the need arises, it should always be used rather
than using the value of the semaphore variable
directly. The latter technique would give erroneous
results because the value is not a simple count when
there are tasks blocked on the semaphore.
These functions may not be used with kernel mutexes. The use of these routines is
further illustrated in the examples discussed below.

Kernel Mutexes (Priority Inheritance Semaphores)


In a multitasking system that uses a fixed priority scheduler, a problem known as
priority inversion can occur. Consider a situation in which a task holds some
resource. This task is preempted by a higher priority task that requires access to the
same resource. The higher priority task must wait until the lower priority task
releases the resource. But the lower priority task may be prevented from executing
(and, therefore, from releasing the resource) by other tasks of intermediate priority.
One solution to this problem is to use priority inheritance whereby the priority of
the task holding the resource is temporarily raised to the priority of the highest
priority task waiting for that resource until it releases the resource. LynxOS kernel
mutexes support priority inheritance. In order to function with priority inheritance,
the semaphore must be initialized by the kernel function kmutex_init():
#include <swait.h>
...
kmutex_init (&mutex);

NOTE: Legacy code may use the function pi_init() instead of kmutex_init().
These two functions are identical.

Another way of initializing mutexes that are global objects is using the
KMUTEX_INITIALIZER macro:
ksem_t mutex = KMUTEX_INITIALIZER;

Kernel mutexes have two states: locked (owned) and unlocked. The initial state is
unlocked. A semaphore is acquired using the swait() function.
swait (&mutex, flag);

swait() first checks whether the mutex is locked or unlocked. If it is unlocked,


the function marks it locked, marks the calling task the owner of the mutex and
returns. If the mutex is locked, the calling task is blocked until the mutex is
unlocked by a ssignal() call. The current mutex owner inherits the priority of
the highest priority task waiting on the mutex.
The flag argument has the same meaning as for counting semaphores. Signals,
however, should normally be ignored when using a kernel mutex. Compared to
waiting for an I/O device, a critical code region is relatively short so there is little
need to be able to interrupt a task that is waiting on the mutex. Unlike an event,
which is never guaranteed to occur, execution of a critical code region cannot
“fail.” The task holding the mutex is bound, sooner or later, to get to the point at
which the mutex is released.
A mutex is unlocked using the ssignal() function:
ssignal (&mutex);

If there are tasks waiting on the mutex’s queue, the highest priority task is woken
up and it acquires the mutex. Only the mutex owner may unlock the mutex. Using
ssignal() with a kernel mutex is, therefore, not allowed in interrupt and kernel
timeout handlers, which are executed in the context of an arbitrary thread and are
not allowed to block.
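Putting it together, a critical code region protected by a kernel mutex in a
driver entry point might look like the following sketch (dev_mutex is assumed
to be a field in the driver statics structure, initialized with kmutex_init()
in the install routine):

swait(&s->dev_mutex, SEM_SIGIGNORE);    /* lock; blocks if another task owns it */
... /* critical region: update shared device state */
ssignal(&s->dev_mutex);                 /* unlock */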

Kernel Spin Locks


Spin locks are a kernel synchronization mechanism used to create critical code
regions, and are similar to kernel mutexes. Spin locks are, in general, faster than
kernel mutexes, but are only intended to be used for very short critical regions. The
kernel spin lock API is defined in the spinlock.h kernel header file. At most one
thread may own a particular spin lock at any time. If a thread attempts to acquire a
free lock (one not owned by any thread), it becomes the owner and returns
immediately. If a thread attempts to acquire a lock, which is currently owned by
another thread, it busy loops (spins, hence the name) until the lock is released.
When a spin lock is released by a thread, the thread stops being the lock owner, and
one of the spinning threads, if any, becomes the new owner, while the others
continue to spin. Note that tasks spinning on spin locks consume CPU time. In this
respect, spin locks differ considerably from kernel mutexes, which put waiting
tasks to sleep.
Spin locks do not transfer priority between threads. In LynxOS, kernel spin locks
are non-recursive, that is, a thread may not relock a lock it already owns. If it
attempts to, it will spin forever. This is intentional, as spin locks are less tolerant of
buggy code this way.
LynxOS kernel spin locks come in two flavors: the preemption-disabling spin
locks and the interrupt-disabling spin locks. The preemption-disabling spin locks
are also referred to as scheduling-disabling spin locks in LynxOS.
The scheduling-disabling spin locks disable preemption on the calling CPU at the
time the lock is acquired for the duration of the lock being held; this is necessary to
avoid potential deadlocks. Otherwise, the owner thread could be preempted by
another, higher priority, thread, which could attempt to lock the same lock and spin
forever, because the lock owner thread wouldn't get any CPU time to release it.
Due to the same reason, a task owning a spin lock must never voluntarily
reschedule.
The scheduling-disabling spin locks are not safe to be used in interrupt handlers or
any code executing in the interrupt context, such as kernel timeout handlers.
Consider the following situation: a thread owning a spin lock gets interrupted, and
the interrupt handler attempts to acquire the same lock. On a single-CPU machine,
it would then spin forever, and on a multiple-CPU machine it would spin until the
interrupted thread gets scheduled on a different CPU and releases the lock, which
is likely to take some time.
To avoid these situations, LynxOS provides interrupt-disabling spin locks, which
disable all interrupts on the calling CPU for the duration of the lock being held
(this also disables preemption). Interrupt-disabling spin locks are safe to be used in
interrupt handlers.

Because a spinning thread wastes CPU time, and scheduling/interrupts are disabled
within the critical regions protected by spin locks, those regions must be as short as
possible, a few microseconds at most. A spin lock not used in interrupt handlers
should be a scheduling-disabling spin lock in order to minimize the impact on
LynxOS interrupt response. If longer than a few microseconds delays are possible,
consider using other synchronization mechanisms, such as kernel mutexes. It is not
recommended to have multiple levels of nested spin locks (acquiring a spin lock
while having another spin lock already acquired), because such operations would
spin with interrupts or scheduling disabled, impacting the system's response times.
The object type for spin locks is spin_sched_t for scheduling-disabling ones,
and spin_intr_t for interrupt-disabling ones. Before a spin lock can be used, it
must be initialized:
#include <spinlock.h>

spin_intr_init(&ilock);
spin_sched_init(&slock);

Another way to initialize a spin lock is to give it an initializing value


SPIN_SCHED_INITIALIZER or SPIN_INTR_INITIALIZER, depending on the
spin lock type, at the spin lock variable definition:
spin_intr_t ilock = SPIN_INTR_INITIALIZER;
spin_sched_t slock = SPIN_SCHED_INITIALIZER;

NOTE: An uninitialized spin lock will in most cases behave as though it is locked,
making any attempt to lock it spin forever and thus exposing the coding mistake.

spin_sched_lock() and spin_intr_lock() acquire a spin lock. If the lock


is currently owned by a thread, these functions spin until the lock is released. After
the lock is acquired, the current thread becomes the spin lock owner and
scheduling (for spin_sched_lock()) or interrupts (for spin_intr_lock())
are disabled on the current CPU. spin_sched_unlock() and
spin_intr_unlock() unlock a spin lock and the current thread stops being its
owner. The interrupts/scheduling state at the time the lock was acquired is restored:
spin_sched_lock(&slock);
… /* Critical region */
spin_sched_unlock(&slock);

spin_sched_trylock() and spin_intr_trylock() attempt to acquire a


spin lock. If the lock is free, these functions succeed and return OK, with
scheduling or interrupts disabled respectively. Otherwise, they immediately fail
without spinning, return SYSERR, and the state of scheduling or interrupts is not
changed.

if (spin_sched_trylock(&slock) == OK) {
…/* Critical region */
spin_sched_unlock(&slock);
}

The spin_sched_islocked() and spin_intr_islocked() functions return non-zero if
a spin lock is currently owned by some thread, and zero if it is free.
A thread owning a spin lock may not voluntarily reschedule (cause preemption):
spin_sched_lock(&slock);

swait(&sem, SEM_SIGIGNORE); /* Bug! */

spin_sched_unlock(&slock);

Always release the spin lock before making a call that may result in preemption:
spin_sched_lock(&slock);

spin_sched_unlock(&slock);
swait(&sem, SEM_SIGIGNORE); /* Correct */

Spin locks are not guaranteed to be fair. If there are threads spinning on a spin lock,
and the owner thread releases it and then immediately attempts to grab it again, it
may always succeed. The chances of this happening depend on the hardware
implementation, and differ with particular CPU/board manufacturers and models,
so one should never expect this kind of fairness from spin locks. On the other hand,
it is guaranteed that pending interrupts and scheduling requests are delivered when
a spin lock is released.
Spin locks are useful in a uni-processor environment as well. In this case, they are
somewhat redundant, as the lock is always free when a thread attempts to grab it,
and spinning is impossible. However, spin locks are still preferable to disabling of
preemption or interrupts in uni-processor case, because they make it more obvious
from the code, which resource and data are protected in a particular critical region.
Spin locks consume more memory than kernel mutexes. Due to requirements
imposed by the CPU architecture, a spin lock variable (of type spin_sched_t or
spin_intr_t) may take up to hundreds of bytes of RAM. Typically, one spin
lock variable takes up at least an entire cache line, and is automatically aligned at a
cache line boundary by the compiler. However, if the memory for a spin lock is
allocated with sysbrk(), it is the programmer's responsibility to make sure that
the spin lock variable is appropriately aligned. In the following example, the
dynamically allocated device statics structure contains a spin lock variable and
therefore must be aligned. One may use the GCC compiler extension
__alignof__ to find out what the required alignment of the statics structure is:
void *m;
struct statics *s;
int align = __alignof__(struct statics);

m = sysbrk(sizeof(*s) + align);
s = (void *)(((kaddr_t)m + align - 1) & -align);
s->alloc = m; /* save the address returned by sysbrk() for sysfree() in
the uninstall routine */

Spin locks must be located in regular cached memory. Do not place them in
uncached memory or above PHYSBASE.

Disabling Interrupts, the Single CPU Case


LynxOS provides routines for disabling/enabling all interrupts on the calling CPU.
The routines for controlling interrupts are disable() and restore(). They are
used as follows:
#include <disable.h>
...
int ps; /* previous state */
disable (ps); /* disable all interrupts */
...
...
restore (ps); /* restore interrupt state */

The variable ps must be a local variable and should never be modified by the
driver. Each call to disable() must have a corresponding call to restore(),
using the same variable. Note that restore() does not necessarily reenable
interrupts. Rather, it restores the state that existed before disable() was called.
So if interrupts were already disabled when disable() was called, the (first) call
to restore() will not reenable them. As a consequence of disabling interrupts,
task preemption is also disabled.
The disable() and restore() routines are the fastest mutual exclusion
mechanisms available to the driver. Use these routines when the critical code
region is very short because they directly affect interrupt response times.
A slightly faster, but less portable, alternative to disable() and restore() is to use assembly language directly. The disable() and restore() routines turn interrupts off and back on; they may also perform a context switch if the driver signals another thread during the critical region. If no signals are going to be sent during the critical region, inline assembly can be used to turn interrupts off and on directly.
Care should be taken not to disable the interrupts for too long as this will inhibit the
clock tick interrupt and, hence, the invocation of the scheduler.
Voluntary preemption is allowed while disabled. For instance, if your driver calls
swait() on a locked semaphore while interrupts are disabled, that is acceptable,

so long as interrupts are re-enabled via restore() at the end of the critical
section. That is, the following code is acceptable:
disable(ps);
...
swait(&random_semaphore, semaphore_priority);
...
restore(ps);

The interrupt state is a part of the task's execution context. When the driver task is
preempted, the interrupt state of the new task takes effect. When the driver task
returns from swait(), its interrupt state is restored. The code above may thus be
considered to be two critical regions rather than one: one from disable() to
swait() and the other from swait() to restore().

Note that disabling interrupts on the calling CPU does not create a mutual
exclusion region in a multi-processor environment. For this reason, disable()
and restore() additionally grab/release a special lock called the Big Kernel
Lock. That lock is discussed below.
The disable() and restore() functions are deprecated, because of their lack
of selectivity in a multi-processor environment. Use interrupt-disabling spin locks
instead. Note the important difference: voluntary preemption is not allowed for
tasks holding a spin lock.

Disabling Preemption, the Single CPU Case


The kernel routines sdisable()/srestore() control task preemption on the
calling CPU and are used in much the same way as disable()/restore().
#include <disable.h>
...
int ps; /* previous state */
sdisable (ps); /* disable task preemption */
...
...
srestore (ps); /* restore preemption state */

The variable ps must be a local variable and should never be modified by the
driver. Each call to sdisable() must have a corresponding call to
srestore(), using the same variable. Note that srestore() does not
necessarily reenable preemption. Rather, it restores the state that existed before
sdisable() was called. So if preemption was already disabled when
sdisable() was called, the (first) call to srestore() will not reenable it. The
kernel continues to handle interrupts while preemption is disabled.
Care should be taken not to disable preemption for too long as this will delay the
context switch.

Voluntary preemption is allowed while disabled. For instance, if your driver calls
swait() on a locked semaphore while preemption is disabled, that is acceptable,
so long as preemption is re-enabled via srestore() at the end of the critical
section. That is, the following code is acceptable:
sdisable(ps);
...
swait(&random_semaphore, semaphore_priority);
...
srestore(ps);

The preemption state is a part of the task's execution context. When the driver task
is preempted, the preemption state of the new task takes effect. When the driver
task returns from swait(), its preemption state is restored. The code above may
thus be considered to be two critical regions rather than one: one from
sdisable() to swait() and the other from swait() to srestore().

Note that disabling preemption on the calling CPU does not create a mutual
exclusion region in a multi-processor environment. For this reason, sdisable()
and srestore() additionally grab/release a special lock called the Big Kernel
Lock. That lock is discussed below.
The sdisable() and srestore() functions are deprecated, because of their
lack of selectivity in a multi-processor environment. Use scheduling-disabling spin
locks instead. Note the important difference: voluntary preemption is not allowed
for tasks holding a spin lock.

Disabling Interrupts and Preemption in the SMP Case, the Big Kernel Lock

As mentioned above, simply disabling interrupts or preemption for a while on one
CPU in a multi-CPU system does not create a mutual exclusion region. If there is
more than one CPU, disable() and sdisable() additionally grab, while
restore() and srestore() additionally release the special system-wide lock
called the Big Kernel Lock (the BKL), thereby creating a mutual exclusion region.
There is no special explicit API available to the drivers to access the BKL; it is
only interfaced via the four calls listed above.
The BKL is a special kind of spin lock. The BKL is recursive: a thread may grab it
many times repeatedly. To completely release the BKL the thread has to issue a
releasing call the same number of times. The BKL is automatically released on preemption and automatically reacquired when the task is scheduled to run again; it is therefore acceptable to voluntarily reschedule while holding the BKL, but remember that this creates a hole in your mutual exclusion region.

The most important aspect of the BKL is that there is only one instance of it. Since
a lot of functions in the kernel currently use disable() and sdisable(), they
all have to compete for the same single lock, creating a lot of contention. For this
reason, the disable() and sdisable() families of functions are deprecated in favor of regular kernel spin locks. The BKL is only a temporary solution while the LynxOS kernel is being gradually converted to fine-grained locking mechanisms such as regular spin locks.
Though it is rarely useful in device drivers, it is possible to disable interrupts or
preemption on the calling CPU without grabbing the BKL, with __disable() or
__sdisable() functions, respectively. They are used as follows:
int ps; /* previous state */

__sdisable(&ps);
… /* no preemption region */
__srestore(&ps);

__disable(&ps);
… /* no interrupts region */
__restore(&ps);

Note that these functions take a pointer to the previous state variable, rather than its
name. These functions behave the same way regardless of the number of CPUs in
the system.

Critical Code Regions


If the shared resource is accessed by an interrupt handler, then
disable()/restore() or spin_intr_lock()/spin_intr_unlock() must
be used to protect critical code regions. If not, then any of the three
synchronization mechanisms can be used.

Using Kernel Semaphores for Mutual Exclusion


Both mutexes and counting semaphores may be used for mutual exclusion. The
difference is that in the latter case, there is no priority inheritance. When used to
protect a critical code region, a counting kernel semaphore must be initialized to 1.
This allows the first task to lock the semaphore and enter the region. Other tasks
that attempt to enter the same region will block until the semaphore is unlocked.
Each call to swait() must have a corresponding call to ssignal().
swait (&s->mutex, SEM_SIGIGNORE); /* enter critical code region */
...
... /* access resource */
...
ssignal (&s->mutex); /* leave critical code region */

Signals can normally be ignored when using a kernel mutex or using a counting
kernel semaphore as a mutex. Compared to waiting for an I/O device, a critical
code region is relatively short so there is little need to be able to interrupt a task
that is waiting on the mutex. Unlike an event, which is never guaranteed to occur,
execution of a critical code region cannot “fail.” The task holding the mutex is
bound, sooner or later, to get to the point where the mutex is released.

CAUTION! sreset() and ssignaln() should never be used on a kernel semaphore that is used for mutual exclusion, as in both cases this could lead to more than one task executing the critical code concurrently.

Using Spin Locks for Mutual Exclusion


Spin locks are designed for mutual exclusion, and cannot do anything else. For example, one could use interrupt-disabling spin locks to protect the addition of a structure to a free list as follows:
spin_intr_t lock;
...
spin_intr_lock(&lock);
ptr->next = freelist;
freelist = ptr;
spin_intr_unlock(&lock);

(Interrupt-disabling spin locks are shown; for scheduling-disabling spin locks the sequence is similar.)
It is important that the time within a critical section is kept to a minimum to avoid
impacting the system response time.

Using Interrupt Disabling for Mutual Exclusion


The disable()/restore() routines (deprecated) or interrupt-disabling spin
locks must be used for synchronization with an interrupt handler. If the interrupt
handler needs to access critical regions, then it will also have to use
disable()/restore() or an interrupt-disabling spin lock. This is because
interrupts are enabled when the driver is executing the interrupt routine. The reason
for this is to allow higher priority interrupts to occur. It is rare, but the interrupt

routine can be preempted by a higher priority interrupt. This behavior is necessary to minimize delays in servicing events requiring real-time response. When the device issues an interrupt, the kernel interrupt dispatch routine turns higher priority interrupts back on before calling the driver interrupt handler.
disable (ps);
if (ptr = freelist) /* take item off free list */
freelist = ptr->next;
restore (ps);

disable (ps);
ptr->next = freelist; /* put item back on free list */
freelist = ptr;
restore (ps);

If a driver allows multiple tasks to use a device concurrently and the critical region
is in the part of the driver that will be executed very often, then there may be some
advantage in using disable()/restore() in order to avoid an excessive
number of context switches. But if the driver is written so that only one user task
can use the device at a time, then swait()/ssignal() will probably be
sufficient for synchronization with the driver’s kernel thread.
When using disable()/restore(), it is essential that the time during which
interrupts are disabled is kept to a minimum to avoid having an impact on the
system’s response times. It is sometimes possible to split a seemingly long critical
code section into two (or more pieces) by introducing calls to
restore()/disable() at appropriate points. Care must be taken in selecting the
point at which interrupts are reenabled to avoid creating a race condition.
disable (ps);
...
...
restore (ps); /* allow interrupts and pre-emption */

disable (ps);
...
...
restore (ps);

If the critical code region is still too long then the driver should be redesigned so
that swait()/ssignal() can be used and another means found for the interrupt
handler to communicate with the rest of the driver.

Using Preemption Disabling for Mutual Exclusion


If the shared resource is not accessed by the interrupt handler,
sdisable()/srestore() can be used instead of disable()/restore(). This
allows the system to continue to handle interrupts during the critical code region.

The same remarks and considerations that apply to the use of disable()/restore() apply equally to sdisable()/srestore().

Event Synchronization
A kernel semaphore is the mechanism used to implement event synchronization in
a LynxOS driver. The value of the semaphore should be initialized to 0, indicating
that no events have occurred.
Waiting for an event:
if (swait (&s->event_sem, SEM_SIGABORT))
{
pseterr (EINTR);
return (SYSERR);
}

Signaling an event:
ssignal (&s->event_sem);

Handling Signals
Because there is often no guarantee that an event will occur, signals should be
allowed to abort the swait() using SEM_SIGABORT. This way, a task can be
interrupted if the event it is waiting for never arrives. If signals are ignored, there
will be no way to interrupt the task in the case of problems, so the task could
remain in the blocked state indefinitely. The driver must check the return code
from swait() to determine whether a signal was received. As an alternative to
SEM_SIGABORT, timeouts can be used if the timing of events is known in advance.

It is sometimes useful for an application to be able to handle signals while it is blocked on a semaphore but without aborting the wait. This is possible using the
SEM_SIGRETRY flag to swait(). Signals are delivered to the application and the
swait() automatically restarted. There is no way for the driver to know whether
any signals were delivered while the task was blocked on the semaphore.
A word of caution is necessary concerning the use of SEM_SIGRETRY. If the signal
handler in the application calls exit(3), then the swait() in the driver will
never return. This could cause problems if the task had blocked while holding
some resources. These resources will never be freed. To avoid this type of
problem, a driver can use SEM_SIGABORT in conjunction with the kernel function
deliversigs(), declared in the header file kill.h. This allows the application

to receive signals in a timely fashion, but without the risk of losing resources in the
driver.
if (swait (&s->event_sem, SEM_SIGABORT)
{
cleanup (s); /* tidy up before delivering signals */
deliversigs (); /* may never return */
}

Using sreset with Event Synchronization Semaphores


Two example uses of sreset() discussed below are:
• In handling error conditions
• Variable length data transfers (with multiple consumers)

Handling Error Conditions


A driver must handle errors that may occur. For example, what should it do if an
unrecoverable error is detected on a device? A frequent approach is to set an error
flag and wake up any tasks that are waiting on the device:
if (error_found) {
s->error++;
sreset (&s->event_sem);
}

But now the driver cannot assume that when swait() returns, the expected event
has occurred. The swait() could have been woken up because an error was
detected. So some extra logic is required when using the event synchronization
semaphore:
if (swait (&s->event_sem, SEM_SIGABORT))
{
pseterr (EINTR);
return (SYSERR);
}
if (s->error)
{
pseterr (EIO);
return (SYSERR);
}

Variable Length Transfers


The second example using sreset() is somewhat more esoteric but is an
interesting example nevertheless. Imagine the following scenario: A device or
“producer” process generates data at a variable rate. Data can also be consumed in

variable sized pieces by multiple tasks. At some point, a number of consumer tasks
may be blocked on an event synchronization semaphore, each waiting for different
amounts of data, as illustrated in Figure 4-3.

Figure 4-3: Synchronization Mechanisms (Task 1, Task 2, and Task 3, with request sizes 5, 10, and 6, all blocked on the event semaphore)

When some data becomes available, what should the driver do? Without adding
extra complexity and overhead to the driver, there is no easy way for the driver to
calculate how many of the waiting tasks it can satisfy (and should, therefore, wake
up). A simple solution is to call sreset() which will wake all tasks which will
then consume the available data according to their priorities. Tasks that are
awakened but find no data left will have to wait again on the event semaphore.
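The following sketch illustrates both sides of this scheme; the byte counters s->avail, nbytes, and need are assumptions, and access to them would itself need one of the mutual exclusion mechanisms described in this chapter:

/* producer: publish new data, then wake every waiting consumer */
s->avail += nbytes;
sreset (&s->event_sem);

/* consumer: re-wait if the request cannot be satisfied yet */
while (s->avail < need) {
    if (swait (&s->event_sem, SEM_SIGABORT)) {
        pseterr (EINTR);
        return (SYSERR);
    }
}
s->avail -= need;
... /* copy need bytes to the caller */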

Caution when Using sreset


To maintain coherency of the semaphore queue, sreset must synchronize with
calls to ssignal(). Because ssignal() can be called from an interrupt
handler, sreset() disables interrupts internally while it is waking up all the
blocked tasks. Since the number of tasks blocked on a semaphore is not limited this
could lead to unbounded interrupt disable times if sreset() is used without
proper consideration.
To avoid this problem, another technique must be used in driver designs where an
unknown number of tasks could be blocked on a semaphore. One possibility is to
wake tasks in a “cascade.” The call to sreset() is replaced by a call to
ssignal(), which will wake up the first blocked task. This task is then
responsible for unblocking the next blocked task, which will unblock the next one,
and so on until there are no more blocked tasks. A negative semaphore indicates
that there are blocked tasks. This is illustrated in the modified error handling code
from the previous section:

if (error_found)
{
s->error++;
if (s->event_sem < 0)
ssignal (&s->event_sem);
}
...
if (swait (&s->event_sem, SEM_SIGABORT))
{
pseterr (EINTR);
return (SYSERR);
}
if (s->error)
{
if (s->event_sem < 0)
ssignal (&s->event_sem);
pseterr (EIO);
return (SYSERR);
}

Because tasks are queued on a semaphore in priority order, they will still be
awakened and executed in the same order as when using sreset(). There is no
penalty to using this technique.

Resource Pool Management


Counting kernel semaphores can also be used for managing a resource pool. The
value of the semaphore should be initialized to the number of resources in the pool
(note that the semaphore value may not exceed KSEM_CNT_MAX). To allocate a
resource, swait() is used. ssignal() is used to free a resource. The following
code shows an example of using swait() to allocate and ssignal() to free a
resource.
struct resource *
allocate (struct statics *s)
{
struct resource *resource;
int ps;

swait (&s->pool_sem, SEM_SIGRETRY);

sdisable (ps);
resource = s->pool_freelist;
s->pool_freelist = resource->next;
srestore (ps);
return (resource);
}

void free (struct statics *s, struct resource *resource)
{
int ps;

sdisable (ps);
resource->next = s->pool_freelist;

s->pool_freelist = resource;
srestore (ps);
ssignal (&s->pool_sem);
}

The counting semaphore functions implicitly as an event synchronization semaphore too. When the pool is empty, an attempt to allocate will block until another task frees a resource.
A mutex mechanism is still needed to protect the code that manipulates the free
list. The combining of different synchronization techniques is discussed more fully
in the following section.

Combining Synchronization Mechanisms


The examples discussed in the preceding sections have all been fairly
straightforward in that they have only used one synchronization mechanism. In a
real driver the scenarios are often far more complex and require the combining of
different techniques. The following sections discuss when and how the
synchronization mechanisms should be combined.

Manipulating a Free List


Refer to the code in “Using Interrupt Disabling for Mutual Exclusion” on page 54.
This illustrates the use of interrupt disabling to remove an item from a free list, but
does not address what the driver can do if the free list is empty.
One possibility is that the driver blocks until another task puts something back on
the free list. This scenario requires the use of a mutex and an event synchronization
semaphore. Two different approaches to this problem are illustrated in the
following examples. The first example is deliberately complicated to demonstrate
various synchronization techniques.
/* get_item : get item off free list, blocking if
list is empty */
struct item *
get_item (struct statics *s)
{
struct item *p;
int ps;

do
{
disable (ps); /* enter critical code */
if (p = s->freelist) /* take 1st item on list */
s->freelist = p->next;

else
/* list was empty, so wait */
swait (&s->freelist_sem, SEM_SIGIGNORE);
restore (ps); /* exit critical code */
} while (!p);
return (p);
}

/* put_item : put item on free list, wake up waiting tasks */


void put_item (struct statics *s, struct item *p)
{
int ps;

disable (ps); /* enter critical code */


p->next = s->freelist; /* put item on list */
s->freelist = p;
if (s->freelist_sem < 0)
ssignal (&s->freelist_sem); /* wake up waiter */
restore (ps); /* exit critical code */
}

There are a number of points of interest illustrated by this example:


1. The example uses SEM_SIGIGNORE for simplicity. If SEM_SIGABORT
is used, the return value from swait() must be checked.
2. The example uses the disable()/restore() mechanism for mutual
exclusion. This allows the free list to be accessed from an interrupt
handler using put_item(). get_item() should never be called from
an interrupt handler, though, as it may block. If the free list is not
accessed by the interrupt handler, sdisable()/srestore() can be
used instead.
3. The get_item() function uses the value of the item taken off the list to
determine if the list was empty or not. Note that freelist_sem is being used simply as an event synchronization mechanism, not a
counting semaphore. (Managing a free list with a counting semaphore is
illustrated in the second approach). As a consequence, the code that puts
items back on the free list must signal the semaphore only if there is a
task waiting. Otherwise, if the semaphore was signaled every time an
item is put back, the semaphore count would become positive and a task
calling swait() in get_item() would return immediately, even
though the list is still empty.
4. Blocking with interrupts disabled may seem at first sight like a dangerous
thing to do. But this is necessary as restoring interrupts before the
swait() would introduce a race condition. LynxOS saves the interrupt
state on a per task basis. So, when this task blocks and the scheduler
switches to another task, the interrupt state will be set to that associated
with the new task. But, from the point of view of the task executing the
above code, the swait() executes atomically with interrupts disabled.

5. swait()/ssignal() cannot be used as the mutex mechanism in this particular example as this could lead to a deadlock situation where one
task is blocked in the swait() while holding the mutex. Other tasks
wishing to put items back on the list will not be able to enter the critical
region. If a critical code region may block, care must be taken not to
introduce the possibility of a deadlock. To avoid a deadlock,
sdisable()/srestore() or disable()/restore() should be used
as the mutex mechanism rather than swait()/ssignal(). But, once
again, the critical code region must be kept as short as possible to avoid
having any adverse effect on the system’s real-time responsiveness. An
alternative would be to raise an error condition if the list is empty rather
than block. This would allow swait()/ssignal() to be used as the
mutex mechanism.
6. A call to ssignal() in put_item() may make a higher priority task
eligible to execute but the context switch won’t occur until preemption is
reenabled with restore().
In the second approach to this problem, a kernel semaphore is used as a counting
semaphore to manage items on the free list. The value of the semaphore should be
initialized to the number of items on the list.
struct item *
get_item (struct statics *s)
{
struct item *p;
int ps;

swait (&s->free_count, SEM_SIGRETRY);


disable (ps);
p = s->freelist;
s->freelist = p->next;
restore (ps);
return (p);
}

void put_item (struct statics *s, struct item *p)


{
int ps;

disable (ps);
p->next = s->freelist;
s->freelist = p;
restore (ps);
ssignal (&s->free_count);
}

This code illustrates the following points:


1. A kernel semaphore used as a counting semaphore incorporates the
functionality of an event synchronization semaphore. swait() will

block when no items are available and ssignal() will wake up waiting tasks.
2. The example uses the disable()/restore() mechanism for mutual
exclusion. This allows the free list to be accessed from an interrupt
handler using put_item(). get_item() should never be called from
an interrupt handler, though, as it may block. If the free list is not
accessed by the interrupt handler, sdisable()/srestore() can be
used instead.
3. The event synchronization is outside of the critical code region so there is
no possibility of deadlock. Therefore, swait()/ssignal() could be
used as the mutex mechanism if the code does not need to be called from
an interrupt handler.
4. The function put_item() could be modified to allow several items to be put back on the list using ssignaln(), as sketched below. But items can only be consumed one at a time, since there is no function swaitn().
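A sketch of such a put_items() helper follows; the exact ssignaln() calling convention (semaphore plus count) and the chain-splicing details are assumptions:

void put_items (struct statics *s, struct item *first, struct item *last, int n)
{
    int ps;

    disable (ps);
    last->next = s->freelist;        /* splice the chain of n items onto the list */
    s->freelist = first;
    restore (ps);
    ssignaln (&s->free_count, n);    /* make n items available to waiters */
}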

Signal Handling and Real-Time Response


“Handling Signals” on page 56 discussed the use of the SEM_SIGRETRY flag with
swait(). It is not advisable to use swait() with this flag inside a critical code
region protected with (s)disable()/(s)restore(). The reason for this is that,
internally, swait() calls the kernel function deliversigs() to deliver
signals when the SEM_SIGRETRY flag is used. If the swait() is within a region
with interrupts or preemption disabled, then the execution time for
deliversigs() will contribute to the total interrupt or preemption disable time,
as illustrated in the following example:
sdisable (ps); /* enter critical region */
...
swait (&s->event_sem, SEM_SIGRETRY);
/* may call deliversigs internally */
...
srestore (ps); /* leave critical region */

In order to minimize the disable times it is better to use SEM_SIGABORT and reenable interrupts/preemption before calling deliversigs(). The above code then becomes:
sdisable (ps); /* enter critical region */
...
while (swait (&s->event_sem, SEM_SIGABORT))
{
srestore (ps); /* re-enable pre-emption
before delivering signals */
deliversigs (); /* may never return */

sdisable (ps);
}
...
srestore (ps); /* leave critical region */

Nesting Critical Regions


It is also possible to nest critical regions. As a general rule, a less selective
mechanism can be nested inside a more selective one. For instance, the following
is permissible:
int sps, ps;
sdisable (sps);
...
disable (ps);
...
restore (ps);
...
srestore (sps);

Note that different local variables must be used for the two mechanisms. However,
the converse is not true. It is not permitted to do the following:
disable (ps);
...
sdisable (sps);
...
srestore (sps);
...
restore (ps);

In any case, the inner sdisable()/srestore() is completely redundant, as preemption is already disabled and the BKL is already locked by the outer disable().

A spin lock critical region may be nested inside a kernel semaphore/mutex critical
region, but not vice versa. As already mentioned, nesting spin lock critical regions
within other spin lock critical regions is possible, but not recommended. A spin
lock critical region should always be the innermost critical region.
Kernel semaphore/mutex critical regions can be nested within other kernel
semaphore/mutex critical regions. Pay attention to keeping the order of
semaphore/mutex locking constant; a varying locking order may lead to deadlocks.
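For example (the semaphore names are assumptions), every code path that needs both semaphores should take them in the same order:

swait (&s->sem_a, SEM_SIGIGNORE);   /* always lock sem_a first */
swait (&s->sem_b, SEM_SIGIGNORE);   /* ...then sem_b */
... /* access data protected by both */
ssignal (&s->sem_b);                /* release in the reverse order */
ssignal (&s->sem_a);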

Memory Barriers

Memory Ordering and Its Impact on Synchronization


Synchronization of shared resource access requires memory access instructions
(stores and loads) to execute and become visible to all CPUs in the system in a
particular order. A programmer might expect the accesses to be carried out in the
same order that they appear in the code, the program order. However, without
additional tricks described in this section, that may prove wrong, because modern
optimizing compilers may reorder CPU instructions in general and memory
accesses in particular to achieve better performance. The compiler may also optimize away repeated accesses of the same type to a memory location. For example, it may cache a memory location's contents in a CPU register and eliminate any loads from the location after the first one. Another example: if the code stores 1 and then, later, 0 to the same memory location, the compiler may eliminate the first store. If the first store was meant to lock the resource and the second to unlock it, the result is malfunctioning code.
Modern CPUs present another problem: most modern CPU architectures are
out-of-order, meaning CPU instructions may be executed not in the program order,
but in the order their data becomes available (for example, a read from a cached
memory location might execute before a preceding read from a location that is not
in the cache yet). Each CPU architecture has a set of rules for the memory access
execution order, called the memory model. In all architectures, a particular CPU is self-consistent; that is, to a program running on the CPU it looks as if all its instructions are executed in program order. This only becomes a problem when
there are other observers on the memory bus, such as external I/O devices and,
most importantly, other CPUs.
As already said, memory access reordering by the compiler and the CPU may
impact resource access synchronization. Consider a generic sequence of operations
in a critical code region protecting a shared resource:
1. Lock Resource
2. Access Resource
3. Unlock Resource
All three items contain memory accesses. It is crucial that item 1 is executed first,
then item 2, then item 3. However, this may not be so obvious to the optimizing
compiler and the CPU, which only see a flow of instructions without knowing their
purpose. If the compiler or the CPU rearranges memory access instructions of item

2 with those of item 1 or item 3, those instructions leak out of the critical region, and that is definitely not what the programmer wants.
LynxOS provides a set of calls to enforce the desired memory access ordering,
called memory barriers. There are two kinds of memory barriers: generic and
specialized. Generic memory barriers are guaranteed to perform their function in
any situation; however, they are usually quite slow. A particular platform may allow lighter-weight barriers to be used in some specific situations; those situations are covered by the specialized barriers.
In general, a memory barrier keeps execution of memory accesses of a certain type
on the same side of itself (either before or after) as they appear in program order.
For example, if you have the following program sequence:

Access A
Access B
Memory barrier
Access C
Access D

The memory barrier guarantees that accesses A and B will be carried out before
accesses C and D, but does not guarantee any order for A versus B and C versus D.
Also, a memory barrier cannot be relied upon to do anything other than access ordering. For example, it may not even wait for accesses A and B to complete; whether it does or not depends on the particular platform, the particular barrier, and the particular barrier implementation. It is safe to assume that it does not.

Memory Barrier API


The memory barrier API is defined in the membar.h header file. Memory barriers
are available in both kernel and user code. The generic memory barriers are
provided in Table 4-2.

Table 4-2: The Generic Memory Barriers

Memory Barrier Description

membar_compiler() The compiler optimization barrier.


Prevents the compiler from reordering memory access
operations across this barrier for optimization purposes. It
also makes the compiler drop all assumptions about memory
contents: the compiler will not keep memory contents cached
in a register across the barrier, and will not optimize away
stores to the same memory location (but only if the stores are
on the different sides of the barrier). This is the basic barrier:
all other barriers include the compiler optimization barrier
functionality. Note that even if it does not directly correspond to any generated CPU instructions, the compiler optimization barrier is not free: more often than not, it makes the compiler generate slower code around it. The same can be said about all memory barriers.
membar_store() The store barrier.
Makes sure all outstanding stores preceding this barrier (in
program order) are resolved and globally visible before the
stores following this barrier.
membar_load() The load barrier.
Makes sure all outstanding loads preceding this barrier (in
program order) are resolved before the loads following this
barrier.
membar_all() The full memory barrier.
Makes sure all outstanding memory accesses (load and stores)
preceding this barrier (in program order) are resolved and
globally visible before the memory accesses following this
barrier.
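As a simple sketch of how the store and load barriers are typically paired (the variable names and the use() helper are assumptions), one CPU publishes data and then a flag, while another CPU reads the flag and then the data:

/* producer CPU */
shared_buffer[0] = value;   /* 1. write the data */
membar_store();             /* 2. data stores become visible first */
data_ready = 1;             /* 3. then raise the flag */

/* consumer CPU */
while (!data_ready)
    ;                       /* spin until the flag is seen */
membar_load();              /* flag load completes before data loads */
use(shared_buffer[0]);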

The specialized memory barriers are provided in Table 4-3.

Table 4-3: The Specialized Memory Barriers

Memory Barrier Description

membar_io() The I/O barrier.


It is like the full memory barrier, but only serializes loads
and stores to uncached memory. Uncached memory is
usually used for memory-mapped I/O, so this barrier is of particular interest to device driver programmers. If the
platform does not provide a lightweight I/O barrier, this is
a full memory barrier.
membar_lock_enter() The lock enter barrier.
It is a specialized barrier used with data locks to ensure that
the lock protects its data. It is placed directly after the
memory store that acquires the lock. A lock enter barrier
makes sure that the shared data protected by a lock are not
accessed until the lock has been acquired. This barrier
requires that the shared data to be protected are in regular,
that is fully cached, system memory. If the platform does
not provide a lightweight lock enter barrier, this is a full
memory barrier. If the shared data to be protected are not in
the regular memory (for example, some or all of them are
in uncached I/O memory), then a full memory barrier
membar_all() must be placed in the code right after
acquiring the lock.

membar_lock_exit() The lock exit barrier.


It is a specialized barrier used with data locks to ensure that
the lock protects its data. It is placed directly before the
memory store that releases the lock. A lock exit barrier makes sure that the shared data protected by a lock are not accessed after the lock has been released. This barrier
requires that the shared data to be protected are in regular,
that is fully cached, system memory. If the platform does
not provide a lightweight lock exit barrier, this is a full
memory barrier. If the shared data to be protected are not in
the regular memory (for example, some or all of them are
in uncached I/O memory), then a full memory barrier
membar_all() must be placed in the code just before
releasing the lock.
membar_pause() It is a specialized barrier used with data locks. It introduces
a short pause to improve the performance of spin-wait
loops. Spin-wait loops may suffer a severe performance hit when exiting the loop on some CPUs: the store that obtains the lock cannot be performed until a large number of outstanding loads from the same memory location, issued within the spin-wait loop, have completed. The pause makes sure there are not many outstanding memory loads at the time the lock is obtained. If the platform does not provide a
pause barrier, this is just a compiler optimization barrier.

Memory Barriers in Device Drivers


To make the programmer's job simpler, all LynxOS synchronization primitives that
create critical code regions such as spin_sched_lock() and
spin_sched_unlock() already include the lock enter and lock exit specialized
barriers. For device drivers, however, that may not be enough. An important
feature of those two barriers is that they expect the protected shared resource to
reside in regular cached memory. Accesses to other memory types may leak
through them.
For device drivers, that most importantly means any uncached memory accesses,
the kind typically used for memory-mapped I/O. Hence, if you use synchronization
primitives and critical code regions to serialize memory-mapped I/O access to an
external device, you need to insert full memory barriers membar_all() at the
very beginning and the end of the critical section (that is, immediately after locking

a spin lock and immediately before unlocking it, or immediately after disable()
and immediately before restore()). This is only necessary if the target platform
supports SMP, or the driver is intended to be portable to such systems.
For most external devices, the order of memory-mapped I/O device accesses is
important. To prevent the CPU from reordering a sequence of device accesses, use the membar_io() barrier between them.
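Putting both rules together, a sketch of an SMP-safe critical region that programs a memory-mapped device might look like this; the register layout, command values, and lock variable are assumptions:

spin_intr_lock(&s->dev_lock);
membar_all();                   /* keep uncached accesses inside the region */

regs->dma_addr = phys_addr;     /* memory-mapped device registers */
membar_io();                    /* the address must reach the device ... */
regs->command  = CMD_START;     /* ... before the command that uses it */

membar_all();                   /* drain I/O accesses before releasing */
spin_intr_unlock(&s->dev_lock);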



CHAPTER 5 Accessing Hardware

Device drivers give the operating system synchronized access to, and control of, the hardware. This chapter describes how device drivers access
the hardware layer and illustrates the virtual memory mappings used by
LynxOS on different hardware platforms.

General Tasks

Synchronization
LynuxWorks recommends protecting the access to device registers by disabling
interrupts. For more information, refer to Chapter 7, “Interrupt Handling.”

Handling Bus Errors


An access to a memory location where no device is responding may cause the
hardware to generate a bus error. By default, LynxOS will halt, printing some
diagnostic information on the debug port, when a bus error occurs. To change this
behavior, a driver can catch bus errors using the recoset/noreco routines.
These routines should surround the code that could potentially cause a bus error.
if (recoset ())
{
pseterr (EFAULT); /* a bus error occurred */
noreco(); /* must restore before leaving */
return (SYSERR);
}
... /* access device */
noreco (); /* restore default behavior */

When first called, recoset returns a zero value. If a bus error occurs
subsequently, the kernel does a nonlocal jump so that the execution flow resumes
again where recoset returns, this time with a nonzero value. noreco restores
the default system behavior. An exception to this general scheme is the driver
install entry point, discussed in Chapter 2, “Driver Structure” on page 7.

Probing for Devices


It is very common for the driver to test for the presence of a device during the
install entry point. For this reason, LynxOS handles bus errors during
execution of the install routine, thus relieving the driver of this responsibility. If a
bus error occurs, the kernel does not return to the install routine from the bus error
handler. The error is taken to mean that the device is not present and user tasks will
not be permitted to open it.

Memory Mapped I/O Access Ordering


Modern CPUs may reorder memory accesses to achieve better performance. In
case of memory mapped I/O, this may be undesirable. For example, a device may
take commands from the CPU by having it write to an I/O register mapped into
memory. The device driver must ensure that the command sequence is not
reordered by the CPU.
For that purpose, LynxOS provides an I/O memory barrier membar_io(),
declared in the <membar.h> header file. This call ensures that any I/O memory
accesses preceding the barrier call in the code are issued by the CPU before any
memory accesses following the barrier call. Refer to “Memory Barriers” in
Chapter 4, “Synchronization” for more information on memory barriers.

Device Access
The following sections contain platform-specific information about hardware
device access from LynxOS. Each section contains memory map figures to
illustrate the mapping of LynxOS virtual memory to the hardware device.

In general, the kernel has permissions to access the full range of virtual memory
while the user processes have restricted access. Table 5-1 shows a generalization
of this concept. Keep this in mind when viewing the memory maps.

Table 5-1: Virtual Memory Access to User Processes

LynxOS Virtual Memory Area Permissions

OSBASE and above Kernel only; no user access


SPECPAGE Read-only to user
Kernel Stack Read-only to user
Shared Memory Depends on mapping
User Stack Read-write to user
User Data Read-write to user
User Text Read-only to user

Device Resource Manager (DRM)


Device Resource Manager is a LynxOS module which manages device resources.
The DRM assists device drivers in identifying and setting up devices, as well as
accessing and managing the device resource space. Developers of new device
drivers should use the DRM for managing PCI devices. Chapter 10, “Device
Resource Manager” describes the DRM in detail.



CHAPTER 6 Elementary Device Drivers

This chapter describes elementary device drivers. These are simple device drivers
that do everything in a polled fashion. Drivers of this type are efficient to a degree
because they do not interfere with the real-time response of the system.
It is useful to compare these drivers with their more efficient interrupt-based and thread-based implementations in order to understand the performance advantages of each approach.

Character Device Drivers


The ram disk driver is considered an elementary device driver. This driver does
not use interrupts or polling. The ramdisk driver provides both the block and
character ramdisk devices. The following source code will consider the
ramdisk driver implementation for the character device only.

Device Information Definition


/* rdinfo.h */

struct idinfo {
    int bytes;                      /* Number of bytes for the ramdisk */
    int physical_memory_address;    /* Physical start address of ramdisk */
};

struct rdinfo {
    struct idinfo idinfo[16];
};

The device information definition for the ram disk driver contains the number of
bytes for the ramdisk as well as the physical start address of the ramdisk.

Device Information Declaration


/* rdinfo.c */

#include "rdinfo.h"

struct rdinfo rdinfo0 = {


{
{ 0, 0 }, /* Ramdisk 0 */
{ 0, 0 }, /* Ramdisk 1 */
{ 0, 0 }, /* Ramdisk 2 */
{ 0, 0 }, /* Ramdisk 3 */
{ 0, 0 }, /* Ramdisk 4 */
{ 0, 0 }, /* Ramdisk 5 */
{ 0, 0 }, /* Ramdisk 6 */
{ 0, 0 }, /* Ramdisk 7 */
{ 0, 0 }, /* Ramdisk 8 */
{ 0, 0 }, /* Ramdisk 9 */
{ 0, 0 }, /* Ramdisk 10 */
{ 0, 0 }, /* Ramdisk 11 */
{ 0, 0 }, /* Ramdisk 12 */
{ 0, 0 }, /* Ramdisk 13 */
{ 0, 0 }, /* Ramdisk 14 */
{ 0, 0 } /* Ramdisk 15 */
}
};

struct rdinfo rdinfo1 = {


{
{ 0, 0 }, /* Ramdisk 0 */
{ 0, 0 }, /* Ramdisk 1 */
{ 0, 0 }, /* Ramdisk 2 */
{ 0, 0 }, /* Ramdisk 3 */
{ 0, 0 }, /* Ramdisk 4 */
{ 0, 0 }, /* Ramdisk 5 */
{ 0, 0 }, /* Ramdisk 6 */
{ 0, 0 }, /* Ramdisk 7 */
{ 0, 0 }, /* Ramdisk 8 */
{ 0, 0 }, /* Ramdisk 9 */
{ 0, 0 }, /* Ramdisk 10 */
{ 0, 0 }, /* Ramdisk 11 */
{ 0, 0 }, /* Ramdisk 12 */
{ 0, 0 }, /* Ramdisk 13 */
{ 0, 0 }, /* Ramdisk 14 */
{ 0, 0 }, /* Ramdisk 15 */
}
};

The device information declaration for the above device information definition sets physical_memory_address to zero, instructing the ramdisk driver to allocate the given number of bytes of memory (which here is also zero). If physical_memory_address is set to a non-zero value, the ramdisk driver instead sets up a virtual mapping starting at physical_memory_address for the given number of bytes.
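For illustration only (the sizes and the physical address shown are arbitrary), a declaration that asks the driver to allocate 1 MB for ramdisk ID 0 and to map 512 KB of existing memory for ID 1 could look like this:

struct rdinfo rdinfo_example = {
    {
        { 0x100000, 0 },           /* Ramdisk 0: 1 MB, driver allocates memory */
        { 0x80000, 0x40000000 },   /* Ramdisk 1: map 512 KB at this physical address */
        { 0, 0 }                   /* Ramdisk 2 - 15: unused (remaining
                                      elements default to zero) */
    }
};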

The rdinfo.c file is compiled and executed as a standalone program to create a data file to be passed to the install routine during dynamic installation of the ram disk driver. For a prelinked device, the main routine is not needed. The NODL

preprocessor symbol is set at compile-time. If the symbol is defined, the driver is not assumed to be a dynamically linked one.

Declaration for ioctl


The ram disk driver does not have additional ioctl declarations. So, a code snippet from the Radeon DRM driver is shown as an example.
/* drm.h */

..

/**
* DRM_IOCTL_VERSION ioctl argument type.
*
* \sa drmGetVersion().
*/
typedef struct drm_version {
int version_major; /**< Major version */
int version_minor; /**< Minor version */
int version_patchlevel; /**< Patch level */
DRM_SIZE_T name_len; /**< Length of name buffer */
char __user *name; /**< Name of driver */
DRM_SIZE_T date_len; /**< Length of date buffer */
char __user *date; /**< User-space buffer to hold date
*/
DRM_SIZE_T desc_len; /**< Length of desc buffer */
char __user *desc; /**< User-space buffer to hold desc
*/
} drm_version_t;

/**
* DRM_IOCTL_GET_UNIQUE ioctl argument type.
*
* \sa drmGetBusid() and drmSetBusId().
*/
typedef struct drm_unique {
DRM_SIZE_T unique_len; /**< Length of unique */
char __user *unique; /**< Unique name for driver
instantiation */
} drm_unique_t;

#undef DRM_SIZE_T

typedef struct drm_list {


int count; /**< Length of user-space structures */
drm_version_t __user *version;
} drm_list_t;

typedef struct drm_block {


int unused;
} drm_block_t;

/**
* DRM_IOCTL_CONTROL ioctl argument type.
*
* \sa drmCtlInstHandler() and drmCtlUninstHandler().
*/
typedef struct drm_control {

enum {
DRM_ADD_COMMAND,
DRM_RM_COMMAND,
DRM_INST_HANDLER,
DRM_UNINST_HANDLER
} func;
int irq;
} drm_control_t;

/**
* Type of memory to map.
*/
typedef enum drm_map_type {
_DRM_FRAME_BUFFER = 0, /**< WC (no caching), no core dump */
_DRM_REGISTERS = 1, /**< no caching, no core dump */
_DRM_SHM = 2, /**< shared, cached */
_DRM_AGP = 3, /**< AGP/GART */
_DRM_SCATTER_GATHER = 4, /**< Scatter/gather memory for PCI DMA */
_DRM_CONSISTENT = 5 /**< Consistent memory for PCI DMA */
} drm_map_type_t;

/**
* Memory mapping flags.
*/
typedef enum drm_map_flags {
_DRM_RESTRICTED = 0x01, /**< Cannot be mapped to
* user-virtual
*/
_DRM_READ_ONLY = 0x02,
_DRM_LOCKED = 0x04, /**< shared, cached, locked */
_DRM_KERNEL = 0x08, /**< kernel requires access */
_DRM_WRITE_COMBINING = 0x10, /**< use write-combining if
* available
*/
_DRM_CONTAINS_LOCK = 0x20, /**< SHM page that contains lock */
_DRM_REMOVABLE = 0x40 /**< Removable mapping */
} drm_map_flags_t;

typedef struct drm_ctx_priv_map {


unsigned int ctx_id; /**< Context requesting private mapping
*/
void *handle; /**< Handle of map */
} drm_ctx_priv_map_t;

/**
* DRM_IOCTL_GET_MAP, DRM_IOCTL_ADD_MAP and DRM_IOCTL_RM_MAP ioctls
* argument type.
*
* \sa drmAddMap().
*/
typedef struct drm_map {
unsigned long offset; /**< Requested physical address
* (0 for SAREA)
*/
unsigned long size; /**< Requested physical size (bytes) */
drm_map_type_t type; /**< Type of memory to map */
drm_map_flags_t flags; /**< Flags */
void *handle; /**< User-space: "Handle" to pass
* to mmap()
*/
/**< Kernel-space: kernel-virtual address */
int mtrr; /**< MTRR slot used */
/* Private data */

} drm_map_t;

The ioctl routine in this driver returns the module version information, maps memory regions, and so on. As a result, the user can issue the ioctl() system call with the appropriate command (for example, DRM_IOCTL_VERSION) and a pointer to a drm_version_t structure. The version information is returned in the drm_version_t structure.
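A sketch of the corresponding user-space call follows; the device node name, the buffer sizes, and the error handling are assumptions:

/* requires <fcntl.h>, <sys/ioctl.h>, <stdio.h>, <string.h>, and the drm.h shown above */
drm_version_t v;
char name[64], date[64], desc[64];
int fd;

memset(&v, 0, sizeof(v));
v.name = name;  v.name_len = sizeof(name);
v.date = date;  v.date_len = sizeof(date);
v.desc = desc;  v.desc_len = sizeof(desc);

fd = open("/dev/drm0", O_RDWR);              /* hypothetical device node */
if (ioctl(fd, DRM_IOCTL_VERSION, &v) < 0)
    perror("DRM_IOCTL_VERSION");
else
    printf("%s %d.%d.%d\n", name, v.version_major,
           v.version_minor, v.version_patchlevel);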

Hardware Declaration
The ram disk driver does not have a separate file for additional hardware
declarations. So, what follows is the tty driver header file containing various
hardware declarations.
/* tty8250.h */

#define U_THR 0x00 /* WR: transmit buffer (LCR_DLAB=0) */


#define U_DLL 0x00 /* WR: divisor latch LSB (LCR_DLAB=1) */
#define U_DLM 0x01 /* WR: divisor latch MSB (LCR_DLAB=1) */
#define U_LCR 0x03 /* WR: line control register */
#define U_LSR 0x05 /* RD: line status register */

/* line control register (read/write) */


#define LCR_DLAB 0x80 /* divisor latch access bit */

/* line status register (read/write) */


#define LSR_TRE 0x20 /* transmit register empty */

/* baud rates using 1.8432 MHZ clock */


#define CILL -1
#define C0 0
#define C9600 12
#define C19200 6
#define C38400 3
#define C57600 2
#define C115200 1

The tty8250.h header file describes the UART port registers used by the driver.
These are the registers to output characters, configure the port, and check the port
status.
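As a sketch of how such registers are typically used for polled output, the driver spins on LSR_TRE and then writes the character; the port base address and the byte-wide register access helpers shown here are assumptions, not part of the listing above:

/* wait until the transmitter can accept a character, then send it */
while (!(read_reg8(base + U_LSR) & LSR_TRE))
    ;                              /* poll the line status register */
write_reg8(base + U_THR, c);       /* write to the transmit holding register */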

Driver Source Code


#include <kernel.h>
#include <sys/types.h>
#include <sys/ioctl.h>
#include <sys/file.h>
#include <inode.h>
#include <load.h>
#include <disk.h>
#include <string.h>
#include <stdio.h>

#include <sysbrk.h>
#include <getmem.h>
#include <swait.h>
#include <cprintf.h>
#include <stdlib.h>
#include <time.h>
#include <errno.h>
#include <st.h>
#include <rdinfo.h>
#include <rk.h>
#include <strings.h>
#include <dldd.h>
#include <utils.h>
#include <sbrk.h> /* for r/wbounds() */

#include "rd.h"

struct driveinfo {
unsigned char ***top; /* pointer to the memory */
unsigned char *dskmem; /* pointer to virtually contiguous memory */
int pdskmem; /* physical address */
uint32_t bytes; /* size of ramdisk in bytes */
uint32_t blocks; /* size of ramdisk in 512 byte blocks */
int flags;
ksem_t access_sem; /* Arbitrates access to memory */
struct rdpartition partitions[MAXDEVS]; /* The partitions */
};

Statics Structure
/* rddrvr.c driver statics */

struct rdstatics
{
ksem_t openclose_sem;
struct driveinfo driveinfo[MAXDEVS];
ksem_t scratch_buffer_sem; /* Protects the scratch buffer */
void *scratch_buffer; /* Points to a 4k page */
/* rom kernel wants its memory to be */
int romkernel_supplied_fs;
};

The statics structure is defined as shown above. The variables openclose_sem and scratch_buffer_sem are semaphores used to prevent simultaneous open/close operations and to serialize access to the scratch_buffer.

install Routine
/* rddrvr.c */
/* ------------------------ Raw entry points -------------------------- */
void *rrdinstall(struct rdinfo *info)
{
struct rdstatics *statics;
struct super_block_32bitfs *super_block;
struct driveinfo *di;
int i;

int end;

if ((sizeof(struct rdstatics) > PAGESIZE)) { /* Sanity checking */


return (void *)SYSERR;
}
if (!(statics = (struct rdstatics *)get1page())) {
goto errorout3;
}
bzero((char *)statics, PAGESIZE);
if (!(statics->scratch_buffer = get1page())) {
goto errorout2;
}
bzero((char *)statics->scratch_buffer, PAGESIZE);
for (i = 0; i < MAXDEVS; i++)
pi_init(&statics->driveinfo[i].access_sem);
/*
Check to see if this driver is installed as part
of a rom kernel.
If romable_kernel is set, point the ramdisk to
the memory set aside for it. This memory that is set
aside is virtually contiguous and must be accessed
with different routines.
*/
statics->romkernel_supplied_fs = 0;
/* When a kdi ramdisk is present, ID 0 points to
where it is in memory
*/
if (!rk_rootfs_configured && ROMkp->lmexe.rootfs) {
/* Only do this the first time! */
rk_rootfs_configured = 1;
di = &statics->driveinfo[0];
di->dskmem = (unsigned char *)((ROMkp->rk_ramdisk));
if (di->dskmem) {
/*
A rom kernel is present, and has memory
allocated for a filesystem, the filesystem
will be setup on ramdisk ID 0.
If no memory is allocated for the rom
kernel filesystem then ramdisk ID 0 will
not be claimed.
*/
statics->romkernel_supplied_fs = 1;
super_block = (struct super_block_32bitfs *)(di->
dskmem + 512);
di->blocks = normal_long(super_block->num_blocks);
di->bytes = di->blocks * 512;
di->flags |= ALLOCATED_MEMORY | DO_NOT_FREE |
VIRTUALLY_CONTIGUOUS;
di->rw = rw_vc;
}
}
/* Allocate the memory for the rest of the ramdisk IDs. */

for (i = (statics->romkernel_supplied_fs) ? 1 : 0; i < MAXDEVS; i++) {
if (info->idinfo[i].bytes) {
if (info->idinfo[i].physical_memory_address) {
if (setup_physical_memory(&statics->driveinfo[i],
        info->idinfo[i].physical_memory_address,
        info->idinfo[i].bytes) == SYSERR)
{

/*
Had an error so this device
will not be setup.
Keep processing entries.
*/
}
}
else { /* Just assign memory to the ID */
di = &statics->driveinfo[i];
if (get_memory(di, info->idinfo[i].bytes &
~(PAGESIZE-1), 0) ==
SYSERR)
{
goto errorout1;
}
}
}

}
pi_init(&statics->openclose_sem);

return statics;
errorout1:
/* Free all the memory allocated from the ramdisk IDs. */
end = (statics->romkernel_supplied_fs) ? 1 : 0;
for (; i >= end; i--) {
free_memory(&statics->driveinfo[i]);
}
errorout2:
free1page((char *)statics);
errorout3:
seterr(ENOMEM);
return (void *)SYSERR;
}

The install routine allocates the statics structure and initializes it. It returns SYSERR with errno set to ENOMEM if the statics structure allocation fails. Otherwise, it allocates memory for the ram disks and initializes the semaphores.

uninstall Routine
int rrduninstall(struct rdstatics *statics)
{
int i;

for (i = 0; i < MAXDEVS; i++) {


free_memory(&statics->driveinfo[i]);
}
free1page(statics);
return OK;
}

The uninstall routine deallocates memory allocated to the statics structure and to
the driveinfo[] array.

open Routine
int rrdopen(struct rdstatics *statics, int dev, struct file *f)
{
return _rdopen(statics, f, RAW_ENTRY_POINT);
}

The rrdopen() function calls an internal function _rdopen(), the latter providing the actual implementation of the open functionality. The _rdopen() function can be found in “Auxiliary Routines”.

close Routine
int rrdclose(struct rdstatics *statics, struct file *f)
{
return _rdclose(statics, f->dev, RAW_ENTRY_POINT);
}

The rrdclose() function calls an internal function _rdclose(), the latter providing the actual implementation of the close functionality. The _rdclose() function can be found in “Auxiliary Routines”.

Auxiliary Routines
int _rdopen(struct rdstatics *statics, struct file *f, int raw)
{
struct driveinfo *di;
int partition;
struct partition *p;
struct bootblock_x86 *bbx86;
di = &statics->driveinfo[f->dev & ID_MASK];
partition = (f->dev & PARTITION_MASK) >> 4;

swait(&statics->openclose_sem, SEM_SIGIGNORE);
swait(&di->access_sem, SEM_SIGIGNORE);
if (!partition) { /* Opening the entire device */
/*
It is possible that memory may not be allocated to
this device yet. This is OK. Just don't allow a
partition other than 0 to be opened with no memory.
*/
di->partitions[0].sblock = 0;
di->partitions[0].bytes = di->bytes;
}
else {
if (!(di->flags & ALLOCATED_MEMORY)) {
seterr(ENODEV);
goto errorout;
}
if (partition < 1 || partition > 4) {
seterr(ENODEV);
goto errorout;
}
bbx86 = statics->scratch_buffer;
di->rw(di, statics->scratch_buffer, 0, 512, B_READ);

if (bbx86->boot_magic[0] != 0x55 || bbx86->boot_magic[1] != 0xaa)
{
seterr(ENODEV);
goto errorout;
}
/* 0 is the raw partition */
p = &bbx86->partitions[partition - 1];
di->partitions[partition].sblock = con((char *)&p->first_block);
di->partitions[partition].bytes = con((char *)&p->size) * 512;
}
di->partitions[partition].open |= raw ? 1 : 2;
ssignal(&di->access_sem);
ssignal(&statics->openclose_sem);
return OK;
errorout:
ssignal(&di->access_sem);
ssignal(&statics->openclose_sem);
return SYSERR;
}

The _rdopen() routine first acquires the openclose_sem and access_sem semaphores. Then it checks for partitions on the ramdisk device. When the ramdisk is partitioned, the partition and magic numbers are verified. If any of them are invalid, _rdopen() returns an error.


int _rdclose(struct rdstatics *statics, int dev, int raw)
{
struct driveinfo *di;
int partition;

di = &statics->driveinfo[dev & ID_MASK];


partition = (dev & PARTITION_MASK) >> 4;
swait(&statics->openclose_sem, SEM_SIGIGNORE);
swait(&di->access_sem, SEM_SIGIGNORE);
di->partitions[partition].open &= raw ? ~1 : ~2;
if (di->flags & FREE_MEMORY_ON_FINAL_CLOSE) {
ssignal(&di->access_sem);
free_memory(di);
}
else
ssignal(&di->access_sem);
ssignal(&statics->openclose_sem);
return OK;
}

The _rdclose() function acquires the openclose_sem and access_sem
semaphores, clears the open flag for the partition, and frees the device memory
if the FREE_MEMORY_ON_FINAL_CLOSE flag is set.
int get_memory(struct driveinfo *di, int size, int virtually_contiguous)
{
int i;
int numpages;
int mbptr = 0;
unsigned char **mid = 0; /* To shut up stupid gcc warnings */

swait(&di->access_sem, SEM_SIGIGNORE);

/* if a re-start, the memory was removed, go get new */
if ((di->flags & ALLOCATED_MEMORY) && (!VmRestart)) {
seterr(EEXIST); /* memory already exists! */
goto errorout;
}
if (size < PAGESIZE) {
seterr(EINVAL); /* Too little memory */
goto errorout;
}
size = (size & ~(PAGESIZE-1)); /* round down to nearest 4k page */
if (virtually_contiguous) {
di->flags |= DID_ALLOC_CMEM;
if (!(di->dskmem = (unsigned char *)alloc_cmem(size))) {
di->flags &= ~DID_ALLOC_CMEM;
if (!(di->dskmem = (unsigned char *)sysbrk(size))) {
seterr(ENOMEM);
goto errorout;
}
}
di->flags |= ALLOCATED_MEMORY | VIRTUALLY_CONTIGUOUS;
di->rw = rw_vc;
}
else {
numpages = size / PAGESIZE;
if (!(di->top = (unsigned char ***)get1page())) {
seterr(ENOMEM);
goto errorout;
}
/* Successfully allocated memory */
di->flags |= ALLOCATED_MEMORY;
bzero((char *)di->top, PAGESIZE);
for (i = 0; i < numpages; i++) {
if (!(i & 1023)) {
if (!(di->top[mbptr] = (unsigned char **)get1page()))
goto out_of_memory;
mid = di->top[mbptr];
bzero((char *)mid, PAGESIZE);
mbptr++;
}
if (!(mid[i & 1023] = (unsigned char *)get1page()))
{
goto out_of_memory;
}
bzero((char *)mid[i & 1023], PAGESIZE);
}
di->rw = rw_vd;
}
di->partitions[0].sblock = 0;
di->partitions[0].bytes = size;
di->bytes = size; /* update driveinfo fields */
/* convert bytes to # of 512 byte blocks */
di->blocks = size >> 9;
ssignal(&di->access_sem);
return size;
out_of_memory:
ssignal(&di->access_sem);
free_memory(di);
seterr(ENOMEM);
return SYSERR;
errorout:
ssignal(&di->access_sem);
return SYSERR;
}


The get_memory function allocates memory to the ram disk. It locks the
access_sem semaphore at the beginning to serialize concurrent calls. It first
checks whether memory has already been allocated to the ram disk and whether the
requested size is less than one page. It then allocates either a virtually contiguous
region or a set of individual pages indexed through a two-level page table,
depending on whether virtually contiguous memory was requested.
The get_memory function returns the number of bytes allocated on success, or
SYSERR on error.
int free_memory(struct driveinfo *di)
{

unsigned char **mid;
int i, j;

swait(&di->access_sem, SEM_SIGIGNORE);
if (!(di->flags & ALLOCATED_MEMORY)) { /* No memory allocated */
seterr(ENOSPC); /* No space left to remove */
goto errorout;
}
if (di->flags & DO_NOT_FREE) { /* Must not free this kind of */
seterr(EINVAL); /* memory */
goto errorout;
}
if (di->flags & VIRTUALLY_CONTIGUOUS) {
if (di->flags & DID_ALLOC_CMEM)
free_cmem((char *)di->dskmem, di->bytes);
else
sysfree((char *)di->dskmem, di->bytes);
di->flags &= (~DID_ALLOC_CMEM & ~VIRTUALLY_CONTIGUOUS);
}
else {
for (i = 0; i < 1024; i++) {
if (di->top[i]) {
mid = di->top[i];
for (j = 0; j < 1024; j++) {
if (mid[j]) {
free1page((char *)mid[j]);
}
}
free1page((char *)di->top[i]);
}
}
free1page((char *) di->top);
}
di->flags &= (~ALLOCATED_MEMORY & ~FREE_MEMORY_ON_FINAL_CLOSE);
di->bytes = 0;
di->blocks = 0;
di->rw = rw_null;
ssignal(&di->access_sem);
return OK;
errorout:
ssignal(&di->access_sem);
return SYSERR;
}


The free_memory function locks the access_sem semaphore at the beginning
to serialize concurrent calls. The function then checks whether memory was
allocated to the ram disk and, if so, whether it may be freed. If no memory has
been allocated, or the memory is marked DO_NOT_FREE, an error is returned.
Otherwise, the function frees the memory.
int setup_physical_memory(struct driveinfo *di, unsigned long
physical_address,
int bytes)
{
if (bytes < 512) {
seterr(EINVAL);
return SYSERR;
}
di->bytes = bytes & ~511; /* round down */
/* Get # 512 byte blocks from bytes */
di->blocks = di->bytes >> 9;
di->dskmem = (unsigned char *) permap(physical_address, di->bytes);
di->pdskmem = physical_address;
di->flags |= ALLOCATED_MEMORY | DO_NOT_FREE | VIRTUALLY_CONTIGUOUS;
di->rw = rw_vc; /* Use virtually contiguous routines */
return OK;
}

The setup_physical_memory function establishes a virtual mapping starting
at physical_address for the requested number of bytes. If the number of bytes
requested is less than 512, the function returns an error.
int rdrw(struct rdstatics *statics, struct file *f, char *buff, int count,
int iotype)
{
struct driveinfo *di;
int partition;

partition = (f->dev & PARTITION_MASK) >> 4;
di = &statics->driveinfo[f->dev & ID_MASK];
/*
f->position and count will be no more than 0x7fffffff each
if this check fails to find an error. When added below
they cannot overflow an unsigned int.
*/
if (f->position > di->partitions[partition].bytes)
{
seterr(EINVAL); /* Out of range request */
return SYSERR;
}
if (f->position == di->bytes) {
if (iotype == B_READ)
return 0;
/* at EOF, return EOF POSIX */
else {
seterr(ENOSPC); /* at EOF, and trying to write */
return SYSERR; /* return ENOSPC, POSIX */
}
}
/*
Unsigned int overflow is not possible due to checking above.
*/
if (f->position + count > di->partitions[partition].bytes)
{
count = di->partitions[partition].bytes - f->position;
}
di->rw(di, buff, (int)f->position, count, iotype);
/* di->rw points to rw_vc or rw_vd */
f->position += count; /* update position */
return count;
}

The rdrw() function provides the common implementation for the read and write
functionality. It calls the internal rw_vc() and rw_vd() functions, which
implement read and write operations on virtually contiguous and virtually
discontiguous regions of memory, respectively.
void rw_vc(struct driveinfo *di, char *buffer, int position, int count,
int iotype)
{
if (iotype & B_READ)
fcopy(buffer, (char *)&di->dskmem[position], count);
else
fcopy((char *)&di->dskmem[position], buffer, count);
}

The rw_vc() function implements read and write operations on a virtually
contiguous region of memory.
void rw_vd(struct driveinfo *di, char *buffer, int position, int count,
int iotype)
{
int num_bytes, ppos, partial_bytes, xfr_byte_cnt, offset;
unsigned char **mid;
unsigned char *page;

num_bytes = count;
if ((ppos = position & (PAGESIZE-1))) { /* 1st partial */
partial_bytes = PAGESIZE - ppos;
xfr_byte_cnt = num_bytes > partial_bytes ?
partial_bytes : num_bytes;
mid = di->top[position >> 22];
page = mid[(position & 0x3fffff)>>12];
offset = position & (PAGESIZE-1);
if (iotype == B_READ)
fcopy(buffer, (char *)&page[offset],
xfr_byte_cnt);
else
fcopy((char *)&page[offset], buffer, xfr_byte_cnt);
buffer += xfr_byte_cnt;
position += xfr_byte_cnt;
num_bytes -= xfr_byte_cnt;
}
while (num_bytes >= PAGESIZE) { /* Big chunks */
xfr_byte_cnt = num_bytes > PAGESIZE ? PAGESIZE : num_bytes;
mid = di->top[position >> 22];
page = mid[(position & 0x3fffff)>>12];
if (iotype == B_READ)
fcopy(buffer, (char *)page, xfr_byte_cnt);
else
fcopy((char *)page, buffer, xfr_byte_cnt);
buffer += xfr_byte_cnt;
position += xfr_byte_cnt;
num_bytes -= xfr_byte_cnt;
}
if (num_bytes) { /* Last partial */
xfr_byte_cnt = num_bytes;
mid = di->top[position >> 22];
page = mid[(position & 0x3fffff)>>12];
offset = position & (PAGESIZE - 1);
if (iotype == B_READ)
fcopy(buffer, (char *)&page[offset], xfr_byte_cnt);
else
fcopy((char *)&page[offset], buffer, xfr_byte_cnt);
position += xfr_byte_cnt;
}
}

The rw_vd() function implements read and write operations on a virtually
discontiguous region of memory.
int rdioctl(struct rdstatics *statics, int dev_num, int function,
long *data)
{
struct driveinfo *di;
struct disk_geometry *dg;
struct rdinformation *ramdisk_info;
struct rw_absolute_block *rwab;
struct memory_pointer *mp;
int i;

di = &statics->driveinfo[dev_num & ID_MASK];
switch(function) {
/* return # of 512 byte blocks for device */
case BLCK_CAPACITY:
if (!wbounds((kaddr_t)data, sizeof(int))) {
seterr(EFAULT);
return SYSERR;
}
/*
if (!(di->flags & ALLOCATED_MEMORY)) {
seterr(ENODEV);
return SYSERR;
}
*/
*(int *)data = di->partitions[PARTITION_NUM(dev_num)].bytes >> 9;
break;
case BLCK_SIZE:
if (!wbounds((kaddr_t)data, sizeof(int))) {
seterr(EFAULT);
return SYSERR;
}
/* The ramdisk driver uses a 512 */
*(int *)data = 512; /* byte physical block size */
break;
case DISK_GEOMETRY: /* return ramdisk geometry data */
if (!wbounds((kaddr_t)data, sizeof(struct disk_geometry))) {
seterr(EFAULT);
return SYSERR;
}
/*
Disk geometry is used by mkpart to
determine the number of
blocks that will make up a cylinder.
Disk space is then allocated in units
of cylinders. A cylinder is max heads
* max sectors. The following will set
the cylinder size to 1 block.
*/
dg = (struct disk_geometry *)data;
dg->heads = 1;
dg->sectors = 1;
dg->cylinders = dg->heads * dg->sectors;
break;
case ALLOCATE_MEMORY: /* Assign memory to a ramdisk ID */
if (!rbounds((kaddr_t)data, sizeof(struct memory_pointer))) {
seterr(EFAULT);
return SYSERR;
}
mp = (struct memory_pointer *)data;
/* physical memory pointer invalid */
if (!mp->physical_address) {
return get_memory(di, mp->bytes, mp->virtually_contiguous);
}
else {
/*
User has passed in a physical
memory address and a count.
point the ramdisk ID to that
memory.
*/
swait(&di->access_sem, SEM_SIGIGNORE);
if (di->flags & ALLOCATED_MEMORY) {
/* memory already exists! */
seterr(EEXIST);
ssignal(&di->access_sem);
return SYSERR;
}
if (setup_physical_memory(di, mp->physical_address, mp->bytes) == SYSERR)
{
seterr(EINVAL);
ssignal(&di->access_sem);
return SYSERR;
}
ssignal(&di->access_sem);
}
/*
User passed a pointer to
physical memory and a number of bytes.
*/
break;
case FREE_MEMORY: /* Take memory from a ramdisk ID */
swait(&di->access_sem, SEM_SIGIGNORE);
di->flags |= FREE_MEMORY_ON_FINAL_CLOSE;
ssignal(&di->access_sem);
break;
/* Return contents of the statics structure */
case INFORMATION:
if (!wbounds((kaddr_t)data, sizeof(struct rdinformation))) {
seterr(EFAULT);
return SYSERR;
}
ramdisk_info = (struct rdinformation *)data;
for (i = 0; i < MAXDEVS; i++) {
ramdisk_info->id[i].bytes = statics->driveinfo[i].bytes;
ramdisk_info->id[i].flags = statics->driveinfo[i].flags;
ramdisk_info->id[i].dskmem = statics->driveinfo[i].dskmem;
}
return OK;
break;
case BLCK_READ: /* 64 bit access routines for mkpart */
/* These will go away once 64 bit fs is done */
case BLCK_WRITE:
rwab = (struct rw_absolute_block *)data;
if (!rbounds((kaddr_t)data,
sizeof(struct rw_absolute_block)) ||
!wbounds((kaddr_t)rwab->buffer, 512) ||
!rbounds((kaddr_t)rwab->buffer, 512))
{
pseterr(EFAULT);
return SYSERR;
}
di->rw(di, rwab->buffer, rwab->block << 9, 512,
function == BLCK_READ ? B_READ : 0);
break;

/* The kernel uses this to obtain the physical
   address of the resident text in a kdi or file
   system image.
*/
/* Translate offset to physical address */
case BLCK_PHYS_ADDR:
/* This ioctl may be called from the kernel
   from load_(type) as the kernel has this data
   on the supervisor stack, don't bound it...
*/
if (!(currtptr->flag & KERNADDR)) {
if ( !rbounds((kaddr_t) data, sizeof(int))
    && !wbounds((kaddr_t) data, sizeof(int))) {
pseterr(EFAULT);
return(SYSERR);
}
}
if ((dev_num & ID_MASK) == 0 && statics->romkernel_supplied_fs) {
*data = dmz(*data);
/* Original root device */
} else {
/* Additional file system */
*data = ((di->pdskmem + *data) >> PAGESHIFT);
}
break;
default:
pseterr(EINVAL);
return SYSERR;
}
return OK;
}

The rdioctl() routine dispatches on the command issued by the user. For
example, if the command is ALLOCATE_MEMORY, the rbounds() routine first checks
that the user’s data pointer is valid; if it is not, the EFAULT error is
returned. Then, depending on the memory_pointer structure that the ioctl
argument data points to, the driver either allocates the specified number of bytes
or maps the supplied physical memory into the ramdisk.
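
A user application drives these commands through the standard ioctl() interface. The
following is a minimal sketch of asking the driver to allocate 1 MB to a ramdisk unit; the
device node name /dev/rrd0 and the header name rdioctl.h (assumed to define
ALLOCATE_MEMORY and struct memory_pointer) are illustrative assumptions, not part of
the driver source shown above.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include "rdioctl.h" /* assumed header defining ALLOCATE_MEMORY, struct memory_pointer */

int main(void)
{
    struct memory_pointer mp;
    int fd;

    fd = open("/dev/rrd0", O_RDWR); /* assumed raw ramdisk device node */
    if (fd < 0) {
        perror("open");
        return 1;
    }
    mp.physical_address = 0;     /* 0 asks the driver to allocate memory itself */
    mp.bytes = 1024 * 1024;      /* request 1 MB */
    mp.virtually_contiguous = 1; /* prefer a virtually contiguous region */
    if (ioctl(fd, ALLOCATE_MEMORY, &mp) < 0) {
        perror("ioctl ALLOCATE_MEMORY");
        close(fd);
        return 1;
    }
    close(fd);
    return 0;
}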

read Routine
int rrdread(struct rdstatics *statics, struct file *f, char *buff,
int count)
{
return rdrw(statics, f, buff, count, B_READ);
}

The rrdread() routine shown above calls the internal function rdrw(), which
implements the read functionality. The rdrw() routine in turn calls the internal
rw_vc() and rw_vd() functions, which implement read and write operations on
virtually contiguous and virtually discontiguous regions of memory, respectively.

write Routine
int rrdwrite(struct rdstatics *statics, struct file *f, char *buff,
int count)
{
return rdrw(statics, f, buff, count, !B_READ);
}

The rrdwrite() routine shown above calls the internal function rdrw(), which
implements the write functionality. As for the read entry point, rdrw() in turn
calls the internal rw_vc() and rw_vd() functions, which implement read and
write operations on virtually contiguous and virtually discontiguous regions of
memory, respectively.


select Routine
int rrdselect(struct rdstatics *statics, struct file *f, int which,
struct sel *ffs)
{
pseterr(ENOSYS);
return SYSERR; /* Not supported */
}

The rrdselect() function simply sets errno to ENOSYS and returns an error
to indicate that the select entry point is not implemented in the ramdisk driver.

ioctl Routine
int rrdioctl(struct rdstatics *statics, struct file *f, int function,
long *data)
{
return rdioctl(statics, f->dev, function, data);
}

The rrdioctl() function calls the internal rdioctl() routine, which provides
the actual implementation of the ioctl functionality.

Dynamic Installation
#ifndef NODL

#if !defined(__powerpc__)
static
#endif

struct dldd __unused entry_points = {
rrdopen, rrdclose, rrdread, rrdwrite,
rrdselect, rrdioctl, rrdinstall, rrduninstall,
0
};

#endif

The above structure defines the entry points of the sample driver, which are
required for dynamic installation.



CHAPTER 7 Interrupt Handling

Interrupts are external hardware exception conditions which are delivered to the
processor to indicate the occurrence of a specific event. Some of the things for
which they are useful are listed below:
• Indication of the completion of an operation.
For example, an interrupt could be generated indicating the completion of
a Direct Memory Access (DMA) transfer. The device driver would give a
command to the DMA controller to transfer a block of data and set the
vector for the interrupt generated by the controller to a specific driver
function. This, in turn, would signal a semaphore to wake up any system
or user threads waiting on the completion of the DMA transfer. (A minimal
sketch of such a completion handler follows this list.)
• Data availability.
The availability of data at a port is often indicated by an interrupt. A tty
driver receives an interrupt when a character is ready to be read from the
port.
• Device ready for a command.
A printer generates an interrupt when it has printed a character and is
ready to print the next character.
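
The following fragment sketches the DMA-completion pattern from the first item above. The
dma_statics structure, its dma_sem field, and the dma_start() helper are hypothetical
names used only for illustration; swait(), ssignal(), and SEM_SIGIGNORE are the LynxOS
primitives used throughout this manual.

struct dma_statics {
    int dma_sem; /* event semaphore, initialized to 0 at install time */
};

/* Interrupt handler: vectored to by the DMA controller interrupt */
static void dma_int(struct dma_statics *s)
{
    ssignal(&s->dma_sem); /* wake up the thread waiting for completion */
}

/* Called from a top-half entry point to perform one synchronous transfer */
static int dma_transfer(struct dma_statics *s, char *buff, int count)
{
    dma_start(buff, count);            /* hypothetical helper that programs the controller */
    swait(&s->dma_sem, SEM_SIGIGNORE); /* block until dma_int() signals completion */
    return count;
}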

LynxOS Interrupt Handlers


Interrupt handlers in LynxOS are specified in the install or open entry points
and are cleared in the uninstall or close entry points. These interrupt
handlers run before any other kernel or application processing is completed.


Since interrupt handlers have the highest run priority, the minimization of the
length of each interrupt service routine is paramount. This leads to better interrupt
response time and task response time.
Interrupt handlers in LynxOS are set and cleared using the functions
iointset() and iointclr() or, if the Device Resource Manager is
used, drm_register_isr() and drm_unregister_isr(). See the man
pages for a complete description of these functions. Interrupt handlers and other
driver calls cannot directly use application virtual addresses. The application
virtual addresses must be translated to kernel virtual addresses before they can be
accessed by driver routines.

General Format
The general format of an interrupt-based device driver under LynxOS:
• The general driver entry points are referred to as the top-half of the
device driver.
• The interrupt handler is known as the bottom-half of the driver.

Use of Queues
Queues are often used to communicate between the top and bottom halves.
Examples of the use of queues to communicate between entry points and interrupt
handlers follow:
• For communication from the write() entry point to the interrupt
handler.
A counting semaphore, initialized to the size of the queue, tracks the free
space in the queue. The write() entry point calls swait() and, when
space is available in the queue, enqueues a character. The interrupt
handler subsequently dequeues the character and signals the semaphore
using ssignal(). (A sketch of such a transmit queue follows this list.)
• For communication from the interrupt handler to the read() entry
point.


A counting semaphore tracks data availability in the queue. The read()
entry point calls swait(), which blocks until data is available in the
queue. If queue space is available, the interrupt handler posts the
received data to the queue and signals the semaphore to indicate that data
is available. If queue space is unavailable, an error flag is set.
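
The following is a minimal sketch of the write-side queue described in the first item above,
assuming a simple fixed-size ring buffer. The txq structure, its field names, and QSIZE are
hypothetical; swait(), ssignal(), and SEM_SIGIGNORE are the LynxOS primitives already used
in this manual, and disable()/restore() are written in the same no-argument form as the
skeleton in the next section (the actual macros on a given target may take a saved-status
argument).

#define QSIZE 256 /* hypothetical transmit queue size */

struct txq {
    char buf[QSIZE];
    int head;     /* next slot to enqueue into */
    int tail;     /* next slot to dequeue from */
    int count;    /* number of characters currently queued */
    int free_sem; /* counting semaphore, initialized to QSIZE */
};

/* Called from the write() entry point for each character */
static void tx_enqueue(struct txq *q, char c)
{
    swait(&q->free_sem, SEM_SIGIGNORE); /* block until a slot is free */
    disable();                          /* protect the indices against the interrupt handler */
    q->buf[q->head] = c;
    q->head = (q->head + 1) % QSIZE;
    q->count++;
    restore();
}

/* Called from the interrupt handler when the transmitter is ready */
static int tx_dequeue(struct txq *q, char *c)
{
    if (q->count == 0)
        return 0; /* queue empty, nothing to send */
    *c = q->buf[q->tail];
    q->tail = (q->tail + 1) % QSIZE;
    q->count--;
    ssignal(&q->free_sem); /* one more free slot; may wake a writer */
    return 1;
}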

Typical Structure of Interrupt-based Driver


device_read()
{
swait(&receive_data_available,SEM_SIGIGNORE);
disable();
dequeue_receive_data();
restore();
}
device_write()
{
disable();
swait(&space_on_queue_available,SEM_SIGIGNORE);
enqueue_send_data();
if (no_interrupt_pending)
output_data();
restore();
}
interrupt_handler()
{
if (data_received)
{
enqueue_receive_data();
ssignal(&receive_data_available);
}
else
{
if (dequeue_send_data())
output_data();
else no_interrupt_pending = 1;
}
}

Example
What follows is an example of the interrupt-based LynxOS driver for the 16550-
compatible UART device (tty driver) that adheres to the format specified in
Chapter 6, “Elementary Device Drivers.” Unlike Chapter 6, this chapter does not
present all the routines the driver source code contains. Only those data structures
and functions that are essential for understanding the driver internals are
considered.


tty Manager
The LynxOS tty driver employs a separate kernel subsystem called the tty manager
that contains routines to handle such tasks as character queue management,
general tty I/O, opening and closing the tty device, and more.
This section contains a brief description of the tty manager routines that are
essential for understanding the internals of the LynxOS interrupt-based tty driver.
The implementation details of these routines are omitted for brevity and
clarity. Refer to the contents of the file ttymgr.c for
further details.

The tmgr_install() Function


The prototype of the tmgr_install() function is as follows:
int tmgr_install(struct ttystatics *s, struct old_sgttyb *sg,
int type, void (*transmit_enable)(), char *arg)

After clearing out the tty statics structure, the tmgr_install() routine
initializes its fields according to the values of the other arguments with which
the function is called.

The tmgr_write() Function


The prototype of the tmgr_write() routine is as follows:
int tmgr_write(struct ttystatics *s, struct file *f,
char *buff, int count)

This routine handles the tasks pertaining to writing to the serial port. Among
other things, it manages the write queue and handles the exceptional conditions
that may occur during write operations.

The tmgr_read() Function


The prototype of the tmgr_read() function is as follows:
int tmgr_read(struct ttystatics *s, struct file *f,
char *buff, int count)

This kernel tty manager function implements the tty read functionality. It is
called from the read entry point of the LynxOS interrupt-based tty driver and
also carries out the read queue management.

The tmgr_ex() Function


The prototype of the tmgr_ex() function is as follows:
int tmgr_ex(struct ttystatics *s, int com, int c)

The function is an exception dispatcher in the kernel tty manager. This routine
manages various exceptional serial line conditions, such as modem hang-up, parity
error, lost carrier, and the like. The exception code is passed to this function
via the com argument. The third argument, the integer value c, identifies the
character that was received when the error was encountered.
Depending on the value of the com variable (the exception code), the tty manager
exception dispatcher decides how to proceed with the particular serial line error.

Device Information Definition


/* ttyinfo.h */

struct tty_uinfo {
long location;
long vector;
struct old_sgttyb sg;
};
struct old_sgttyb {
/* This used to be the sgttyb structure */
char sg_ispeed; /* input speed */
char sg_ospeed; /* output speed */
char sg_erase; /* erase char (default "#") */
char sg_2erase; /* 2nd erase character */
char sg_kill; /* kill char (default "@") */
int sg_flags; /* various modes and delays */
/* This used to be the tchars structure */
char t_intrc; /* interrupt char (default "^?") */
char t_quitc; /* quit char (default "^\") */
char t_startc; /* start output (default "^Q") */
char t_stopc; /* stop output (default "^S") */
char t_eofc; /* end-of-file char (default "^D") */
char t_brkc; /* input delimiter (like nl), (default -1) */
/* This used to be the local mode word struct */
long l_localmode; /* local mode word */
/* This used to be the ltchars structure, it defines interrupt chars */
char t_suspc; /* stop process signal (default "^Z") */
char t_dsuspc; /* delayed stop process signal (default "^Y") */
char t_rprntc; /* reprint line (default "^R") */
char t_flushc; /* flush output (toggles) (default "^O") */
char t_werasc; /* word erase (default "^W") */


char t_lnextc; /* literal next character (default "^V") */


};

The device information definition structure has three fields associated with the
interrupt-based driver. The location variable is essentially an I/O address,
which is used, for instance, by the install entry point of the tty driver to check
the presence of a UART. vector is the interrupt vector number, while sg is the
structure used by the tty device driver to define the modes and option settings for
the serial port.

Device Information Declaration


/* ttyinfo.c */

#include "ttyinfo.h"

struct tty_uinfo com1 = {
IOBASE + 0x3F8, /* IO address */
PWBIT_TTY1, /* Interrupt vector number */
{
B9600, B9600, /* input and output speed */
'H' - '@', /* erase char */
-1, /* 2nd erase char */
'U' - '@', /* kill char */
ECHO | CRMOD, /* mode */
'C' - '@', /* interrupt character */
'\\' - '@', /* quit char */
'Q' - '@', /* start char */
'S' - '@', /* stop char */
'D' - '@', /* EOF */
-1, /* brk */
(LCRTBS | LCRTERA | LCRTKIL | LCTLECH),
/* local mode word */
'Z' - '@', /* process stop */
'Y' - '@', /* delayed stop */
'R' - '@', /* reprint line */
'O' - '@', /* flush output */
'W' - '@', /* word erase */
'V' - '@' /* literal next char */
}
};

The I/O address for the first serial port COM1 is set to the value
IOBASE + 0x3F8 in this example. Note that the IOBASE region is where the
memory-mapped device registers are mapped. It is assumed, therefore, that the IOBASE
area is already mapped in at the BSP level. The IOBASE region includes ISA-IO,
PCI-IO, PCI-MEM, and graphics memory. The vector field is set to the
PWBIT_TTY1 constant, the latter being defined at the BSP level. The code snippet
above fills the sg structure with the default values that are specific for the VMPC
serial port.


For the second serial port, COM2, the tty_uinfo structure is initialized in an
analogous manner in the same source file.
The following are the possible values for the local-mode-word l_localmode in
the sg structure.
#define LCRTBS 000001 /* Backspace on erase rather than echoing erase */
#define LPRTERA 000002 /* Printing terminal erase mode */
#define LCRTERA 000004 /* Erase char echoes as backspace-spc-backspace */
#define LCTLOUT 000010 /* Almost all control chars prefixed with ^ */
#define LMDMBUF 000020 /* Stop/start output when carrier drops */
#define LLITOUT 000040 /* Suppress output translations */
#define LTOSTOP 000100 /* Send SIGTTOU for background output */
#define LFLUSHO 000200 /* Output is being flushed */
#define LNOHANG 000400 /* Don't send hangup when carrier drops */
#define LTRANS 001000 /* Real transparency mode */
#define LCRTKIL 002000 /* Bkspc-spc-bkspc erase entire line on line kill */
#define LNOMDM 004000 /* Ignore the modem control signals */
#define LCTLECH 010000 /* Echo input ctrl chars with a "^" prefixed */
#define LPENDIN 020000 /* Retype pending input at next read or input char */
#define LDECCTQ 040000 /* Only ^Q restarts output after ^S like DEC */

tty Manager Statics Structure


/* ttymgr.h -- the tty manager statics structure */

struct ttystatics {
int flags;
struct old_sgttyb sg;
struct termios tm;
long xflags1;
struct queue rxqueue;
struct queue txqueue;
ksem_t hsem;
char t_line; /* indication for SLIP discipline */
};

This structure is a statics structure for the kernel-based tty manager. It serves a
purpose similar to that of the driver statics structure. In other words, it holds data
that is shared between the function of the tty manager and those of the tty driver
itself.

NOTE: The name chosen in the LynxOS source code for the structure tag seems to
be somewhat misleading in this case. It is the tty_u structure (refer to “Statics
Structure” on page 102) that actually is the tty driver statics structure.

For brevity, only the fields that are used by the code examples in this
chapter are shown.
Refer to the struct ttystatics definition in the header file ttymgr.h for


complete information about the other fields in the statics structure of the
LynxOS tty driver.
The rxqueue and txqueue are receive and transmit queues, respectively. The
close_sem is a semaphore that is used in the serial line closing code for handling
the system timeouts. The trans field is a pointer to the function, which enables
transmission, while transarg is the argument to the transmitter enabling
function, with which the latter is called at various places in the kernel tty manager.
The hsem variable is a kernel semaphore used in the LynxOS interrupt-based tty
driver to implement critical sections at various places of the tty driver code in a
manner explained in “Using Kernel Semaphores for Mutual Exclusion” in
Chapter 4, “Synchronization.”
The xflags1 field holds the information about various facets of the UART
channel state: whether the parity check is enabled, whether the serial line ignores a
break, and so on. Refer to the file ttymgr.h for a complete list of values this
field can assume.
The meaning of the sg field in the tty manager statics structure is the same as that
of the field with the same name in the tty driver info structure. Refer to “Device
Information Definition” on page 99 and “Device Information Declaration” on
page 100 for more information.
The flags field is another integer whose bits denote various properties of the
serial line. This variable records whether, for example, transmission is disabled
on the line or whether the line is active at all. Refer to the file ttymgr.h for a
complete list of values this field can assume.
The tm field (a struct termios) in the tty manager statics structure is the standard
UNIX-like general terminal interface that is provided to control asynchronous
communications ports. The fields of this structure describe the input, output, and
control modes of the communication port, among other things.

Statics Structure
/* ttydrvr.c -- the tty driver statics structure */

#include <ttymgr.h> /* for struct ttystatics, etc. */

struct tty_u {
long vector;
__port_addr_t uartp;
int cts; /* Clear To Send */
int int_link_id; /* Interrupts can only be shared on
PPC sitka boards */
int init;
int dcd_check;
struct ttystatics channel;
};

The tty_u structure plays the role of the driver statics structure in the
LynxOS tty driver. In the tty_u structure, in addition to the ttystatics
structure, some other tty channel-specific data is placed, such as the port address; a
couple of semaphores used in the driver auxiliary routines; the Clear-to-Send state
used in hardware handshake code; the dcd_check field, which is nonzero
whenever the driver is doing a hardware handshake; the int_link_id field
needed for the interrupt sharing code; and the boolean init field that is set to true
if the hardware has already been initialized by the device driver.
The LynxOS kernel allows users to share one interrupt vector with more than one
interrupt handler. The ioint_link() kernel routine is used for this purpose.
This routine should be called with the integer value returned from the
corresponding call to the iointset() kernel function. It is this latter return
value that is stored in the int_link_id field of the tty driver statics structure for
further use from the interrupt service routine (refer to “Interrupt Handler” on
page 107).
Note that when extending or modifying the tty driver, the tty_u structure is
a convenient place to put data specific to a driver instance. Refer to
Chapter 8, “Kernel Threads and Priority
Tracking” for an extended version of the UART channel structure.

install Entry Point


struct tty_u *
ttyinstall(struct tty_uinfo *info)
{
struct tty_u *p;
struct ttystatics *s;
struct old_sgttyb *sg;
__port_addr_t uartp;

if (uart_check(info->location))
{
p = (struct tty_u *)sysbrk((long)sizeof(struct tty_u));
p->cts = 0;
p->uartp = uartp = __tty_uart_location(info->location);
p->vector = info->vector;

sg = (struct old_sgttyb *)&info->sg;
s = &p->channel;

/*
The trans_en() routine looks at uartp and channel.tm.c_cflag,
so it needs the full tty_u structure passed to it.
*/
tmgr_install(s, sg, 0, trans_en, (char *)p);
ksem_init(&s->hsem, 1);
ttyseth(s, (struct sgttyb *)sg, uartp, 1, 0);
/* The device is not open yet, so disconnect the modem
control lines
*/
__port_andb(uartp + U_MCR, ~(MCR_DTR | MCR_RTS | MCR_OT2));

p->int_link_id = iointset(p->vector,
(int (*)())duart_int, (char *)p);
p->init = 0;

/* Enable the modem status interrupts */
__port_orb(uartp + U_IER, IER_MS);
}
return p;
}

The ttyinstall() routine checks for the presence of a UART, using the
auxiliary uart_check() function defined in the source code of the tty driver.
The uart_check() routine writes data into the divisor latch and then reads it
back. If the data returned by the read operation is the same as the data that had been
written, uart_check() assumes that the UART is available.
Once the presence of the UART is confirmed, the tty_u data structure is
allocated and initialized with the board-specific parameters supplied through the
device information structure. The tmgr_install() routine from the tty
manager kernel subsystem is then invoked to initialize the ttystatics structure
as well as some kernel structures pertaining to the tty subsystem.
The hsem semaphore is initialized to the value 1. Note that the semaphore
initialization must be made before calling the ttyseth() function, the latter
being described later in this chapter, in order to make certain that the call to
swait() in the ttyseth() succeeds.

The iointset() function is called to set the interrupt handler duart_int().


The duart_int() is defined by the tty driver and described in “Interrupt
Handler” on page 107.
The return value of the iointset() function is saved into the int_link_id
field of the channel structure p and used later by the interrupt sharing mechanism.
The modem control lines are disconnected and the modem status interrupts are
enabled in the ttyinstall() routine. This puts the serial line into the state in
which it is expected to stay after opening the serial line.


uninstall Entry Point


int ttyuninstall(struct tty_u *p) {
__port_addr_t uartp;

uartp = p->uartp;
__outb(uartp + U_IER, 0);
iointclr(p->vector);
sysfree(p, (long)sizeof(struct tty_u));

return OK;
}

The uninstall routine writes zero to the serial line interrupt enable register
U_IER, thus disabling hardware interrupts from the serial line. It then clears the
interrupt handler for the serial line interrupt vector and frees the memory
allocated for the tty_u structure at driver installation.

open Entry Point


int ttyopen(struct tty_u *p, int dev, struct file *f)
{
int out;
struct ttystatics *s;
__port_addr_t uartp;
int data;
int i;

s = &p->channel;
if ((out = tmgr_open(s, f)) == SYSERR) {
return SYSERR;
}

if (s->sg.sg_ospeed != B0) {
/*
The output baud rate is not B0,
so it's OK to assert modem control lines.
*/
if (hardware_handshake(p, f, s) == SYSERR) {
if (out) {
ttyclose(p, f);
}
return SYSERR;
}
}

if (out && 0 == p->init) { /* first open */
p->init = 1;
uartp = p->uartp;
__port_orb(uartp + U_MCR, MCR_OT2 | MCR_RTS);
__outb(uartp + U_LCR, LCR_WL8 | LCR_STB_1);
data = __inb(uartp + U_IIR);
data = __inb(uartp + U_MSR);

/* flush hardware buffer before enabling interrupts */

for (i = 0; i < MAX_COUNT; i++) {
if (!((data = __inb(uartp + U_LSR)) & IER_RDA)) {
break; /* no more fake data */
}
data = __inb(uartp + U_RBR);
}

__outb(uartp + U_IER, IER_RDA | IER_TRE | IER_RLS | IER_MS);


}
return OK;
}

The kernel tty manager routine tmgr_open() is used to logically open the serial
line. Refer to “tty Manager” on page 98 for the description of the tmgr_open()
function. If no errors are caught in the call to the tty manager tmgr_open()
function, and provided that the serial line baud rate is nonzero, a hardware
handshake is attempted. If the handshake produces an error, the tty is closed
and an error value is returned from the tty driver open entry point.
As already mentioned in “Statics Structure” on page 102, the init field
is used to track whether the device has been initialized. By this point, no errors
have been caught from either the tty manager or the hardware handshake process.
A positive return value from the tmgr_open() function means success. If,
however, the UART port has not yet been initialized, the open entry point
performs a series of initialization steps. First, using the modem control register,
the Request to Send signal and the auxiliary user-designated output (OUT2) signal
are asserted. The line control register is then written with the constants
LCR_WL8 and LCR_STB_1, which set the word length to 8 and the number of stop
bits to 1. The interrupt identification register and the modem status register are
then flushed by reading from them. A similar operation is performed on the
receiver buffer register: while the line status register indicates that receiver data
is available, characters are read from the receiver buffer, thereby emptying it of
any stale data that might be present at device opening. Finally, the open entry
point enables all hardware interrupts by setting all four bits of the interrupt
enable register to logic 1.

close Entry Point


void
ttyclose(struct tty_u *p, struct file *f)
{
__port_addr_t uartp;

tmgr_close(&p->channel, f);
uartp = p->uartp;
xmit_quies(uartp,p);


/*
According to the POSIX Test Suite (DC:settable:054),
if HUPCL and CLOCAL are both clear, then the last close
shall not cause the modem control lines to be disconnected.
*/
if ((p->channel.xflags1 &
(X1_HUP_CLOSE | X1_MODEM_OK)) != X1_MODEM_OK) {
/* HUPCL is set or CLOCAL is set */
__port_andb(uartp + U_MCR, ~(MCR_DTR | MCR_RTS | MCR_OT2));
}
__outb(uartp + U_IER, 0);
/* disable serial interrupts */
p->channel.flags &= ~ACTIVE;
p->init = 0;
}

Like most entry points in the LynxOS tty driver, the ttyclose() function calls
the tty closing procedure tmgr_close() from the kernel tty manager. Refer to
“tty Manager” on page 98 for a brief description of the tmgr_close() function.
The xmit_quies() routine, which is called immediately afterwards, basically
waits for the serial line transmitter to finish its operations and become quiet.
Finally, the ttyclose() function cleans up the serial line state and disables the
serial interrupts.

Interrupt Handler
static void duart_int(
struct tty_u *p) /* ptr to duart channel structure */
{
__port_addr_t uartp;
struct ttystatics *s; /* ptr to static struct for a chan */
int status;
int data;
unsigned char c; /* make unsigned to aid debugging */
int got_break = 0; /* added for inbreak check */
int parity_error = 0; /* added for parity check */

uartp = p->uartp;
s = &p->channel;

for (;;) {
if ((status = __inb(uartp + U_IIR)) & IIR_PEN) {
/* no intr pending */
break;
}
switch (status & IIR_IID) {
case IIR_RLS: /* receiver error */
if ((data = __inb(uartp + U_LSR)) & LSR_BI) {
/* Break interrupt */
got_break = 1;
tmgr_ex(s, EX_BREAK, 0);
}
if (data & (LSR_PE | LSR_FE)) {
/* Parity error or framing error */
if (s->xflags1 & X1_ENABLE_INPUT_PC) {
parity_error = 1;
}
}
if ((data & LSR_BI) || !(data & (LSR_PE | LSR_FE))) {
/* Flush the receive buffer register */
data = __inb(uartp + U_RBR);
}
break;
case IIR_RDA: /* receiver data ready */
data = __inb(uartp + U_RBR);

if (skdb_entry_from_irq && __tty_is_skdb_port(uartp)
    && data == skdb_key_serial) {
tty_skdb(p);
continue;
}

/*
The ttydrvr receives a 0 on transition
from break on to break off.
*/
if (!data && (s->t_line != SLIPDISC)) {
if (got_break == 1) {
got_break = 0;
break; /* don't send character on */
}
}

if (!parity_error) {
if (tmgr_rx(s, data)) {
/* enable transmit interrupt */
__port_orb(uartp + U_IER, IER_TRE);
}
}
else { /* parity error */
tmgr_ex(s, EX_PARITY, data);
parity_error = 0;
/* enable transmit interrupt */
__port_orb(uartp + U_IER, IER_TRE);
}
break;
case IIR_TRE: /* transmitter empty */
if (p->dcd_check && !p->cts) {
/* do not send because "No CTS" */
break;
}
if (tmgr_tx(s, (char *)&c)) { /* call manager */
/* disable transmit interrupt */
__port_andb(uartp + U_IER, ~IER_TRE);
}
else {
__outb(uartp + U_THR, c);
}
break;
case IIR_MS:
/* flow control hardware handshake */
data = __inb(uartp + U_MSR);

if ((data & MSR_DCTS) && (p->dcd_check
    || (s->tm.c_cflag & (V_CRTSCTS | V_CRTSXOFF)))) {
if (!(data & MSR_CTS)) { /* DCE is not ready */
p->cts = 0;
}
else {


p->cts = 1;
/* enable transmit interrupt */
__port_orb(uartp + U_IER, IER_TRE);
}
}
break;
}
}

/* shared interrupts */
ioint_link(p->int_link_id);
}

The above example is a simplified version of the duart_int() interrupt handler
routine for the LynxOS tty driver. The status word is obtained from the UART port,
and, if no interrupt is pending, no action is taken by the interrupt handler. In case of
a receiver error, the serial port line status register is queried for the data contained
therein. If the data received indicates a break interrupt, the exception handling
routine tmgr_ex() from the kernel tty manager is invoked and the got_break
variable, which is used elsewhere in the interrupt handler code, is set to 1. Refer to
“tty Manager” on page 98 for the description of the tmgr_ex() function. If a
break interrupt is encountered, the receive buffer register is forcibly flushed by the
tty driver.
In the case of a receiver data ready condition with no parity error detected, the
transmitter interrupts are enabled by writing the appropriate constant into the
UART interrupt enable register after the tmgr_rx() function returns the positive
truth value indicating that the queue management has succeeded. The parity error,
in its turn, triggers the call to the tmgr_ex() function, which is an exception
dispatcher implemented in the kernel tty manager (refer to “tty Manager” on
page 98). The transmitter empty interrupt condition leads to a call to the
tmgr_tx() and, depending on the return value of the latter, either the transmit
interrupts are masked out or the character to be sent is put to the UART transmit
buffer.
If the modem status interrupt is received, the tty driver interrupt handler resets the
interrupt by reading the UART modem status register. After that, the driver
provides for the special case when the following conditions are met:
• [The CTS input to the chip has changed state since the last time it was
read by the CPU]
AND
• [[The interrupt occurred during hardware handshake, which is indicated
by a nonzero value of the dcd_check field of the tty driver statics
structure]
OR

• [The termios control flag indicates that the hardware flow control is
taking place]]
In this case, if the modem status register indicates that the clear-to-send is not set in
the hardware, the cts field in the tty driver statics structure is zeroed out.
Otherwise, the latter field is set to logic 1 and the transmit interrupts are enabled by
writing an appropriate value into the UART interrupt enable register.
Having done everything needed to handle the interrupt received, the
duart_int() function calls the ioint_link() function with the value that
was stored in the tty driver statics structure at driver installation.
The functions in the kernel tty manager offer the mechanisms to handle the
character buffers and queue management for the serial device. That is why the
interrupt service routine itself contains no buffer or queue handling code.

write Entry Point


int ttywrite(struct tty_u *p, struct file *f, char *buff, int count)
{
return tmgr_write(&p->channel, f, buff, count);
}

In this example, the write to the serial port functionality is entirely handled by the
kernel tty manager. Refer to “tty Manager” on page 98 for a brief description of the
tmgr_write() function.

read Entry Point


int ttyread(struct tty_u *p, struct file *f, char *buff, int count)
{
return tmgr_read(&p->channel, f, buff, count);
}

As in the write entry point of the LynxOS tty driver, the ttyread function calls
the tmgr_read() routine from the kernel tty manager. It is the latter function
that implements the read functionality for the tty driver. Refer to “tty Manager” on
page 98 for a description of the tmgr_read() function.

ioctl Entry Point


The implementation of the ioctl entry point in the LynxOS interrupt-based tty
driver is not in any way specific to the class of interrupt-driven device drivers.
Its source code is therefore not analyzed in this chapter.

The ttyseth Procedure


struct sgttyb {
char sg_ispeed; /* input speed */
char sg_ospeed; /* output speed */
char sg_erase; /* erase char (default "#") */
char sg_2erase; /* 2nd erase character */
char sg_kill; /* kill char (default "@") */
int sg_flags; /* various modes and delays */
};

static void
ttyseth(struct ttystatics *s, struct sgttyb *sg,
__port_addr_t uartp, int firsttime, int tflags)
{
swait(&s->hsem, SEM_SIGIGNORE);
if (firsttime || (sg->sg_flags & (EVENP|ODDP)) != (tflags &
(EVENP|ODDP))) {
if (sg->sg_flags & EVENP) {
if (sg->sg_flags & ODDP) { /* both even and odd: 7 bit no parity */
__outb(uartp + U_LCR, LCR_WL7 | LCR_STB_1);
}
else {
__outb(uartp + U_LCR, LCR_WL7 | LCR_PEN | LCR_EPS |
LCR_STB_1);
}
}
else if (sg->sg_flags & ODDP) {
__outb(uartp + U_LCR, LCR_WL7 | LCR_PEN | LCR_STB_1);
}
else {
__outb(uartp + U_LCR, LCR_WL8 | LCR_STB_1);
}
}

if (s->sg.sg_ispeed != sg->sg_ispeed || s->sg.sg_ospeed != sg->sg_ospeed) {
if (!set_baud(uartp, sg->sg_ispeed, sg->sg_ospeed)) {
if (s->sg.sg_ospeed == B0 && sg->sg_ospeed != B0) {
/*
Changing output speed from B0 to non-B0, so it is
necessary to reconnect the modem control lines.
*/
__port_orb(uartp + U_MCR, (MCR_DTR | MCR_RTS | MCR_OT2));
}
s->sg.sg_ispeed = sg->sg_ispeed;
s->sg.sg_ospeed = sg->sg_ospeed;
}
}
ssignal(&s->hsem);
}

ttyseth() is a helper routine called from both the install and ioctl entry
points of the LynxOS VMPC tty driver.


The ttyseth() function receives the pointer to the statics structure as its first
argument. Information about the serial line parameters
are stored in the fields of the sgttyb structure, which is given as the second
argument to this function. The UART port address, the first time call indicator flag,
and the tflags bitmask make up the rest of the arguments to this helper routine.
Inside a critical section isolated by the swait()/ssignal() pair (refer to “Using
Kernel Semaphores for Mutual Exclusion” on page 51 in Chapter 4), modes and
option settings for the serial port are set by the ttyseth() procedure. For
example, if the function is called for the first time (which happens when the
install entry point calls it), the serial line word length, the number of stop bits,
and the parity enable status are set according to the values received by this function
from its caller.



CHAPTER 8 Kernel Threads and Priority
Tracking

To off-load processing from interrupt-based sections of a device driver,
LynxOS offers a feature known as kernel threads, also referred to as system
threads. Kernel threads are defined as independently schedulable entities which
reside in the kernel’s virtual address space. They closely resemble processes but do
not have the memory overhead associated with processes.
Although kernel threads have independent stack and register areas, the kernel
threads share both text and data segments with the kernel. Each kernel thread has a
priority associated with it which is used by the operating system to schedule it.
Kernel threads can be used to improve the interrupt and task response times
considerably. Thus, they are often used in device drivers.
Priority tracking is the method used to dynamically determine the kernel thread’s
priority. The kernel thread assumes the same priority as the highest-priority
application which it is currently servicing.

Device Drivers in LynxOS


Device drivers form an important part of any operating system, but even more so in
a real-time operating system such as LynxOS. The impact of the device driver
performance on overall system performance is considerable. Since it is imperative
for the operating system to provide deterministic response time to real-world
events, device drivers must be designed with determinism in mind.
Some of the important components of real-time response are described in the
following sections.


Interrupt Latency
Interrupt latency is the time taken for the system to acknowledge a hardware
interrupt. This time is measured from when the hardware raises an interrupt to
when the system starts executing the first instruction of the interrupt routine (in the
case of LynxOS this routine is the interrupt dispatcher). This time is dependent on
the interrupt hardware design of the system and the longest time interrupts are
disabled in the kernel or device drivers.

Interrupt Dispatch Time


Interrupt dispatch time is the time taken for the system to recognize the interrupt
and begin executing the first instruction of the interrupt handler. Included in this
time is the latency of the LynxOS interrupt dispatcher (usually negligible).

Driver Response Time


The driver response time is the sum of the interrupt latency and the interrupt
dispatch time. This is also known as the interrupt response time.

Task Response Time


The task response time is the time taken by the operating system to begin running
the first instruction of an application task after an interrupt has been received that
makes the application ready to run. This figure is the total of:
• The driver response time (including the delays imposed by additional
interrupts)
• The longest preemption time
• The context switch time
• The scheduling time
• The system call return time
Only the driver response time and the preemption time are under the control of the
device driver writer. The other times depend on the implementation of
LynxOS on the platform for which the driver is being written.


Task Completion Time


The task completion time is the time taken for a task to complete execution,
including the time to process all interrupts which may occur during the execution
of the application task.

NOTE: The device driver writer should be aware of all delays that the interrupts
could potentially cause to an application. This is important when considering the
overall responsiveness of the “application-plus-kernel” combination in the
worst possible timing scenario.

Real-Time Response
To improve the real-time response of any operating system, the most important
parameters are the driver response time, task response time, and the task
completion time. The time taken by the driver in the system can have a direct effect
on the system’s overall real-time response. A single breach of this convention can
cause a high performance real-time system to miss a real-time deadline.
Kernel threads were introduced into LynxOS in order to keep drivers from
interfering with the real-time response of the overall system. LynxOS kernel threads
were designed specifically to increase driver functionality while decreasing driver
response time, task response time, and task completion time.
In a normal system, interrupts have a higher priority than any task. A task,
regardless of its priority, is interrupted if an interrupt is pending (unless the
interrupts have been disabled). The result could mean that a low priority interrupt
could interrupt a task which is executing with real-time constraints.
A classic example of this would be a task collecting data for a real-time data
acquisition system and being interrupted by a low priority printer interrupt. The
task would not continue execution until the interrupt service routine had finished.
With kernel threads, delays of this sort are significantly reduced. Instead of the
interrupt service routine doing all the servicing of the interrupt, a kernel thread is
used to perform the function previously performed by the interrupt routine. The
interrupt service routine is now reduced to merely signaling a semaphore which the
kernel thread is waiting on.
Since the kernel thread is running at the application’s priority (actually it is running
at half a priority level higher as described below), it is scheduled according to
process priority and not hardware priority. This ensures that the interrupt service


time is kept to a minimum and the task response time is kept short. A further result
of this is that the task completion time is also reduced.
The use of kernel threads and priority tracking in LynxOS drivers is the
cornerstone of guaranteeing deterministic real-time performance.

Kernel Threads
Kernel threads execute in the virtual memory context of the null process, which
is process 0 of the partition they are created for. However, kernel threads do not
have any user code associated with them, so context switch times for kernel threads
are quicker than for user threads. Like all other tasks in the system, kernel threads
have a scheduling priority which the driver can change dynamically to implement
priority tracking. They are scheduled with the SCHED_FIFO algorithm.

Creating Kernel Threads


A kernel thread is created once in the install or open entry point. The advantage of
starting it in the open is that, if the device is never opened, the driver doesn't use up
kernel resources unnecessarily. However, as the thread is only created once, the
open routine must check whether this is the first call to open. One thread is created
for each interrupting device, which normally corresponds to a major device. The
following code fragment illustrates how a thread might be started from the install
entry point:
int threadfunc ();
int stacksize, priority;
char *threadname;

s->st_id = ststart (threadfunc, stacksize, priority, threadname, 1, s);

if (s->st_id == SYSERR)
{
sysfree (s, sizeof (struct statics));
pseterr (EAGAIN);
return (SYSERR);
}

The thread function specifies a C function which will be executed by the thread.
The structure of the thread code is discussed in the next section.
The second argument specifies the thread’s stack size. This stack does not grow
dynamically, so enough space must be allocated to hold all the thread’s local
variables.


As kernel threads are preemptive tasks, they have a scheduling priority, just like
other user threads in the system, which determines the order of execution between
tasks. The priorities of kernel threads are discussed more fully in “Priority
Tracking” on page 121. It is usual to create the thread with a priority of 1.
The thread name is an arbitrary character string which is printed in the name
column by the ps T command. It will be truncated to PNMLEN characters
(including NULL terminator). PNMLEN is currently 32 (see the proc.h file).
The last two parameters allow arguments to be passed to the thread. In most cases,
it is sufficient to pass the address of the statics structure, which normally contains
all other information the thread might need for communication and
synchronization with the rest of the driver.
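
Putting these arguments together, a concrete call might look like the following; the stack
size and thread name are arbitrary illustrative values:

s->st_id = ststart (threadfunc, 8192, 1, "mydrv_kthread", 1, s);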

Structure of a Kernel Thread


The structure of a kernel thread and the way in which it communicates with the rest
of the driver depends largely on the way in which a particular device is used. For
the purposes of illustration, two different driver designs will be discussed.
1. Exclusive access. Only one user task is allowed to use the device at a
time. The exclusive access is often enforced in the open entry point.
2. Multiple access. Multiple user tasks are permitted to have the device open
and make requests.

Exclusive Access
If we consider synchronous transfers only, then this type of driver will typically
have the following structure:
1. The top-half entry point (read/write) starts the data transfer on the device,
then blocks waiting for I/O completion.
2. The interrupt handler signals the kernel thread when the I/O completes.
3. The kernel thread consists of an infinite for loop which does the
following:
- Wait for work to do
- Process interrupt
- Wake up user task


The statics structure will contain a number of variables for communication
between the thread and the other entry points. These would include
synchronization semaphores, error status, transfer length, and so on.

Top-Half Entry Point


The read/write entry point code will not be any different from a driver that does not
use kernel threads. It starts an operation on the device, then blocks on an event
synchronization semaphore.
int drv_read (struct statics *s, struct file *f, char *buff, int count)
{
start_IO (s, buff, count, READ);
/* start I/O on device */
swait (&s->io_sem, SEM_SIGABORT);
/* wait for I/O completion */
if (s->error) { /* check error status */
pseterr (EIO);
return (SYSERR);
}
return (s->count); /* return # bytes transferred */
}

Interrupt Handler
Apart from any operations that may be necessary to acknowledge the hardware
interrupt, the interrupt handler’s only responsibility is to signal the kernel thread,
informing it that there is some work to do:
void intr_handler (struct statics *s)
{
    ssignal (&s->intr_sem);     /* wake up kernel thread */
}

Kernel Thread
The kernel thread waits on an event synchronization semaphore. When an interrupt
occurs, the thread is woken up by the interrupt handler. It processes the interrupt,
checking the device status for errors and the like and wakes up the user task that is
waiting for I/O completion. For best system real-time performance, the kernel
thread should reenable interrupts from the device.
void kthread (struct statics *s)
{
    for (;;) {
        swait (&s->intr_sem, SEM_SIGIGNORE);    /* wait for work to do */
        ...
        /* process interrupt, check for errors etc. */
        ...

        if (error_found)
            s->error = 1;               /* tell user task there was an error */
        ssignal (&s->io_sem);           /* wake up user task */
    }
}

Multiple Access
In this type of design, any number of user tasks can open a device and make
requests to the driver. But as most devices can perform only one operation at a
time, requests from multiple tasks must be held in a queue. In a system without
kernel threads, the structure of such a driver is:
1. The top-half routine starts the operation immediately if the device is idle;
otherwise, it enqueues the request. It then blocks, waiting for the request
to be completed.
2. The interrupt handler processes interrupts, does all I/O completion, wakes
up the user task and then starts the next operation on the device
immediately if there are queued requests.
The problem with this strategy is that it can lead to an overly long interrupt routine
owing to the large amount of work done in the handler. Since interrupt handlers
cannot be preempted, this can have an adverse effect on system response times. When
multiple requests are queued up, the next operation is started immediately after the
previous one has finished. The result of this is that a heavily used device can
generate a series of interrupts in rapid succession until the request queue is
emptied. Even if the requests were made by low priority tasks, the processing of
these interrupts and requests will take priority over high priority tasks because it is
done within the interrupt handler.
The use of kernel threads resolves these problems by off-loading the interrupt
handler. A kernel thread is responsible for dequeuing and starting requests,
handling I/O completion and waking up the user tasks. Figure 8-1 illustrates the
overall design.
A data structure containing variables for such things as event synchronization and
error status is used to describe each request. The pending request queue and list of
free request headers will be part of the statics structure. The interrupt handler code
is the same as in the case of the exclusive use design.
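A request header of the kind described here might be sketched as follows. This is an
illustration only: the field names are assumptions (only io_sem, error, and prio are
referenced by the code fragments in this chapter), and the ksem_t semaphore type
follows the usage shown in the threaded tty example later in this chapter.

struct req_hdr {
    struct req_hdr *next;       /* link in the pending or free list */
    ksem_t          io_sem;     /* signaled by the kernel thread on I/O completion */
    int             error;      /* error status reported back to the user task */
    int             prio;       /* priority of the requesting task */
    char           *buff;       /* user buffer for this request */
    int             count;      /* requested transfer length */
};

/* In the statics structure (sketch): */
struct req_hdr *pending;        /* queue of pending requests */
struct req_hdr *freelist;       /* list of free request headers */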


Figure 8-1: Interrupt Handler Design (user tasks, the pending request queue, the kernel thread, and the interrupt handler)

Top-Half Entry Point


int drv_read (struct statics *s, struct file *f, char *buff, int count)
{
    struct req_hdr *req;

    ...
    enqueue (s, req);                   /* enqueue request */
    swait (&req->io_sem, SEM_SIGABORT); /* wait for I/O completion */
    ...
}


Kernel Thread
void kthread (struct statics *s)
{
    struct req_hdr *curr_req;

    for (;;) {
        curr_req = dequeue (s);                 /* wait for a request */
        start_IO (s, curr_req);                 /* start I/O operation */
        swait (&s->intr_sem, SEM_SIGIGNORE);    /* wait for I/O completion */
        ...
        /* process interrupt, check for errors etc. */
        ...
        if (error_found)
            curr_req->error = 1;                /* tell user task there was an error */
        ssignal (&curr_req->io_sem);            /* wake up user task */
    }
}

Priority Tracking
The previous examples did not discuss the priority of the kernel thread. It was
assumed to be set statically when the thread is created. There is a fundamental
problem with using a static thread priority in that, whatever priority is chosen,
there are always some conceivable situations where the order of task execution
does not meet real-time requirements. The same is true of systems that implement
separate scheduling classes for system and user level tasks.
Figure 8-2 shows two possible scenarios in a system using a static thread priority.
In both scenarios, Task A is using a device that generates work for the kernel
thread. Other tasks, with different priorities, exist in the system. These are
represented by Task B.


Figure 8-2: Scheduling with Static Thread Priorities (Scenario 1: Task B’s priority lies between Task A and the kernel thread; Scenario 2: Task B’s priority lies above the kernel thread)

In the first scenario, Task B has a priority higher than Task A but lower than the
kernel thread. The kernel thread will be scheduled before Task B even though it is
processing requests on behalf of a lower priority task. This is essentially the same
situation that occurs when interrupt processing is done in the interrupt handler. In
Scenario 2, the situation is reversed. The kernel thread is preempted by Task B
resulting in Task A being delayed.
The only solution that can meet the requirements of a deterministic real-time
system with bounded response times is to use a technique that allows the kernel
thread priority to dynamically follow the priorities of the tasks that are using a
device.


User and Kernel Priorities


User applications can use 256 priority levels from 0–255. However, internally the
kernel uses 512 priority levels, 0–511. The user priority is converted to the internal
representation simply by multiplying it by two, as illustrated in Figure 8-3.

Figure 8-3: User and Kernel Priorities (user priorities 0–255 map onto the even kernel priorities 0–510 of the 0–511 kernel range)

As can be seen, a user task will always have an even priority at the kernel level.
This results in “empty,” odd priority slots between the user priorities. These slots
play an important role in priority tracking.
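For reference, the mapping can be expressed as a pair of helper macros. The macro
names below are illustrative only and are not part of the LynxOS API.

/* Illustrative macros (not LynxOS definitions): convert a user priority
   (0-255) into the kernel's 0-511 priority range. */
#define USER_TO_KERNEL_PRIO(uprio)   ((uprio) << 1)         /* even slot used by the user task */
#define KTHREAD_PRIO(uprio)          (((uprio) << 1) + 1)   /* odd slot just above the user task */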
The examples below will again discuss the exclusive and multiple access driver
designs for illustrating priority tracking techniques.


Exclusive Access
Whenever a request is made to the driver, the top-half entry point must set the
kernel thread priority to the priority of the user task.
int drv_read (struct statics *s, struct file *f, char *buff, int count)
{
    int uprio;

    uprio = _getpriority ();                /* get priority of current task */
    stsetprio (s->kt_id, (uprio << 1) + 1); /* set kernel thread priority */
    start_IO (s, buff, count, READ);        /* start I/O on device */
    swait (&s->io_sem, SEM_SIGABORT);       /* wait for I/O completion */
    if (s->error) {                         /* check error status */
        pseterr (EIO);
        return (SYSERR);
    }
    return (s->count);                      /* return # bytes transferred */
}

The expression (uprio << 1) + 1 converts the user priority to a kernel level
priority. The thread priority is in fact set to the odd numbered kernel priority just
above the priority of the user task. This ensures that the kernel thread executes
before any tasks at the same or lower priority as the user task making the request
but after any user tasks of higher priority, as shown in Figure 8-4.

Figure 8-4: Kernel Thread Priorities (the kernel thread serving a user task at priority n runs at the odd kernel priority between user priority n and user priority n+1)


When the request has been completed the thread resets its priority to its initial
value.
void kthread (struct statics *s)
{
    for (;;) {
        swait (&s->intr_sem, SEM_SIGIGNORE);    /* wait for work to do */
        ...
        /* process interrupt, check for errors etc. */
        ...
        if (error_found)
            s->error = 1;                       /* tell user task there was an error */
        ssignal (&s->io_sem);                   /* wake up user task */
        stsetprio (s->kt_id, 1);                /* reset kernel thread priority */
    }
}

Multiple Access
As was seen in the previous discussion of this type of design, the driver maintains a
queue of pending requests from a number of user tasks. These tasks will probably
have different priorities. So, the driver must ensure that the kernel thread is always
running at the priority of the highest priority user task which has a request pending.
If the requests are queued in priority order this will ensure that the thread is always
processing the highest priority request. The thread priority must be checked and
adjusted at two places: whenever a new request is made and whenever a request is
completed.
How can the driver keep track of the priorities of all the user tasks that have
outstanding requests? In order to do this the driver must use a special data
structure, struct priotrack, defined in st.h. Basically, the structure is a set
of counters, one counter for each priority level. The value of each counter
represents the number of outstanding requests at that priority. The values of the
counters are incremented and decremented using the routines priot_add and
priot_remove. The routine priot_max returns the highest priority in the set.
The use of these routines is illustrated in the following code examples.

Top-Half Entry Point


The top-half entry point must first use priot_add to add the new request to the
set of tracked requests. The code then decides whether the kernel thread’s priority
must be adjusted. This will be necessary if the priority of the task making the new
request is higher than the thread’s current priority. A variable in the statics structure
is used to track the kernel thread’s current priority. The request header must also
contain a field specifying the priority of the task making each request. This is used
by the kernel thread.
int drv_read (struct statics *s, struct file *f, char *buff, int count)
{
    ...
    uprio = _getpriority ();                /* get user task priority */
    req->prio = uprio;                      /* save for later use */
    enqueue (s, req);                       /* enqueue request */
    /*
     * Do priority tracking. Add priority of new request
     * to set. If priority of new request is higher than
     * current thread priority, adjust thread priority.
     */
    swait (&s->prio_sem, SEM_SIGIGNORE);    /* synchronize with kernel thread */
    priot_add (&s->priotrack, uprio, 1);
    if (uprio > s->kt_prio) {
        stsetprio (s->kt_id, (uprio << 1) + 1);
        s->kt_prio = uprio;
    }
    ssignal (&s->prio_sem);
    swait (&req->io_sem, SEM_SIGABORT);     /* wait for I/O completion */
    ...
}

Kernel Thread
When the kernel thread has finished processing a request, the priority of the
completed request is removed from the set using priot_remove. The thread must
then determine whether to change its priority or not, depending on the priorities of
the remaining pending requests. The thread uses priot_max to determine the
highest priority pending request.
void kthread (struct statics *s)
{
    ...
    for (;;) {
        ...
        curr_req = dequeue (s);                 /* wait for a request */
        start_IO (s, curr_req);                 /* start I/O operation */
        swait (&s->intr_sem, SEM_SIGIGNORE);    /* wait for I/O completion */
        ...
        /* process interrupt, check for errors etc. */
        ...
        /*
         * Do priority tracking. Remove priority of
         * completed request from set. Determine high
         * priority of remaining requests. If this is
         * lower than current priority, adjust thread
         * priority.
         */
        swait (&s->prio_sem, SEM_SIGIGNORE);    /* synchronize with top-half */

        priot_remove (&s->priotrack, curr_req->prio);
        maxprio = priot_max (&s->priotrack);
        if (maxprio < s->kt_prio) {
            stsetprio (s->kt_id, (maxprio << 1) + 1);
            s->kt_prio = maxprio;
        }
        ssignal (&s->prio_sem);
        ...
    }
}

Non-atomic Requests
The previous examples implicitly assumed that requests made to the driver are
handled atomically, that is to say, the device can handle an arbitrary sized data
transfer. This is not always the case. Many devices have a limit on the size of
transfer that can be made, in which case the driver may have to divide the user data
into smaller blocks. A good example is a driver for a serial device. A user task may
request a transfer of many bytes, but the device can transfer only one byte at a time.
The driver must split the request into multiple single byte requests.
From the point of view of priority tracking, a single task requesting an n byte
transfer is equivalent to n tasks requesting single byte transfers. Since each byte is
handled as a separate transfer by the driver (each byte generates an interrupt), the
priority tracking counters must count the number of bytes rather than the number
of requests.
The functions priot_addn and priot_removen can be used to add and
remove multiple requests to the set of tracked priorities. What is defined as a
request depends on the way the driver is implemented. It will not always
correspond on a one-to-one basis with a request at the application level. Taking
again the example of a driver for a serial device, a single request at the application
level consists of a call to the driver to transfer a buffer of length n bytes. However,
the driver will split the buffer into n single byte transfers, each byte representing a
request at the driver level. The top-half entry point would add n requests to the set
of tracked priorities using priot_addn. As each byte is transferred, the kernel
thread would remove each request priority using priot_remove.
The priority of the kernel thread would only be updated when all bytes have been
transferred. It is very important that the priority tracking is based on requests as
defined at the driver level, not the application level, in order for the priority
tracking to work correctly.
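As a sketch of this idea, a serial driver’s write entry point might track one driver-level
request per byte. The fragment below assumes, by analogy with the priot_add() call
shown earlier, that priot_addn() takes the priority set, the priority, and a count; the
WRITE flag and the statics field names are likewise illustrative.

int drv_write (struct statics *s, struct file *f, char *buff, int count)
{
    int uprio;

    uprio = _getpriority ();                    /* priority of the requesting task */
    swait (&s->prio_sem, SEM_SIGIGNORE);        /* synchronize with kernel thread */
    priot_addn (&s->priotrack, uprio, count);   /* one tracked request per byte */
    if (uprio > s->kt_prio) {
        stsetprio (s->kt_id, (uprio << 1) + 1);
        s->kt_prio = uprio;
    }
    ssignal (&s->prio_sem);

    start_IO (s, buff, count, WRITE);           /* transmit the first byte */
    swait (&s->io_sem, SEM_SIGABORT);           /* sleep until the last byte has gone */
    return (count);
}

For each transmit interrupt, the kernel thread would then call priot_remove() for one
byte, and recompute its priority with priot_max() only after the final byte of the buffer
has been sent, as described above.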


Controlling Interrupts
One of the problems that was discussed concerning drivers that perform all
interrupt processing in the interrupt handler was the fact that in certain
circumstances a device can generate a series of interrupts in rapid succession. For
many devices, the use of the kernel thread and priority tracking techniques
illustrated above resolves the problem.
One example concerns a disk driver. Figure 8-5 represents a situation that can
occur in a system without kernel threads. A lower priority task makes multiple
requests to the driver. Before these requests have completed, a higher priority task
starts execution. But this higher priority task is continually interrupted by the
interrupt handler for the disk. Because the amount of processing done within the
interrupt handler and the number of requests queued up for the disk can be very
large, the response time of the system and the execution time of the higher priority
task are essentially unbounded.
Figure 8-6 shows the same scenario using kernel threads. The important thing to
note is that the higher priority task can be interrupted only once by the disk. The
kernel thread is responsible for starting the next operation on the disk, but because
the kernel thread’s priority is based on task B’s priority, it will not run until the
higher priority task has completed. In addition, the length of time during which
task A is interrupted by the interrupt handler is a small constant time as the
majority of the interrupt processing has been moved to the kernel thread.

Figure 8-5: Interrupt Handling without Kernel Threads (the lower priority Task B makes multiple requests to the disk driver; the higher priority Task A is then continually interrupted by the resulting disk interrupts)


Figure 8-6: Interrupt Handling with Kernel Threads (the lower priority Task B makes multiple requests to the disk driver; the kernel thread starts the next request on the disk, so the higher priority Task A is interrupted at most once)

This scheme takes care of devices where requests are generated by lower priority
user tasks. But what about devices where data is being sent from a remote system?
The local operating system cannot control when or how many packets are received
over an Ethernet connection, for example. Or a user typing at a keyboard could
generate multiple interrupts.
The solution to these situations is again based on the use of kernel threads. For
such devices, the interrupt handler must disable further interrupts from the device.
Interrupts are then reenabled by the corresponding kernel thread. So again, a device
can only generate a single interrupt until the thread has been scheduled to run.
Any higher priority tasks will execute to completion before the device-related
thread and can be interrupted by a maximum of one interrupt from each device.
The use of this technique requires that the device has the ability to store some small
amount of incoming data locally during the time that its interrupt is disabled. This
is not usually a problem for most devices.

Example
In this section an example of a thread-based VMPC tty driver is considered.
Essentially, this driver is just another version of the one studied in detail in
Chapter 7, “Interrupt Handling.” Taking the above into account, the internals of the
threaded tty driver are presented only where they differ from those of the
interrupt-driven one.
In its threaded incarnation, the LynxOS tty driver implements a lightweight
interrupt handler whose sole purpose is to block the hardware interrupts when an
interrupt is received and then signal a separate thread created at driver installation.
This thread does the actual interrupt processing and re-enables the interrupts once
the processing is finished. Unlike the interrupt-driven tty driver, the threaded driver
uses the UART line status register rather than the interrupt identification register to
determine the cause of a hardware interrupt, because the latter cannot be checked
for an interrupt once the interrupt has been masked out.

Statics Structure
The device driver statics structure, also referred to as the UART channel structure
in this chapter, which describes the parameters of a tty channel, contains four fields
that are specific to the threaded driver:
struct tty_u {

    /* The fields common to the interrupt-driven and
       thread-based driver implementation */

    int thr_id;
    ksem_t thr_sem;
    int intr_blocked;
    unsigned char ier;

    /* The rest of the fields common to the interrupt-driven
       and thread-based driver implementation */
};

The thr_id field stores the per-channel thread ID. The interrupt handler signals the
driver thread through thr_sem to inform it that a hardware interrupt has been
received. The boolean intr_blocked field is set to true if and only if the hardware
interrupts are blocked. The ier field caches the state of the UART interrupt enable
register.

tty Driver Auxiliary Routines and Macros


static inline void intr_block(struct tty_u *p, int intr_blocked)
{
    p->intr_blocked = intr_blocked;

    if (!p->intr_blocked) {
        __outb(p->uartp + U_IER, p->ier);
    } else {
        __outb(p->uartp + U_IER, 0);
    }
}

static inline void __ier_outb(struct tty_u *p, unsigned char ier)
{
    p->ier = ier;

    if (!p->intr_blocked) {
        __outb(p->uartp + U_IER, p->ier);
    }
}

#define __ier_orb(p, mask)  __ier_outb(p, (p)->ier | mask)
#define __ier_andb(p, mask) __ier_outb(p, (p)->ier & mask)

The tty driver auxiliary routines along with the macro definitions given above
provide a convenient interface for handling the hardware device blocked state. For
instance, when the intr_block() function is called, the truth value given as its
second argument is stored in the intr_blocked field of the statics structure (see
the description of this structure earlier in this chapter), and the hardware interrupts
are either disabled altogether by writing zero to the UART interrupt enable register,
or the value stored in the ier field of the channel structure is written back to the
hardware device.
The __ier_outb() function stores the value of its second argument to the ier
field of the UART channel structure pointed to by the first argument to this
function, and, in case the interrupts are not blocked, writes the ier value to the
UART interrupt enable register.
The macros __ier_orb() and __ier_andb() serve the purpose of masking
or unmasking a particular interrupt, the meaning of both macros being apparent
from their definitions.

The install Entry Point


struct tty_u * ttyinstall(struct tty_uinfo *info)
{
    struct tty_u *p;
    struct ttystatics *s;
    struct old_sgttyb *sg;
    __port_addr_t uartp;

    p = (struct tty_u *)SYSERR;

    if (uart_check(info->location)) {
        p = (struct tty_u *)sysbrk((long)sizeof(struct tty_u));
        p->cts = 0;
        p->uartp = uartp = __tty_uart_location(info->location);
        p->vector = info->vector;
        p->thr_sem = 0;
        p->intr_blocked = 0;
        p->ier = 0;

        sg = (struct old_sgttyb *)&info->sg;
        s = &p->channel;

        /*
         * The trans_en() routine looks at uartp and
         * channel.tm.c_cflag, so it needs the full
         * tty_u structure passed to it.
         */
        tmgr_install(s, sg, 0, trans_en, (char *)p);
        ksem_init(&s->hsem, 1);
        ttyseth(s, (struct sgttyb *)sg, uartp, 1, 0);

        /* The device is not open yet, so disconnect the modem
           control lines. */
        __port_andb(uartp + U_MCR, ~(MCR_DTR | MCR_RTS | MCR_OT2));

        /* Start the interrupt handling thread. */
        p->thr_id = ststart((int(*)())duart_thread, 4096,
                            _getpriority()*2+1, "tty", 1, p);

        /* Mask the interrupt and register the handler. */
        __ier_outb(p, 0);
        p->int_link_id = iointset(p->vector,
                                  (int (*)())duart_int, (char *)p);
        p->init = 0;

        /* Enable the modem status interrupts. */
        __ier_orb(p, IER_MS);
    }
    return p;
}

Having initialized the fields of the UART channel structure, the install entry
point starts the kernel thread using the ststart() function. Finally, the
install entry point sets the interrupt handler in just the same way as the
interrupt-driven tty driver demonstrated in the previous chapter.

The Thread Function


static void duart_thread(struct tty_u *p)
{
    __port_addr_t uartp;
    unsigned char iir, msr, lsr, data, c;
    struct ttystatics *s;

    uartp = p->uartp;
    s = &p->channel;

    for (;;) {
        swait(&p->thr_sem, SEM_SIGIGNORE);

        lsr = __inb(uartp + U_LSR);

        /* 1. Process the receiver interrupt */
        if (lsr & LSR_DR) {
            data = __inb(uartp + U_RBR);

            if (lsr & LSR_BI) {
                /* Break interrupt */
                tmgr_ex(s, EX_BREAK, 0);
            }

            /*
             * The ttydrvr receives a 0 on transition
             * from break on to break off.
             */
            if (!((lsr & LSR_BI) && data && (s->t_line != SLIPDISC))) {
                if ((lsr & (LSR_PE | LSR_FE)) &&
                    (s->xflags1 & X1_ENABLE_INPUT_PC)) {
                    /* Parity error or framing error */
                    tmgr_ex(s, EX_PARITY, data);
                    /* enable transmit interrupt */
                    __ier_orb(p, IER_TRE);
                } else {
                    if (tmgr_rx(s, data)) {
                        /* enable transmit interrupt */
                        __ier_orb(p, IER_TRE);
                    }
                }
            }
        }

        /* 2. Check the modem signals status */
        msr = __inb(uartp + U_MSR);
        /*
         * This case should only occur during an open when
         * the file is configured as blocking, OR when we're
         * doing hardware flow-control. Keep in mind that
         * CRTSCTS means both input and output hardware
         * flow-control (Linux compatible). The dcd_check
         * is for the open routine to block appropriately.
         */
        if ((msr & MSR_DCTS) && (p->dcd_check
            || (s->tm.c_cflag & (V_CRTSCTS | V_CRTSXOFF)))) {
            if (!(msr & MSR_CTS)) {         /* DCE is not ready */
                p->cts = 0;
            } else {
                p->cts = 1;
                /* enable transmit interrupt */
                __ier_orb(p, IER_TRE);
            }
        }

        /* 3. Process the transmitter interrupt */
        if (lsr & LSR_TRE) {                /* transmitter empty */
            if (p->dcd_check && !p->cts) {
                /* do not send because "No CTS" */
                break;
            }
            if (tmgr_tx(s, (char *)&c)) {   /* call manager */
                /* disable transmit interrupt */
                __ier_andb(p, ~IER_TRE);
            } else {
                __outb(uartp + U_THR, c);
            }
        }

        /* Restore the interrupt mask */
        intr_block(p, 0);
    }
}


Unlike the interrupt-driven tty driver, whose interrupt handler processes hardware
interrupts inside itself, the threaded driver does the same processing in a separate
thread function. In its endless loop, the thread function waits on the thread
semaphore p->thr_sem, which is used to signal a hardware interrupt (the interrupt
routine is described in more detail below). The UART line status register is read to
determine the cause of the hardware interrupt, and the appropriate actions are taken
for the particular interrupt type. These actions are very similar to those analyzed in
the previous chapter, so the details of the interrupt handling code are omitted here
to keep the text concise. After processing the interrupt, the thread function
unblocks the hardware interrupts and goes back to waiting on the thread semaphore.

The Interrupt Routine


static void duart_int(struct tty_u *p)
{
    if ((__inb(p->uartp + U_IIR) & IIR_PEN) == 0) {
        /*
         * There is an interrupt pending. Mask the
         * interrupt on the device and signal the thread.
         */
        intr_block(p, 1);
        ssignal(&p->thr_sem);
    }
}

The interrupt handler in the threaded driver is designed to be as short and fast as
possible in order to minimize the interrupt response time. If an interrupt is pending,
the interrupts on the tty channel are blocked altogether and the thread semaphore is
signaled to wake up the thread function. As explained in “The Thread Function” on
page 132, the interrupts are unblocked by the thread function after the latter has
done all the steps necessary to process the interrupt condition.



CHAPTER 9 Installation and Debugging

This chapter discusses the two methods of device driver installation in LynxOS:
static and dynamic installation.

Static Versus Dynamic Installation


The two methods of driver installation are as follows:
• Static Installation
• Dynamic Installation
A brief comparison of the two methods is given to ensure that the systems
programmer understands the intricacies involved in each type of installation. The
next sections will help in choosing the type of installation procedure to suit specific
requirements.

Static Installation
With this method, the driver object code is incorporated into the image of the
kernel. The driver object code is linked with the kernel routines and is installed
during system start-up. A driver installed in this fashion can be removed (that is,
made ineffective), but its text and data segments remain within the body of the
kernel.
The advantages of static installation are as follows:
• With static installation, devices are instantly available on system start-up,
simplifying system administration. The initial console and root file
system devices must use static installation.
• The installation procedure does not need to be repeated each time the system reboots.


• Static linking allows the driver symbols to be visible from within the
kernel debugger.

NOTE: While neither installation method affects a device driver’s functionality,
LynuxWorks recommends using dynamic installation during the development of
a new driver. The driver can then be installed statically once it is working
without problems.

Dynamic Installation
This method allows the installation of a driver after the operating system is booted.
The driver object code is attached to the end of the kernel image and the operating
system dynamically adds this driver to its internal structure. A driver installed in
this fashion can also be removed dynamically.

Static Installation Procedure


The code organization for static installation is done in the following manner.

Table 9-1: Code Organization for Static Installation

Directory                 File            Description
/sys/lib                  libdrivers.a    Drivers object code library
                          libdevices.a    Device information declarations
/sys/dheaders             devinfo.h       Device information definition
/sys/devices              devinfo.c       Device configuration file
                          Makefile        Instructions for making libdevices.a
/sys/drivers/drvr         driver source   The source code for the driver drvr
                                          to be installed
/sys/bsp.<bsp_name>,      config.tbl      Master device & driver configuration
where <bsp_name> is                       file
the name of the BSP
                          a.out           LynxOS kernel binary
                          Makefile        Instructions for making
                                          /sys/bsp.<bsp_name>/a.out
/etc                      nodetab         Device nodes
/sys/cfg                  driv.cfg        Configuration file for driv driver
                                          and its devices

The following steps describe how to do a static installation:


1. Create a device information definition and declaration. Place the device
information definition file devinfo.h in the directory
/sys/dheaders along with the existing header files for other drivers in
the system.
2. Make sure that the device information declaration file devinfo.c is in
the /sys/devices directory and has the following lines in the file in
addition to the declaration.
....
#include "../dheaders/devinfo.h"
....

This ensures the presence of the device information definition.


3. Compile the devinfo.c file and update the
/sys/lib/libdevices.a library file to include devinfo.o. This
may also be automated by adding devinfo.c to the Makefile.
DEVICE_FILES=atcinfo.x dtinfo.x flopinfo.x devinfo.x

To update /sys/lib/libdevices.a, enter:


make install


Driver Source Code


Assuming the new driver is named driver, the following steps must be followed to
install the driver code:
1. Make a new directory driver under /sys/drivers and place the code
of the device driver there.
2. Create a Makefile to compile the device driver.
3. Update the library file /sys/lib/libdrivers.a with the driver
object file using the command:
make install

Device and Driver Configuration File


The device and driver configuration file should be created with the appropriate
entry points, major device declarations, and minor device declarations. The system
configuration file is config.tbl in the /sys/bsp.<bsp_name> directory.
The config.tbl file is used with the config utility to produce driver and
device configuration tables for LynxOS. Drivers, major devices, and minor devices
are listed in this configuration file. Each time the system is rebuilt, config reads
config.tbl and produces a new set of tables and a corresponding nodetab
file.

Configuration File: config.tbl


The parsing of the configuration files in LynxOS follows these rules:
• The commands are designated by single letters as the first character on a
line
• The field delimiter is the colon (:)
• Spaces between delimiters are not ignored; they are treated literally
• Blank lines are ignored
The special characters in the configuration file are
# Used to indicate a comment in the configuration file. The
rest of the line is ignored when this is found as the first
character on any line.


\ Used as the continuation character to continue a line, even
within a comment.
: If the : is the first character in the line, it is ignored.
I:filename Used to indicate that the contents of the file filename
should replace the declaration.
The format of a device driver entry with its major and minor device declarations
should look like this:
# Character device
C:driver name:driveropen:driverclose: \
:driverread:driverwrite: \
:driverselect:driverioctl: \
:driverinstall:driveruninstall
D:some driver:devinfo::
N:minor_device1:minor_number
N:minor_device2:minor_number

# Block device
B:driver name:driveropen:driverclose: \
:driverstrategy:: \
:driverselect:driverioctl: \
:driverinstall:driveruninstall
D:some driver:devinfo::
N:minor_device1:minor_number
N:minor_device2:minor_number

The entry points should appear in the same order as they are shown here. If a
particular entry point is not implemented, the field is left out, but the delimiter
should still be in place.
If the above declarations are in a file driver.cfg, the entry:
I:driver.cfg
should be inserted into the config.tbl file.
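For instance, a hypothetical character driver mydrv with no select entry point and
two minor devices could be described in driver.cfg as shown below (all names and
minor numbers are purely illustrative; note that the field for the missing entry point
is empty but its delimiters remain in place):

# driver.cfg - hypothetical character driver
C:mydrv:mydrvopen:mydrvclose: \
:mydrvread:mydrvwrite: \
::mydrvioctl: \
:mydrvinstall:mydrvuninstall
D:mydrv device:mydrvinfo::
N:mydrv0:0
N:mydrv1:1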

Rebuilding the Kernel


To rebuild the LynxOS kernel, type the following commands:
cd /sys/bsp.<bsp_name>
make install
where <bsp_name> is the name of the target BSP.
In order for application programs to use a device, a node must be created in the
file system. For static device drivers, this is done automatically by the mkimage
utility when creating a Kernel Downloadable Image, provided that the path to the
nodetab file created by config is specified in the mkimage spec file.

Dynamic Installation Procedure


Dynamic installation requires a single driver object file, and a pointer to the entry
points must be declared. The location of the driver source code is irrelevant in
dynamic installation. In LynxOS, the installation of dynamically-loaded device
drivers is usually done by trusted system software after system startup.

Driver Source Code


To install a device driver dynamically, the entry points must be declared in a
structure defined in dldd.h. The variable should be named entry_points; for
a block composite driver, a block_entry_points variable is also required.
The format of the dldd structure is illustrated below:
#include <dldd.h>
struct dldd entry_points = { open, close, read,
    write, select, ioctl, install,
    uninstall, streamtab };

For block composite drivers, the block driver entry points are specified as:
struct dldd block_entry_points =
    { b_open, b_close, b_strategy,
      ionull, ionull, b_ioctl, b_install,
      b_uninstall, streamtab };

The include file dldd.h must be included in the driver source code, and the
declaration must contain the entry points in the same order as they appear above. If
a particular entry point is not present in a driver, the field in the dldd structure
should refer to the external function ionull, which is a kernel function that
simply returns OK. The last field in the dldd structure, streamtab, is unused in
LynxOS and should be set to NULL for all drivers.


The following example shows the null device driver that will be installed
dynamically.
/* -------------- NULLDRVR.C ------------------*/

#include <conf.h>
#include <kernel.h>
#include <file.h>
#include <dldd.h>

extern int ionull ();

int nullread(void *s, struct file *f, char *buff, int count)
{
    return 0;
}
int nullwrite(void *s, struct file *f, char *buff, int count)
{
    return (count);
}
int nullioctl()
{
    pseterr (EINVAL);
    return (SYSERR);
}
int nullselect()
{
    return (SYSERR);
}
int nullinstall()
{
    return (0);
}
int nulluninstall()
{
    return (OK);
}
struct dldd entry_points = {
    ionull,
    ionull,
    nullread,
    nullwrite,
    nullselect,
    nullioctl,
    nullinstall,
    nulluninstall,
    (char *) 0
};

Note that calls to a driver entry point replaced by ionull will still succeed. If a
driver does not support certain functionality, then it must include an entry point
that explicitly returns SYSERR, as in the case of the ioctl and select entry
points in the above example. This will cause calls to these entry points from an
application task to fail with an error.


Driver Installation
In this release of LynxOS, follow these recommendations for compiling and
installing dynamic device drivers on a specific LynxOS platform.
LynxOS supports dynamic driver compilation with the GNU C compiler.
1. Compile the driver with GNU C.
gcc -c -o driver.o driver.c -I/sys/include/kernel -I/sys/include/family/<arch> -D__LYNXOS
where <arch> is the architecture (for example, x86).
2. Now create a dynamically loadable object module by entering the
following (all on one line):
ld -o driver.obj driver.o
Once a dynamic driver object module has been created, this object module can now
be dynamically installed. LynxOS provides a drinstall utility that allows
driver installation from the command line.
For character device drivers, enter:
drinstall -c driver.obj
For block device drivers, enter:
drinstall -b driver.obj
If successful, drinstall or dr_install returns the unique driver-id that
is defined internally by the LynxOS kernel. For the block composite driver, the
driver-id returned will be a logical OR of the character driver-id in the
lower 16 bits and the block driver-id in the upper 16 bits.
It is also possible to install a driver from a program by using the system call
dr_install().

For a character device driver, use:


dr_install("./driver.obj", CHARDRIVER);
For a block device driver, use:
dr_install("./driver.obj", BLOCKDRIVER);


Device Information Definition and Declaration


The device information definition is created the same way as in the static
installation. To create a device information declaration, one can use the objcopy
utility to instantiate the device information definition.
Assuming the device information definition appears as:
/* myinfo.h */
struct my_device_info {
int address;
int interrupt_vector;
};
/* myinfo.c */
struct my_device_info devinfo = { 0xd000, 4 };

Compile myinfo.c into an object file myinfo.o:


gcc -c myinfo.c
The device information file can then be created using:
objcopy -O binary myinfo.o my.info

Device Installation
The installation of the device should be done after the installation of the driver. The
two ways of installing devices are either through the devinstall utility program
or cdv_install and bdv_install system calls.
devinstall -c -d driver_id mydevice_info
devinstall -b -e raw_driver_id -d block_driver_id mydevice_info
The driver_id is the identification number returned by the drinstall
command or system call. This installs the appropriate device with the
corresponding driver and assigns a major device number to it (in this case, we will
assume this is major_no).

Node Creation
Unlike the static installation, there is no feature to automatically generate the nodes
under dynamic installation. This could be done manually using the mknod
command or system call. (See the LynxOS User's Guide.) In LynxOS, this is
usually done by trusted system software.
Since the root file system in LynxOS is always read-only, by convention, a RAM
disk is typically created and mounted under /dev/ddev, and the dynamic device
node is created in the /dev/ddev directory. The creation of the nodes allows
application programs to access the driver by opening and closing the file that has
been associated with the driver through the mknod command.
mknod /dev/ddev/device c major_no minor_no
The major_no is the number assigned to the device after a devinstall
command. This can be obtained by using the devices command. The
minor_no is the minor device number which can be specified by the user in the
range of 0–255. The c indicating a character device could also be a b to indicate a
block device.

Device and Driver Uninstallation


Dynamically loaded device drivers can be uninstalled when they are no longer
needed in the system. This can help in removing unwanted code in physical
memory when it is no longer relevant. Removal is performed with the drinstall
command. However, the device attached to the driver has to be uninstalled before
uninstalling the driver. Removing the device is accomplished with the
devinstall command.

For character devices:


devinstall -u -c device_id
For block devices:
devinstall -u -b device_id
After the device is uninstalled the driver can be uninstalled using the command:
drinstall -u driver_id

Common Error Messages During Dynamic Installation


The following list describes some common error messages that may be
encountered during dynamic installation of a device driver. In this case, the
LynxOS kernel assists in debugging efforts by printing help messages to the
system console.
• Bad Exec Format
This is usually seen when a drinstall command is executed. If this
happens, it indicates that a symbol in the device driver has not been
resolved with the kernel symbols. Make sure that there are no symbols
that cannot be resolved by the kernel and that the structure dldd has
been declared inside the driver. This error can also be caused by the
driver binary being stripped, or if “resident=true” is specified in the spec
file and the “executable” permission is set for the driver binary.
• Device Busy
This error message is seen when attempting to uninstall the driver before
uninstalling the device. The correct order is to uninstall the device before
uninstalling the driver.

NOTE: A driver cannot be dynamically installed on a kernel that has been stripped.

Debugging
The development of device drivers can become complicated unless proper
debugging facilities are available. Since device drivers are not attached to a
particular control terminal, ordinary printf() statements will not work.
LynxOS provides a device driver service routine to debug device drivers
effectively. The routine is kkprintf(). This routine has exactly the same format
as the printf() call. The KKPF_PORT preprocessor macro defined in the
/sys/bsp.<bsp_name>/uparam.h header file controls where the output of the
kkprintf() routine is sent. Note that the LynxOS kernel needs to be rebuilt each
time the value of this macro is changed. Please refer to the values of the SKDB_C*
family of macros defined in the same header file for the options that can be used
for the definition of the KKPF_PORT macro.
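As an example, a driver might trace its install entry point as follows. The driver
name, device information structure, and statics-pointer return convention used here
are hypothetical and follow the style of the examples in this manual.

struct statics *mydrvinstall (struct mydrvinfo *info)
{
    struct statics *s;

    kkprintf ("mydrv: install entered, devinfo at 0x%x\n", (unsigned int) info);
    s = (struct statics *) sysbrk ((long) sizeof (struct statics));
    kkprintf ("mydrv: statics structure allocated at 0x%x\n", (unsigned int) s);
    return (s);
}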

NOTE: Configuring the kkprintf() port may not be supported by a particular
BSP, in which case the kkprintf() output always goes to the port hard-coded in
the BSP. Usually, it is the primary serial port or the console.

The device driver support routine cprintf() will print to the console. Unlike
kkprintf(), cprintf() cannot be used in an interrupt routine or where
interrupts or preemption is disabled.
The kkprintf() routine can also be used in interrupt routines because it polls the
UART until it is ready to accept the next character.
The Simple Kernel Debugger (SKDB) can also be used to debug device drivers. It
allows breakpoints to be set and can display stack trace and register information.
However, since the symbol table of a dynamic device driver is not added to the
symbol table of the kernel, SKDB may not be useful for debugging dynamic device
drivers. See LynxOS Total/db for more information on SKDB.

Hints for Device Driver Debugging


• Insert some kkprintf() statements in the install() routine of the
new driver. After the devinstall command is executed, the
install() entry point is invoked. This is a good way of identifying that
the install() entry point is being invoked at all and if the device
information declaration passed is being received properly.
• Insert kkprintf() statements in the uninstall() entry point. After
every deinstallation the uninstall() entry point is invoked. The
kkprintf() statements should be seen after deinstallation.

• Initially, it is advisable to put a kkprintf() statement at the beginning
of every entry point to make sure it is being invoked properly. Once all
the errors have been found, these statements can be removed.
• Using kkprintf() and cprintf() statements for debugging can
affect the timing characteristics of the driver and may mask timing related
problems. A way to reduce this debugging overhead involves having the
driver write status information to an internal chunk of memory. When a
failure occurs, use SKDB to investigate this area in memory.
• Use a serial connection to a second machine running kermit to capture
debugging output from SKDB.

Additional Notes
• Statically installed device drivers in LynxOS can also be uninstalled
dynamically. However, the memory reclamation of the TEXT section is
not done in this case.
• Symbols from two or more dynamically-loaded device drivers cannot be
resolved. If there are two dynamically-loaded device drivers using
function f(), the code for function f() has to be present in both the
drivers’ source code. This is because if one of the drivers is loaded
initially, function f() does not get resolved with the second device
driver even though it is in memory. Thus, only statically-loaded drivers’
symbols are resolved with dynamic drivers.


• Garbage collection is not provided in LynxOS. Thus, any memory that is
dynamically allocated must be freed before uninstalling the device driver.
• The same device driver source code will compile and run on LynxOS 4.2
and LynxOS 5.0 with the following changes.
1. LynxOS 5.0 requires the following additional include files (shown
commented out):
// INCLUDE FILES
#include <mem.h>
#include <kernel.h>
#include <pci_resource.h>
#include <strings.h>
//#include <getmem.h>
//#include <swait.h>
//#include <kprintf.h>
//#include <drm.h>
//#include <setpriority.h>
//#include <sysbrk.h>
//#include <mem_map.h>

2. LynxOS 5.0 requires strict type definition for swait, etc (required code
shown commented out):
// If no DMA buffers are available then wait.
if (s->index == s->tail)
// swait((ksem_t *)&s->wakeup, 0);
swait(&s->wakeup, 0);

3. LynxOS 5.0 requires that the return type of the install entry point be void.
//void *dspinstall(struct dspinfo *info)
char *dspinstall(struct dspinfo *info)
{



CHAPTER 10 Device Resource Manager

The Device Resource Manager (DRM) is a LynxOS module that functions as an
intermediary between the operating system, device drivers, and physical devices
and buses. The DRM provides a standard set of service routines that device drivers
can use to access devices or buses without having to know device or bus-specific
configuration options. DRM services include device identification, interrupt
resource management, device I/O to drivers, and device address space
management. The DRM also supports dynamic insertion and deletion of devices.
This chapter introduces DRM concepts and explains DRM components. Sample
code is provided for DRM interfaces and services. The PCI bus layer is described
in detail with a sample driver and application. This chapter provides information
on the following topics:
• DRM concepts
• DRM service routines
• Using DRM facilities from device drivers
• Advanced topics
• PCI bus layer
• Example driver

DRM Concepts

Device Tree
The device tree is a hierarchical representation of the physical device layout of the
hardware. DRM builds a device tree during kernel initialization. The device tree is
made up of nodes representing the I/O controllers, host bridges, bus controllers,
and bridges. The root node of this device tree represents the system controller
(CPU). There are two types of nodes in the device tree: DRM bus nodes and DRM
device nodes.
DRM bus nodes represent physical buses available on the system, while DRM
device nodes represent physical devices attached to the bus.
The DRM nodes are linked together to form parent, child, and sibling relationships.
A typical device tree is shown in Figure 10-1. To support Hot Swap environments,
DRM nodes are inserted and removed from the device tree, mimicking Hot Swap
insertion and extraction of system devices.

Figure 10-1: Device Tree (the System Controller (CPU) at the root, a PCI Host Bridge, an ISA Bridge, PCI2PCI Bridges, and Ethernet, SCSI, keyboard & mouse, printer port, COM port, ISDN, and Telco devices)


DRM Components
A module view of DRM and related components is shown in Figure 10-2. A brief
description of each module is given below the figure.

Figure 10-2: Module View (the kernel, DRM, the bus layers, device drivers, and the Board Support Package and Configuration Management)

• DRM— DRM provides device drivers with a generalized device management
interface.
• Kernel— The LynxOS kernel provides service to applications and device
drivers. DRM uses many of the kernel service routines.
• Bus Layer— These modules perform bus-specific operations. DRM uses
the service routines of the bus layer to provide service to the device
drivers.
• Device Driver— These modules provide a generic application
programming interface to specific devices.
• BSP— The Board Support Package (BSP) provides a programming
interface to the specific hardware architecture hosting LynxOS. This
module also provides device configuration information to other modules.

Bus Layer
DRM uses bus layer modules to support devices connected to many different kinds
of buses. There are numerous bus architectures, many of which are standardized.


Typical bus architectures seen in systems are the ISA, PCI, and VME standards;
however, proprietary bus architectures also exist. DRM needs a specific bus layer
module to support a specific kind of bus architecture. The device drivers use DRM
service routines to interface to the bus layers. The bus layers interface with the BSP
to get board-specific information.
The bus layers provide the following service routines to DRM:
• Find bus nodes and device nodes.
• Initialize bus and device nodes.
• Allocate resources for bus and device nodes.
• Free resources from bus and device nodes.
• Map and unmap a device resource.
• Perform device I/O.
• Insert a bus or device node.
• Remove a bus or device node.
LynxOS supports only one bus layer which is used for managing PCI and
CompactPCI devices. Some of the DRM functions described later in this chapter
require the bus layer ID. The correct symbol to use is PCI_BUSLAYER.

DRM Nodes
A DRM node is a software representation of the physical device. Each node
contains fields that provide identification, device state, interrupt routing, bus-
specific properties, and links to traverse the device tree. DRM service routines are
used to access the DRM node fields. These routines provide device drivers access
to DRM facilities via a standard interface. This eliminates the need to know
implementation details of the specific software structure. Some of the important
fields of the DRM node are shown in the next table.

NOTE: Subsequent coding examples in this chapter make reference to a data
structure of type drm_node_s. This structure is a data item used internally by
the LynxOS kernel as the software representation of a DRM node and is not
intended to be accessed at the driver or user level. LynxOS does not export a
definition of this structure. The coding examples use opaque pointers which are
passed around and are not meant to be dereferenced.


Table 10-1: DRM Node Fields

Field Name     Description
vendor_id      This field is used for device vendor identification.
device_id      This field identifies the DRM node.
pbuslayer_id   This field identifies the primary bus layer of the
               bus/device node.
sbuslayer_id   This field identifies the secondary bus layer of a bus
               node.
node_type      This field indicates the node type: bus node or device
               node, and indicates if it is statically configured or
               dynamically configured.
drm_state      This field describes the life cycle state of the DRM
               node. DRM node states include: IDLE, SELECTED,
               READY, or ACTIVE.
parent         This field links this node to its parent node. The root
               node has this field set to NULL to indicate that it has
               no parent.
child          This field links to the child node of this bus node.
               Only bus nodes have children.
sibling        This field links to the sibling node of this DRM node.
               The last sibling of a bus has this field set to NULL.
intr_flg       This field indicates if the device raises an interrupt to
               request service.
intr_cntlr     If the device uses interrupt service, this field indicates
               the controller to which the device is connected.
intr_irq       This field indicates the interrupt request line of the
               controller to which this device is connected.
drm_tstamp     This field indicates when this node was created.
prop           This field links to bus-specific properties of the
               device.

DRM Node States


The status of a DRM node is indicated by its state. Initially, a DRM node is set to
IDLE when it is created. Devices that are removed from the DRM tree, or
undetected devices, are considered UNKNOWN. The UNKNOWN state is not used
by DRM, but it is used to denote a device that is unrecognized by DRM. The
following table (Figure 10-3) details the stages of DRM node states.

DRM Node State   Description

UNKNOWN     Devices that exist in the system, but are not known to DRM, are
            considered UNKNOWN. DRM does not use this state, but it is useful
            for users to think of all devices unrecognized by DRM as being in
            this state. Deleting a DRM node makes the device UNKNOWN to DRM.
            Once the DRM has initialized a device node, it is considered IDLE.

IDLE        A device is discovered/inserted into the DRM tree and initialized
            to a minimal extent. The buslayer_id, device_id, and vendor_id
            node fields identify the node. Some of its bus-specific properties
            are initialized. System resources are not allocated to the device.
            Devices for which drivers are not available, or devices that are
            not needed, are left in this state. IDLE nodes are created by the
            drm_locate() or the drm_insertnode() service routines. IDLE nodes
            are deleted by the drm_delete_node() service routine.

SELECTED    Devices needed by the system are set to the SELECTED state and
            resources are allocated to them. In Hot Swap and high availability
            environments, configuration-specific policies control the selection
            of nodes. IDLE DRM nodes are selected by the drm_select_node()
            service routine. SELECTED nodes are set to the IDLE state by the
            drm_unselect_node() service routine.

READY       When resources are allocated to the SELECTED node, it is set to
            READY. A node in the READY state is fully initialized, but not in
            use. System resources and bus resources are assigned to this node.
            In case the resource allocation for the DRM node fails, the node
            remains in the SELECTED state. A SELECTED node is set to the READY
            state by the drm_alloc_resource() service routine. READY nodes are
            put into the SELECTED state by the drm_free_resource() service
            routine.

ACTIVE      A DRM node in use by a device driver is in the ACTIVE state.
            Either the drm_get_handle() or drm_claim_handle() service routines
            make a DRM node ACTIVE. A driver releases a device by using the
            drm_free_handle() service routine. This causes the DRM node to go
            into the READY state.

Figure 10-3: DRM Node States


DRM Initialization
The DRM module is initialized during LynxOS kernel initialization. DRM builds a
device tree of all visible devices and brings them up to a READY state, if possible.
This is to enable all statically linked drivers to claim the DRM nodes and bring up
the basic system service routines. Some DRM nodes may be left in the
SELECTED state after kernel initialization is complete. Typically, this can be the
result of unavailable resources.
LynxOS provides the ability to control PCI resource allocation. PCI resources can
be allocated by either the BIOS or the DRM. By default, LynxOS x86 distributions
use the BIOS to handle resource allocation. For other platforms, DRM handles the
resource allocation. Because DRM uses the same set of interfaces, whether or not it
handles resource allocation, device drivers do not need to change.

DRM Service Routines


DRM service routines are used by device drivers to identify, set up, and manage
device resources. Typically, they are used in the install() and uninstall()
entry points of the device driver. Device drivers locate the device they need to
service and obtain an identifying handle. This handle is used in subsequent DRM
calls to reference the device. The table below gives a brief description of each
service routine and typical usage. See the DRM man pages for more details.
Additionally, see “Example Driver” on page 175.
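As an illustration of this usage pattern, the following fragment sketches how a PCI
driver’s install() entry point might chain these services together. It is a sketch only:
the argument lists and return conventions shown here are assumptions based on
typical usage and must be checked against the DRM man pages, Table 10-3, and the
example driver referenced above; MY_VENDOR_ID, MY_DEVICE_ID,
MY_BAR0_RESID, mydrv_isr, and the statics fields are hypothetical placeholders.

/* Sketch only: argument lists, identifiers, and return checks are assumptions. */
char *mydrv_install (struct mydrv_info *info)
{
    struct statics *s;

    s = (struct statics *) sysbrk ((long) sizeof (struct statics));

    /* Locate and claim the device; on success the DRM node becomes ACTIVE. */
    if (drm_get_handle (PCI_BUSLAYER, MY_VENDOR_ID, MY_DEVICE_ID,
                        &s->drm_handle) != 0) {
        sysfree (s, sizeof (struct statics));
        pseterr (ENODEV);
        return ((char *) SYSERR);
    }

    /* Map a device resource (for example, a base address register)
       and hook the interrupt service routine. */
    drm_map_resource (s->drm_handle, MY_BAR0_RESID, &s->regbase);
    drm_register_isr (s->drm_handle, mydrv_isr, (void *) s);

    return ((char *) s);
}

The uninstall() entry point would undo these steps in reverse order with
drm_unregister_isr(), drm_unmap_resource(), and drm_free_handle(), the last of
which returns the DRM node to the READY state.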

Table 10-2: Summary of DRM Services

Service (Usage): Description

drm_get_handle (All Drivers)
    Searches for a DRM node with a specific vendor and device identification
    and claims it for use.

drm_free_handle (All Drivers)
    Releases a DRM node and makes it READY.

drm_register_isr (All Drivers)
    Sets up an interrupt service routine.

drm_unregister_isr (All Drivers)
    Clears an interrupt service routine.

drm_map_resource (All Drivers)
    Creates an address translation for a device resource.

drm_unmap_resource (All Drivers)
    Removes an address translation for a device resource.

drm_device_read (All Drivers)
    Performs a read on a device resource.

drm_device_write (All Drivers)
    Performs a write on a device resource.

drm_locate (General Device Management)
    Locates and builds the DRM device tree; it probes for devices and bridges
    recursively and builds the DRM subtree.

drm_insertnode (General Device Management)
    Inserts a DRM node with specific properties. Only a single node is added
    to the DRM tree by this service routine.

drm_delete_subtree (General Device Management)
    Removes a DRM subtree. Only nodes in the IDLE state are removed.

drm_prune_subtree (General Device Management)
    Removes a DRM subtree. Nodes in the READY state are brought to the IDLE
    state and then deleted.

drm_select_node (General Device Management)
    Selects a node for resource allocation.

drm_select_subtree (General Device Management)
    Selects a DRM subtree for resource allocation. All the nodes in the
    subtree are SELECTED.

drm_unselect_node (General Device Management)
    Ignores a DRM node for resource allocation.

drm_unselect_subtree (General Device Management)
    Ignores an entire DRM subtree for resource allocation.

drm_alloc_resource (General Device Management)
    Allocates a resource to a DRM node or subtree.

drm_free_resource (General Device Management)
    Frees a resource from a DRM node or subtree.

drm_claim_handle (General Device Management)
    Claims a DRM node, given its handle. The DRM node is now ACTIVE.

drm_getroot (General Device Management)
    Gets the handle to the root DRM node.

drm_getchild (General Device Management)
    Gets the handle to the child DRM node.

drm_getsibling (General Device Management)
    Gets the handle to the sibling DRM node.

drm_getparent (General Device Management)
    Gets the handle to the parent DRM node.

drm_getnode (General Device Management)
    Gets the DRM node contents.

drm_setnode (General Device Management)
    Sets the DRM node contents.

Interface Specification
Device drivers call DRM service routines like any standard kernel service routine.
The following table provides a synopsis of the service routines and their interface
specification. Refer to LynxOS man pages for a complete description.

Table 10-3: DRM Service Routine Interface Specification

Name Synopsis

drm_locate() int drm_locate(struct drm_node_s *handle)


drm_insertnode() int drm_insertnode(struct drm_node_s
*parent_node, void *prop,
struct drm_node_s **new_node)
drm_delete_subtree() int drm_delete_subtree(struct drm_node_s *handle)
drm_prune_subtree() int drm_prune_subtree(struct drm_node_s *handle)
drm_select_subtree() int drm_select_subtree(struct drm_node_s *handle)
drm_unselect_subtree() int drm_unselect_subtree(struct drm_node_s *handle)
drm_select_node() int drm_select_node(struct drm_node_s *handle)
drm_unselect_node() int drm_unselect_node(struct drm_node_s *handle)
drm_alloc_resource() int drm_alloc_resource(struct drm_node_s *handle)
drm_free_resource() int drm_free_resource(struct drm_node_s *handle)
drm_get_handle() int drm_get_handle(int buslayer_id,
int vendor_id,
int device_id, struct drm_node_s **handle)
drm_claim_handle() int drm_claim_handle(struct drm_node_s *handle)

drm_free_handle() int drm_free_handle(struct drm_node_s *handle)
drm_register_isr() int drm_register_isr(struct drm_node_s *handle,
    void (*isr)(), void *args)
drm_unregister_isr() int drm_unregister_isr(struct drm_node_s *handle)
drm_map_resource() int drm_map_resource(struct drm_node_s *handle,
    int resource_id, addr_t *vaddr)
drm_unmap_resource() int drm_unmap_resource(struct drm_node_s *handle,
    int resource_id)
drm_device_read() int drm_device_read(struct drm_node_s *handle,
    int resource_id, unsigned int offset, unsigned int size,
    void *buffer)
drm_device_write() int drm_device_write(struct drm_node_s *handle,
    int resource_id, unsigned int offset, unsigned int size,
    void *buffer)
drm_getroot() int drm_getroot(struct drm_node_s **root_handle)
drm_getchild() int drm_getchild(struct drm_node_s *handle,
    struct drm_node_s **child)
drm_getsibling() int drm_getsibling(struct drm_node_s *handle,
    struct drm_node_s **sibling)
drm_getparent() int drm_getparent(struct drm_node_s *handle,
    struct drm_node_s **parent)
drm_getnode() int drm_getnode(struct drm_node_s *src,
    struct drm_node_s **dest)
drm_setnode() int drm_setnode(struct drm_node_s *handle)

Using DRM Facilities from Device Drivers

Device Identification
In the install() device driver entry point a driver attempts to connect to the
device it intends to use. To locate its device, the driver needs to use the
drm_get_handle() service routine. drm_get_handle() returns a pointer to
the DRM node handle via its handle argument. The driver specifies the device it
is interested in by using drm_get_handle() in the following manner:
int install() {

ret = drm_get_handle(buslayer_id,
vendor_id, device_id, &handle);
if(ret)
{
/* device not found .. abort installation */
}
}

It is possible to supply a wild card to drm_get_handle() using


vendor_id = -1 and device_id = -1 as parameters. This claims and returns
the first READY device in an unspecified search order. The driver examines the
properties of the device to perform a selection. The driver needs to subsequently
release the unused devices.
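The following sketch shows one possible way to use the wildcard form. The
MAX_ABC_SCAN limit and the abc_match() selection test are hypothetical, error
handling is reduced to the essentials, and the loop assumes that each
successful call claims a device (making it ACTIVE) so that the next call
returns a different READY device:

#define MAX_ABC_SCAN 8                     /* hypothetical scan limit */

struct drm_node_s *cand[MAX_ABC_SCAN];
struct drm_node_s *handle = NULL;
int n = 0, i;

/* Claim every READY PCI device that the wildcard search finds */
while (n < MAX_ABC_SCAN &&
       drm_get_handle(PCI_BUSLAYER, -1, -1, &cand[n]) == 0)
    n++;

/* Keep the first device that matches driver-specific criteria and
   release the others so that they become READY again */
for (i = 0; i < n; i++) {
    if (handle == NULL && abc_match(cand[i]))   /* hypothetical test */
        handle = cand[i];
    else
        drm_free_handle(cand[i]);
}

if (handle == NULL) {
    /* no suitable device found -- abort installation */
}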
It is also possible to navigate the device tree using traversal functions and obtain
handles for the nodes. Device selection is performed by other modules, drivers or
system management applications. If device selection has been done by some other
means, the driver claims the device by using the drm_claim_handle() service
routine, taking the node handle as a parameter.
int install() {

/* handle obtained externally */


ret = drm_claim_handle(handle);
if(ret)
{
/* Cannot claim device -- abort device install */
}
..
}


The drm_free_handle() service routine is used to release the handle. The


release of the device is typically done in the uninstall() routine of the driver.
The drm_free_handle() takes the node handle to be freed as a parameter.
int uninstall() {

ret = drm_free_handle(handle);
if(ret)
{
/* Error freeing handle, perhaps handle is bogus? */
}
}

In Hot Swap environments, system management service routines select devices,
make them ready, and provide node handles for drivers to claim and use. The
system management service routines facilitate the selection and dynamic
loading of needed drivers and provide them with node handles for use.

Device Interrupt Management


DRM maintains all interrupt routing data for a device node. Drivers use the
drm_register_isr() service routine to register an interrupt service routine and
the drm_unregister_isr() service routine to clear a registration. Typically,
these service routines are used in the install() and uninstall() entry points
of the driver. To support sharing of interrupts in a hot swap/high availability
environment, DRM internally dispatches all ISRs sharing an interrupt. The
returned link_id is NULL, and the iointlink() kernel service routine does
not perform any dispatches.
The following code segments illustrate the use of these DRM service routines:
int install() {

int ret, link_id;


ret = drm_get_handle(buslayer_id, vendor_id,
device_id, &handle);
link_id = drm_register_isr(handle, isr_func, args);
}

int uninstall() {

ret = drm_unregister_isr(handle);
if(ret)
{
/* Cannot unregister isr? bogus handle? */
}
ret = drm_free_handle(handle);
if(ret)
{
/* Cannot free handle, is handle bogus? */
}
}


The interrupt management service routines return a status message when applied to
a polled mode device.

Device Address Space Management


Many devices have internal resources that need to be mapped into the processor
address space. The bus layers define such device-specific resources. For example,
the configuration registers, the bus number, device number, and the function
number of PCI devices are considered resources. The bus layer defines resource
IDs to identify device-specific resources. Some of the device resources may need
to be allocated. For example, the base address registers of a PCI device space need
to be assigned a unique bus address space. DRM provides service routines to map
and unmap a device resource into the processor address space. The function
drm_map_resource() takes as parameters the device handle, resource ID and a
pointer to store the returned virtual address. The drm_unmap_resource() takes
as parameters a device handle and resource ID.
The following code fragment illustrates the use of these service routines:
int install() {
ret = drm_get_handle(PCI_BUSLAYER,SYMBIOS_VID,NCR825_ID,
&handle);
if(ret)
{
/* Cannot find the scsi controller */
}
link_id = drm_register_isr(handle,scsi_isr,
scsi_isr_args);
ret = drm_map_resource(handle,PCI_BAR1,
&scsi_vaddr);
if(ret)
{
/* Bogus resource_id ? */
/* resource not mappable ? */
/* invalid device handle ? */
}
scsi_control_regs =
(struct scsi_control *)(scsi_vaddr);
ret =drm_unmap_resource(handle,PCI_BAR1);
if(ret)
{
/* Bogus handle */
/* resource is not mappable */
/* resource was not mapped */
/* invalid resource_id */
}
}


Device IO
DRM provides service routines to perform read and write to the bus layer-defined
resources. The drm_device_read() service routine allows the driver to read a
device-specific resource. The drm_device_write() service routine allows the
driver to perform a write operation to a device-specific resource. The resource IDs
are usually specified in a bus layer-specific header file. For example, the file
pci_resource.h defines the PCIBUSLAYER resources. Both these service
routines use the handle, resource ID, offset, size, and a buffer as
parameters. The meaning of the offset and size parameter is defined by the
bus layer. Drivers implement platform-independent methods of accessing device
resources by using these service routines. The following code fragment illustrates
the use of these service routines.
#include <pci_resource.h>
...

/* Enable PCI_IO_SPACE */
ret = drm_device_read(handle,PCI_RESID_CMDREG,0,0,
&pci_cmd);
if(ret)
{
/* could not read device resource? validate parameters? */
}
pci_cmd |= PCI_IO_SPACE_ENABLE;
ret = drm_device_write(handle,PCI_RESID_CMDREG,0,0,
&pci_cmd);
if(ret)
{
/* could not write device resource? validate parameters? */
}

This code is platform-independent. The service routines take care of endian


conversion, serialization, and other platform-specific operations.

DRM Tree Traversal


DRM provides a set of functions to navigate the device tree. Most of these
functions take a reference node as input and provide a target node as output. The
functions are listed below:
drm_getroot(&handle) returns the root of the device tree in handle.

drm_getparent(node,&handle) returns the parent of node in handle.

drm_getchild(node,&handle) returns the child of node in handle.

drm_getsibling(node,&handle) returns the sibling of node in handle.
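As an illustration, the following sketch walks the tree depth-first starting
at the root. The abc_visit() callback is hypothetical, the return conventions
of the traversal routines are assumed (0 on success, non-NULL handle when a
node exists), and no error handling is shown:

/* Minimal depth-first walk of the DRM tree (sketch only) */
static void abc_walk(struct drm_node_s *node)
{
    struct drm_node_s *child, *sibling;

    abc_visit(node);                        /* hypothetical per-node action */

    if (drm_getchild(node, &child) == 0 && child != NULL)
        abc_walk(child);                    /* descend one level */

    if (drm_getsibling(node, &sibling) == 0 && sibling != NULL)
        abc_walk(sibling);                  /* continue at this level */
}

...
struct drm_node_s *root;

if (drm_getroot(&root) == 0)
    abc_walk(root);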


Device Insertion/Removal
DRM provides two service routines that add nodes to the DRM tree:
drm_locate() recursively finds and creates DRM nodes given a parent node as
reference; drm_insertnode() inserts one node. The drm_insertnode()
service routine is used when sufficient data is known about the device being
inserted. The drm_locate() service routine is used to build entire subtrees.
A typical application involves inserting the bridge device corresponding to a slot,
using the drm_insertnode() service routine. For a given configuration, the
geographic data associated with the slots is generally known. This data is used to
insert the bridge device. The data that is needed to insert a node is bus layer-
specific. For the PCIBUSLAYER, the PCI device number and function number are
provided. The reference parent node determines the bus number of the node being
inserted. Also, the bus layer determines the location of the inserted node in the
DRM tree. Once the bridge is inserted, the drm_locate() service routine is used
to recursively build the subtree below the bridge.
The drm_locate() and drm_insertnode() service routines initialize the
DRM nodes to the IDLE state. The drm_select_node() or
drm_select_subtree() service routines are used to select the desired nodes
and set them to the SELECTED state. The drm_alloc_resource()
service routine is used to set the nodes to a READY state. DRM nodes in the
READY state are available to be claimed by device drivers. After being claimed,
the node is set to the ACTIVE state.
During extraction, device drivers release the DRM node using the
drm_free_handle() service routine. This brings the DRM node back to a
READY state. Resources associated with the nodes are released by using the
drm_free_resource() service routine. This sets the nodes to the SELECTED
state. The DRM nodes are then put in an IDLE state by using the
drm_unselect_subtree() or drm_unselect_node() service routines. The
IDLE nodes are removed by using the drm_delete_subtree() or
drm_delete_node() service routines. This last operation puts the device back
into an unknown state. The device is now extracted from the system. A
convenience function, drm_prune_subtree(), removes DRM’s knowledge of an
entire subtree. This routine operates on subtrees that are in the READY state.
When DRM nodes are inserted, they are time-stamped to assist in locating recently
inserted nodes.
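The following condensed sketch puts this insertion and extraction sequence
together. The parent handle, the bridge property data (bridge_prop), and the
driver handle are placeholders obtained elsewhere, the property layout is bus
layer-specific, and all error checking is omitted:

/* Insertion: add the bridge, then build and ready the subtree behind it */
struct drm_node_s *parent, *bridge;
int ret;

ret = drm_insertnode(parent, &bridge_prop, &bridge); /* bridge is IDLE */
ret = drm_locate(bridge);                 /* probe behind the bridge (IDLE) */
ret = drm_select_subtree(bridge);         /* IDLE -> SELECTED */
ret = drm_alloc_resource(bridge);         /* SELECTED -> READY */
/* drivers may now claim READY nodes with drm_claim_handle() -> ACTIVE */

/* Extraction: reverse the sequence */
ret = drm_free_handle(handle);            /* ACTIVE -> READY (per driver) */
ret = drm_free_resource(bridge);          /* READY -> SELECTED */
ret = drm_unselect_subtree(bridge);       /* SELECTED -> IDLE */
ret = drm_delete_subtree(bridge);         /* IDLE -> removed (UNKNOWN) */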


Advanced Topics

Adding a New Bus Layer


This section describes the entry points and the Application Programming Interface
(API) between the DRM module and the bus layer. The information in this section
is useful for implementing a new bus layer or enhancing an existing bus layer.
The DRM invokes functions in the bus layer by means of a bus handle. The bus
handle is a pointer to a structure containing function pointers that implement the
bus layer functionality. The file drm.h defines the bus handle as:
struct drm_bushandle_s {

int (*init_buslayer)(); /* Initialize the buslayer */


int (*busnode_init)(); /* Initialize a bus node */
int (*alloc_res_bus)(); /* Allocate resource to
a bus node */
int (*alloc_res_dev)(); /* Allocate resources
to a device node */
int (*free_res_bus)(); /* Free resources
from a bus node */
int (*free_res_dev)(); /* Free resources from a
device node */
int (*find_child)();/* Find children of a bus node*/
int (*map_resource)(); /* Map a device resource
into kernel virtual
address space */
int (*unmap_resource)(); /* Unmap a device resource from
kernel address space */
int (*read_device)(); /* Perform a read operation
on a device resource */
int (*write_device)(); /* Perform a write operation
on the device resource */
int (*translate_addr)(); /* address translation service */
int (*del_busnode)(); /* Remove a bus node */
int (*del_devnode)(); /* Remove a device node */
int (*insertnode)(); /* Insert a new drm node */
int (*ioctl)(); /* buslayer specific ioctl functions */
};

The DRM maintains an array of drm_bushandles registering all the known bus
handles. The index into this array is the bus layer ID for the bus layer. A new bus
layer is registered by adding a new entry to this array. A NULL entry indicates the
end of the list. The file drm_conf.c has this configuration information.
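The fragment below is only a sketch of what such a registration table might
look like; the array name and the pcibus_handle and vmebus_handle entries are
hypothetical, and the actual contents of drm_conf.c may differ:

/* Hypothetical bus layer registration (sketch of drm_conf.c contents).
 * The array index is the bus layer ID; a NULL entry terminates the list. */
extern struct drm_bushandle_s pcibus_handle;   /* assumed PCI bus layer */
extern struct drm_bushandle_s vmebus_handle;   /* assumed VME bus layer */

struct drm_bushandle_s *drm_bushandles[] = {
    &pcibus_handle,                 /* bus layer ID 0 */
    &vmebus_handle,                 /* bus layer ID 1 */
    NULL                            /* end of list */
};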

init_buslayer ()
This entry point is invoked during the bus layer initialization. This function is
called only once. The bus layer is supposed to initialize bus layer-specific data. For
example, any built-in resource allocators are initialized, tables are allocated,
devices are turned on or off, and so forth. init_buslayer() takes no
parameters. This entry point is invoked when the drm_init() service routine is
called.
int init_buslayer(void);

busnode_init ()
This entry point is invoked to initialize a bus-node. Bus layer-specific data is
allocated during this call. The hardware must be configured to allow further probes
behind this node. For example, the PCIBUSLAYER needs to allocate the secondary
bus number and the subordinate bus number of PCI-to-PCI Bridges during
busnode_init(). busnode_init() takes a DRM node handle as a parameter.
int busnode_init(struct drm_node_s *handle);

The secondary bus layer ID determines which bus layer is invoked to call
busnode_init(). This entry point is invoked during recursive probes and when a
bus node is inserted.

alloc_res_bus ()
This entry point is called as a result of invoking the drm_alloc_resource()
service routine on this bus node. The bus layer must allocate node-specific
resources like IO windows and bus space.
alloc_res_bus() takes a DRM node handle as the parameter.
int alloc_res_bus(struct drm_node_s *handle);

The secondary bus layer ID determines which bus layer is invoked by this call to
alloc_res_bus(). This entry point is invoked every time the node transits from
the SELECTED state to the READY state. Take care of any previous history of
this node by clearing unused registers.

alloc_res_dev ()
This entry point is called as a result of invoking the drm_alloc_resource()
service routine on this node. The bus layer must allocate device-specific resources,
like bus address, physical address, interrupt controller ID, interrupt request line,


and interrupt_needed flag. The device needs to be programmed with the


allocated resources so that it is in the READY state. The DRM node handle is
passed as a parameter to this call. For example, the base address registers are
allocated and programmed in a PCI device as a result of this call:
int alloc_res_dev(struct drm_node_s *handle);

free_res_bus ()
This entry point is called as a result of invoking the drm_free_resource()
service routine on this bus node. The bus layer must free resources allocated like
IO windows, bus space, and so forth. This function should return an error if any of
its children have resources still allocated. The DRM node handle for the bus node
is passed in as an input parameter to this call:
int free_res_bus(struct drm_node_s *handle);

free_res_dev ()
This entry point is invoked as a result of invoking drm_free_resource() on
this device node. The bus layer should free any resources allocated specific to the
device such as bus address space, interrupt request line, and so forth. The DRM
node handle is passed in as an input parameter to this call:
int free_res_dev(struct drm_node_s *node_h);

find_child ()
This entry point is used by DRM to probe and find a child node. The find entry
point is invoked with the following arguments:
int find_child(struct drm_node_s *parent,
struct drm_node_s *ref_node,
struct drm_node_s **child);

The DRM keeps a list of sibling nodes linked singly. find_child() needs to
enumerate the bus and find a node between the ref_node and
ref_node->next. If ref_node is NULL, find_child() needs to see if there
are any new nodes at the head of the list. If ref_node->next is NULL,
find_child() needs to find a new node at the end of the list. If no new nodes are
found, find_child() returns an existing sibling node (if any). The head of the
list is returned if ref_node was NULL; otherwise, ref_node->next is
returned.


If a new node is discovered by the bus layer, it needs to use the


drm_alloc_node() service routine to allocate a new DRM node. This new node
is returned in the parameter child. DRM checks the state of the node to
determine if it is a new node. All new nodes have their states set to DRM_VIRGIN.
This is a transitory state that is used by the DRM to indicate a known but
uninitialized node.
When the find_child() discovers a new node, it needs to minimally initialize
the new node so that further processing on the node is performed. The DRM node
fields that need to be initialized are: device_id, vendor_id, and node_type
(DRM_BUS or DRM_DEVICE, DRM_AUTO or DRM_STATIC). The bus layer-
specific prop field is populated with data that provides geographic information
and other properties of the node. As prop is of type (void *), a pointer to a
bus layer-specific data structure is maintained in this field. In the case of a
DRM_BUS device, the secondary bus ID needs to be determined and set up in the
sbuslayer_id field. The pbuslayer_id field is inherited from the parent’s
sbuslayer_id field.

DRM_AUTO represents a node that was created during run-time with a call to the
drm_alloc_node() service routine. DRM_STATIC represents a node that was
configured into the kernel at build time. Only nodes that are of type DRM_AUTO
are deleted. If needed, the bus layer maintains two lists, one for static nodes and
one for automatically created nodes, so that static devices that are removed and
reinserted can still be handled.

map_resource ()
This entry point is invoked when the driver calls the drm_map_resource()
service routine. The bus layer assigns resource IDs to device resources. Some of
these resources are mapped in the kernel address space. This entry point creates an
address translation of the device resource. The bus layer must take care of
multilevel translations as necessary. For example, an address associated with a
VME device behind a VME-to-PCI bridge needs several translations to convert to
a kernel virtual address. The VME address gets translated to a PCI bus address.
The PCI bus address gets translated to a CPU physical address. The CPU physical
address gets translated into a kernel virtual address. The map_resource()
function takes the following arguments:
int map_resource(drm_node_s *handle,int resource_id, void
*vaddr);

The virtual address needs to be returned in *vaddr when the call succeeds.
This function generates a failure if the resource ID is invalid or not
mappable, or if the resource is already mapped.


unmap_resource ()
This entry point is called when the driver invokes the drm_unmap_resource()
service routine. The bus layer needs to clear address translations set up for the
indicated resource. The node handle and resource ID are passed in as input
arguments to the call:
int unmap_resource(drm_node_s *node, int resource_id);

read_device ()
This entry point is called when the driver invokes the drm_device_read()
service routine. This call is used to perform device I/O to defined device resources.
The resources are identified by the resource ID. The call takes the arguments as
shown:
int read_device(drm_node_s *node, int resource_id,
unsigned int offset, int size, void *buffer);

where node is the DRM node representing the device, and resource_id identifies
the resource being read. The offset and size parameters are specific to the
bus layers. If the resource is seekable, then offset and size are used as seek
offset and read size. If a set of device registers are being read, the offset
indicates the register number. If the data is accessed in different widths, the size
parameter is used to indicate the width. The result of the read operation is placed in
the buffer. The bus layer implements platform-specific synchronization
instructions or I/O flush instructions as needed. Mapped resources are allowed I/O
access via this interface. Bus layer resources associated with the device are allowed
access via this interface. For example, the PCI bus number, the PCI device number,
and the PCI function number are read using this interface.

write_device ()
This entry point is similar to the read_device() entry point and allows the
device to be written by the driver. The DRM makes the call to this entry point in
the following manner:
int write_device(drm_node_s *node, int resource_id,
unsigned int offset, int size, void *buffer);


translate_addr ()
This entry point is called in response to a drm_translate_addr() service
routine to translate an address. This function has not yet been finalized; in the bus-
specific implementation, it just returns DRM_ENOTSUP.
int translate_addr (drm_node_s *MasterHandle, drm_node_s *SlaveHandle,
    drm_node_s *InAddr_handle, drm_node_s *OutAddr_handle,
    int ControlFlags);

del_busnode ()
This entry point is called prior to a bus node being removed from the DRM tree.
The bus layer must release any pending resources associated with this node. For
example, any data storage for this node subject to sysbrk needs to be freed. The
DRM invokes this entry point with the node handle as a parameter:
int del_busnode(drm_node_s *handle);

This entry point is called when the drm_delete_node() service routine is


invoked. After this operation, the bus node can be made visible again only by a
drm_locate() or a drm_insertnode() service routine.

del_devnode ()
This entry point is called prior to a device node being removed from the DRM tree.
The bus layer must release all pending resources allocated to this node. Bus layer-
specific storage attached to the node->prop field needs to be freed.
int del_devnode(drm_node_s *handle);

insertnode ()
This entry point is called in response to a drm_insertnode() service routine.
The bus layer must verify the existence of the device being inserted, check if it is a
duplicate, and return a reference node so that the DRM inserts the node into the
tree. The DRM calls this entry point in the following manner:
int insertnode(struct drm_node_s *parent, void *prop,
struct drm_node_s **new_h,
struct drm_node_s **ref_h);

The bus layer creates a new node based on the properties given and returns it in
*new_h. The bus layer indicates where this node is to be inserted by initializing


*ref_h with the handle of the node after which *new_h is inserted. If *ref_h is NULL, the DRM
inserts the node at the head of the list. Similar to find_child(), the node should
be minimally initialized so that it participates in the DRM service routine calls. If
the node being inserted is a DRM_BUS type node, the sbuslayer_id field
should be initialized. DRM invokes busnode_init on the bus node inserted by
this call. This call adds only one node to the DRM tree.

ioctl ()
This entry point is called in response to a drm_ioctl() service routine. This
function performs bus layer-specific ioctl operations.

PCI Bus Layer


LynxOS provides a PCI bus layer that interfaces with DRM. The PCI bus layer
supports PCI-to-PCI bridges and CompactPCI Hot Swap environments. The PCI
bus layer features a plug-in resource allocator technology to customize PCI
resource allocation policies. LynxOS supplies a default PCI resource allocator
suitable for most applications.

Services
Device drivers use the standard DRM API to interface with the PCI bus layer. The
PCI bus layer-specific definitions are in the machine/pci_resource.h include
file.

NOTE: All PCI devices are disabled by default. The device driver enables the
device by writing to the command register in the install() function of the
driver. Failure to do this results in the device not responding to
IO/MEM/BUSMASTER accesses and appearing to be nonfunctional.

Resources
The PCI bus layer resources are defined in the header file pci_resource.h.
These resources provide the following functionality:
• Access to the PCI registers


• Mapping and unmapping to the base address registers


• Access to the bus number, device number, and function number of the
device
The resource PCI_RESID_REGS allows access to all the PCI registers. The
offset parameter to the drm_device_read() and drm_device_write()
service routines indicates the register to be used. The size parameter uses 1, 2, or
4, indicating a byte access, short access, or a word access. A size of zero
implicitly performs a word access. The table below describes the PCI resources
and the corresponding DRM service routines in which they are used.

Table 10-4: PCI Resources

PCI Resource (DRM Service Routines): Description

PCI_RESID_REGS (drm_device_read(), drm_device_write())
    Read/write any PCI config space register. The offset parameter in the
    drm_device_read/write() service routines is the register number.

PCI_RESID_BUSNO (drm_device_read())
    The bus number on which the device resides.

PCI_RESID_DEVNO (drm_device_read())
    The geographic device number.

PCI_RESID_FUNCNO (drm_device_read())
    The function number within the device.

PCI_RESID_DEVID (drm_device_read())
    The PCI device ID.

PCI_RESID_VENDORID (drm_device_read())
    The PCI vendor ID.

PCI_RESID_CMDREG (drm_device_read(), drm_device_write())
    The PCI command/status register.

PCI_RESID_REVID (drm_device_read())
    The PCI revision ID.

PCI_RESID_STAT (drm_device_read(), drm_device_write())
    Alias for the PCI command/status register.

PCI_RESID_CLASSCODE (drm_device_read())
    The PCI class code register.

PCI_RESID_SUBSYSID (drm_device_read())
    The PCI subsystem ID.

PCI_RESID_SUBSYSVID (drm_device_read())
    The PCI subsystem vendor ID.

PCI_RESID_BAR0 (drm_map_resource(), drm_unmap_resource())
    Base address register 0.

PCI_RESID_BAR1 (drm_map_resource(), drm_unmap_resource())
    Base address register 1.

PCI_RESID_BAR2 (drm_map_resource(), drm_unmap_resource())
    Base address register 2.

PCI_RESID_BAR3 (drm_map_resource(), drm_unmap_resource())
    Base address register 3.

PCI_RESID_BAR4 (drm_map_resource(), drm_unmap_resource())
    Base address register 4.

PCI_RESID_BAR5 (drm_map_resource(), drm_unmap_resource())
    Base address register 5.
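As a hedged illustration of the register-level interface, the fragment below
reads the 16-bit PCI Vendor ID through PCI_RESID_REGS, assuming that the
offset is interpreted as the configuration space byte offset (0) and that a
size of 2 selects a short access. The dedicated PCI_RESID_VENDORID resource
is the simpler route for this particular value; error handling is omitted:

#include <pci_resource.h>

unsigned short vendor;
int ret;

/* offset 0 = Vendor ID register, size 2 = short (16-bit) access */
ret = drm_device_read(handle, PCI_RESID_REGS, 0, 2, &vendor);
if(ret)
{
    /* read failed -- validate the handle and parameters */
}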

Plug-in Resource Allocator


The PCI bus layer uses a set of function pointers stored in a
struct pcibus_alloc_handle_s to allow a customized resource allocator to
be installed. This structure is defined in drm/pci_conf.h as follows:
struct pcibus_alloc_handle_s {
int (*pcialloc_init)(); /* Init entry point */
int (*pcialloc_busno)(); /* alloc bus number */
int (*pcialloc_bus_res)(); /* Alloc bus node resource */
int (*pcialloc_dev_res)(); /* Alloc dev node resource */
int (*pcifree_busno)(); /* Free bus number resource */
int (*pcifree_bus_res)(); /* Free bus node resource */
int (*pcifree_dev_res)(); /* Free dev node resource */
int (*pcialloc_hb_busno)();/* Alloc Host-Bridge Sec and Sub bus numbers */
};

It is also useful to know the pci_node structure maintained by the PCI bus layer
for every PCI device. This structure is as follows:
struct pci_node_s {
/* Common PCI node related data */
int bus_no; /* PCI bus number */
int dev_no; /* PCI device number */
int func_no; /* PCI function number */
struct pci_resource_s resource[PCI_NUM_BAR];
/* BusNode related data, used only by PCI2PCI bridge*/
int sec_bus_no; /* Used by BusNodes */
int sub_bus_no; /* Used by BusNodes */
struct pci_resource_s *res_io_head; /* Used by Busnodes */
struct pci_resource_s *res_mem_head; /* Used by Busnodes */
struct pci_resource_s *res_mem64_head;/* Used by Busnodes */
struct pci_resource_s *res_mempf_head;/* Used by Busnodes */
};


The pci_resource structure referenced above is:


struct pci_resource_s {
struct pci_resource_s *next;
struct pci_resource_s *vnext;
int alignment;
int size;
int type;
unsigned int baddr;
unsigned int paddr;
unsigned int vaddr;
};

pcialloc_init()
This entry point is invoked during bus layer initialization. Any global data
structures that are used by the allocator should be created and set up in this routine.
This function does not take any arguments.

pcialloc_busno()
This entry point is invoked to allocate the secondary and subordinate bus numbers
for a PCI-to-PCI bridge node. The handle to the DRM node representing the bridge
device is passed to this function. This function must populate the
handle->prop->sec_bus_no and the handle->prop->sub_bus_no fields
of the DRM node.

pcialloc_bus_res()
This entry is used to allocate address space resources for the passed in
PCI-to-PCI bridge node. The resources assigned to this bridge are stored in the
resource array. Array index-0 is assigned to hold the PCI I/O resources. Index-1 is
assigned to PCI memory resource. Index-2 is assigned to PCI memory64, and
Index-3 is assigned to PCI prefetch memory. These allocations are linked using the
next field to the parent resource list. The pci_node_s data structure also
maintains the resource list heads for the children of this bridge. This function needs
to populate the resource array suitably and clear the list head pointers.

pcialloc_dev_res()
This entry point is used to allocate device-specific address space. This function
takes as input parameters the DRM node handle and a base address register (BAR)
index. The type, size, and alignment requirements are filled in by the PCI bus layer.
The allocator needs to fill in the baddr (bus address) and the paddr (CPU


physical address) fields of the resource. Note that the vaddr (virtual address) is
allocated when the resource is mapped in. The allocator uses an architecture-
dependent service routine hw_baddr2paddr() provided by the BSP to translate
a bus address to a physical address. Depending on the policy implemented by the
allocator, the parent node’s resource list-head and allocations are used to assign
resource to this node. All the resources are chained to the parent’s resource list-
heads, if needed, by using the next field in the resource structure.

pcifree_busno()
This entry point is called in response to a drm_free_resource() service
routine request. The allocator should reclaim the busno and give it back to the
free pool. This function is passed the DRM node as input.

pcifree_bus_res()
This entry point is invoked in response to a drm_free_resource() service routine
request on a PCI2PCI bridge node. The allocator should reclaim the bus address
space and give it back to the free pool. This function should report an error if any
of the children still have resources allocated.

pcifree_dev_res()
This entry point is used to reverse the allocation performed by the
pcialloc_dev_res() entry point. The node handle and the BAR index are
passed in as parameters.


Example Driver
/* This is a sample driver for a hypothetical PCI device. This PCI device has a
vendor_id of ABC_VENDORID and a device_id of ABC_DEVICEID. This device has one base
address register implemented as a PCI Memory BAR and needs 4K of space. The
device registers are implemented in this space. The device needs an interrupt
service routine to handle events raised by the device. There may be
multiple such devices in the system. */

#include <pci_resource.h>

#define PCI_IO_ENABLE 0x1


#define PCI_MEM_ENABLE 0x2
#define PCI_BUSMASTER_ENABLE 0x4

struct device_registers {
unsigned int register1;
unsigned int register2;
unsigned int register3;
unsigned int register4;
};

struct device_static {
struct drm_node_s *handle;
struct device_registers *regptr;
int bus_number;
int device_number;
int func_number;
};

char *abc_install(struct info_t *info)


{

struct device_static *static_ptr;


int rv = 0;
unsigned int val;

/* Allocate device static block */

static_ptr = (struct device_static *)


sysbrk(sizeof(struct device_static));

if(!static_ptr)
{
/* memory allocation failed !! */
goto error_0;
}

/* Find the device ABC_VENDORID, ABC_DEVICEID. Every call to


abc_install() by the OS, installs a ABC device. The number of times
abc_install() is called depends on how many static devices for ABC have
been configured via the standard LynxOS device configuration facilities.
This entry point is also called during a dynamic device install. */


/* A Hot Swap capable driver may replace the next call with
drm_claim_handle() and pass the handle given by the system management
layer, instead of finding the device by itself */

#if !defined(HOTSWAP)

rv = drm_get_handle(PCI_BUSLAYER,
ABC_VENDORID,ABC_DEVICEID,
&(static_ptr->handle));

#else /* Hot Swap capable */

rv = drm_claim_handle(info->handle);
static_ptr->handle = info->handle;

#endif

if(rv)
{
/* drm_get_handle or drm_claim_handle failed to find a
device. return failure to the OS saying install failed. */

debug(("failed to find device(%x,%x)\n",


ABC_VENDORID,ABC_DEVICEID));
goto error_1;
}

/* Register an interrupt service routine for this


device */

rv = drm_register_isr(static_ptr->handle,
abc_isr, NULL);

if(rv == SYSERR)
{

/*If register isr fails release the handle and exit*/

debug(("drm_register_isr failed %d\n",rv));


goto error_2;
}

/* Map in the memory base address register (BAR) */

rv = drm_map_resource(static_ptr->handle,
PCI_RESID_BAR0,
&(static_ptr->regptr));

if(rv)
{

/*drm_map_resource failed , release the device and


exit*/

debug(("drm_map_resource failed with %d\n",rv));

goto error_3;
}

/* Enable the device for memory access */

rv = drm_device_read(static_ptr->handle,


PCI_RESID_CMDREG,0,0,&val);

if(rv)
{
debug(("drm_device_read failed with %d\n",rv));
goto error_4;
}

val |= PCI_MEM_ENABLE ;

rv = drm_device_write(static_ptr->handle,
PCI_RESID_CMDREG,0,0,&val);

if(rv)
{
debug(("drm_device_write failed to update the
command register, error = %d\n",rv);
goto error_4;
}

/* Read the Geographic properties of the device, this


is used by the driver to uniquely identify the
device */

rv = drm_device_read(static_ptr->handle,
PCI_RESID_BUSNO,0,0,
&(static_ptr->bus_number));

if(rv)
{
debug(("drm_device_read failed to read bus
number %d\n",rv));
goto error_4;
}

rv = drm_device_read(static_ptr->handle,
PCI_RESID_DEVNO,0,0,
&(static_ptr->device_number));

if(rv)
{
debug(("drm_device_read failed to read device
number %d\n",rv));
goto error_4;
}

rv = drm_device_read(static_ptr->handle,
PCI_RESID_FUNCNO,0,0,
&(static_ptr->func_number));

if(rv)
{

debug(("drm_device_read failed to read function


number %d\n",rv));
goto error_4 ;
}


/* perform any device specific initializations here,


the following statements are just illustrative */
/* recoset() is used to catch any bus errors */

if(!recoset())
{

static_ptr->regptr->register1 = 0;
static_ptr->regptr->register2 = 9600;
static_ptr->regptr->register3 = 1024;

if(static_ptr->regptr->register4 == 0x4)
{
static_ptr->regptr->register3 = 4096;
}
} else {
/* caught a bus error */
goto error_4;
}
noreco(); /* ... and so on */

/* Successful exit from the install routine, return
the static pointer */

return((char *)static_ptr);

error_4:

drm_unmap_resource(static_ptr->handle,
PCI_RESID_BAR0);

error_3:

drm_unregister_isr(static_ptr->handle);

error_2:

drm_free_handle(static_ptr->handle);

error_1:

sysfree(static_ptr,sizeof(struct device_static));

error_0:

return((char *)SYSERR);

} /* abc_install */

int abc_uninstall(struct device_static *static_ptr)


{

unsigned int val;


int rv = 0;

/* perform any device specific shutdowns */

static_ptr->regptr->register1 = 0xff;

/* and so on */


/* Disable the device from responding to memory access */

rv = drm_device_read(static_ptr->handle,
PCI_RESID_CMDREG,0,0,&val);
if(rv)
{
debug(("failed to read device %d\n",rv));
}
val &= ~(PCI_MEM_ENABLE);
rv = drm_device_write(static_ptr->handle,
PCI_RESID_CMDREG,0,0,&val);
if(rv)
{
debug(("failed to write device %d\n",rv));
}

/* Unmap the memory resource */

rv = drm_unmap_resource(static_ptr->handle,
PCI_RESID_BAR0);
if(rv)
{
debug(("failed to unmap resource %d\n",rv));
}

/* unregister the isr */

rv = drm_unregister_isr(static_ptr->handle);
if(rv)
{
debug(("failed to unregister isr %d\n",rv));
}
/* release the device handle */

rv = drm_free_handle(static_ptr->handle);
if(rv)
{
debug(("Failed to free the device handle %d\n",rv));
}

sysfree(static_ptr,sizeof(struct device_static));

return(0);
}

/* The other entry points of the driver are device specific */

int abc_open(…) {
}

int abc_read(…) {
}

int abc_write(…) {
}

int abc_ioctl(…) {
}

int abc_close(…) {
}


int abc_isr(…) {
}



APPENDIX A Network Device Drivers

A network driver is defined as a link-level device driver that interfaces with the
LynxOS TCP/IP module. Unlike other drivers, a network driver does not interface
directly with user applications. It interfaces instead with the LynxOS TCP/IP
module. This interface is defined by a set of driver entry points and data structures
described in the following sections.
Kernel threads play an important role in LynxOS networking software, not only
within the drivers, but also as part of the TCP/IP module.
The example code fragments given below illustrate the points discussed for an
Ethernet device. These examples can easily be adapted to other technologies.

Kernel Data Structures


A network driver must make use of a number of kernel data structures. Each of
these structures are briefly described here and their use is further illustrated
throughout the rest of the chapter.
A driver must include the following header files which define these structures and
various symbolic constants that are used in the rest of this chapter:
#include <kernel.h>
#include <kern_proto.h>
#include <getmem.h>
#include <stask.h>
#include <partschedu.h>
#include <drm.h>
#include <bsp.h>
#include <io.h>


#include <sys/param.h>
#include <sys/systm.h>
#include <sys/mbuf.h>
#include <sys/protosw.h>
#include <sys/socket.h>
#include <sys/kernel.h>
#include <sys/sockio.h>

#include <net/if.h>
#include <net/if_arp.h>
#include <net/ethernet.h>
#include <net/if_dl.h>
#include <net/if_media.h>

#include <net/bpf.h>
#include <net/if_types.h>
#include <net/if_vlan_var.h>

#include <netinet/in_systm.h>
#include <netinet/in.h>
#include <netinet/ip.h>
#include <netinet/tcp.h>
#include <netinet/udp.h>
#include <netinet/in_var.h>

#include <sys/sysctl.h>

struct ether_header
The Ethernet header must be prefixed to every outgoing packet. It specifies the
destination and source Ethernet addresses and a packet type. The symbolic
constants ETHERTYPE_IP, ETHERTYPE_ARP, and ETHERTYPE_RARP can be
used for the packet type.
struct ether_header {
u_char ether_dhost[6]; /* dest Ethernet addr */
u_char ether_shost[6]; /* source Ethernet addr */
u_short ether_type; /* Ethernet packet type */
}

struct arpcom
The arpcom structure is used for communication between the TCP/IP module and
the network interface driver. It contains the ifnet structure (described below)
and the interface’s Ethernet and Internet addresses. This structure must be the first
element in the device driver’s statics structure.
struct arpcom {
struct ifnet ac_if; /* network visible interface */
u_char ac_enaddr[6]; /* Ethernet address */
struct in_addr ac_ipaddr; /* Internet address */
struct ether_multi *ac_multiaddrs; /* list of ether multicast addrs */
int ac_multicnt; /* length of ac_multiaddrs list */


void *ac_netgraph; /* ng_ether(4) netgraph node info */


};

NOTE: The statics structure is the structure that is specific to the device. For example,
the following is the structure for the fxp driver:
struct fxp_softc {
struct arpcom arpcom; /* per-interface network data */
struct resource *mem; /* resource descriptor for registers */
int rtp; /* register resource type */
int rgd; /* register descriptor in use */
struct resource *irq; /* resource descriptor for interrupt */
void *ih; /* interrupt handler cookie */
struct mbuf *rfa_headm; /* first mbuf in receive frame area */

. . . . . .
. . . . . .
}

struct sockaddr
The sockaddr structure is a generic structure for specifying socket addresses,
containing an address family field and up to 14 bytes of protocol-specific address
data.
struct sockaddr {
u_char sa_len; /* total length */
u_char sa_family; /* address family */
char sa_data[14]; /* actually longer; address value */
};

struct sockaddr_in
The sockaddr_in structure is a structure used for specifying socket addresses
for the Internet protocol family.
struct sockaddr_in {
uint8_t sin_len;
sa_family_t sin_family; /* always AF_INET */
u_short sin_port; /* port number */
struct in_addr sin_addr; /* host Internet addr */
char sin_zero[8];
};

struct in_addr
The in_addr structure is a structure specifying a 32-bit host Internet address.
struct in_addr {
u_long s_addr;
};


struct ifnet
This is the principal data structure used to communicate between the driver and the
TCP/IP module. It provides the TCP/IP software with a generic, hardware-
independent interface to the network drivers. It specifies a number of entry points
that the TCP/IP module can call in the driver, a flag variable indicating general
characteristics and the current state of the interface, a queue for outgoing packets,
and a number of statistics counters.
struct ifnet {
#ifdef __Lynx__
void *p; /* pointer to driver state */
#define if_softc p
#else
void *if_softc; /* pointer to driver state */
#endif
char *if_name; /* name, e.g. ``en'' or ``lo'' */
TAILQ_ENTRY(ifnet) if_link; /* all struct ifnets are chained */
struct ifaddrhead if_addrhead; /* linked list of addresses per if */
int if_pcount; /* number of promiscuous listeners */
struct bpf_if *if_bpf; /* packet filter structure */
u_short if_index; /* numeric abbreviation for this if */
short if_unit; /* sub-unit for lower level driver */
short if_timer; /* time 'til if_watchdog called */
short if_flags; /* up/down, broadcast, etc. */
int if_ipending; /* interrupts pending */
void *if_linkmib; /* link-type-specific MIB data */
size_t if_linkmiblen; /* length of above data */
struct if_data if_data;
struct ifmultihead if_multiaddrs;/* multicast addresses configured */
int if_amcount; /* number of all-multicast requests */
/* procedure handles */
int (*if_output) /* output routine (enqueue) */
(struct ifnet *, struct mbuf *, struct sockaddr *,
struct rtentry *);
void (*if_start) /* initiate output routine */
(struct ifnet *);
union {
int (*if_done) /* output complete routine */
(struct ifnet *); /* (XXX not used) */
int uif_capabilities; /* interface capabilities */
} _u1;
int (*if_ioctl) /* ioctl routine */
(struct ifnet *, u_long, caddr_t);
void (*if_watchdog) /* timer routine */
__P((struct ifnet *));
#ifdef __Lynx__
int (*if_reset) /* Lynx Thing */
__P((struct ifnet *)); /* new autoconfig will permit removal */
int (*if_setprio)
__P((struct ifnet *, int));/* To change the prio of the
interface*/
#endif

union {
int (*if_poll_recv) /* polled receive routine */
(struct ifnet *, int *);
int uif_capenable; /* enabled features */
} _u2;


int (*if_poll_xmit) /* polled transmit routine */


(struct ifnet *, int *);
void (*if_poll_intren) /* polled interrupt reenable routine */
(struct ifnet *);
void (*if_poll_slowinput) /* input routine for slow devices */
(struct ifnet *, struct mbuf *);
void (*if_init) /* Init routine */
(void *);
int (*if_resolvemulti) /* validate/resolve multicast */
(struct ifnet *, struct sockaddr **, struct sockaddr *);
#if 1 /* ALTQ */
struct ifaltq if_snd; /* output queue (includes altq) */
#else
struct ifqueue if_snd; /* output queue */
#endif
struct ifqueue *if_poll_slowq; /* input queue for slow devices */
#if defined(__Lynx__)
struct raweth *if_raweth;
#endif
void *if_afdata[AF_MAX];
void *if_bridge; /* for bridge STP code */
};

The symbolic constants IFF_UP, IFF_RUNNING, IFF_BROADCAST, and


IFF_NOTRAILERS can be used to set bits in the if_flags field.

Looking at the arpcom structure, notice that the first member is an ifnet
structure. A driver should declare a struct arpcom as part of the statics structure
and use the ifnet structure within this. There is an important reason for this,
which is explained in the discussion of the ioctl entry point.

struct mbuf
Data packets are passed between the TCP/IP module and a network interface driver
using mbuf structures. This structure is designed to allow the efficient
encapsulation and decapsulation of protocol packets without the need for copying
data. A number of functions and macros are defined in sys/mbuf.h for using
mbuf structures.
/* header at beginning of each mbuf: */
struct m_hdr {
struct mbuf *mh_next; /* next buffer in chain */
struct mbuf *mh_nextpkt; /* next chain in queue */
int mh_len; /* amount of data in this mbuf */
caddr_t mh_data; /* location of data */
short mh_type; /* type of data in this mbuf */
short mh_flags; /* flags; see below */
};

/* record/packet header in first mbuf of chain;


* valid if M_PKTHDR set
*/

struct pkthdr {
struct ifnet *rcvif; /* rcv interface */


int len; /* total packet length */


/* variables for ip and tcp reassembly */
void *header; /* pointer to packet header */
/* variables for hardware checksum */
int csum_flags; /* flags regarding checksum */
int csum_data; /* data field used by csum routines */
SLIST_HEAD(packet_tags, m_tag) tags; /* list of packet tags */
};

/* description of external storage mapped into mbuf,
 * valid if M_EXT set
 */
struct m_ext {
caddr_t ext_buf; /* start of buffer */
void (*ext_free) (caddr_t, u_int); /* free routine if not the
usual */
u_int ext_size; /* size of buffer, for ext_free
*/
void (*ext_ref) (caddr_t, u_int); /* add a reference to the ext
object */
};

struct mbuf {
struct m_hdr m_hdr;
union {
struct {
struct pkthdr MH_pkthdr; /* M_PKTHDR set */
union {
struct m_ext MH_ext; /* M_EXT set */
char MH_databuf[MHLEN];
} MH_dat;
} MH;
char M_databuf[MLEN]; /* !M_PKTHDR, !M_EXT */
} M_dat;
};

#define m_next m_hdr.mh_next


#define m_len m_hdr.mh_len
#define m_data m_hdr.mh_data
#define m_type m_hdr.mh_type
#define m_flags m_hdr.mh_flags
#define m_nextpkt m_hdr.mh_nextpkt
#define m_act m_nextpkt
#define m_pkthdr M_dat.MH.MH_pkthdr
#define m_ext M_dat.MH.MH_dat.MH_ext
#define m_pktdat M_dat.MH.MH_dat.MH_databuf
#define m_dat M_dat.M_databuf


Adding or Removing Data in an mbuf


The position and amount of data currently in an mbuf are identified by a pointer
and a length. By changing these values, data can be added or deleted at the
beginning or end of the mbuf. A pointer to the start of the data in the mbuf can be
obtained using the mtod macro. The pointer is cast as an arbitrary data type,
specified as an argument to the macro:
char *cp;
struct mbuf *mb;

cp = mtod (mb, char *);


/* get pointer to data in mbuf */

The macro dtom takes a pointer to data placed anywhere within the data portion
of the mbuf and returns a pointer to the mbuf structure itself. For example, if we
know that cp points within the data area of an mbuf, the sequence will be:
struct mbuf *mb;
char *cp;

mb = dtom(cp);

Data is added to the head of an mbuf by decrementing the m_data pointer,


incrementing the m_len value, and copying the data using a function such as
bcopy. Data is added to the tail of an mbuf in a similar manner by incrementing
the m_len value. The ability to add data to the tail of an mbuf is useful for
implementing trailer protocols, but LynxOS does not currently support such
protocols.
Data is removed from the head or tail of an mbuf by simply incrementing the
m_data pointer and/or decrementing m_len.
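The following fragment is a minimal sketch of adding n bytes to the head of
an mbuf; mb, src, and n are assumed to be set up by the caller, and the
caller is assumed to have verified that n bytes of leading space are
available in the buffer:

struct mbuf *mb;        /* mbuf with n bytes of leading space available */
caddr_t src;            /* data to prepend */
int n;                  /* number of bytes to add */

mb->m_data -= n;                    /* grow the data area toward the front */
mb->m_len += n;
bcopy (src, mtod (mb, caddr_t), n); /* copy the new bytes into place */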

Allocating mbufs
The above examples did not discuss what to do when sufficient space is not
available in an mbuf to add data. In this case, a new mbuf can be allocated using
the function m_get. The new mbuf is linked onto the existing mbuf chain using
its m_next field. m_get can be replaced with MGET, which is a macro rather
than a function call. MGET produces faster code, whereas m_get results in
smaller code. The example to add data to the beginning of a packet now becomes:
struct mbuf *m;
caddr_t src, dst;

MGET(m, M_DONTWAIT, MT_HEADER);


if (m == NULL)
return (ENOBUFS);
dst = mtod (m, caddr_t);


bcopy (src, dst, n);

The second argument to m_get/MGET specifies whether the function should block
or return an error when no mbufs are available. A driver should use
M_DONTWAIT, which will cause the mbuf pointer to be set to zero if no mbufs
are free.
The third argument to m_get/MGET specifies how the mbuf will be used. This is
for statistical purposes only and is used, for example, by the command
netstat -m. The types used by a network driver are MT_HEADER for protocol
headers and MT_DATA for data packets.

mbuf Clusters
When receiving packets from a network interface, a driver must allocate mbufs to
store the data. If the data packets are large enough, a structure known as an mbuf
cluster can be used. A cluster can hold more data than a regular mbuf, MCLBYTES
bytes as opposed to MLEN. As a rule of thumb, there is benefit to be gained from
using a cluster if the packet is larger than MCLBYTES/2.
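A hedged sketch of allocating storage for a received packet follows. The len
variable is the received packet length (assumed to be at most MCLBYTES); the
sketch simply attaches a cluster whenever the data does not fit in a plain
packet header mbuf, with the MCLBYTES/2 rule of thumb noted in a comment:

struct mbuf *m;
int len;                            /* received packet length, <= MCLBYTES */

MGETHDR(m, M_DONTWAIT, MT_DATA);
if (m == NULL)
    return (ENOBUFS);

if (len > MHLEN) {                  /* worthwhile above MCLBYTES/2, required
                                       whenever the data exceeds MHLEN */
    MCLGET(m, M_DONTWAIT);          /* attach a cluster to the mbuf */
    if ((m->m_flags & M_EXT) == 0) {
        m_freem(m);                 /* no clusters available */
        return (ENOBUFS);
    }
}
m->m_pkthdr.len = m->m_len = len;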

Freeing mbufs
Since there are a limited number of mbufs in the system, the driver must take care
to free mbufs at the appropriate points, listed below:
Packet Output:
• Interface down
• Address family not supported
• No mbufs for Ethernet header
• if_snd queue full
• After packet has been transferred to interface
Packet Input:
• Not enough mbufs to receive packet
• Unknown Ethernet packet type
• Input queue full


The sections on packet input and output show appropriate code examples for each
of the above situations. Failure to free them will eventually lead to the system
running out of mbufs.

Table A-1: Summary of Commonly Used mbuf Macros

Macro Description

MCLGET Get a cluster and set the data pointer of the mbuf to point to the
cluster.
MGETHDR Allocate an mbuf and initialize it as a packet header.
MH_ALIGN Set the m_data pointer of an mbuf containing a packet header so
that an object of the specified size is placed at the end of the mbuf,
longword aligned.
M_PREPEND Prepend specified bytes of data in front of the data in the mbuf.
dtom Convert the data pointer within mbuf to mbuf pointer.
mtod Convert mbuf pointer to data pointer of specified type.

Table A-2: Summary of Commonly Used mbuf Functions

Function Description

m_adj Remove data from mbuf at start/end.


m_cat Concatenate one mbuf chain to another.
m_copy Version of m_copym that does not wait.
m_copydata Copy data from mbuf chain to a buffer.
m_copyback Copy data from buffer to an mbuf chain.
m_copym Create a new mbuf chain from an existing mbuf chain.
m_devget Create a new mbuf chain with a packet header from data in a buffer.
m_free A function version of MFREE macro.
m_freem Free all mbufs in a chain.
m_get A function version of MGET macro.
m_getclr Get an mbuf and clear the buffer.


m_gethdr A function version of MGETHDR macro.


m_pullup Pull up data so that a certain number of bytes of data are stored
contiguously in the first mbuf in the chain.
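As an illustration of m_devget(), the hedged fragment below builds an mbuf
chain (with a packet header) from a contiguous receive buffer. The rxbuf and
totlen values are assumed to come from the device, ifp is the driver's ifnet
pointer, and what happens to the resulting chain is left to the input-path
code discussed later:

struct mbuf *m;
char *rxbuf;            /* contiguous receive buffer (device-specific) */
int totlen;             /* total packet length, including Ethernet header */
struct ifnet *ifp;      /* this interface */

/* Copy the packet into a freshly allocated mbuf chain; the receiving
   interface is recorded in the packet header. */
m = m_devget (rxbuf, totlen, 0, ifp, NULL);
if (m == NULL) {
    /* not enough mbufs -- drop the packet and update statistics */
}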

Statics Structure
In keeping with the general design philosophy of LynxOS drivers, network drivers
should define a statics structure for all device-specific information. However, the
TCP/IP software has no knowledge of this structure, which is specific to each
network interface, and does not pass it as an argument to the driver entry points.
The solution to this problem is for the ifnet structure to be contained within the
statics structure. The user-defined field p in the ifnet structure is initialized to
contain the address of the statics structure.
Given the address of the ifnet structure passed to the entry point from the
TCP/IP software, the driver can obtain the address of the statics structure as
follows:
struct ifnet *ifp;
struct statics *s = (struct statics *) ifp->p;

The arpcom structure must be the first element in the statics structure. In the code
examples below, the arpcom structure is named ac.
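As an illustration only, a statics structure for a network driver might be laid
out as shown below. Apart from ac, which must come first, the field names are
simply those used by the code fragments in this appendix and are not required
by the TCP/IP software:

struct statics {
    struct arpcom ac;       /* must be first; contains the ifnet structure */
    int ds_flags;           /* driver state flags (e.g., DSF_RUNNING) */
    int xmt_pending;        /* nonzero while a transmission is in progress */
    int thread_sem;         /* kernel thread event semaphore */
    int kthread_id;         /* kernel thread id */
    /* ... device registers, buffer pointers, statistics ... */
};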

Packet Queues
A number of queues are used for transferring data between the interface and the
TCP/IP software. There is an output queue for each interface, contained in the
ifnet structure. There are two input queues used by all network interfaces—one
for IP packets and another for ARP/RARP packets. All queues are accessed using
the macros IF_ENQUEUE, IF_DEQUEUE, IF_QFULL, and IF_DROP.
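As an illustration of these macros, a queue insertion guarded against overflow
might look like this (in practice the output enqueue is normally performed by
ether_output rather than by the driver itself):

if (IF_QFULL(&ifp->if_snd)) {
    IF_DROP(&ifp->if_snd);      /* record the drop in the queue statistics */
    m_freem(m);                 /* the driver must still free the mbuf chain */
} else {
    IF_ENQUEUE(&ifp->if_snd, m);
}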

Driver Entry Points


A network driver contains the following entry points:
install/uninstall  Called by the kernel in the usual manner. The install
                   routine must perform a number of tasks specific to
                   network drivers.
interrupt handler  The interrupt handler is declared and called in exactly
                   the same manner as for other drivers. It is not
                   discussed further here.
output             Called by the TCP/IP software to transmit packets on the
                   network interface.
ioctl              Called by the TCP/IP software to perform a number of
                   commands on the network interface.
watchdog           Called by the TCP/IP software after a user-specified
                   timeout period.
reset              Called by the kernel during the reboot sequence.
setprio            Called by the TCP/IP software to implement priority
                   tracking.
By convention, the entry point names are prefixed with the driver name.

install Entry Point


In addition to the usual things done in the install routine (allocation and
initialization of the statics structure, declaration of interrupt handler, and so on),
the driver must also fill in the fields of the ifnet structure and make the interface
known to the TCP/IP software. Note also that hardware initialization is normally
done in the ioctl routine rather than in the install routine.

Finding the Interface Name


The install routine must initialize the if_name field in the ifnet structure.
This is the name by which the interface is known to the TCP/IP software. It is used,
for example, as an argument to the ifconfig and netstat utilities.

The interface name is a user-defined field specified in the driver’s device
configuration file (drvr.cfg) in the /sys/cfg directory. The usual technique
used by the driver to find this field is to search the ucdevsw table for an
entry with a matching device information structure address. ucdevsw is a
kernel table containing entries for all the character devices declared in the
config.tbl file. The kernel variable nucdevsw gives the size of this table.
extern int nucdevsw;
extern struct udevsw_entry ucdevsw[];
struct statics *s;
struct ifnet *ifp;
int i;

ifp->if_name = (char *) 0;
for (i = 0; i < nucdevsw; i++) {
    if (ucdevsw[i].info == (char *) info) {
        if (strlen (ucdevsw[i].name) > IFNAMSIZ) {
            sysfree (s, (long) sizeof (struct statics));
            return ((char *) SYSERR);
        }
        ifp->if_name = ucdevsw[i].name;
        break;
    }
}
if (ifp->if_name == (char *) 0) {
    sysfree (s, (long) sizeof (struct statics));
    return ((char *) SYSERR);
}

Note that this method only works for statically installed drivers. Dynamically
installed drivers do not have an entry in the ucdevsw table.

Initializing the Ethernet Address


The ac_enaddr field in the arpcom structure is used to hold the interface’s
Ethernet address, which must be included in the Ethernet header added to all
outgoing packets. The install routine should initialize this field by reading the
Ethernet address from the hardware.
struct statics *s;

get_ether_addr (&s->ac.ac_enaddr);

In the above example, get_ether_addr would be a user-written function that
reads the Ethernet address from the hardware.

Initializing the ifnet Structure


The various fields in the ifnet structure should be initialized to appropriate
values. Unused fields should be set to zero or NULL. Once the structure has been

initialized, the network interface is made known to the TCP/IP module by calling
the if_attach function.
struct statics *s;
struct ifnet *ifp;

ifp->if_timer = 0 ;
ifp->p = (char *) s;/* address of statics structure */
ifp->if_unit = 0;
ifp->if_mtu = ETHERMTU;
ifp->if_flags = IFF_BROADCAST | IFF_NOTRAILERS | IFF_MULTICAST;
ifp->if_init = drvr_init;
ifp->if_output = ether_output;
ifp->if_ioctl = drvr_ioctl;
ifp->if_reset = drvr_reset;
ifp->if_start = drvr_start;
ifp->if_setprio = drvr_setprio;
ifp->if_watchdog = drvr_watchdog;

ifp->if_snd.ifq_maxlen = IFQ_MAXLEN;

ether_ifattach(ifp, nbpfilter);

Note that the if_output handle in the ifnet structure should point to the
ether_output routine in the TCP/IP module. Previously, it pointed to the driver-
specific local routine. After the ether_output routine has packaged the data for
output, it calls a start routine specified by if_start, a member of the interface
ifnet structure. For example:
ifp->if_start = lanstart;

Packet Output
The processing of outgoing packets is divided into two parts. The first part
concerns the TCP/IP module which is responsible for queueing the packet on the
interface’s output queue. The actual transmission of packets to the hardware is
handled by the driver start routine and the kernel thread. The driver is responsible
in all cases for freeing the mbufs holding the data once the packet has been
transmitted or at any time an error is encountered.

ether_output Function
A number of tasks previously performed by the driver output routine are now done
in TCP/IP module by the ether_output routine. Thus, the if_output field of
the interface ifnet structure is initialized to the address of the ether_output
routine in the driver install routine:

ifp->if_output = ether_output;

This causes the ether_output routine to be called indirectly when the TCP/IP
module has a packet to transmit. After enqueuing the packet, ether_output
calls a device-specific function indirectly through the if_start pointer in
the ifnet structure. For example, if ifp points to an ifnet structure:
(*ifp->if_start)(ifp);

The if_start field is also initialized by the driver install routine. The
driver start routine starts output on the interface if resources are
available. Before removing packets from the output queue for transmission,
the code will normally have to test whether the transmitter is idle and ready
to accept a new packet. It will typically dequeue a packet (which was
enqueued by ether_output) and transmit it.
sdisable(s);
if (ds->xmt_pending) {
    /* a transmission is already in progress */
    srestore(s);
    return 1;
}
IF_DEQUEUE(&ifp->if_snd, m);
if (m == 0) {
    srestore(s);
    return 0;
}
ds->xmt_pending = 1;
srestore(s);

...
/* Initiate transmission */
return 0;

Note that manipulation of the output queue must be synchronized with
sdisable/srestore. Once the packet has been transferred, the driver must free
the mbuf chain using m_freem.
One important point to consider here is that the start routine can now be called by
the TCP/IP module (via ether_output) and the driver kernel thread upon
receiving an interrupt. Thus, the start routine must protect code and data in the
critical area. For example, it could check a “pending” flag, which is set before
starting to transmit and cleared when a transmit done interrupt is received. If the
transmit start routine is not reentrant, it could signal a semaphore in order to notify
the driver’s kernel thread that packets are now available on the output queue. The
routine should then return 0 to indicate success.
ssignal(&s->thread_sem);
return (0);

Also note that the total amount of data in an mbuf chain can be obtained from
the mbuf packet header. This is a feature of BSD 4.4 and above; previously,
one had to traverse the mbuf chain to calculate it.
/* put mbuf data into TFD data area */
length = m->m_pkthdr.len;
m_copydata(m, 0, length, p);
m_freem(m);

Kernel Thread Processing


The kernel thread must perform the following activities relating to packet output:
• Start transmission
• Maintain statistics counters

Starting Transmission
As explained above, the kernel thread also calls the driver start routine to start
transmission. The transmit start routine dequeues a packet from the interface send
queue and transmits it.

Statistics Counters
The counters relating to packet output are the if_opackets, if_oerrors, and
if_collisions fields in the ifnet structure. The if_opackets counter
should be incremented for each packet that is successfully transmitted by the
interface without error. If the interface indicates that an error occurred during
transmission, the if_oerrors counter should be incremented. The driver should
also interrogate the interface to determine how many collisions, if any, occurred
during transmission of a packet. A collision is not necessarily an error condition.
An interface will normally make a number of attempts to send a packet before
raising an error condition.
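A hypothetical fragment of the transmit-completion handling might update these
counters as follows; the status bits and the collision count are device-specific
names used only for illustration:

if (status & TX_OK)
    ifp->if_opackets++;
else
    ifp->if_oerrors++;
ifp->if_collisions += tx_collisions;    /* as reported by the interface */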

Packet Input
When packets are received by a network interface, they must be copied from the
device to mbufs and passed to the TCP/IP software. Because this can take a
significant amount of time, the bulk of the processing of incoming packets should
be done by the driver’s kernel thread so that it does not impact the system’s
real-time performance. The interrupt handler routine should do the minimum necessary
to ensure that the interface continues to function correctly.
To maintain bounded system response times, the interrupt handler should also
disable further interrupts from the interface. These will be reenabled by the driver’s
kernel thread. The processing of input packets involves the following activities:
• Determine packet type.
• Copy data from interface into mbufs.
• Strip off Ethernet header.
• Enqueue packet on input queue.
• Reenable receiver interrupts.
• Maintain statistics counters.

Determining Packet Type


The packet type is specified in the Ethernet header and is used by the driver to
determine where to send the received packet. In the following code fragment, the
ptr variable is assumed to be a pointer to the start of the received Ethernet frame.
The use of the ntohs function ensures the portability of code across different
CPU architectures.
ether_type = ntohs (((struct ether_header *) ptr)->ether_type);

Copying Data to mbufs


Most network devices have local RAM that is visible to the device driver. On
packet reception, the driver must allocate sufficient mbufs to hold the received
packet, copy the data to the mbufs, and then pass the mbuf chain to the TCP/IP
software. A pointer to the receiving interface’s ifnet structure is stored in
the packet header so that the upper layers can easily identify the originating
interface. The Ethernet header must be
stripped from the received packet. This can be achieved simply by not copying it
into the mbuf(s). If the entire packet cannot be copied, any allocated mbufs must
be freed. The following code outlines how a packet is copied from the hardware to
mbufs using the m_devget routine. m_devget is called with the address and
the size of the buffer that contains the received packet. It creates a new mbuf
chain and returns the pointer to the chain.
m = m_devget(buf, len, 0, ifp, 0);

ifp is the device interface pointer. The variable buf points to the received data.
This is usually an address in the interface’s local RAM.
By default, m_devget uses bcopy, which copies data one byte at a time. A
driver can provide a different algorithm for more efficiency and pass its address to
the m_devget routine.

Enqueueing Packet
The packet read routine finally calls the TCP/IP module routine ether_input to
enqueue the received packet on one of the TCP/IP software’s input queues for
further processing.
struct ifnet *ifp;
struct ether_header *et;
struct mbuf *m;

ether_input(ifp, et, m);

Statistics Counters
The counters relating to packet input are the if_ipackets and if_ierrors
fields in the ifnet structure. The if_ipackets counter should be incremented
for each packet that is successfully transferred from the interface and enqueued on
the TCP/IP input queue. Receive errors are normally indicated by the interface in a
status register. In this case, the if_ierrors counter should be incremented.

ioctl Entry Point


The ioctl entry point is called by the TCP/IP software with the following
syntax:
drvr_ioctl (ifp, cmd, arg)
struct ifnet *ifp;
u_long cmd; /* ioctl command id */
caddr_t arg; /* command specific data */

The ioctl function must support the two commands SIOCSIFADDR and
SIOCSIFFLAGS.

SIOCSIFADDR
This command is used to set the network interface’s IP address. Currently, the only
address family supported is Internet. Typically, this ioctl gets called by the
ifconfig utility. The driver should set the IFF_UP bit in the if_flags and
call the drvr_init function to initialize the interface. The argument passed to
the ioctl routine is cast to a pointer to an ifaddr structure, which is then used
to initialize the interface’s Internet address in the arpcom structure. The driver
should also call arpwhohas to broadcast its Internet address on the network. This
will allow other nodes to add an entry for this interface in their ARP tables.

SIOCSIFFLAGS
This command is used to bring the interface up or down and is called, for example,
by the command ifconfig name up. The TCP/IP software sets or resets the
IFF_UP bit in the if_flags field before calling the driver’s ioctl entry point
to indicate the action to be taken. An interface that is down cannot transmit
packets.
When the interface is being brought up, the driver should call the drvr_init
function to initialize the interface. When the interface is being brought down, the
interface should be reset by calling drvr_reset. In both cases, the statistics
counters in the ifnet structure should be zeroed.
The driver normally defines a flag in the statics structure which it uses to keep
track of the current state of the interface (s->ds_flags in the example code
below).
struct statics *s;
struct ifaddr *ifa;

case SIOCSIFADDR:
    ifa = (struct ifaddr *) arg;
    ifp->if_flags |= IFF_UP;
    drvr_init (s);
    switch (ifa->ifa_addr->sa_family) {
    case AF_INET:
        ((struct arpcom *) ifp)->ac_ipaddr = IA_SIN (ifa)->sin_addr;
        arpwhohas ((struct arpcom *) ifp, &IA_SIN (ifa)->sin_addr);
        break;
    default:
        break;
    }
    break;

case SIOCSIFFLAGS:
    if ((ifp->if_flags & IFF_UP) == 0 &&
        (s->ds_flags & DSF_RUNNING)) {
        drvr_reset (s);             /* interface going down */
        s->ds_flags &= ~DSF_RUNNING;
    } else if ((ifp->if_flags & IFF_UP) &&
               !(s->ds_flags & DSF_RUNNING)) {
        drvr_init (s);              /* interface coming up */
    }
    ifp->if_ipackets = 0;
    ifp->if_opackets = 0;
    ifp->if_ierrors = 0;
    ifp->if_oerrors = 0;
    ifp->if_collisions = 0;
    break;

watchdog Entry Point


The watchdog entry point can be used to implement a function that periodically
monitors the operation of the interface, checking for conditions such as a hung
transmitter. The function can then take corrective action, if necessary. If the driver
does not have a watchdog function, the corresponding field in the ifnet structure
should be set to NULL before calling if_attach.
The watchdog function is used in conjunction with the if_timer field in the
ifnet structure. This field specifies a timeout interval in seconds. At the
expiration of this interval, the TCP/IP module calls the watchdog entry point in
the driver, passing it the p field from the ifnet structure as an argument. The p
field is normally used to contain the address of the statics structure.
Note that the timeout specified by if_timer is one-shot. The driver must
reset it to a nonzero value to cause the watchdog function to be called
again. Setting the if_timer value to 0 disables the watchdog function.
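A watchdog routine might be sketched as follows. The xmt_pending flag and the
five second re-arm interval are illustrative only, the arpcom structure's
embedded ifnet is assumed to be named ac_if as in BSD, and drvr_reset and
drvr_init are the driver routines referred to elsewhere in this appendix:

void drvr_watchdog (struct statics *s)
{
    struct ifnet *ifp = &s->ac.ac_if;

    if (s->xmt_pending) {       /* the transmitter appears to be hung */
        ifp->if_oerrors++;
        drvr_reset (s);
        drvr_init (s);
    }
    ifp->if_timer = 5;          /* re-arm the one-shot timer */
}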

reset Entry Point


This entry point is called by the kernel during a reboot sequence, passing it the p
field from the ifnet structure, which is normally the address of the statics
structure. This function may also be called internally from the driver’s ioctl
entry point. The function should reset the hardware, putting it into an inactive state.

Kernel Thread
The kernel thread receives events from two sources, the interrupt handler
(indicating completion of a packet transmission or reception) and the driver output
routine (indicating the availability of packets on the if_snd queue). A single
event synchronization semaphore is used for both these purposes. The thread
should handle interrupts first and then the packets on the output queue. The general
structure of the thread will look something like this:
struct statics *s;

for (;;) {
    swait (&s->thread_sem, SEM_SIGIGNORE);
    handle_interrupts (s);      /* handle any interrupts */
    output_packets (s);         /* start transmitter if necessary */
}

The precise details of the thread code will be very much dependent on the
hardware architecture. The function for processing interrupts will contain the
packet input code discussed above. It will also maintain the various statistics
counters. Also, receiver interrupts, if disabled by the interrupt handler, are
reenabled at this point. The output function will perform the tasks discussed above
in the section on packet output.

Priority Tracking
Whenever the set of user tasks using the TCP/IP software changes or the priority of
one of these tasks changes, the setprio entry point in the driver is invoked to
allow the driver to properly implement priority tracking on its kernel thread. The
entry point is passed two parameters, the address of the ifnet structure and the
priority which the kernel thread should be set to.
int drvrsetprio (struct ifnet *ifp, int prio)
{
    int ktid;       /* kernel thread id */

    ktid = ((struct statics *) (ifp->p))->kthread_id;
    stsetprio (ktid, prio);
    return (0);
}

Driver Configuration File


The driver configuration file drvr.cfg, in the /sys/cfg directory, needs to
declare only the install (and uninstall) entry points. The other entry points
are declared to the TCP/IP module dynamically using the if_attach function. A
typical configuration file looks something like this:
C:wd3e: \
::::: \
:::wd3einstall:wd3euninstall
D:wd:wd3e0_info::
N:wd:0:

IP Multicasting Support

ether_multi Structure
For each Ethernet interface, there is a list of Ethernet multicast address ranges to be
received by the hardware. This list defines the multicast filtering to be
implemented by the device. Each address range is stored in an ether_multi
structure:
struct ether_multi {
u_char enm_addrlo[6]; /* low/only addr of range */
u_char enm_addrhi[6]; /* high/only addr of range */
struct arpcom *enm_ac; /* back pointer to arpcom */
u_int enm_refcount; /* num claims to addr/range */
struct ether_multi *enm_next; /* ptr to next ether_multi */
};

The entire list of ether_multi structures is attached to the interface’s arpcom structure.


1. If the interface supports IP multicasting, the install routine should set the
IFF_MULTICAST flag. For example:
ifp->if_flags = IFF_BROADCAST | IFF_MULTICAST;

where ifp is a pointer to the interface’s ifnet structure.


2. Two new ioctls need to be added. These are SIOCADDMULTI to add
the multicast address to the reception list and SIOCDELMULTI to delete
the multicast address from the reception list:
case SIOCADDMULTI:
case SIOCDELMULTI:
    /* Update our multi-cast list */
    error = (cmd == SIOCADDMULTI) ?
        ether_addmulti ((struct ifreq *) data, &s->es_ac) :
        ether_delmulti ((struct ifreq *) data, &s->es_ac);

    if (error == ENETRESET) {
        /*
         * Multi-cast list has changed; set the
         * hardware filter accordingly.
         */
        lanreset (s);
        error = 0;
    }

3. The driver reset routine must program the controller filter registers from
the filter mask calculated from the multicast list associated with this
interface. This list is available in the arpcom structure, and there are
macros available to access the list. For example:
struct ifnet *ifp = &s->s_if;
register struct ether_multi *enm;
register int i, len;
struct ether_multistep step;

/*
 * Set up multi-cast address filter by passing
 * all multi-cast addresses through a crc
 * generator, and then using the high order 6
 * bits as an index into the 64 bit logical
 * address filter. The high order two bits
 * select the word, while the rest of the bits
 * select the bit within the word.
 */

bzero (s->mcast_filter, sizeof (s->mcast_filter));
ifp->if_flags &= ~IFF_ALLMULTI;
ETHER_FIRST_MULTI (step, &s->es_ac, enm);

while (enm != NULL) {
    if (bcmp ((caddr_t) &enm->enm_addrlo,
              (caddr_t) &enm->enm_addrhi,
              sizeof (enm->enm_addrlo)) != 0) {
        /*
         * We must listen to a range of multi-cast
         * addresses. For now, just accept all
         * multi-casts, rather than trying to set only
         * those filter bits needed to match the
         * range.
         * (At this time, the only use of address
         * ranges is for IP multi-cast routing, for
         * which the range is big enough to require
         * all bits set.)
         */
        for (i = 0; i < 8; i++)
            s->mcast_filter[i] = 0xff;
        ifp->if_flags |= IFF_ALLMULTI;
        break;
    }
    getcrc ((unsigned char *) &enm->enm_addrlo, s->mcast_filter);
    ETHER_NEXT_MULTI (step, enm);
}

4. If the driver input routine receives an Ethernet multicast packet, it should
set the M_MCAST flag in the mbuf before passing that mbuf to ether_input:
char *buf;
struct ether_header *et;
u_short ether_type;
struct mbuf *m = (struct mbuf *) NULL;
int flags = 0;

/* set buf to point to start of received frame */
...
et = (struct ether_header *) buf;
ether_type = ntohs ((u_short) et->ether_type);

if (et->ether_dhost[0] & 1)
    flags |= M_MCAST;

/* pull packet off interface */
...
m->m_flags |= flags;
ether_input (ifp, et, m);

Berkeley Packet Filter (BPF) Support


1. If the system has a bpf driver installed, call ether_ifattach ()
with an argument nbpfilter:
ether_ifattach(ifp, nbpfilter);

2. If bpf is listening on this interface, let it see the packet before it is
committed to the wire. This is done by tapping data in the driver output
routine as follows:
if (ifp->if_bpf)
    (*bpf_tap_p)(ifp->if_bpf, buf, length);

where buf is the output buffer and length is the size of the packet to
be transmitted.

3. If there is a bpf filter listening on this interface, hand off the raw
(received) packet to enet. This is done in the packet transmit routine as
follows:
if (ifp->if_bpf) bpf_mtap(ifp, mb_head);

4. The SIOCSIFFLAGS ioctl should support setting/clearing of promiscuous mode.
/*
 * If the state of the promiscuous bit changes,
 * the interface must be reset.
 */
if (((ifp->if_flags ^ s->es_iflags) & IFF_PROMISC) &&
    (ifp->if_flags & IFF_RUNNING)) {
    s->es_iflags = ifp->if_flags;
    laninit(s);
}

where s is the driver statics structure.

Alternate Queueing
The LynxOS networking stack implements ALTQ for better output queuing,
packet classification, and packet scheduling (for example, CBQ) or active queue
management. The following is the structure that defines a queue for a network
interface.
struct ifaltq {
    /* fields compatible with struct ifqueue */
    struct mbuf *ifq_head;
    struct mbuf *ifq_tail;
    int ifq_len;
    int ifq_maxlen;
    int ifq_drops;

    /* alternate queueing related fields */
    int altq_type;              /* discipline type */
    int altq_flags;             /* flags (e.g. ready, in-use) */
    void *altq_disc;            /* for discipline-specific use */
    struct ifnet *altq_ifp;     /* back pointer to interface */

    int (*altq_enqueue) __P((struct ifaltq *ifq, struct mbuf *m,
                             struct altq_pktattr *));
    struct mbuf *(*altq_dequeue) __P((struct ifaltq *ifq, int remove));
    int (*altq_request) __P((struct ifaltq *ifq, int req, void *arg));

    /* classifier fields */
    void *altq_clfier;          /* classifier-specific use */
    void *(*altq_classify) __P((void *, struct mbuf *, int));

    /* token bucket regulator */
    struct tb_regulator *altq_tbr;

    /* input traffic conditioner (doesn't belong to the output queue...) */
    struct top_cdnr *altq_cdnr;
};

This queuing model consists of three independent components.


• Classifier—Classifies a packet to a scheduling class based on predefined
rules.
• Queueing Discipline—Implements packet scheduling and buffer
management algorithms.
• Queue Regulator—Limits the amount that a driver can dequeue at a time.

Converting Existing Drivers for ALTQ Support


ALTQ support can be added to existing drivers with minor modifications. The
procedure below provides high level guidelines for modifying a driver for ALTQ
support.
In the driver look for lines that contain if_snd. Modify those lines as follows:
1. Queue status check:
If the driver code checks ifq_head to see whether the queue is empty
or not, use IFQ_IS_EMPTY() to check queue status.

Old Driver Model:
    if (ifp->if_snd.ifq_head != NULL)

New Driver Model:
    if (!IFQ_IS_EMPTY(&ifp->if_snd))

2. Dequeue operation:
Replace the dequeue macro IF_DEQUEUE() with IFQ_DEQUEUE().
Also, check for the validity of the dequeued network data buffer mbuf
against NULL.

Old Driver Model:
    IF_DEQUEUE(&ifp->if_snd, m);

New Driver Model:
    IFQ_DEQUEUE(&ifp->if_snd, m);
    if (m == NULL)
        return;

3. Poll-and-dequeue:
If the code polls the packet at the top of the queue and actually uses it
before dequeueing it, use IFQ_POLL() and replace the dequeue macro
IF_DEQUEUE() with IFQ_DEQUEUE().

Old Driver Model:
    m = ifp->if_snd.ifq_head;
    if (m != NULL) {
        /* use m to get resources */
        if (something goes wrong)
            return;
        IF_DEQUEUE(&ifp->if_snd, m);
        /* kick the hardware */
    }

New Driver Model:
    IFQ_POLL(&ifp->if_snd, m);
    if (m != NULL) {
        /* use m to get resources */
        if (something goes wrong)
            return;
        IFQ_DEQUEUE(&ifp->if_snd, m);
        /* kick the hardware */
    }

4. Eliminating IF_PREPEND:
Usually, the driver uses the macro IF_PREPEND() to cancel a previous
dequeue operation. Convert this logic into poll-and-dequeue by using the
combinations of IFQ_POLL() and IFQ_DEQUEUE().

Old Driver Model:
    IF_DEQUEUE(&ifp->if_snd, m);
    if (m != NULL) {
        if (something_goes_wrong) {
            IF_PREPEND(&ifp->if_snd, m);
            return;
        }
        /* kick the hardware */
    }

New Driver Model:
    IFQ_POLL(&ifp->if_snd, m);
    if (m != NULL) {
        if (something_goes_wrong) {
            return;
        }
        /* at this point, the driver is committed to send this packet. */
        IFQ_DEQUEUE(&ifp->if_snd, m);
        /* kick the hardware */
    }

5. Purge operation:
Use IFQ_PURGE() to empty the queue. Note that a nonwork-conserving
queue cannot be emptied by a dequeue loop.

Old Driver Model:
    while (ifp->if_snd.ifq_head != NULL) {
        IF_DEQUEUE(&ifp->if_snd, m);
        m_freem(m);
    }

New Driver Model:
    IFQ_PURGE(&ifp->if_snd);

6. Interface attach routine:
Use IFQ_SET_MAXLEN() to set a value for ifq_maxlen. Add
IFQ_SET_READY() to show that this driver has been converted to the new
style (this is used to distinguish new-style drivers).

Old Driver Model:
    if_attach(ifp);

New Driver Model:
    IFQ_SET_READY(&ifp->if_snd);
    if_attach(ifp);

7. Interface statistics:
All statistics counters in the drivers are incremented through macros.

Old Driver Model:
    ifp->if_snd.ifq_maxlen = qsize;
    IF_DROP(&ifp->if_snd);
    ifp->if_snd.ifq_len++;
    ifp->if_snd.ifq_len--;

New Driver Model:
    IFQ_SET_MAXLEN(&ifp->if_snd, qsize);
    IFQ_INC_DROPS(&ifp->if_snd);
    IFQ_INC_LEN(&ifp->if_snd);
    IFQ_DEC_LEN(&ifp->if_snd);



APPENDIX B LynxOS Device Driver Interface
Standard

This appendix defines the interface for device drivers designed to execute on
LynxOS. This appendix details the device driver entry points, their input
parameters, and return values. It also provides general guidance on the
functionality of each entry point. The appendix lists the service calls that a
driver can use to access kernel logical resources, as well as the kernel
resources that drivers may access read-only. The appendix also specifies
certain device driver architectural requirements.
LynxOS-compatible device drivers must comply with the guidelines set forth
within this appendix.

Introduction
Device drivers provide the processing necessary to hide the details of hardware
operation from kernel and application software.
A device driver is composed of the software used to control and monitor a device.
Applications invoke the device driver through the standard POSIX interfaces of
open, close, read, and write, and through the LynxOS ioctl interface.
LynxOS maps this interface to the device driver open, close, read, write,
and ioctl entry points. LynxOS also provides the ability to install and uninstall a
device by providing a mapping to the driver install and uninstall entry
points. The device driver provides the only interface to the hardware devices.
Applications cannot access the hardware directly and must use the POSIX and
LynxOS interfaces provided to interact with the hardware devices. The
relationships between the device driver software and hardware interfaces can be
seen below in Figure B-1. It is not within the scope of this document to describe

the POSIX or LynxOS interfaces but to detail the device driver interfaces required
by LynxOS.

Figure B-1: Device Driver Software and Hardware Interfaces
(The figure shows applications calling the POSIX and LynxOS interfaces; LynxOS
maps these calls to the device driver, which is the only component that
accesses the device hardware.)

LynxOS Device Driver Interfaces


This section defines the interface for device drivers developed under LynxOS.
A device driver consists of a set of interface functions, or entry points, and zero or
more support routines. The entry points define the interface with LynxOS. LynxOS
calls the appropriate entry point when a user application makes the associated
system call.
LynxOS has predefined the list of entry points for device drivers developed under
LynxOS. Even though a driver is not required to have all the entry points defined,
LynxOS device drivers will define all entry points. This assists applications in
determining whether an inappropriate call has been made to a device driver. If an
entry point were left undefined, the corresponding call might appear to succeed,
and an application might assume the task had completed when, in fact, it had not.

General
In general, device drivers will not use global data (the exception being the
entry_points structure for dynamically loadable device drivers). Device drivers
that use global data should clearly document its use.
In general, device drivers will not make functions visible outside of the device
driver, the two exceptions being the device driver entry points and functions that
must be made available to other device drivers. Functions visible outside of the
device driver should be clearly documented.
All driver entry points will check the return parameters of any function calls they
make.

Error Return Codes


When an error is detected, the driver should call pseterr() to set errno to one
of the following values and return SYSERR unless otherwise stated.

EFAULT If a driver determines that a pointer parameter is invalid, the
driver will set errno to EFAULT (bad address).
EINVAL If a driver discovers a command or argument that is out of range,
the driver will set errno to EINVAL (invalid argument).
EIO If a driver discovers that communication with the device failed,
the driver will set errno to EIO (input/output error).
ENOMEM If allocation of memory fails, the driver will set errno to
ENOMEM (not enough memory).
ENODEV If an entry point is not functionally applicable to the driver, the
driver will set errno to ENODEV (no such device).
ENOSYS If an entry point is not available in this implementation, the
driver will set errno to ENOSYS (function not
implemented).
ENOTSUP If an entry point does not support certain device options, the
driver will set errno to ENOTSUP (not supported).
EAGAIN If an entry point makes a request to a resource that is temporarily
unavailable but later calls may complete normally, the driver will
set errno to EAGAIN (resource temporarily unavailable).

EBUSY If an entry point makes a request to a resource that is
permanently unavailable or would result in a conflict with
another request, the driver will set errno to EBUSY
(resource busy).
ENXIO If the device verification fails (that is, the device was not found),
the driver will set errno to ENXIO (no such device or
address).
EPERM If the operation is not permitted or the calling process does not
have the appropriate privileges or permissions to perform the
requested operation, the driver will set errno to EPERM
(operation not permitted).
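For example, an install routine that fails to allocate its statics structure
might report the error as follows (a sketch; the cast on the return value
follows the install examples elsewhere in this document):

struct statics *s;

s = (struct statics *) sysbrk ((long) sizeof (struct statics));
if (s == (struct statics *) 0) {
    pseterr (ENOMEM);           /* memory allocation failed */
    return ((char *) SYSERR);
}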

Device Driver to Hardware Interfaces


Each device driver will interface with its respective hardware device.

LynxOS Specific Device Driver Organization

Common Device Driver Structures


Several entry points make reference to various different structures. A description
of these common device driver structures follows.

Device Information Structure


The device information structure contains device specific information used to
configure the device. There are no requirements imposed on the contents of this
structure. The contents of this structure are defined by the design of the device
driver and may be unique between device drivers. The only requirement imposed
on or by this structure is that the address of the structure must be passed in
appropriately to the install or uninstall entry point to maintain a stable
and consistent interface.

NOTE: The device information structure may not apply to a particular device and,
therefore, may not be defined. In this case, NULL may be passed to the install
routine for the device information structure pointer.

Device Statics Structure


The device statics structure contains device-specific configuration information
(possibly copied from the information file), driver communication and
synchronization mechanisms. There are no requirements imposed on the contents
of this structure. The contents of this structure are defined by the design of the
device driver and are typically unique to a device driver. The only requirement
imposed on or by this structure is that the address of the structure must be passed in
appropriately to each entry point to maintain a stable and consistent interface.

file Structure
The file structure contains information about each open file. This includes
information about the current position within the file, the number of file descriptors
associated with this file, and the file access mode. For more information on the
file structure, see the include file <sys/file.h>.

sel Structure
The sel structure is passed to the select entry point. This structure contains
control information for determining when data is available for reading or when
space is available for writing. For more information on the sel structure, see the
include file <sys/file.h>.

buf_entry Structure
The buf_entry structure is passed to the strategy entry point. This structure
contains information about each block transfer request. The buf_entry structure
refers to a NULL-terminated linked list of blocks to transfer. The pointer,
av_forw, points to the next block to transfer or points to NULL if the end of the
list has been reached. For more information on the buf_entry structure, see the
include file <disk.h>.

Entry Points
A device driver will implement all of the entry points pertaining to its type of
driver, character or block.
A character device driver will implement all of the following entry points:
• install
• uninstall

• open
• close
• read
• write
• select
• ioctl
In addition to the character entry points above, a block device driver will
implement all of the following entry points:
• install
• uninstall
• open
• close
• strategy
• ioctl
In the following subparagraphs, <dev> is a placeholder for the mnemonic
representation of the device. It is recommended that all entry points contain the
type of entry point in the name to provide consistency across all device drivers (for
example, my_dev_open, my_dev_close, and so on). This is to provide a
consistent look and feel across all device drivers.

Character Entry Points

install

Synopsis
char *<dev>_install(struct dev_info_s *info_ptr)
Description
The install entry point is responsible for allocating and initializing
resources, verifying the device exists, and initializing the device.
Parameters
struct dev_info_s *info_ptr
A pointer to a device information structure or a NULL pointer if the
device information structure is not applicable.

Return Value
A pointer to the device statics structure if installation is successful.
SYSERR if installation fails.
Functional and Operational Requirements
The install entry point will verify the existence of the device, if possible.
The install entry point will allocate all memory required by the device
driver.
The install entry point should initialize all memory allocated.
If necessary, the install entry point will query the startup condition by
calling kgetenv() and reading a kernel environment variable.
The install entry point will initialize (configure) the device hardware for
proper operation based upon the startup condition and information obtained
from the device information file.
The install entry point will install any required interrupt handlers.
The install entry point should create any required threads.
The install entry point may test the device.
Notes
N/A

uninstall

Synopsis
int <dev>_uninstall(struct statics_s *statics_ptr)
Description
The uninstall entry point is called whenever a major device is uninstalled
dynamically from the system.
Parameters
struct statics_s *statics_ptr
A pointer to the device statics structure
Return Value
OK.

Functional and Operational Requirements


The uninstall entry point will place the system in a stable state.
The uninstall entry point will relinquish resources acquired during
installation with the exception of deallocating memory.
Notes
N/A

open

Synopsis
int <dev>_open(struct statics_s *statics_ptr, int devno,
struct file *f)
Description
The open entry point performs generic initialization of a minor device.
Parameters
struct statics_s *statics_ptr
A pointer to the device statics structure.
int devno
The device number.
struct file *f
A pointer to the file structure associated with the device.
Return Value
OK if the open operation is successful.
SYSERR if the open operation fails
Functional and Operational Requirements
The open entry point should initialize minor devices.
Notes
The open entry point is not reentrant, but it is preemptible.
The errno values set by the entry point are overridden by the operating
system with a value of ENXIO.

close

Synopsis
int <dev>_close(struct statics_s *statics_ptr,
struct file *f)
Description
The close entry point will terminate communication with the device.
Parameters
struct statics_s *statics_ptr
A pointer to the device statics structure
struct file *f
A pointer to the file structure associated with the device
Return Value
OK if the close operation is successful.
SYSERR if the close operation fails.
Functional and Operational Requirements
When a device is closed, the close entry point will clean up after itself and
provide capabilities to reopen the device.
The close entry point should place the hardware in standby (or power
down) mode.
Notes
The close entry point is not re-entrant.
The close entry point is invoked only on the “last close” of a minor device.

read

Synopsis
int <dev>_read(struct statics_s *statics_ptr,
struct file *f, char *buf_ptr, int bytes_to_read)
Description
The read entry point copies data from the device into a buffer in the calling
task.

Parameters
struct statics_s *statics_ptr
A pointer to the device statics structure.
struct file *f
A pointer to the file structure associated with the device.
char *buf_ptr
A pointer to a buffer for storing data read from the device.
int bytes_to_read
The number of bytes to read from the device.
Return Value
The number of bytes read if the read operation is successful.
SYSERR if the read operation fails.
Functional and Operational Requirements
The read entry point will retrieve data, up to the number of bytes specified,
from the device.
Notes
The buffer passed to the read entry point has already been checked by the
kernel and is guaranteed to be writeable by the device driver.

write

Synopsis
int <dev>_write(struct statics_s *statics_ptr,
struct file *f, char *buf_ptr, int bytes_to_write)
Description
The write entry point copies data from a buffer in the calling task to the
device.
Parameters
struct statics_s *statics_ptr
A pointer to the device statics structure.
struct file *f
A pointer to the file structure associated with the device.

char *buf_ptr
A pointer to a buffer containing the data to be written to the device.
int bytes_to_write
The number of bytes to write to the device
Return Value
The number of bytes written if the write operation is successful
SYSERR if the write operation fails
Functional and Operational Requirements
The write entry point will transfer data, up to the number of bytes specified,
from the application data buffer to the device.
Notes
The buffer passed to the write entry point has already been checked by the
kernel and is guaranteed to be readable by the device driver.

ioctl

Synopsis
int <dev>_ioctl(struct statics_s *statics_ptr,
struct file *f, int command, char *arg)
Description
The ioctl entry point is used to set certain parameters in the device or
obtain information about the state of the device.
Parameters
struct statics_s *statics_ptr
A pointer to the device statics structure.
struct file *f
A pointer to the file structure associated with the device.
int command
An integer identifying the ioctl command to execute.
char *arg
A pointer to a buffer (containing command arguments used for set
operations or a receive buffer used for get/status operations) or a NULL
pointer if the argument is not applicable.

Return Value
OK if the ioctl operation is successful.
SYSERR if the ioctl operation fails.
Functional and Operational Requirements
The ioctl entry point will test all user pointer parameters against wbounds
(before writing) and rbounds (before reading).
The ioctl entry point will acknowledge the LynxOS command FIOPRIO
for adjusting priority tracking.
The ioctl entry point will acknowledge the LynxOS command FIOASYNC
when associated file descriptor flags have changed.
Notes
Care should be taken when assigning ioctl command numbers to avoid
using numbers assigned to the common ioctl commands. As a general rule,
device driver command numbers should begin with the number 300 or greater.
Since the ioctl argument buffer may in itself contain pointers to additional
buffers, it may be possible for an application to corrupt one of these additional
pointers after the pointer has been checked by the device driver using
wbounds and rbounds. For example, the application could modify one of
the pointers to point at a kernel address, which could result in the device driver
corrupting the data at this location. In order to prevent this situation, the
device driver should disable interrupts and copy the contents of the buffer to a
buffer within kernel address space. Copying the contents of the buffer to a
buffer within kernel address space will also eliminate any data alignment
problems.
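The following sketch illustrates that precaution; struct drvr_cmd is a
hypothetical driver-defined argument layout, and the use of disable/restore
and bcopy simply follows the approach described above:

struct drvr_cmd { int reg; int value; };    /* hypothetical argument layout */

int ps;
struct drvr_cmd kcmd;           /* kernel-space copy of the argument buffer */

if (rbounds ((unsigned long) arg) < sizeof (struct drvr_cmd)) {
    pseterr (EFAULT);
    return (SYSERR);
}
disable (ps);
bcopy (arg, (char *) &kcmd, sizeof (struct drvr_cmd));
restore (ps);
/* from this point on, work only with kcmd */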

select

Synopsis
int <dev>_select(struct statics_s *statics_ptr,
struct file *f, int type, struct sel *sel_struct_ptr)
Description
The select entry point supports I/O polling or multiplexing.
Parameters
struct statics_s *statics_ptr
A pointer to the device statics structure.

struct file *f
A pointer to the file structure associated with the device.
int type
The type of monitoring the routine is performing: read (SREAD), write
(SWRITE), or exception (SEXCEPT) condition.
struct sel *sel_struct_ptr
A pointer to the sel structure containing information used by the driver
select entry point

Return Value
OK if the select operation is successful.
SYSERR if the select operation fails.
Functional and Operational Requirements
If type == SREAD, the select entry point will configure the sel
structure for read operations.
If type == SWRITE, the select entry point will configure the sel
structure for write operations.
If type == SEXCEPT, the select entry point will configure the sel
structure for exception operations.
If a driver implements the select entry point, it should test and signal the
select semaphores.
Notes
In the select entry point, when the kernel sees the errno value ENOSYS
and a SYSERR being returned by the entry point, the operating system
intercepts the error and returns OK.

Block Entry Points

install

Synopsis
char *<dev>_install(struct dev_info_s *info_ptr,
struct statics_s *statics_ptr)

Description
See the description of the character install entry point.
Parameters
struct dev_info_s *info_ptr
A pointer to the device information structure.
struct statics_s *statics_ptr
A pointer to the character device statics structure.
Return Value
A pointer to the device statics structure if installation is successful.
SYSERR if the installation fails.
Functional and Operational Requirements
See the Functional and Operational Requirements of the character install
entry point.
Notes
The character install entry point is called prior to the block install entry
point in order to create the character device statics structure.

uninstall

Synopsis
int <dev>_uninstall(struct statics_s *statics_ptr)
Description
See the description of the character uninstall entry point.
Parameters
struct statics_s *statics_ptr
A pointer to the device statics structure.
Return Value
OK.
Functional and Operational Requirements
See the Functional and Operational Requirements of the character
uninstall entry point.

Notes
N/A

open

Synopsis
int <dev>_open(struct statics_s *statics_ptr, int devno,
struct file *f)
Description
See the description of the character open entry point.
Parameters
struct statics_s *statics_ptr
A pointer to the device statics structure.
int devno
The device number.
struct file *f
A pointer to the file structure associated with the device.
Return Value
OK if the open operation is successful.
SYSERR if the open operation fails.
Functional and Operational Requirements
The open entry point should initialize minor devices.
The open entry point should set the block size (if other than 512 bytes) by
calling bdev_wbsize() to encode the block size within the device number.
Notes
The open entry point is not reentrant, but it is preemptible.

close

Synopsis
int <dev>_close(struct statics_s *statics_ptr, int devno)
Description
See the description of the character close entry point.

Parameters
struct statics_s *statics_ptr
A pointer to the device statics structure.
int devno
The device number.
Return Value
OK if the close operation is successful.
SYSERR if the close operation fails
Functional and Operational Requirements
See the Functional and Operational Requirements of the character close
entry point.
Notes
The close entry point is not reentrant.

ioctl

Synopsis
int <dev>_ioctl(struct statics_s *statics_ptr, int devno,
int command, char *arg)
Description
See the description of the character ioctl entry point.
Parameters
struct statics_s *statics_ptr
A pointer to the device statics structure.
int devno
The device number.
int command
An integer identifying the ioctl command to execute.
char *arg
A pointer to a buffer (containing command arguments used for set
operations or a receive buffer used for get/status operations) or a
NULL pointer if the argument is not applicable.

Return Value
OK if the ioctl operation is successful.
SYSERR if the ioctl operation fails.
Functional and Operational Requirements
See the Functional and Operational Requirements of the character ioctl
entry point.
Notes
N/A

strategy

Synopsis
int <dev>_strategy (struct statics_s *statics_ptr, struct buf_entry *bp)
Description
The strategy entry point handles reads and writes for block devices.
Parameters
struct statics_s *statics_ptr
A pointer to the device statics structure.
struct buf_entry *bp
A pointer to the buf_entry structure.
Return Value
OK.
Functional and Operational Requirements
The strategy entry point will process multiple block transfer requests. The
buf_entry structure refers to a NULL-terminated linked list of blocks to
transfer. The pointer bp->av_forw points to the next block to transfer or
NULL if the end of the list has been reached.
If the B_READ bit of bp->b_status is set, the strategy entry point will
read from the device. Otherwise, the entry point will write to the device.
If data is to be read, the strategy entry point will transfer data from the
device to a kernel data buffer (bp->memblk).
If data is to be written, the strategy entry point will transfer data from a
kernel data buffer (bp->memblk) to the device.

Upon completion of each block transfer request, the strategy entry point
will set the B_DONE bit of bp->b_status.
If the block transfer request fails, the strategy entry point will set the
B_ERROR bit of bp->b_status and set bp->b_error to a nonzero fault
code.
Upon completion of each block transfer request, the strategy entry point
will signal the completion of the block transfer request. In addition, if the
B_ASYNC bit of bp->b_status is set, the strategy entry point will free
the block by calling brelse().
Notes
N/A
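As an illustration of the requirements above, the body of a strategy routine
might be organized along the following lines. This is a sketch only; the
actual device transfer and the completion signaling mechanism are
driver-specific and are indicated by comments:

int my_dev_strategy (struct statics_s *statics_ptr, struct buf_entry *bp)
{
    struct buf_entry *next;

    while (bp != (struct buf_entry *) 0) {
        next = bp->av_forw;             /* next request in the linked list */

        if (bp->b_status & B_READ) {
            /* transfer data from the device into bp->memblk */
        } else {
            /* transfer data from bp->memblk to the device */
        }

        /* on failure: set B_ERROR in bp->b_status and a nonzero bp->b_error */
        bp->b_status |= B_DONE;

        /* signal completion of this request (driver-specific mechanism) */
        if (bp->b_status & B_ASYNC)
            brelse (bp);                /* release asynchronous requests */
        bp = next;
    }
    return (OK);
}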

Device Driver Service Calls


Table B-1 provides a list of device driver service calls to gain access into the
LynxOS kernel resources.
Table B-1: Device Driver Service Calls to Gain Access into the LynxOS Kernel
Resources

LOCK_ILOOKUP() getime() sismine()

UNLOCK_ILOOKUP() getime_u() sreset()

_exit() getpid() srestore()

_getpgrp() getticks() ssignal()

_getpriority() getuid() ssignaln()

_getrpriority() ioint_link() st_awaken()

_kill() iointclr() st_catatonia()

_killpg() iointset() st_getstid()

alloc_cmem() is_cwd_or_root() stgetid()

bdev_wbsize() kgetenv() stgetprio()

bdwrite() kkprintf() stremove()

bforget() ksprintf() stsetprio()

bread() make_entry() ststart()

brelse() mmchain() swait()

bwrite() mmchainjob() symlinkcopy()

cancel_timeout() nanotime() sysbrk()

cb_clean() newcinode() sysfree()

cprintf() noreco() timeout()

deliversigs() permap() tkill()

dfread() pgeterr() tmgr_close()

disable() pi_init() tmgr_ex()

dmz() priot_add() tmgr_install()

execve() priot_addn() tmgr_ioctl()

f_len() priot_max() tmgr_open()

f_name() priot_remove() tmgr_read()

free_cmem() priot_removen() tmgr_select()

free1page() pseterr() tmgr_tx()

freefile() rbounds() tmgr_uninstall()

freeup_inode() recojmp() tmgr_write()

get_phys() recoset() unswait()

get_timestamp() restore() usec_sleep()

get1page() scount() vfs_insert()

getblk() sdisable() wbounds()

getblkp() setprio() wgetblkp()

To access kernel logical resources, only the service calls listed in the table
above are allowed in device drivers. Direct access to kernel resources that
bypasses these service calls is prohibited, except for the resources listed in
“Kernel Objects.” If a macro defined in the LynxOS header files is used to
access kernel logical resources and it is not listed above or in the “Kernel
Objects” section, the expansion of that macro shall satisfy the requirements
set in this document.
The following is a short description of each service call.

_getpgrp()

int _getpgrp(void);
The _getpgrp() device driver function returns the process group of the current
process.

_getpriority()

int _getpriority(void);
The _getpriority() device driver function returns the priority of the current
process.

_getrpriority()

int _getrpriority(void);
The _getrpriority() device driver function returns the real priority of the
current process.

getpid()

pid_t getpid( void )


The getpid() function returns the process ID of the calling process.

getuid()

uid_t getuid(void);
The getuid() function returns the real user ID of the calling process.

_exit()

void _exit(int* status)


Exit the current process and return the status pointed to by status to the parent.

_kill()

int _kill(int pid, int sig)


The _kill() function sends the signal sig to a process as specified by pid.

_killpg()

int _killpg(int pgrp, int sig)


The _killpg() function sends the signal sig to the group of processes
specified by pgrp.

deliversigs()

int deliversigs(void)
deliversigs() handles thread cancellation requests and delivers all pending
signals to the current thread.

execve()

int execve(execve_t* p)
The execve() function runs a binary executable file.

get_timestamp()

void get_timestamp(struct timespec *ts)


get_timestamp() sets the timespec structure pointed to by ts to the current
number of seconds and nanoseconds since the beginning of system time.

disable()

void disable(int ps)


The macro disable() saves the current system level (slevel) in the automatic
variable ps, disables external interrupts, and sets the system level to 3.

nanotime()

unsigned long nanotime(unsigned long *secs)


nanotime() returns the number of nanoseconds since the system boot up time.

getime()

getime() returns the number of full seconds from the system startup.

getime_u()

getime_u() returns the number of microseconds passed since the system startup.

iointset()

int iointset(int vector, void (*function)(), void *argument)
The function iointset() installs the handler given by function for the
interrupt given by vector. After the handler is installed, the interrupt dispatcher
will call this handler when the specified interrupt is detected by the interrupt
dispatcher.
After a successful call to iointset(), the interrupt specified by vector is
enabled.
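Within an install routine, the handler might be registered as in the sketch
below; drvr_intr, info->vector, and s->intr_id are hypothetical driver-specific
names, and the error check assumes that iointset() returns SYSERR on failure:

int id;

id = iointset (info->vector, drvr_intr, (void *) s);
if (id == SYSERR) {
    sysfree (s, (long) sizeof (struct statics));
    return ((char *) SYSERR);
}
s->intr_id = id;        /* saved for a later call to ioint_link() */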

iointclr()

int iointclr(int vector)


The function iointclr() removes the current handler for the interrupt given by
vector and restores the previously installed handler and the related argument.

ioint_link()

int ioint_link(int id)


The function ioint_link() calls the interrupt handler specified by id. The
parameter id is the handle to the handler that was returned by iointset().

recoset()

int recoset(void)
If recoset() is called, it returns 0.
If a thread that has registered a fault recovery buffer via recoset() initiates a
supervisor bus error, execution resumes as if recoset() had returned 1.
If a supervisor bus error is caught via recoset(), the recovery buffer will be
unregistered so that a following bus error will be caught.

noreco()

void noreco(void)
A call to noreco() undoes a previous registration via recoset() for the
calling thread.

ksprintf()

int ksprintf(char *buf, char *fmt, ...)


Formatted printout to a memory buffer.

MM_BLOCK Device Driver API

get_phys()

char* get_phys(kaddr_t vaddr)


The function get_phys() can be called from a device driver to translate the
virtual address given by vaddr into the corresponding physical address in the
memory context of the calling process.

mmchainjob()

int mmchainjob(int pid, struct dmachain* array, char* vaddr, long count)
The function mmchainjob() can be called by a device driver to convert a block
of virtual memory into the corresponding physical pages. The conversion is done
in the context of the process identified by pid.

mmchain()

int mmchain(struct dmachain* array, char* vaddr, long count)
The function mmchain() works like mmchainjob() except that the address
conversion is done in the context of the calling process.
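The following sketch builds a scatter/gather list for a caller's buffer with
mmchain(). The segment limit, the 4 KB page size used for the worst-case check,
the dmachain field names (address and count), and the controller-programming
helper are assumptions made for the example; the DDI headers declaring
mmchain() and struct dmachain are assumed to be included.

    #define EX_MAX_SEGS 64

    extern void ex_program_segment(void *phys, long nbytes);  /* hypothetical */

    static int ex_start_dma(char *vaddr, long count)
    {
        struct dmachain segs[EX_MAX_SEGS];
        int nsegs, i;

        /* Worst case is one dmachain entry per page touched by the buffer. */
        if (count > (long)(EX_MAX_SEGS - 1) * 4096L)
            return -1;                     /* buffer too large for this sketch */

        nsegs = mmchain(segs, vaddr, count);   /* calling process's context */
        if (nsegs <= 0)
            return -1;

        for (i = 0; i < nsegs; i++)
            ex_program_segment(segs[i].address, segs[i].count);

        return nsegs;
    }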

permap()

void* permap(kaddr_t physaddr, size_t size)


The function permap() can be called by a device driver to map the physical
address space given by the physical start address physaddr and the size given by
size into the address space of the kernel.
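A minimal sketch of mapping a device register block at install time follows.
The physical base address, the size, and the NULL failure test are assumptions
made for the example; the DDI headers declaring permap() and kaddr_t are
assumed to be included.

    #define EX_REG_BASE ((kaddr_t)0xC0000000)  /* assumed board address */
    #define EX_REG_SIZE ((size_t)0x1000)

    static volatile unsigned long *ex_regs;

    static int ex_map_registers(void)
    {
        void *va = permap(EX_REG_BASE, EX_REG_SIZE);

        if (va == NULL)                    /* assumed failure indication */
            return -1;
        ex_regs = (volatile unsigned long *)va;
        return 0;
    }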


Memory Management Functions

get1page()

char *get1page(void)
A successful call to the function get1page() allocates one page from the kernel
heap and returns the PHYSBASE alias of the allocated page.
If there is not enough memory in the free system memory pool to satisfy the
allocation request, get1page() returns NULL.

free1page()

void free1page(char *addr)


The function free1page() returns a memory page given by the PHYSBASE
alias addr to the free system memory pool.

alloc_cmem()

char *alloc_cmem(int size)


A successful call to alloc_cmem() allocates at least size bytes of physically
contiguous memory from the kernel heap.

free_cmem()

void free_cmem(char *p, int size)


free_cmem() returns the memory block given by the PHYSBASE alias p of its start
address and its size size to the free system memory pool.
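As an illustration of these allocation calls, the sketch below obtains a
physically contiguous DMA buffer and one scratch page at install time and
releases them at uninstall. The statics layout is an assumption, and
alloc_cmem() is assumed to return NULL on failure, matching get1page(); the DDI
headers are assumed to be included.

    struct ex_buf_statics {
        char *dma_buf;                     /* physically contiguous buffer */
        int   dma_size;
        char *scratch;                     /* single page from get1page() */
    };

    static int ex_alloc_buffers(struct ex_buf_statics *s, int size)
    {
        s->dma_buf = alloc_cmem(size);
        if (s->dma_buf == NULL)
            return -1;
        s->dma_size = size;

        s->scratch = get1page();
        if (s->scratch == NULL) {
            free_cmem(s->dma_buf, s->dma_size);
            return -1;
        }
        return 0;
    }

    static void ex_free_buffers(struct ex_buf_statics *s)
    {
        free1page(s->scratch);
        free_cmem(s->dma_buf, s->dma_size);
    }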

pgeterr()

int pgeterr(void)
The pgeterr() function returns the kernel error code of the currently running
thread.


Priority Tracking Functions

priot_add()

int priot_add(struct priotrack *p, int prio)


Function priot_add() increments the use count for priority prio in the
priority tracking structure pointed to by p.

priot_remove()

int priot_remove(struct priotrack *p, int prio)


Function priot_remove() decrements the use count for priority prio in the
priority tracking structure pointed to by p.

priot_addn()

int priot_addn(struct priotrack *p, int prio, int n)


Function priot_addn() increments the use count by the value of n for priority
prio in the priority tracking structure pointed to by p.

priot_removen()

int priot_removen(struct priotrack *p, int prio, int n)


Function priot_removen() decrements the use count by the value of n for
priority prio in the priority tracking structure pointed to by p.

priot_max()

int priot_max(struct priotrack *p)


Function priot_max() retrieves the maximum priority recorded in a priority
tracking structure.
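To illustrate the usual purpose of these calls, the sketch below keeps a
driver's kernel thread at the highest priority of the threads that currently
have the device open. It uses stsetprio(), described later in this section; the
statics layout and the way the client priority is obtained (typically with
_getpriority() in the open and close entry points) are assumptions made for the
example.

    struct ex_prio_statics {
        struct priotrack pt;               /* tracks priorities of open clients */
        int              stid;             /* kernel thread ID from ststart() */
    };

    static void ex_client_open(struct ex_prio_statics *s, int client_prio)
    {
        priot_add(&s->pt, client_prio);
        stsetprio(s->stid, priot_max(&s->pt));   /* follow the highest client */
    }

    static void ex_client_close(struct ex_prio_statics *s, int client_prio)
    {
        priot_remove(&s->pt, client_prio);
        stsetprio(s->stid, priot_max(&s->pt));
    }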

pseterr()

int pseterr(int)
The pseterr() function sets the kernel level error code of the currently running
thread.


rbounds()

long rbounds(unsigned long)


rbounds() returns the number of bytes that can be read from the address
specified as its parameter.

wbounds()

long wbounds(unsigned long)


wbounds() returns the number of bytes that can be read and written starting at
the address specified as its parameter.
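For illustration, the sketch below validates a user buffer with wbounds()
before copying data out to it, reporting EFAULT through pseterr() (described
earlier in this section) on failure. The SYSERR return value of -1 and the
helper shape are the usual LynxOS driver conventions, assumed here; the user
buffer is assumed to be directly addressable, as it is in read and write entry
points.

    #include <errno.h>

    static int ex_copy_out(char *ubuf, const char *kbuf, int count)
    {
        int i;

        /* Refuse the request unless count bytes are writable at ubuf. */
        if (wbounds((unsigned long)ubuf) < (long)count) {
            pseterr(EFAULT);
            return -1;                     /* SYSERR */
        }

        for (i = 0; i < count; i++)
            ubuf[i] = kbuf[i];

        return count;
    }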

recojmp()

void recojmp(void)
recojmp() does a kernel long jump for exception recovery.

restore()

void restore(int ps)


The macro restore() restores the system level to the value saved in ps by the
corresponding disable() and enables external interrupts if the new system level
is below 3.
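The sketch below uses disable() and restore() to protect a short critical
region shared with an interrupt handler. The ring buffer is an assumption made
for the example; note that ps must be an automatic variable, as required by
disable().

    struct ex_ring {
        int  head;
        int  tail;
        char slots[64];
    };

    static int ex_ring_pop(struct ex_ring *q)
    {
        int ps;                            /* must be an automatic variable */
        int c = -1;

        disable(ps);                       /* system level 3, interrupts masked */
        if (q->head != q->tail) {
            c = (unsigned char)q->slots[q->tail];
            q->tail = (q->tail + 1) % 64;
        }
        restore(ps);                       /* back to the saved system level */

        return c;
    }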

st_awaken()

void st_awaken(st_t *s)


st_awaken() resumes the execution of a kernel thread previously stopped by
st_catatonia().

sdisable()

void sdisable(int ps)


The macro sdisable() saves the current system level in the automatic variable
ps and sets the system level to 2.

The macro sdisable() has no effect if the current system level is above 1.


srestore()

void srestore(int ps)


The macro srestore() restores the system level to the value saved in ps by the
corresponding sdisable().

st_catatonia()

void st_catatonia(st_t *s)


st_catatonia() stops a given kernel thread.

st_getstid()

int st_getstid()
st_getstid() is the same as the system call _getstid().

stgetid()

int stgetid(void)
Returns the kernel thread ID of the calling kernel thread.

stgetprio()

int stgetprio(int stid)


Returns the user priority of the kernel thread stid.

stremove()

int stremove(int stid)


Removes the kernel thread stid from the system. This function must not be used
to remove a currently running thread.

stsetprio()

int stsetprio(int stid, int prio)


Sets the priority of the kernel thread with the kernel thread ID stid to the priority
prio in the in-kernel priority range (0 to 511).


ststart()

int ststart(int (*procaddr)(), int stacksize, int prio, char *name, int nargs,
uint32_t arg0, uint32_t arg1, uint32_t arg2, uint32_t arg3, uint32_t arg4)
Creates a new kernel thread that will start its execution at address procaddr.
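As a usage illustration, the sketch below creates a driver service thread at
install time. The stack size, priority, thread name, and the practice of
passing the statics pointer as arg0 (valid on the 32-bit targets assumed here)
are assumptions made for the example; the thread body would normally block in
swait(), as shown in the semaphore sketch later in this section.

    #include <stdint.h>

    struct ex_thr_statics {
        int stid;                          /* kernel thread ID */
    };

    static int ex_thread_body(uint32_t arg0)
    {
        struct ex_thr_statics *s = (struct ex_thr_statics *)arg0;

        (void)s;
        for (;;) {
            /* block in swait() for work posted by the interrupt handler,
             * then service the device; see the semaphore sketch below */
        }
        return 0;                          /* not reached */
    }

    static int ex_create_thread(struct ex_thr_statics *s)
    {
        s->stid = ststart((int (*)())ex_thread_body, 8192, 200, "exdrvd",
                          1, (uint32_t)s, 0, 0, 0, 0);
        return (s->stid < 0) ? -1 : 0;     /* assumed failure indication */
    }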

Semaphore Functions

kmutex_init(), pi_init()

void kmutex_init(ksem_t *sem);


void pi_init(ksem_t *sem);
Initializes the kernel mutex (priority inheritance semaphore) sem.

ksem_init()

void ksem_init(ksem_t *sem, int value);


Initializes the counting kernel semaphore sem with value value.

swait()

int swait (ksem_t *sem, int pri);


Makes the calling thread wait until it acquires the semaphore pointed to by sem
or is interrupted by a signal.

unswait()

int unswait(st_t *tptr)


Removes the thread tptr from the semaphore's list of waiting threads.

ssignal()

int ssignal(ksem_t *sem);


Signals the semaphore pointed to by sem.
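The sketch below shows the usual event-synchronization pairing between an
interrupt handler and the driver's kernel thread. The statics layout is an
assumption, and SEM_SIGIGNORE is assumed to be the swait() flag that makes the
wait ignore signals; the DDI headers are assumed to be included.

    struct ex_ev_statics {
        ksem_t work;                       /* one count per unit of pending work */
    };

    static void ex_ev_init(struct ex_ev_statics *s)
    {
        ksem_init(&s->work, 0);            /* nothing pending initially */
    }

    /* Called from the interrupt handler. */
    static void ex_ev_post(struct ex_ev_statics *s)
    {
        ssignal(&s->work);
    }

    /* Called from the driver's kernel thread. */
    static void ex_ev_wait(struct ex_ev_statics *s)
    {
        swait(&s->work, SEM_SIGIGNORE);    /* block until the handler posts */
    }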


sreset()

int sreset(ksem_t *sem);


Resets the counting semaphore pointed to by sem, waking up all the threads
waiting on it.

ssignaln()

int ssignaln(ksem_t *sem, int n);


Signals a counting semaphore pointed to by sem the number of times specified by
n.

scount()

int scount(ksem_t *countersem);


Returns the count of the counting semaphore countersem; a negative value
indicates the number of threads waiting on it.

sismine()

int sismine(st_t **sem);

Returns the ownership of a semaphore.

sysbrk()

void *sysbrk(long size)


The function sysbrk() attempts to allocate size bytes from the kernel heap.
Upon successful completion, sysbrk() returns the address of a memory block
allocated from the kernel heap of at least size bytes.
If an error occurs during the allocation, sysbrk() returns a NULL pointer.

sysfree()

int sysfree(void *p, long size)


The function sysfree() attempts to add the memory block given by its address p
and its size size to the kernel heap free memory pool. It works both for a
block that has never been part of the kernel heap and for a block previously
allocated with sysbrk().
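For illustration, the sketch below shows the conventional install/uninstall
pairing of sysbrk() and sysfree() for a driver statics structure; the structure
contents are an assumption made for the example.

    struct ex_statics {
        int unit;                          /* illustrative contents */
    };

    static struct ex_statics *ex_alloc_statics(void)
    {
        struct ex_statics *s =
            (struct ex_statics *)sysbrk((long)sizeof(struct ex_statics));

        if (s == NULL)                     /* allocation failed */
            return NULL;
        s->unit = 0;
        return s;
    }

    static void ex_free_statics(struct ex_statics *s)
    {
        (void)sysfree(s, (long)sizeof(struct ex_statics));
    }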


timeout()

int timeout(void (*func) (), void *arg, int interval)


timeout() creates a deferred function call, so that the func function is called
with the arg parameter when the number of clock ticks specified by interval
has elapsed. timeout() returns the descriptor of the new kernel timeout.
timeout() may fail and return -1 if the partition has exceeded its allocation of
kernel timeouts.
The timeout() function associates the new timeout with the calling thread.
When a thread dies, all timeouts associated with it are cancelled. For this
reason, timeout() must not be called from interrupt or timeout handler code,
because those handlers execute in the context of arbitrary threads. In
particular, periodic timeouts cannot be created by calling timeout() from a
timeout handler.

timeout_ex()

int timeout_ex(struct clock_descr *clock, void (*func)(void *), void *arg,
int interval, int reload, st_t *thread);
timeout_ex() creates a deferred function call, so that the func function is
called with the arg parameter when the number of clock ticks specified by
interval has elapsed. timeout_ex() returns the descriptor of the new kernel
timeout. timeout_ex() attaches the timeout to clock clock and associates the
new timeout with the thread thread. If reload is nonzero, the timeout is
automatically reinstalled at each expiration to expire in reload ticks, thus
creating a periodic timeout. When a thread dies, all timeouts associated with it are
cancelled.
timeout_ex() may fail and return -1 if the partition has exceeded its allocation
of kernel timeouts.

cancel_timeout()

int cancel_timeout(int num)


cancel_timeout() cancels the timeout identified by descriptor num, previously
created by the timeout() function.
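By way of example, the sketch below arms a per-request watchdog with timeout()
and disarms it with cancel_timeout() when the request completes. The tick count
and statics layout are assumptions made for the example; remember that
timeout() must be called from thread context, as noted above.

    struct ex_wd_statics {
        int wd_id;                         /* timeout descriptor, -1 when idle */
    };

    static void ex_wd_expired(void *arg)
    {
        struct ex_wd_statics *s = (struct ex_wd_statics *)arg;

        s->wd_id = -1;
        /* abort the stalled request and reset the hardware here */
    }

    static int ex_wd_arm(struct ex_wd_statics *s, int ticks)
    {
        s->wd_id = timeout((void (*)())ex_wd_expired, s, ticks);
        return (s->wd_id == -1) ? -1 : 0;  /* -1: partition out of timeouts */
    }

    static void ex_wd_disarm(struct ex_wd_statics *s)
    {
        if (s->wd_id != -1) {
            (void)cancel_timeout(s->wd_id);
            s->wd_id = -1;
        }
    }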

getticks()

tick_t getticks(int num)


getticks() returns the number of ticks left until the expiration of the timeout
identified by descriptor num.

usec_sleep()

void usec_sleep(unsigned long usec)


usec_sleep() busy-waits for at least usec microseconds.

setprio()

int setprio(st_t *tptr, int prio)


The function setprio() is used to set the effective priority for a given thread.

freefile()

void freefile(struct file *f)


Returns a file entry to the free list.

kgetenv()

int kgetenv (char *key, void *buf, int size)


The function searches the kernel environment for the keyword pointed to by key
and copies the data associated with that keyword into the buffer pointed to by
buf, which is size bytes long.
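A minimal sketch follows; it reads an assumed keyword from the kernel
environment at install time and falls back to a default when it is absent. The
keyword name, the assumption that an int is stored under it, and the negative
return on failure are all illustrative.

    static int ex_default_baud(void)
    {
        int baud;

        if (kgetenv("EXDEV_BAUD", &baud, sizeof(baud)) < 0)
            return 9600;                   /* keyword absent: use a fallback */
        return baud;
    }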

tkill()

int tkill(st_t *tptr, int sig, struct l_event *ep)


Sends a signal to a thread.


File System Driver Service Calls

dmz()

unsigned long dmz(unsigned long addr)


Adjusts resident text addresses for where the kernel is loaded.

bforget()

void bforget(struct buf_entry *p)


Returns a block to the block cache free list and makes sure its data is not used.

bread()

struct buf_entry *bread(long blk, dev_t dev)


Returns a pointer to the buffer header of the specified block.

bwrite()

int bwrite(struct buf_entry *p)


Starts writing a block and waits for it to be written.

bdev_wbsize()

int bdev_wbsize(dev_t dev, int block_size)


Converts a device number and a block size into a block device number with an
encoded block size.

bdwrite()

void bdwrite(struct buf_entry *p)


Marks the block to be written when it is taken off the free list.

brelse()

void brelse(struct buf_entry *p)


Gives back a file system cache block (the opposite of getblk).


cb_clean()

void cb_clean(dev_t dev, long block, long nblocks)


Cleans the buffer cache of blocks assigned to a device.

dfread()
int dfread(struct file *f, char *buffer, int count)
Reads from a disk file into a buffer.

f_len()

int f_len(char *name)


Returns the length of the string name up to, but not including, the first
slash or the terminating null character.

f_name()

char *f_name(char *path)


Returns the filename from a given path.

freeup_inode()

void freeup_inode( struct inode_entry *inode)


Gives the specified inode back to the system.

getblk()

struct buf_entry *getblk(long blk, dev_t dev)


Gets a file system cache block.

getblkp()

long getblkp(struct inode_entry *inode, long blockoff)


Gets the block number given the block offset and the in-core inode.


make_entry()

int make_entry(struct inode_entry *di, char *name, long ino)
This function makes a new entry with the name name and the inode number ino in
the directory specified by di (which must be locked).

newcinode()

struct inode_entry *newcinode(long inum, dev_t dev)


Gets a new in-core inode.

symlinkcopy()

char *symlinkcopy( char *s1, char *s2, long maxlen)


Copies string s2 to s1 with a maximum length of maxlen.

vfs_insert()

void vfs_insert(struct vfs_entry *vfsp)


Inserts a file system into the virtual file system list.
vfs_insert() shall be called only from the Installation region of device
driver code (see “Definitions” on page 248).

wgetblkp()

long wgetblkp(struct inode_entry *ip, long blockoff)


Gets the block number given the block offset and the in-core inode.

LOCK_ILOOKUP()

LOCK_ILOOKUP()
Macro that locks the in-core inode cache.

UNLOCK_ILOOKUP()

UNLOCK_ILOOKUP()
Macro that unlocks the in-core inode cache.


kkprintf()

int kkprintf(char *fmt, ...)


Formatted synchronous print on the kkprintf() port.

cprintf()

void cprintf(char *fmt, ...)


Formatted asynchronous print on the console device.

TTY Manager Service Calls

tmgr_install()

int tmgr_install(register struct ttystatics *s, register struct old_sgttyb *sg,
int type, void (*transmit_enable)(), char *arg)
TTY manager install function.

tmgr_uninstall()

void tmgr_uninstall(register struct ttystatics *s)


TTY manager uninstall function.

tmgr_open()

int tmgr_open(register struct ttystatics *s, struct file *f)
TTY manager open function.

tmgr_close()

int tmgr_close(register struct ttystatics *s, struct file *f)
TTY manager close function.


tmgr_read()

int tmgr_read(register struct ttystatics *s, struct file *f, char *buff, int count)
TTY manager read function.

tmgr_write()

int tmgr_write(register struct ttystatics *s, struct file *f, char *buff, int count)
TTY manager write function.
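To illustrate the usual pattern for these calls, the sketch below shows a
serial driver delegating its read and write entry points to the TTY manager.
The statics layout (a struct ttystatics embedded in the driver statics) and the
entry point names are assumptions made for the example.

    struct ex_tty_statics {
        struct ttystatics tty;             /* managed by tmgr_install() */
    };

    int ex_tty_read(struct ex_tty_statics *s, struct file *f, char *buff, int count)
    {
        return tmgr_read(&s->tty, f, buff, count);
    }

    int ex_tty_write(struct ex_tty_statics *s, struct file *f, char *buff, int count)
    {
        return tmgr_write(&s->tty, f, buff, count);
    }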

tmgr_ioctl()

int tmgr_ioctl(register struct ttystatics *s, struct file *f, int com, char *arg)
TTY manager ioctl function.

tmgr_select()

int tmgr_select(struct ttystatics *s, struct file *f, int command, struct sel *se)
TTY manager select function.

tmgr_ex()

int tmgr_ex(register struct ttystatics *s, int com, int c)


TTY manager receiver interrupt function.

tmgr_tx()
int tmgr_tx(register struct ttystatics *s, char *p)
TTY manager transmitter interrupt function.


Kernel Objects
Access is restricted to the kernel objects listed below. The access shall satisfy the
following restrictions:
• It is read-only (except File System kernel objects).
• The accessed object belongs to the memory area type that the particular
driver code region is allowed to access (see “Definitions” on page 248).
• The access complies with any additional restrictions specified in this
document.

currtptr

struct st_entry *currtptr


Pointer to the current thread’s control structure.

currpptr

struct pentry *currpptr


Pointer to the current process’s control structure.

st_table
struct st_entry *st_table
st_table is the array of thread control structures indexed by thread ID.
The elements of st_table may be read.

proctab

struct pentry *proctab


proctab is the array of process control structures indexed by process ID.
The elements of proctab may be read.


drambase

unsigned long drambase


Physical address of the start of RAM.

currpid

int currpid
ID of the currently running process.

Seconds

unsigned long Seconds


System time of day in seconds.

Useconds

unsigned long Useconds


Number of real-time clock ticks in the current second.

Uptime

unsigned long Uptime


Number of clock ticks from system start.

currpss

struct pssentry *currpss


Pointer to current process’s pss entry.

tickspersec

int tickspersec
Number of clock ticks in one second.
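As a small illustration, the sketch below derives whole seconds of uptime from
these two objects using read-only access, as this standard requires; the DDI
headers declaring them are assumed to be included.

    static unsigned long ex_uptime_seconds(void)
    {
        return Uptime / (unsigned long)tickspersec;
    }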


system_nproc

int system_nproc
system_nproc is the maximum number of processes in the system (the
proctab table size).

system_nstasks

int system_nstasks
system_nstasks is the maximum number of threads in the system (the
st_table table size).

runtime_schedule

struct Interpartition_Schedule runtime_schedule


The run-time interpartition schedule.

File System Kernel Objects

freebflag

int freebflag
File system cache flag.

freebsem

ksem_t freebsem
File system cache semaphore.

prelocked_sb

struct buf_entry *prelocked_sb


File system cache control variable.


prelocked_sb_dirty

int prelocked_sb_dirty
File system cache control variable.

prelocked_sb_sync

int prelocked_sb_sync
File system cache control variable.

Definitions
Clear—The word “clear” is defined as putting a storage element, such as a register
bit, to a logic zero state.
Extremely Improbable—The probability described as “extremely improbable”
has a probability of occurrence less than 1E-9 per hour.
Extremely Remote—The probability described as “extremely remote” has a
probability of occurrence less than 1E-7 per hour.
Fast Ada Pages—A special range of one or more pages (one per partition) mapped
for both kernel and user code access and used to store information about
preemption level and POSIX 1C IPC synchronization objects (for example, POSIX
semaphores and mutexes). The number of pages mapped depends on the
configured number of synchronization objects specified in the configuration for
each partition.
High—The word “high” is defined as the logic one state, as in a signal or storage
element.
Kernel heap—A memory area in the kernel virtual memory dynamically managed
and used by the kernel and driver code. In LynxOS, the kernel heap is allocated and
maintained on a per-partition basis. This includes memory allocated with
sysbrk(), alloc_cmem(), and get1page() service calls.

Kernel operational memory—The kernel operational memory is used to store
kernel text and control information about kernel logical resources. This region is
located in the kernel virtual memory and is not accessible from user level code by
design.


Kernel virtual memory—A range of virtual addresses, which are mapped to the
physical memory and are accessible only to the kernel.
Low—The word “low” is defined as the logic zero state, as in a signal or storage
element.
May—The word “may” dictates a desirable characteristic. Characteristics
identified by “may” will be implemented only if there is no impact to the design.
Peripheral memory—A memory area in the kernel virtual memory provided by
the peripheral devices (for example, register spaces, buffer memory, PCI memory
windows, and so on).
Remote—The probability described as “remote” has a probability of occurrence
less than 1E-5 per hour.
Set—The word “set” is defined as putting a storage element, such as a register bit,
to a logic one state.
Shall—The word “shall” dictates an absolute requirement. Any sentence
containing the word “shall” represents a firm requirement.
Should—The word “should” dictates a highly desirable characteristic. Any
sentence containing the word “should” represents a design goal.
User virtual memory—A range of virtual addresses that are mapped to the
physical memory and are accessible from user mode.
Will—The word “will” is used when explaining the intended operation of a
particular feature and does not dictate a requirement at any level.



Index

block drivers
close entry point 16
Symbols install entry point 13
open entry point 14
211
block entry points 219
close 221
install 219
ioctl 222
A open 221
access, device 70 strategy 223
accessing uninstall 220
hardware 69–71 block install entry point 219
adding a new bus layer 162 block ioctl entry point 222
adding or removing data in an mbuf 185 block open entry point 221
address translation 29 block strategy entry points 223
address types, LynxOS 26 block uninstall entry point 220
advanced topics 162 buf_entry structure 211
alloc_cmem() 28 bus error handling 69
alloc_res_bus () 163 bus layer 149
alloc_res_dev () 163 busnode_init () 163
allocating mbufs 185
allocating memory 27
alternate queueing 202
converting existing drivers 203 C
model 203
character close entry point 215
ALTQ 202
character device drivers 73
character driver 2
character drivers
close entry point 15
B install entry point 11
Berkeley Packet Filter (BPF) support 201 open entry point 14
block close entry point 221 select entry point 19
block driver 2 character entry points 212
close 215


install 212 interrupt management 158


ioctl 217 IO 160
open 214 tree 148
read 215 device and driver configuration file 136
select 218 device and driver uninstallation 142
uninstall 213 device driver
write 216 buf_entry structure 211
character install entry point 212 common structure 210
character ioctl entry point 217 device information structure 210
character open entry point 214 device statics structure 211
character read entry point 215 entry point 208, 211
character select entry point 218 error return codes 209
character uninstall entry point 213 file structure 211
character write entry point 216 hardware interfaces 210
close entry point 15 organization 210
combining synchronization mechanisms 58 sel structure 211
common error messages during dynamic service calls 224
installation 142 Device Driver Interface Standard 207
communicating with queues 94 device driver interfaces 208
configuration file device information structure 210
config.tbl 136 device numbers
driver 199 major 3
contacting LynuxWorks xiii minor 3
controlling interrupts 126 Device Resource Manager
copying data to mbufs 194 see DRM
counting kernel semaphores device statics structure 211
functions 43 device tree 148
creating kernel threads 114 devices
critical code region 34, 35, 51 referencing 3
critical regions, nesting 62 devinfo.h 135
direct memory access 29
disabling
interrupts 48
D preemption 49
disk.h 24
data structures, kernel 179 dldd.h 138
debugging 143 DMA
del_busnode () 167 transfers 29
del_devnode () 167 documents
determining packet type 194 LynxOS xi
device driver
access 70 basics 1–??
address space management 159 block 2
classifications 3 character 2
identification 157 configuration file 199
information declaration 9, 141 entry points 189
information definition 7, 141 installation 140
information structure 7 response time 112
insertion/removal 161 source code 136, 138
installation 141



structure 7–24 event synchronization 33, 54
types 1 example driver 173
drivers example, thread-based VMPC tty driver 127
referencing 3 examples
DRM 147–178 character driver 73
components 149 exclusive access 115, 122
concepts 148
initialization 153
node states 151
nodes 150 F
service routines 153
tree traversal 160 file structure 211
dynamic installation 133, 134 file system driver service calls 238
file system kernel objects 245
find_child () 164
finding the interface name 189
E free list, manipulating 58
free_cmem() 28
EAGAIN 209 free_res_bus () 164
EFAULT 209 free_res_dev () 164
EINVAL 209 free1page() 28
EIO 209 freeing mbufs 186
elementary device drivers 73
ENODEV 209
ENOMEM 209
ENOSYS 209 G
ENOTSUP 209
enqueueing packet 195 get1page() 28
entry point 208, 211
install 189
ioctl 20, 195
open 14 H
reset 197
select 18 handling
strategy 23 bus errors 69
top-half 118, 123 error conditions 55
uninstall 22 signals 54
watchdog 197 header files
entry point routines 11 devinfo.h 135
entry points 11, 189 disk.h 24
for block device driver 212 dldd.h 138
for character device driver 211 kernel.h 29, 42
ENXIO 210 mem.h 30
EPERM 210 st.h 123
error conditions, handling 55 sysmbuf.h 183
error messages 142 hints for device driver debugging 144
error return codes 209
ether_multi structure 199
ether_output function 191
Ethernet address, initializing 190
Ethernet header 180


kernel objects 243


currpid 244
I currpptr 243
currpss 244
ifnet structure, initializing 190
currtptr 243
init_buslayer () 163
drambase 244
initializing
freebflag 245
the Ethernet address 190
freebsem 245
the ifnet structure 190
prelocked_sb_dirty 246
insertnode () 167
prelocked_sb 245
install entry point 189
prelocked_sb_sync 246
installation and debugging 133–145
runtime_schedule 245
interface name, finding 189
Seconds 244
interface specification 155
st_table 243
interrupt disabling, using for mutual
struct pentry *proctab 243
exclusion 52
system_nproc 245
interrupt handler 116
system_nstasks 245
interrupt handlers 93
tickspersec 244
interrupt handlers, general format 94
Uptime 244
interrupt handling 93–110
Useconds 244
interrupts 93
kernel semaphores 41
controlling 126
counting kernel semaphores 41
disabling 48
general characteristics 41
dispatch time 112
kernel mutexes 43
handlers 93, 116
priority inheritance semaphores 43
latency 112
using for mutual exclusion 51
ioctl
kernel threads and priority tracking 111–132
entry point 195
kernel.h 29, 42
ioctl() 168
IP multicasting support 199

L
K LynuxWorks, contacting xiii
LynxOS
kernel
address types 26
data structures 179
documents xi
direct mapped 26
interrupt handlers 93
priorities 121
LynxOS Device Driver Interface Standard 207–
rebuilding 137
247
semaphores 41
definitions 246
thread 116, 119, 124, 198
LynxOS device driver interfaces 208
thread processing 193
thread structure 115
threads 111, 114
threads structure 115
threads, creating 114
M
virtual 26 macros
kernel logical resources, accessing 225
LOCK_ILOOKUP() 240



UNLOCK_ILOOKUP() 240 pcialloc_bus_res() 171
manipulating a free list 58 pcialloc_busno() 171
map_resource () 165 pcialloc_dev_res() 171
mbuf pcialloc_init() 171
allocating 185 pcifree_bus_res() 172
clusters 186 pcifree_busno() 172
copying data to 194 pcifree_dev_res() 172
freeing 186 physical memory 27
removing data in 185 plug-in resource allocator 170
mechanisms, synchronization 37 preemption disabling, using for mutual
mem.h 30 exclusion 53
memory preemption, disabling 49
allocating 27 priority
management 25–32 tracking 111, 119, 198
management unit 25 priority tracking functions 231
memory management functions 230 probing for devices 70
memory mapped I/O access ordering 70
MM_BLOCK Device Driver API 229
multiple access 117, 123
mutex 35 Q
mutual exclusion 51
queue use 94
queueing
structure 202
N queueing, alternate 202
queues 94
nesting critical regions 62
network device drivers 179–202
node creation 141
nonatomic requests 125 R
read_device () 166
real-time response 113
O real-time response, signal handling and 61
rebuilding the kernel 137
open entry point 14 referencing devices and drivers 3
removing data in an mbuf 185
reset entry point 197
resource pool management 36, 57
P resources 168
routine, entry point 11
packet
enqueueing 195
input 193
output 191 S
queues 188
type, determining 194 sel structure 211
PCI bus layer 168 service calls 224
plug-in resource allocator 170 _exit() 226
resources 168 _getpgrp() 225
services 168 _getrpriority() 226


_kill() 226 pgeterr() 230


_killpg() 226 pi_init() 234
alloc_cmem() 230 priot_add() 231
bdev_wbsize() 238 priot_addn() 231
bdwrite() 238 priot_max() 231
bforget() 238 priot_remove() 231
bread() 238 priot_removen() 231
brelse() 238 pseterr() 231
bwrite() 238 rbounds() 232
cancel_timeout() 236 recojmp() 232
cb_clean() 239 recoset() 228
cprintf() 241 restore() 232
deliversigs() 227 scount() 235
dfread() 239 sdisable() 232
disable() 227 setprio() 237
dmz() 238 sismine() 235
execve() 227 sreset() 235
f_len() 239 srestore() 233
f_name() 239 ssignal() 234
free_cmem() 230 ssignaln() 235
free1page() 230 st_awaken() 232
freefile() 237 st_catatonia() 233
freeup_inode() 239 st_getstid() 233
get_phys() 229 stgetid() 233
get_timestamp() 227 stgetprio() 233
get1page() 230 stremove() 233
getblk() 239 stsetprio() 233
getblkp() 239 ststart() 234
getime() 227 swait() 234
getime_u() 227 symlinkcopy() 240
getpid() 226 sysbrk() 235
getpriority() 226 sysfree() 235
getticks() 236 timeout() 236
getuid() 226 timeout_ex() 236
ioint_link() 228 tkill() 237
iointclr() 228 tmgr_close() 241
iointset() 228 tmgr_ex() 242
kgetenv() 237 tmgr_install() 241
kkprintf() 241 tmgr_ioctl() 242
kmutex_init() 234 tmgr_open() 241
ksem_init() 234 tmgr_read() 242
ksprintf() 229 tmgr_select() 242
LOCK_ILOOKUP() 240 tmgr_tx() 242
make_entry() 240 tmgr_uninstall() 241
mmchain() 229 tmgr_write() 242
mmchainjob() 229 UNLOCL_ILOOKUP() 240
nanotime() 227 unswait() 234
newcinode() 240 usec_sleep() 237
noreco() 228 vfs_insert() 240
permap() 229 wbounds() 232



wgetblkp() 240 TTY manager service calls 241
services 168 typical structure of interrupt-based driver 95
signal handling and real-time response 61
signals, handling 54
SIOCSIFADDR 196
SIOCSIFFLAGS 196 U
sreset, caution when using 56
sreset, using with event synchronization unmap_resource () 166
semaphores 55 use of queues 94
st.h 123 user and kernel priorities 121
starting transmission 193 user virtual 26
static installation 133 using DRM facilities from device drivers 157
static installation procedure 134
static versus dynamic installation 133
statics structure 7, 10, 188
statistics counters 193, 195 V
struct arpcom 180
struct ether_header 180 variable length transfers 55
struct ifnet 182 virtual memory
struct in_addr 181 architecture 25
struct mbuf 183
struct sockaddr 181
struct sockaddr_in 181
structure of a kernel thread 115 W
structure, interrupt-based driver 95
watchdog entry point 197
structures
write_device () 166
device information 7
statics 7
synchronization 33–62, 69
combining mechanisms 58
mechanisms 37
selectivity of 37–49
sys/mbuf.h 183
sysbrk() 27
sysfree() 27

T
task completion time 113
tasks
completion time 113
response time 112
Technical Support xiii
thread processing 193
top-half entry point 116, 118, 123
transfers
direct memory 29
DMA 29
translate_addr() 167

