
CHAPTER-24

MEMORY MANAGEMENT
24.1 Introduction:
A kernel manages the programs within an embedded system via tasks. The kernel
must also have some system for loading and executing tasks within the system, since the
CPU only executes task code that is in cache or RAM. With multiple tasks sharing the
same memory space, an OS needs a security mechanism to protect task code from
other independent tasks.
In general, a kernel's memory management responsibilities include:

- Managing the mapping between logical (physical) memory and task memory references.
- Determining which processes to load into the available memory space.
- Allocating and deallocating memory for the processes that make up the system.
- Supporting memory allocation and deallocation of code requests (within a process).
- Tracking the memory usage of system components.
- Ensuring cache coherency (for systems with cache).
- Ensuring process memory protection.

An OS treats memory as one large one-dimensional array, called a memory map. Either a
hardware component integrated into the master CPU or a component on the board does the
conversion between logical and physical addresses. Most OSs typically run in two modes: kernel
mode and user mode. Higher layers of software typically run in user mode, and access
anything running in kernel mode via system calls.
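
As a small illustration of this split, the C sketch below allocates and frees memory from ordinary user-mode task code; it is assumed here that the C library's allocator, in turn, obtains its memory from the kernel through system calls, the details of which vary from OS to OS.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        /* User-mode request for a block of memory; under the hood the C library
           asks the kernel for more memory via a system call when its pool runs low. */
        char *buffer = malloc(64);
        if (buffer == NULL) {
            return 1;           /* the request could not be satisfied */
        }

        strcpy(buffer, "allocated in user mode");
        printf("%s\n", buffer);

        free(buffer);           /* return the block to the allocator */
        return 0;
    }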

24.2 User Memory Space:

Because multiple processes share the same physical memory when
loaded into RAM for processing, there must also be some protection
mechanism so that processes cannot inadvertently affect each other when being swapped
in and out of a single physical memory space. These issues are typically resolved
by the OS through memory swapping, where partitions of memory are swapped
in and out of memory at runtime.

The most common partitions of memory used in swapping are segments and pages.
Segmentation and paging not only simplify the swapping (the memory allocation and
deallocation) of tasks in memory, but also allow for code reuse and memory protection,
as well as providing the foundation for virtual memory.

Virtual memory is a mechanism managed by the OS to allow a device's
limited memory space to be shared by multiple competing user tasks, in essence
enlarging the device's actual physical memory space into a larger virtual
memory space.
24.3 Segmentation:
A process encapsulates all of the information that is involved in executing a
program, including the source code, stack, and data. All of the different types of information within
a process are divided into logical memory units of variable size, called segments. A
segment is a set of logical addresses containing the same type of information. Segment
addresses are logical addresses that start at 0 and are made up of a segment number,
which indicates the base address of the segment, and a segment offset, which defines
the actual physical memory address within that segment. Segments are independently protected,
meaning they have assigned accessibility characteristics, such as shared, read-only, or read/write.
Most OSs typically allow processes to have all or some combination of five types of information
within segments: the text (or code) segment, data segment, BSS (block started by symbol) segment,
stack segment, and heap segment. A text segment is a memory space containing the
source code. A data segment is a memory space containing the source code's initialized
variables (data). A BSS segment is a statically allocated memory space containing the
source code's uninitialized variables (data).
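
To make the segment-number/segment-offset idea concrete, the sketch below translates a logical (segment, offset) pair into a physical address using a hypothetical per-process segment table; the base addresses, limits, and permission flags are illustrative assumptions, not any particular OS's layout.

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical segment-table entry: base address, size, and access rights. */
    struct segment {
        uint32_t base;      /* physical base address of the segment */
        uint32_t limit;     /* size of the segment in bytes         */
        bool     writable;  /* read/write vs. read-only             */
    };

    /* Illustrative table: text, data, BSS, stack, heap. */
    static const struct segment seg_table[5] = {
        { 0x00100000, 0x4000, false },  /* text: read-only code   */
        { 0x00200000, 0x1000, true  },  /* data: initialized vars */
        { 0x00201000, 0x1000, true  },  /* BSS:  zero-filled vars */
        { 0x00300000, 0x2000, true  },  /* stack                  */
        { 0x00400000, 0x8000, true  },  /* heap                   */
    };

    /* Translate (segment number, offset) to a physical address.
       Returns 0 on a bounds or protection violation. */
    uint32_t segment_to_physical(unsigned segnum, uint32_t offset, bool write)
    {
        if (segnum >= 5 || offset >= seg_table[segnum].limit)
            return 0;                           /* offset outside the segment    */
        if (write && !seg_table[segnum].writable)
            return 0;                           /* write to a read-only segment  */
        return seg_table[segnum].base + offset; /* base + offset = physical addr */
    }
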
The data, text, and BSS segments are all fixed in size at compile time and are,
as such, static segments; it is these segments that typically are part of the executable file.
Executable files can differ in which segments they are composed of, but in general they
contain a header and different sections that represent the types of segments, including their
names, permissions, and so on, where a segment can be made up of one or more sections. The
OS creates a task image by memory mapping the contents of the executable file, meaning
loading and interpreting the segments reflected in the executable into memory.
The stack and heap segments, on the other hand, are not fixed at
compile time; they can change in size at runtime and so are dynamically allocated
components.
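
As a rough illustration of where these five segment types show up in a C program, the example below notes which segment each object would typically end up in; the exact placement is toolchain- and OS-dependent, so treat the comments as typical rather than guaranteed.

    #include <stdlib.h>

    int initialized_counter = 42;   /* data segment: initialized global        */
    int uninitialized_buffer[256];  /* BSS segment: zero-filled, uninitialized */

    int add(int a, int b)           /* the function's machine code lives in    */
    {                               /* the text (code) segment                 */
        return a + b;
    }

    int main(void)
    {
        int local = add(1, 2);              /* stack segment: local variable    */
        int *dynamic = malloc(sizeof(int)); /* heap segment: runtime allocation */

        if (dynamic != NULL) {
            *dynamic = local + initialized_counter + uninitialized_buffer[0];
            free(dynamic);                  /* heap space returned at runtime   */
        }
        return 0;
    }
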
A stack segment is a section of memory that is structured as a LIFO queue, where
data is pushed onto the stack or popped off of the stack. Stacks are typically used as a simple
and efficient method within a program for allocating and freeing memory for data that is
predictable. In a stack, all used and freed memory space is located consecutively within
the memory space. However, since push and pop are the only two operations
associated with a stack, a stack can be limited in its uses. A heap segment is a section of
memory that can be allocated in blocks at runtime, and is typically set up as a free linked
list of memory fragments.
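
The fragment below sketches how a heap built as a free linked list of memory fragments might be organized: a first-fit search over a list of free blocks carved out of one static memory pool. It is a simplified illustration (no block splitting, coalescing, or alignment handling), not a production allocator.

    #include <stddef.h>

    #define POOL_SIZE 4096

    /* Each free fragment starts with a header linking it to the next one. */
    struct free_block {
        size_t             size;  /* usable bytes in this fragment  */
        struct free_block *next;  /* next fragment on the free list */
    };

    static unsigned char      heap_pool[POOL_SIZE]; /* backing memory for the heap */
    static struct free_block *free_list;

    /* Initialize the heap as a single large free fragment. */
    void heap_init(void)
    {
        free_list       = (struct free_block *)heap_pool;
        free_list->size = POOL_SIZE - sizeof(struct free_block);
        free_list->next = NULL;
    }

    /* First-fit allocation: take the first fragment large enough. */
    void *heap_alloc(size_t size)
    {
        struct free_block **link = &free_list;

        for (struct free_block *blk = free_list; blk != NULL; blk = blk->next) {
            if (blk->size >= size) {
                *link = blk->next;       /* unlink the fragment from the free list */
                return (void *)(blk + 1);
            }
            link = &blk->next;
        }
        return NULL;                     /* no fragment large enough */
    }

    /* Return a block to the head of the free list. */
    void heap_free(void *ptr)
    {
        struct free_block *blk = (struct free_block *)ptr - 1;
        blk->next = free_list;
        free_list = blk;
    }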

24.4 Paging and Virtual Memory:


Paging:
Either with or without segmentation, some OSs divide logical memory into some
number of fixed-size partitions, called blocks, frames, pages, or some combination of a
few or all of these. For example, with OSs that divide memory into frames, the logical address is
composed of a frame number and an offset. The user memory space can then also be
divided into pages, where page sizes are typically equal to frame sizes. When a process is
loaded in its entirety into memory, its pages may not be located within a contiguous set of
frames. Dividing up logical memory into pages aids the OS in more easily managing
tasks being relocated in and out of the various types of memory in the memory hierarchy, a
process called swapping.
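
To illustrate how a logical address splits into a page number and an offset, the sketch below assumes 4 KB pages (a 12-bit offset) and a small flat page table mapping page numbers to frame numbers; the table contents are purely illustrative.

    #include <stdint.h>

    #define PAGE_SHIFT 12u                      /* 4 KB pages -> 12-bit offset */
    #define PAGE_SIZE  (1u << PAGE_SHIFT)       /* 4096 bytes                  */
    #define NUM_PAGES  16u                      /* small illustrative table    */

    /* Illustrative page table: logical page number -> physical frame number.
       Note that the frames need not be contiguous or in order. */
    static const uint32_t page_table[NUM_PAGES] = {
        7, 3, 12, 0, 9, 1, 5, 14, 2, 10, 4, 8, 6, 11, 13, 15
    };

    /* Translate a logical address into a physical address. */
    uint32_t logical_to_physical(uint32_t logical)
    {
        uint32_t page_number = logical >> PAGE_SHIFT;       /* high-order bits     */
        uint32_t offset      = logical & (PAGE_SIZE - 1u);  /* low 12 bits         */

        if (page_number >= NUM_PAGES)
            return 0;                                       /* outside this toy map */

        uint32_t frame = page_table[page_number];
        return (frame << PAGE_SHIFT) | offset;              /* frame base + offset  */
    }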

Virtual Memory:
Virtual memory is typically implemented via demand segmentation and/or
demand paging, memory fragmentation techniques in which only the segments and/or pages that are
currently in use are loaded into RAM. In a virtual memory system, the OS generates
virtual addresses based on logical addresses, and maintains tables for converting sets of logical
addresses into virtual addresses. The OS can end up managing more than one
different address space for each process.
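
The fragment below sketches the "demand" part of demand paging: each page table entry carries a present bit, and a page is only brought into RAM (from a hypothetical backing store) the first time it is touched. The entry layout and the helper functions are assumptions for illustration only.

    #include <stdint.h>
    #include <stdbool.h>

    #define NUM_PAGES 16u

    /* Hypothetical page table entry for a demand-paged system. */
    struct pte {
        uint32_t frame;    /* physical frame number, valid only when present */
        bool     present;  /* is this page currently loaded in RAM?          */
    };

    static struct pte page_table[NUM_PAGES];
    static uint32_t   next_free_frame;

    /* Illustrative stand-ins: a real OS would manage a pool of free frames and
       copy the page contents in from flash, disk, or another backing store. */
    static uint32_t allocate_free_frame(void)
    {
        return next_free_frame++;
    }

    static void load_page_from_storage(uint32_t page, uint32_t frame)
    {
        (void)page;
        (void)frame;
    }

    /* Return the frame holding the page (page < NUM_PAGES assumed),
       loading it on first use, i.e., on a page fault. */
    uint32_t demand_page_in(uint32_t page)
    {
        if (!page_table[page].present) {
            uint32_t frame = allocate_free_frame();  /* page fault: no frame yet */
            load_page_from_storage(page, frame);     /* bring the page into RAM  */
            page_table[page].frame   = frame;
            page_table[page].present = true;
        }
        return page_table[page].frame;               /* page already resident    */
    }
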
24.5 Example of Memory Management:
The MPC860's internal memory map contains the architecture's special purpose registers (SPRs),
as well as the dual-port RAM, also referred to as parameter RAM, that contains the buffers of
the various integrated components, such as Ethernet or I2C. On the MPC860, it is simply a
matter of configuring one of these SPRs, the Internal Memory Map Register (IMMR),
shown in Fig. 24.1, to contain the base address of the internal memory map, as well as
some factory-related information on the specific MPC860 processor.

In the case of the sample memory map used in this section, the internal memory
map starts at 0x09000000, so in pseudocode form, the IMMR would be set to this value
via the mfspr or mtspr commands:

    mtspr IMMR, 0x090000FF  // the top 16 bits are the base address; bits 16-23 are the part number

The MPC860 uses its MMUs to implement the board's virtual memory
management scheme, providing logical/effective to physical/real address translation,
cache control, and memory access protection. The MPC860 MMU (shown in Fig. 24.2a)
supports a 4 GB uniform (user) address space that can be divided into pages of a
variety of sizes, specifically 4 KB, 16 KB, 512 KB, or 8 MB, which can be individually
protected and mapped to physical memory. Using the smallest page size a virtual address
space can be divided into on the MPC860, a translation table (also commonly referred to as a
memory map or page table) would contain a million address translation entries, one for
each 4 KB page in the 4 GB address space. The MPC860 MMU does not manage the
entire translation table at one time, because embedded boards do not
typically have 4 GB of physical memory that needs to be managed at one time. Instead, the
MPC860 MMU contains small caches within it that store a subset of this memory map. These
caches are referred to as TLBs (translation lookaside buffers), shown in Fig. 24.2b. In the case of
the MPC860, the TLBs are 32-entry, fully associative caches.
The TLB is how the MMU translates (maps) logical/virtual addresses to
physical addresses. On a TLB miss, the system software loads the new entry into the TLB through a
process called a tablewalk, which is basically the process of traversing the MPC860's two-level
memory map tree in main memory to locate the desired entry to be loaded into the
TLB. The first level of the PowerPC's multilevel translation table scheme (its translation
table structure uses one level 1 table and one or more level 2 tables) refers to page table
entries in the page tables of the second level. The level 1 table has 1024 entries, where each entry is 4
bytes and represents a segment of virtual memory that is 4 MB in size. The format of an
entry in the level 1 table is made up of a valid bit field (indicating that the respective 4 MB
segment is valid), a level 2 base address field (if the valid bit is set, a pointer to the base address of
the level 2 table that represents the associated 4 MB segment of virtual memory), and
several attribute fields describing the various attributes of the associated memory
segment. Within each level 2 table, every entry represents the pages of the respective
virtual memory segment. The number of entries in a level 2 table depends on the defined
virtual memory page size (4 KB, 16 KB, 512 KB, or 8 MB); see Table 24.1. The larger the
virtual memory page size, the less memory is used for level 2 translation tables; for example,
covering a 4 MB segment requires 4 MB / 4 KB = 1024 level 2 entries with 4 KB pages, but
only 4 MB / 512 KB = 8 entries with 512 KB pages. The page
offset of the 4 KB effective address format is 12 bits wide to accommodate the offset
within the 4 KB (0x000 to 0xFFF) pages. The page offset of the 16 KB effective
address format is 14 bits wide to accommodate the offset within the 16 KB (0x0000 to
0x3FFF) pages. The page offset of the 512 KB effective address format is 19 bits wide to
accommodate the offset within the 512 KB (0x00000 to 0x7FFFF) pages.

In short, the MMU uses these effective address fields (level 1 index, level 2
index, and offset) in conjunction with other registers, the TLBs, the translation tables, and the
tablewalk process to determine the associated physical address.
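
As a rough sketch of this two-level lookup for 4 KB pages, the C fragment below splits a 32-bit effective address into a 10-bit level 1 index, a 10-bit level 2 index, and a 12-bit page offset (10 + 10 + 12 = 32 bits), then walks the two tables. The entry layouts are simplified illustrations, not the exact MPC860 descriptor formats.

    #include <stdint.h>

    #define L1_ENTRIES 1024u   /* 4 GB address space / 4 MB per segment */
    #define L2_ENTRIES 1024u   /* 4 MB segment / 4 KB per page          */

    /* Simplified level 1 entry: valid bit plus pointer to a level 2 table. */
    struct l1_entry {
        uint32_t  valid;
        uint32_t *l2_table;    /* each level 2 entry holds a physical frame base */
    };

    static struct l1_entry level1[L1_ENTRIES];

    /* Walk the two-level table to translate an effective address.
       Returns 0 if the covering segment is not valid (a real OS would
       fault and load the missing mapping here). */
    uint32_t tablewalk(uint32_t effective_addr)
    {
        uint32_t l1_index = (effective_addr >> 22) & 0x3FFu;  /* top 10 bits  */
        uint32_t l2_index = (effective_addr >> 12) & 0x3FFu;  /* next 10 bits */
        uint32_t offset   =  effective_addr        & 0xFFFu;  /* low 12 bits  */

        if (!level1[l1_index].valid)
            return 0;                                   /* unmapped 4 MB segment */

        uint32_t frame_base = level1[l1_index].l2_table[l2_index];
        return frame_base | offset;                     /* physical address      */
    }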
