
Access Times

The average period of time (in nanoseconds) it takes for RAM to complete one access and begin another. Access time is composed of latency (the time it takes to initiate a request for data and prepare to access it) and transfer time. DRAM chips for personal computers have access times of 50 to 150 nanoseconds (billionths of a second); static RAM (SRAM) has access times as low as 10 nanoseconds. Ideally, memory access times should be fast enough to keep up with the CPU; if they are not, the CPU wastes a certain number of clock cycles, which slows it down. A nanosecond (ns or nsec) is 10⁻⁹ second, one billionth of a second.

Async SRAM
(Asynchronous Static Random Access Memory)
Manufacturer: many Year Introduced: Burst Timing: 3-1-1-1 Voltage: 2.7-3.1v Speed: 8.5ns Frequency: 500 MHz Pins: 85-ball PBGA Bandwidth:

Async SRAM has been with us since the days of the 386, and is still in place in the L2 cache of many PCs. It's called asynchronous because it's not in sync with the system clock, and therefore the CPU must wait for data requested from the L2 cache. The wait isn't as long as it is with DRAM, but it's still a wait.

Memory Bandwidth
Memory bandwidth refers to the rate at which data is transferred between the graphics processor and graphics RAM. Memory bandwidth limitations are one of the key bottlenecks that must be overcome to deliver truly realistic 3D environments: truly stunning 3D requires high resolution and 32-bit color depth at high frame rates, with multi-pass texturing to reveal the detail found in real-world environments. The memory bandwidth is determined by the RAM chips used. SDRAM and SGRAM have a bus width of 64 bits; with a dual bus structure (two memory banks in parallel) this becomes 128 bits, and adding another bank in parallel makes it 192 bits. The memory chips operate at a clock speed, usually 100 MHz, so the theoretical memory bandwidth is bus width x clock speed: 800 MBps (64 bits), 1.6 GBps (128 bits) and 2.4 GBps (192 bits). Dual-bank SGRAM clocked at 200 MHz has 128 bits x 200 MHz = 3.2 GBps of memory bandwidth.
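As a quick illustration of the arithmetic above, the Python sketch below computes theoretical bandwidth as bus width times clock speed; the figures are the ones from this paragraph and the function name is only for illustration.

def memory_bandwidth_mbps(bus_width_bits, clock_mhz):
    """Theoretical bandwidth in MB/s: convert the bus width to bytes, then multiply by the clock."""
    return bus_width_bits / 8 * clock_mhz

# The cases discussed above:
for bits, mhz in [(64, 100), (128, 100), (192, 100), (128, 200)]:
    print(bits, "bits @", mhz, "MHz ->", memory_bandwidth_mbps(bits, mhz), "MB/s")
# 64 @ 100 -> 800, 128 @ 100 -> 1600, 192 @ 100 -> 2400, 128 @ 200 -> 3200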

BEDO DRAM (Burst Extended Data Output DRAM)


Manufacturer: many    Year Introduced: 1995    Burst Timing: 4-1-1-1    Voltage: 5.0v
Speed: 25ns    Frequency: 40-66 MHz    Pins: 72    Bandwidth:

Burst Extended Data Output DRAM (Burst EDO, BEDO) is a variant of EDO DRAM in which read or write cycles are batched in bursts of four. The memory bursts wrap around on a four-byte boundary, which means that only the two least significant bits of the CAS (Column Address Strobe) address are modified internally to produce each address of the burst sequence. Consequently, Burst EDO memory bus speeds range from 40 MHz to 66 MHz, well above the 33 MHz bus speeds that can be accomplished using FPM (Fast Page Mode) or EDO DRAM. Burst EDO became available sometime before May 1995. BEDO DRAM can only stay synchronized with the Central Processing Unit clock for short periods (bursts), and it cannot keep up with processors whose buses run faster than 66 MHz. Despite the fact that BEDO arguably provides more improvement over EDO than EDO does over FPM, the standard lacked chipset support and consequently never really caught on, losing out to Synchronous DRAM (SDRAM).

BIOS (Basic Input Output System)


BIOS stands for Basic Input Output System. It is firmware that controls much of the computer's input/output functions, such as communication with disk drives, the printer, RAM chips and the monitor. It is a set of software instructions hard-coded into a chip, and these instructions have changed as hardware technology has developed. The BIOS is typically placed on a ROM chip that comes with the computer (it is often called a ROM BIOS). This ensures that the BIOS will always be available and will not be damaged by disk failures; it also makes it possible for a computer to boot itself. Because RAM is faster than ROM, though, many computer manufacturers design systems so that the BIOS is copied from ROM to RAM each time the computer is booted. This is known as shadowing. Most modern PCs have a Flash BIOS, which means that the BIOS has been recorded on a flash memory chip, which can be updated if necessary.

The PC BIOS is standardized, so all PCs are alike at this level (although there are different BIOS versions). Additional DOS functions are usually added through software modules, which means you can upgrade to a newer version of DOS without changing the BIOS. PC BIOSes that can handle Plug-and-Play (PnP) devices are known as PnP BIOSes, or PnP-aware BIOSes. These BIOSes are always implemented with flash memory rather than ROM.

The POST (Power On Self Test) is the first instruction executed during start-up. It checks the PC components and verifies that everything works; you can recognize it during the RAM test, which occurs as soon as you turn the power on. You can follow the checks being executed in this order, as the information is gathered: 1. information about the graphics adapter; 2. information about the BIOS (name, version); 3. information about the RAM (being counted). As users, we have only limited ability to manipulate the POST instructions, but certain system boards enable the user to order a quick system check, and some enable the user to disable the RAM test, thereby shortening the duration of the POST. The duration of the POST can vary considerably in different PCs. On the IBM PC 300 computer it is very slow, but you can interrupt it by pressing [Esc].

Error messages: If POST detects errors in the system, it will write error messages on the screen. If the monitor is not ready, or if the error is in the video card, it will also sound a pattern of beeps (for example, 3 short and one long) to identify the error to the user. If you want to know more about the beeps, you can find explanations on the Award, AMI and Phoenix web sites. For instance, you will receive error messages if the keyboard is not connected or if something is wrong with the cabling to the floppy drive. AMI BIOS / Phoenix - Award BIOS / MR BIOS

Buffered Memory
A term used to describe a memory module that contains buffers. The buffers re-drive the signals through the memory chips and allow the module to be built with a greater number of memory chips. Buffered and unbuffered RAM cannot be mixed. The design of the computer's memory controller dictates which type of RAM must be used.

Burst Mode
Bursting is a rapid data-transfer technique that automatically fetches a block of data (a series of consecutive addresses) every time the processor requests a single address, on the assumption that the next data address the processor requests will be sequential to the previous one. Bursting can be applied both to read operations (from memory) and write operations (to memory).
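To make the idea concrete, here is a tiny Python sketch that expands one requested address into the consecutive addresses a burst would cover; the burst length of 4 and the 8-byte word size are assumptions chosen for the example, not values from the text.

def burst_addresses(start_address, burst_length=4, word_bytes=8):
    """Consecutive addresses covered by one burst, starting at the requested address."""
    return [start_address + i * word_bytes for i in range(burst_length)]

print([hex(a) for a in burst_addresses(0x1000)])
# ['0x1000', '0x1008', '0x1010', '0x1018'] - one request, four sequential transfers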

CDRAM, Cached DRAM


Manufacturer: Year Introduced: Burst Timing: Voltage: Speed: Frequency: Pins: Bandwidth:

Cache DRAM (CDRAM) is a development that has a localized, on-chip cache with a wide internal bus composed of two sets of static data-transfer buffers between the cache and the DRAM. This architecture achieves concurrent operation of DRAM and SRAM synchronized with an external clock. Separate control and address input terminals for the two portions enable independent control of the DRAM and SRAM, so the system achieves continuous and concurrent operation of both. CDRAM can handle CPU, direct memory access (DMA) and video refresh traffic at the same time by utilizing half-time multiplexed interleaving through a high-speed video interface; the system transfers data from DRAM to SRAM during the CRT blanking period. Graphics memory, as well as main memory and cache memory, are unified in the CDRAM. As you can see, CDRAM can replace cache and main memory, and it has already been shown that a CDRAM-based system has a 10 to 50 percent performance advantage over a 256-Kbyte cache-based system.

CAS - RAS latency


Latency is the time it takes for a memory device, after receiving a command and address, to produce its first data word. CAS latency is the number of clock cycles between the issuance of the read command and the moment the data comes out. This is a critical element of speed for PC100/133 memory: DIMMs with CAS latency 2 are faster than DIMMs with CAS latency 3, and an 8ns DIMM with CAS latency 2 is faster than a 6ns DIMM with CAS latency 3. A 100 MHz DIMM operating at CAS latency 2 performs like a module running at 125 MHz, enhancing system performance; a CAS 2 DIMM operates 15-25% faster depending on the system application. It is recommended that P-II, P-III and Athlon systems use CAS 2 DIMMs running at 100 or 133 MHz for extra speed and crash-free operation.

What are RAS and CAS? They stand for Row Address Strobe (RAS) and Column Address Strobe (CAS). Each describes how long it takes to read a row or column of memory cells, known as the RAS/CAS latency. Each is described with a rating number, where lower numbers are better; the rating also depends on the front side bus speed of your motherboard, so it may rise at higher speeds. A common marketing term attached to SDRAM modules is either "CAS2" or "CAS3". Unfortunately, this is a bit misleading, as they should be referred to as CL2 or CL3, since they refer to CAS Latency timings (2 clocks vs. 3 clocks). As discussed above, the CAS latency of a chip is determined by the column access time (tCAC), the time it takes to transfer the data to the output buffers from the moment the /CAS line is activated. The "rule" for determining the CAS latency is based on this relation: CL x tCLK >= tCAC. In lay terms, the CAS latency times the system clock cycle length must be greater than or equal to the column access time (tCAC). In other words, if tCLK is 10ns (100 MHz system clock) and tCAC is 20ns, the CAS latency (CL) can be 2; but if tCAC is 25ns, then the CAS latency (CL) must be 3. The SDRAM specification permits CAS latency values of 1, 2 or 3.
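A small Python sketch of that rule, using the two worked numbers from the paragraph above (the function name is just illustrative):

import math

def cas_latency(t_cac_ns, clock_mhz):
    """Smallest whole number of clocks whose total length covers the column access time."""
    t_clk_ns = 1000.0 / clock_mhz        # clock period in ns
    return math.ceil(t_cac_ns / t_clk_ns)

print(cas_latency(20, 100))  # tCAC = 20 ns at 100 MHz -> CL 2
print(cas_latency(25, 100))  # tCAC = 25 ns at 100 MHz -> CL 3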

CMOS RAM
(Complementary Metal Oxide Semiconductor Random Access Memory) CMOS (pronounced see-moss) stands for complementary metal-oxide semiconductor. This is a type of memory chip with very low power requirements, and in PCs it operates using a small battery. In PCs, CMOS is more specifically referred to as CMOS RAM: a tiny 64-byte region of memory that, thanks to the battery power, retains data when the PC is shut off. The function of CMOS RAM is to store information your computer needs when it boots up, such as hard disk types, keyboard and display type, chip set, and even the time and date. If the battery that powers your CMOS RAM dies, all this information is lost, and your PC will boot with the default information that shipped with the motherboard. In most cases, this means you'll have no access to your hard disks until you supply CMOS with the necessary information, and without access to your hard disks you won't be able to boot your operating system. Fortunately, today's CMOS RAM is protected by nickel-cadmium batteries, which the computer's power supply recharges. Even so, it's an extremely good idea to keep a copy of all the information stored in CMOS, in case disaster strikes.

The information stored in CMOS is required by your computer's Basic Input/Output System, or BIOS (pronounced bye-oss). Your PC contains several BIOSes--the video BIOS that interfaces your CPU and video card, for example--but the most fundamental is the system BIOS. The system BIOS is stored on a ROM (read-only memory) chip on the motherboard and is copied at boot time to a 64K segment of upper system RAM for faster access (RAM is faster than ROM). The role of the system BIOS is to boot the system, recognize the hardware devices, and locate and launch the operating system. Once the operating system is loaded, the BIOS then works with it to enable access to the hardware devices.

Compact Flash
A small flash memory module. The memory chips are enclosed in a plastic case and retain data after they are removed from the system. The most common uses for these are in pagers, handheld computers, cell phones, digital cameras, and audio players.

Compact Flash Performance

                                     Type I                  Type II
Dimensions                           36.4 x 42.8 x 3.3 mm    36.4 x 42.8 x 5.0 mm
Contacts                             50 pins                 50 pins
Capacity                             8-160 MBytes            160-512 MBytes
Transfer Rate                        8 MBps (burst)          8 MBps (burst)
Data Read                            1.3-1.5 MBps            1.3-1.5 MBps
Data Write (using 6-way interleave)  3 MBps                  3 MBps

BASE or Conventional Memory


Conventional memory is the first 640 KB of memory. DOS-based programs load and run from within this first 640 KB. Note: application programs such as Lotus 123 version 2.x or WordPerfect 5.1 must have enough conventional memory available to load and run before they can access and use any expanded memory. The expanded memory they use is mainly for data storage; the program itself does not run from expanded memory.

DDR Synchronous DRAM


(Double Data Rate Synchronous Dynamic RAM)
Manufacturer: Kingston Speed: 6-8ns

Year Introduced: 2000 CAS latency: CL=2-2.5 Voltage: 2.5v

Frequency: 200-533 MHz Pins: 172/184 Bandwidth: 1.6-4.2 GBps

Many other alternative methods of memory access are in development. One of the most promising is Double Data Rate (DDR) Synchronous DRAM. Like Synchronous DRAM before it, DDR SDRAM interleaves RAM accesses so that several accesses can be in progress at once. DDR SDRAM transfers data twice for each tick of the memory bus clock, effectively doubling the memory bus bandwidth. Currently, DDR RAM is used mainly in high-end graphics cards, but it will almost certainly make its way to the main memory of the computer soon. Interleave: the process of taking data bits (singly or in bursts) alternately from two or more memory pages (on an SDRAM) or devices (on a memory card or subsystem).

PC4200, 533 MHz, 4.2 GBps, 266 MHz FSB: (8 Bytes) x (533 MHz) = 4,264 MBps, or about 4.2 GBps
PC4000, 500 MHz, 4.0 GBps, 250 MHz FSB: (8 Bytes) x (500 MHz) = 4,000 MBps, or 4.0 GBps
PC3700, 466 MHz, 3.7 GBps, 233 MHz FSB: (8 Bytes) x (466 MHz) = 3,728 MBps, or about 3.7 GBps
PC3500, 433 MHz, 3.5 GBps, 217 MHz FSB: (8 Bytes) x (433 MHz) = 3,464 MBps, or about 3.5 GBps
PC3200, 400 MHz, 3.2 GBps, 200 MHz FSB: (8 Bytes) x (400 MHz) = 3,200 MBps, or 3.2 GBps
PC3000, 366 MHz, 3.0 GBps, 183 MHz FSB: (8 Bytes) x (366 MHz) = 2,928 MBps, or about 3.0 GBps
PC2700, 333 MHz, 2.7 GBps, 166 MHz FSB: (8 Bytes) x (333 MHz) = 2,664 MBps, or about 2.7 GBps
PC2400, 300 MHz, 2.4 GBps, 150 MHz FSB: (8 Bytes) x (300 MHz) = 2,400 MBps, or 2.4 GBps
PC2100, 266 MHz, 2.1 GBps, 133 MHz FSB: (8 Bytes) x (266 MHz) = 2,128 MBps, or about 2.1 GBps
PC1600, 200 MHz, 1.6 GBps, 100 MHz FSB: (8 Bytes) x (200 MHz) = 1,600 MBps, or 1.6 GBps
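The list above follows a single formula: peak bandwidth in MB/s is the 8-byte (64-bit) module width times the effective data rate in MHz, and the "PCxxxx" name is that bandwidth rounded to a convenient marketing figure. A minimal Python sketch:

def ddr_peak_bandwidth_mbps(data_rate_mhz):
    """Peak transfer rate of a 64-bit (8-byte) wide DDR module at the given effective data rate."""
    return 8 * data_rate_mhz

for rate in (200, 266, 333, 400):
    print(rate, "MHz effective ->", ddr_peak_bandwidth_mbps(rate), "MB/s")
# 200 -> 1600 (PC1600), 266 -> 2128 (~PC2100), 333 -> 2664 (~PC2700), 400 -> 3200 (PC3200)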

DIMMs (Dual In-line Memory Modules)


Manufacturer: many Year Introduced: 1997 Burst Timing: Voltage: 3.3-5.0v Speed: 6-10ns Frequency: 66-133MHz Pins: 168/184 Bandwidth:

Memory-chip modules with 168 pins that connect the module to the PC motherboard. DIMMs are the most popular memory modules available today, and they support 64-bit data transfers. As operating system memory demands increased, larger memory modules were required, yet motherboard space was ever more at a premium. To solve this problem the 168-pin DIMM module, approximately 5.375" x 1", was developed. These are installed singly in later Pentiums, Pentium Pros, and Power Macs, and are offered in non-parity Fast Page, EDO, ECC, or SDRAM modes; 3.3v or 5v; buffered or unbuffered; and 2-clock or 4-clock variants. Their capacities are 8 MB, 16 MB, 32 MB, 64 MB and 128 MB. Choosing the right modules is very critical, as most machines require specific types, sizes and upgrade configurations. The number of black components on a 184-pin DIMM may vary, but they always have 92 pins on the front and 92 pins on the back for a total of 184. 184-pin DIMMs are approximately 5.375" long and 1.375" high, though the heights may vary. While 184-pin DIMMs and 168-pin DIMMs are approximately the same size, 184-pin DIMMs have only one notch within the row of pins.

Identification of 168-pin DIMM types


You can tell that a Dual In-line Memory Module is buffered if you see the buffer logic chips, which are usually marked 74FCT16244, 74ABT16244 or a similar number. An unbuffered DIMM does not have the buffer logic chips; instead, it has a small, 8-pin serial EEPROM (usually marked 24C02) which contains 128 bytes of module information (a lot more information than the simple 10-line presence-detect map of the buffered DIMM).

[Diagram of 168-pin DIMM notch positions - 2001, Innoventions, Inc.]

All key locations are determined by their position relative to the center of the notch area, as seen in the diagram above. The left notch determines whether the module uses a buffered or unbuffered assembly. The right notch determines the voltage requirement. Currently the 5V and 3.3V assignments are the only settings used; however, it is expected that lower settings of 1.5-2.7V will be available in the future, and this new voltage requirement will take the third voltage key position.
PC SDRAM Registered DIMM Specification

External dimensions: 5.375" x 1.375"

DMA (Direct Memory Access)


DMA - A technique for transferring data from main memory to a device without passing it through the CPU. It is a capability provided by some computer bus architectures that allows data to be sent directly from an attached device (such as a disk drive) to the memory on the computer's motherboard. Usually a specified portion of memory is designated as an area to be used for direct memory access. In the ISA bus standard, up to 16 megabytes of memory can be addressed for DMA; the EISA and Micro Channel Architecture standards allow access to the full range of memory addresses (assuming they are addressable with 32 bits). Peripheral Component Interconnect (PCI) accomplishes DMA by using a bus master, with the microprocessor "delegating" I/O control to the PCI controller. An alternative to DMA is the Programmed Input/Output (PIO) interface, in which all data transmitted between devices goes through the processor. A newer protocol for the ATA/IDE interface is Ultra DMA, which provides a burst data transfer rate of up to 33 MB (megabytes) per second. Hard drives that come with Ultra DMA/33 also support PIO modes 1, 3, and 4, and multiword DMA mode 2 (at 16.6 megabytes per second).

DRAM (Dynamic RAM)


Manufacturer: IBM    Year Introduced: 1966    Burst Timing: 5-5-5-5    Voltage: +5v,-5v,+12v
Speed: 35-200ns    Frequency: 4.77-40MHz    Pins: SOJ (20,24,26)    Bandwidth:

Dynamic random access Memory (DRAM) is the most common kind of random access Memory (RAM) for personal computers and workstations. Memory is the network of electrically-charged points in which a computer stores quickly accessible data in the form of 0s and 1s. Random access means that the PC processor can access any part of the Memory or data storage space directly rather than having to proceed sequentially from some starting place. DRAM is dynamic in that, unlike static RAM (SRAM), it needs to have its storage cells refreshed or given a new electronic charge every few milliseconds. Static RAM does not need refreshing because it operates on the principle of moving current that is switched in one of two directions rather than a storage cell that holds a charge in place. Static RAM is generally used for cache Memory, which can be accessed more quickly than DRAM.

DRDRAM (Direct Rambus DRAM)


Manufacturer: Rambus    Year Introduced: 1999    Burst Timing: CL=2    Voltage: 2.5v
Speed:    Frequency: 800MHz    Pins: 184    Bandwidth: 1.6GBps

Direct Rambus DRAM is a totally new RAM architecture, complete with bus mastering (the Rambus Channel Master) and a new pathway (the Rambus Channel) between memory devices (the Rambus Channel Slaves). A single Rambus Channel has the potential to reach 500 MBps in burst mode, a 20-fold increase over conventional DRAM. DRDRAM is often called PC800, referring to its 800 MHz effective data rate (double the Pentium 4's 400 MHz bus).

ECC (Error Correction Code)


Error Correction Code modules are an advanced form of Parity detection often used in servers and critical data applications. ECC modules use multiple Parity bits per byte (usually 3) to detect double-bit errors. They also will correct single-bit errors without creating an error message. Some systems that support ECC can use a regular Parity module by using the Parity bits to make up the ECC code. However, a Parity system cannot use a true ECC module.

EDO DRAM (Extended Data Output DRAM)


Manufacturer: many    Year Introduced: 1995    Burst Timing: 5-2-2-2    Voltage: 3.3-5.0v
Speed: 50-70ns    Frequency: 33-75MHz    Pins: 72    Bandwidth: 266 MBps

Extended Data Output DRAM is an improvement over the Fast Page Mode design, used in non-parity configurations in Pentium-class machines or higher. If supported by your motherboard, EDO shortens the read cycle between the memory and the Central Processing Unit, thereby dramatically increasing throughput; EDO chips allow the CPU to access memory 10 to 20 percent faster. EDO DRAMs hold the data valid even after the signal that "strobes" the column address goes inactive. This allows faster CPUs to manage time more efficiently: while the EDO DRAM is retrieving an instruction for the microprocessor, the CPU can perform other tasks without concern that the data will become invalid. Do not use EDO in systems that do not support it, and do not mix EDO with Fast Page Mode memory, as serious problems can result.

EDRAM (Enhanced DRAM)


Manufacturer: NEC Corp. Year Introduced: 2001 Burst Timing: Voltage: 1.2v Speed: 15-35ns Frequency: 450 MHz Pins: Bandwidth:

EDRAM (enhanced dynamic random access memory) is dynamic random access memory (dynamic, or power-refreshed, RAM) that includes a small amount of static RAM (SRAM) inside a larger amount of DRAM, so that many memory accesses will be to the faster SRAM. EDRAM is sometimes used as L1 and L2 memory and, together with Enhanced Synchronous DRAM, is known as cached DRAM. Data that has been loaded into the SRAM part of the EDRAM can be accessed by the microprocessor in 15 ns (nanoseconds); if the data is not in the SRAM, it can be accessed in 35 ns from the DRAM part of the EDRAM.

ESD (Electrostatic Discharge)


Short for electrostatic discharge, the rapid discharge of static electricity from one conductor to another of a different potential. An electrostatic charge can damage the integrated circuits found in computer and communications equipment; ESD can easily destroy semiconductor products, even when the discharge is too small to be felt.
ESD Systems.com - Introduction to ESD: http://www.esdsystems.com/
Electrostatic Discharge Association: http://www.esda.org/
Measurement of ESD Events: http://www.credencetech.com/ESD-Meas.html

EMS (Expanded Memory Specification)


Expanded memory works by using some of the DOS reserved memory address space to access memory on the Intel memory board. The area of reserved address space used to access expanded memory is made up of 16 KB sections called pages. A minimum of four contiguous 16 KB pages are taken from the reserved DOS memory between 640 KB and 1 MB; typically, the address space used is between 768 KB and 896 KB. This 64 KB area is used as a mappable space and is known as the Expanded Memory Page Frame. Additional unused 16 KB pages found in the reserved memory area can also be assigned to expanded memory. Having expanded memory requires an Expanded Memory Manager (EMM.SYS for Intel memory boards) to "manage" the expanded memory; the driver associates (or maps) the pages of physical memory on the memory board to the pages in the reserved DOS area. Only programs written to use expanded memory can utilize it. Some examples of applications that use expanded memory are Lotus 123 version 2.x and WordPerfect 5.x.
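The arithmetic behind that layout is simple enough to sketch in a few lines of Python (the figures come from the paragraph above; this is an illustration, not an EMM driver):

KB = 1024
reserved_start, reserved_end = 768 * KB, 896 * KB   # typical window in the DOS reserved area
page_size = 16 * KB

pages_available = (reserved_end - reserved_start) // page_size
page_frame = 4 * page_size                          # four contiguous 16 KB pages

print(pages_available)          # 8 candidate 16 KB pages in the 768K-896K window
print(page_frame // KB, "KB")   # 64 KB Expanded Memory Page Frame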

ESDRAM (Enhanced Synchronous DRAM)


Manufacturer: Ramtron Year Introduced: Burst Timing: 2-1-1-1 Voltage: 3.3v Speed: 6.0 ns Frequency: 133-166MHz Pins: 168 Bandwidth:

Enhanced Synchronous DRAM, made by Enhanced Memory Systems, includes a small static RAM (SRAM) in the SDRAM chip. This means that many accesses will be served from the faster SRAM, and when the SRAM does not have the data, there is a wide bus between the SRAM and the SDRAM because they are on the same chip. ESDRAM is the synchronous version of Enhanced Memory Systems' EDRAM architecture. Both EDRAM and ESDRAM devices fall into the category of cached DRAM and are used mainly for L2 cache memory. ESDRAM is apparently competing with DDR SDRAM as a faster SDRAM chip for Socket 7 processors.

External dimensions: 133.35 mm x 29.41 mm x 3.67 mm

Evolution of Memory
Year  Type         Volts          Freq          Timing
1968  RAM          5.0v           4.77 MHz
1970  DRAM         +5v,-5v,+12v   4.77-40 MHz   5-5-5-5
1971  ROM          3.3-5.0v
1974  MRAM         0.0v
1983  SIMM         5.0v           33 MHz
1987  FPM DRAM     5.0v           16-66 MHz     5-3-3-3
1995  EDO DRAM     5.0v           33-75 MHz     5-2-2-2
1995  BEDO DRAM    5.0v           60-100 MHz    5-1-1-1
1996  SDRAM        3.3v           60-133 MHz    5-1-1-1
1996  SLDRAM       3.3v           800 MHz
1997  DIMM         3.3v           66 MHz        5-1-1-1
1997  PC66         3.3v           66 MHz        5-1-1-1
1998  PC100        3.3v           100 MHz       4-1-1-1
1999  DRDRAM       2.5v           800 MHz
1999  PC800 RDRAM  3.3v           800 MHz
1999  PC133        3.3v           133 MHz       2-2-2
2000  PC150        3.3v           133 MHz       2-3-2
2000  DDR SDRAM    2.5v           266 MHz       CL=2.5
2000  MicroDIMM    3.3v           100 MHz       2,3
2001  EDRAM        1.2v           450 MHz       15-35ns

FPM DRAM (Fast Page Mode DRAM)


Manufacturer: many    Year Introduced: 1987    Burst Timing: 5-3-3-3    Voltage: 5.0v
Speed: 50ns    Frequency: 16-66 MHz    Pins: 72/168    Bandwidth: 188.71 MBps

Fast Page Mode has traditionally been the most common DRAM. A "page" is the section of Memory available within a row address. Accessing Memory is like looking up information in a book. You choose the page, then FPM gets information from that page. FPM DRAMs need only to specify the row address once for accesses within the same page addresses. Successive accesses to the same page of Memory only require a column address to be selected, which saves time in accessing the Memory.

Metallurgy: Gold vs. Tin or Lead Contacts


For best contact reliability, you should match the contact material of your memory modules to that of the SIMM sockets on your motherboard. Mixing metal types may lead to contact corrosion, especially in high-humidity environments. Visually inspect the sockets: if they are gold, buy SIMMs with gold contacts; if they are tin, buy SIMMs with tin or tin-lead contacts. However, this is not always a critical issue, and either kind usually works. Most Pentium boards have tin contacts, and almost all SIMMs manufactured today use a tin or tin-lead alloy instead of gold. Gold can be deposited on the connector fingers of memory modules in two ways. Boards manufactured using cheaper processes use immersion gold plating, which gives a very thin layer of gold over the entire surface of the board; the actual gold thickness is very difficult to control in the immersion process, and depositing too much gold can cause the solder joints on the board to fail by becoming too brittle. Boards manufactured to the PC100 and PC133 specifications have thicker gold plating, deposited only on the connector fingers using an electroplating process. This process is more expensive but easier to control, and it guarantees that the proper amount of gold is present on the surface of the connector fingers.

Immersion plating [Metallurgy]: the process of applying a metallic coating to a part simply by immersing it in a solution. An immersion plating process is not the same as an electroless process. In an immersion process you have a galvanic displacement in which a less noble metal, for example copper or nickel, is displaced by gold; as soon as the copper or nickel surface is no longer exposed, the process stops. A true immersion deposit is limited in thickness and typically does not adhere extremely well to the substrate. An electroless plating process is essentially an autocatalytic process: plating continues once it has started, as long as the plating bath contains the proper components (metal ions, reducing agents, etc.), so in theory there is no limit to the thickness of metal that can be deposited.

Electroplating [Metallurgy]: the process of producing a metallic coating on a surface by electrodeposition, i.e., by the action of an electric current. The principle of electroplating is that the coating metal is deposited from an electrolyte (an aqueous acid or alkaline solution) onto the base, i.e., the metal to be coated. The latter forms the cathode (negative electrode); a low-voltage direct current is used, and the anode is gradually consumed. Electroplating is normally done with direct current; however, particularly with cyanide copper baths, improved smoothness and uniformity of the coating can be obtained by means of the so-called periodic-reverse process, in which the polarity is periodically reversed so that the metal is alternately plated and deplated.

Operation of the Memory Hierarchy


A memory hierarchy typically contains the local registers of the CPU at the lowest level and may contain, at succeeding levels, a small, very fast local random-access memory called a cache, a slower but still fast random-access memory, and a large but slow disk. The time to move data between levels in a memory hierarchy is typically a few CPU cycles at the cache level, tens of cycles at the level of a random-access memory, and hundreds of thousands of cycles at the disk level. A CPU that accesses random-access memory on every cycle may run at about a tenth of its maximum speed, and the situation can be dramatically worse if the CPU must access the disk frequently. Thus it is highly desirable to understand, for a given problem, how the number of data movements between levels in a hierarchy depends on the storage capacity of each memory unit in that hierarchy. The flowchart below shows how a high-performance system processes a memory request issued by the CPU.

TLB = Translation Lookaside Buffers, PA = Physical Address


Flowchart: John Morris, 1998

HMA (High Memory Area)


HMA refers to a specific way of accessing the first 64 KB of extended memory. Normally, for an 80286 to access extended memory, the processor has to switch into what is called "protected mode", make the extended memory access, and then be reset so that it returns to "real mode" (the mode that DOS runs under). This whole operation takes time. There is, however, a "back door" method to access the first 64K of extended memory without switching the processor into protected mode. What manages this special way of accessing the first 64K of extended memory is called an HMA provider (also sometimes called an A20 handler).
Device drivers that create the HMA in 80286-based systems:
1. HIMEM.SYS provided with DOS 5.0
2. QEXT.SYS from Quarterdeck Office Systems
3. MOVEM.MGR from Qualitas
Device drivers that create the HMA in i386-based systems:
1. HIMEM.SYS provided with DOS 5.0
2. QEMM386 from Quarterdeck Office Systems
3. 386MAX from Qualitas

Timeline of Microelectronics Technology Evolution


Another name for a chip, an IC (integrated circuit) is a small electronic device made out of a semiconductor material. The first integrated circuits were developed in the late 1950s by Jack Kilby of Texas Instruments and Robert Noyce of Fairchild Semiconductor. Integrated circuits are used in a variety of devices, including microprocessors, audio and video equipment, and automobiles. Integrated circuits are often classified by the number of transistors and other electronic components they contain:

First Transistor (1948) Bell Labs


1947 The point-contact bipolar transistor is invented by Bell Labs' Bardeen, Shockley, and Brattain.
1951 The junction field-effect transistor (JFET) is invented.
1952 Single-crystal silicon is fabricated.
1954 The oxide masking process is developed.

SSI (1960's) Small Scale Integration


Up to 20 gates per chip.
1960 Metal-Oxide-Silicon (MOS) transistor is invented.
1962 Transistor-Transistor Logic (TTL) is developed.
1963 Complementary Metal Oxide Silicon (CMOS) is invented.

MSI (Late 1960's) Medium Scale Integration


20-200 gates per chip.
1968 MOS memory circuits are introduced.

LSI (1970's) Large Scale Integration


200-5000 gates per chip.
1970 8-bit MOS calculator chips are introduced; 7 micrometer chip geometries.
1971 The first microprocessors are introduced.

VLSI (1980's) Very Large Scale Integration


Over 5000 gates per chip.
1981 Very High Speed Integrated Circuits (VHSIC): tens of thousands of gates per chip, 1.5 micrometer chip geometries.
1984 0.5 micrometer chip geometries.

ULSI (1990's) Ultra Large Scale Integration


Millions of transistors per chip.
1997 0.25 micrometer chip geometries.

IRAM, Intelligent RAM


Manufacturer: Year Introduced: Burst Timing: Voltage: Speed: Frequency: Pins: Bandwidth:

Intelligent RAM, or IRAM, merges processor and memory onto a single chip in order to lower memory latency and increase bandwidth. It is a research model for the next generation of DRAM and has been tested with Alpha 21164 processors. The reasoning behind placing a processor in DRAM, rather than increasing the on-processor SRAM, is that DRAM is approximately 25 to 50 times denser than the cache memory in a microprocessor. Merging a microprocessor and DRAM on the same chip provides some rather obvious opportunities in performance, energy efficiency, and cost: it affords a reduction in latency by a factor of 5 to 10, an increase in bandwidth by a factor of 50 to 100, and an advantage in energy efficiency by a factor of 2 to 4. Add to this a qualified cost saving as the result of removing superfluous memory and reducing board area. Although the figures above are estimates based on early testing and present technology, it would appear that IRAM holds a lot of promise.

Interleaving Memory
Memory interleaving is a way to get your machine to access memory banks simultaneously rather than sequentially. Interleaving allows a system to use multiple memory modules as one; it can only take place between identical memory modules. Theoretically, system performance is enhanced because read and write activity occurs nearly simultaneously across the multiple modules, in a similar fashion to hard drive striping. Many Apple Power Macintosh and clone systems support interleaving. Although many systems require more than one module at a time, most often they are NOT interleaving: the multiple modules are addressed as one large module, but they are not interleaved. These systems address memory with a data path that exceeds the width of the memory module, so the designers use older technology to conserve cost and enhance availability of parts.
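As a toy illustration of the idea (two-way interleaving with an assumed 8-byte word size; these parameters are not from the text), consecutive words alternate between two identical banks so that back-to-back accesses can overlap:

def bank_and_offset(address, banks=2, word_bytes=8):
    """Map a byte address to (bank number, offset within that bank) for word-interleaved banks."""
    word = address // word_bytes
    return word % banks, (word // banks) * word_bytes

for addr in range(0, 48, 8):
    bank, offset = bank_and_offset(addr)
    print("address", addr, "-> bank", bank, "offset", offset)
# Sequential addresses land on alternating banks: 0 -> bank 0, 8 -> bank 1, 16 -> bank 0, ...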

JEDEC (Joint Electron Device Engineering Council)


The JEDEC Solid State Technology Association (once known as the Joint Electron Device Engineering Council) is the semiconductor engineering standardization body of the Electronic Industries Alliance (EIA), a trade association that represents all areas of the electronics industry. JEDEC was originally created in 1960 as a joint activity between EIA and NEMA to cover the standardization of discrete semiconductor devices, and was expanded in 1970 to include integrated circuits. JEDEC does its work through its 48 committees and subcommittees, which are overseen by the JEDEC Board of Directors. Presently there are about 300 member companies in JEDEC, including both manufacturers and users of semiconductor components and others allied to the field. http://www.jedec.org/

L1 Cache
Pronounced cash, a special high-speed storage mechanism. It can be either a reserved section of main memory or an independent high-speed storage device. Two types of caching are commonly used in personal computers: memory caching and disk caching.

A memory cache, sometimes called a cache store or RAM cache, is a portion of memory made of high-speed static RAM (SRAM) instead of the slower and cheaper dynamic RAM (DRAM) used for main memory. Memory caching is effective because most programs access the same data or instructions over and over; by keeping as much of this information as possible in SRAM, the computer avoids accessing the slower DRAM. Some memory caches are built into the architecture of microprocessors. The Intel 80486 microprocessor, for example, contains an 8K memory cache, and the Pentium has a 16K cache. Such internal caches are often called Level 1 (L1) caches. Most modern PCs also come with external cache memory, called Level 2 (L2) caches. These caches sit between the CPU and the DRAM; like L1 caches, L2 caches are composed of SRAM, but they are much larger.

Disk caching works under the same principle as memory caching, but instead of using high-speed SRAM, a disk cache uses conventional main memory. The most recently accessed data from the disk (as well as adjacent sectors) is stored in a memory buffer. When a program needs to access data from the disk, it first checks the disk cache to see if the data is there. Disk caching can dramatically improve the performance of applications, because accessing a byte of data in RAM can be thousands of times faster than accessing a byte on a hard disk.

When data is found in the cache, it is called a cache hit, and the effectiveness of a cache is judged by its hit rate. Many cache systems use a technique known as smart caching, in which the system can recognize certain types of frequently used data. The strategies for determining which information should be kept in the cache constitute some of the more interesting problems in computer science.
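To see why the hit rate matters so much, here is a back-of-the-envelope Python sketch of the usual average-access-time estimate; the 10 ns and 60 ns latencies are made-up round numbers for illustration, not figures from the text:

def average_access_ns(hit_rate, cache_ns=10.0, memory_ns=60.0):
    """Expected access time when a fraction hit_rate of requests is served from the cache."""
    return hit_rate * cache_ns + (1.0 - hit_rate) * memory_ns

for hr in (0.50, 0.90, 0.99):
    print("hit rate", hr, "->", round(average_access_ns(hr), 1), "ns average")
# 0.5 -> 35.0, 0.9 -> 15.0, 0.99 -> 10.5: higher hit rates pull the average toward cache speed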

L2 Cache
Short for Level 2 cache, memory that is external to the microprocessor. In general, L2 cache memory, also called the secondary cache, resides on a separate chip from the microprocessor, although more and more microprocessors are incorporating L2 caches into their architectures.

L3 Cache
As more and more processors begin to include L2 Cache into their architectures, Level 3 Cache is now the name for the extra memory built into motherboards between the microprocessor and the main memory. Quite simply, what was once L2 Cache on motherboards now becomes L3 Cache when used with microprocessors containing the built-in memory.

Memory Latency
Based on experience with current prototype systems, a system for processing structured video requires on the order of 3 to 20 frames of memory. A single device targeted at broadcast resolution will require from 20 to 150 Mbits of memory, precluding the sole use of on-chip memory. Even if wafer-scale integration or multi-chip modules are used, the device memory would be physically separate from the processor, presenting a large access latency. This is a serious problem in any processing system: the speed of a processor is irrelevant if it is stalled waiting for data. This latency problem is greatly exacerbated by parallel processing, where it is more likely that the data being read is located remotely from the processor, and accesses may conflict with those made by other processors.

The typical approach to reducing latency is the use of a cache: a relatively small amount of lower-latency memory containing a copy of sections of the slower memory that the processor is using or likely to use. Although this approach can be very effective in single-processor systems, given typical application memory access patterns its performance on media processing is less than optimal. There are two reasons for this: the large amount of data being processed, and atypical data access patterns. The amount of data accessed by a media algorithm typically exceeds the size of a typical cache, diminishing the likelihood that desired data will be found in the cache. And while a cache improves performance not only by maintaining a local copy but also by prefetching data at addresses linearly adjacent to the requested one, this automatic prefetching can degrade performance when accessing data sparsely (with a step between samples greater than the size of a cache line).

When using the cache mechanism in a multiprocessor system, great care must be exercised to ensure the validity of the data in the cache and the memory. The overhead of maintaining coherence does not scale well, requiring either that all processors monitor all memory accesses, or that lists of data copies be maintained and used. Cache coherence schemes have been shown useful in systems of 2 to 4 processors.

International Memory Manufacturers Contact Information


Manufacturers include: Advantage Memory Corp.; Alliance; Corsair Microsystems, Inc.; Fujitsu Microelectronics, Inc.; Goldstar; Hitachi Data Systems; Hyundai Electronics America; IBM; Kensington Tech. Group; Kingston Tech. Co., Inc.; Micron Tech., Inc. (crucial.com); Mitsubishi; Motorola; Mushkin Inc.; Nanya; NEC USA, Inc.; Oki Data Americas, Inc.; Panasonic; PDP Systems, Inc.; PNY Technologies, Inc.; Rambus; Ramtron Intl. Corp.; Samsung; Sharp Corp.; Simple Tech.; Siemens; SpecTek; Texas Instruments; Toshiba America; Vanguard Microelectronics Ltd.; Viking Components, Inc.; VisionTek, Inc.; Mosel Vitelic, Corp.

Memory Controller
An essential component in any computer. Its function is to oversee the movement of data into and out of main memory. It also determines what type of data integrity checking, if any, is supported. The chipset supports the CPU. It usually contains several "controllers" which govern how information travels between the processor and other components in the system. Some systems have more than one chipset. The memory controller is part of the chipset, and this controller establishes the information flow between memory and the CPU.

Memory Size Specifications


                           1 MB     2 MB     4 MB    8 MB   16 MB   32 MB   64 MB
30-Pin SIMM, Non-Parity    1x8      2x8      4x8     8x8    16x8
30-Pin SIMM, Parity        1x9      2x9      4x9     8x9    16x9
72-Pin SIMM, Non-Parity    256x32   512x32   1x32    2x32   4x32    8x32    16x32
72-Pin SIMM, Parity/ECC    256x36   512x36   1x36    2x36   4x36    8x36    16x36
168-Pin DIMM, Non-Parity                             1x64   2x64    4x64    8x64
168-Pin DIMM, Parity/ECC                             1x72   2x72    4x72    8x72

D: This is the depth of the module, in millions. For each bit of width, there are this many megabits (not bytes) of storage. This number is usually 1, 2, 4 or 8; for smaller SIMMs it can be 256 or 512, in which case it represents the number of kilobits of depth instead of megabits. W: This is the width of the module in bits. Each SIMM or DIMM of a given type has the same width. This number is usually 8, 32 or 64 for non-parity modules, or 9, 36 or 72 for parity or ECC modules.

To find the size in megabytes of any module from its "DxW" specification: take the D and W numbers and multiply them together (if D is 256 or 512, use 0.25 or 0.5 instead). Then divide the product by 8 (for non-parity memory) or 9 (for parity). The result is the size in megabytes.
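That rule is easy to check in a couple of lines of Python; the example values below come from the table above, and the helper name is only for illustration:

def module_size_mb(depth, width):
    """Size in MB from a DxW spec: depth in megabits per bit of width (0.25/0.5 for 256/512)."""
    if depth in (256, 512):
        depth = depth / 1024               # kilobit depths on the smallest SIMMs
    divisor = 9 if width % 9 == 0 else 8   # 9, 36, 72 widths carry a parity/ECC bit per byte
    return depth * width / divisor

print(module_size_mb(4, 36))    # 4x36 72-pin parity SIMM  -> 16.0 MB
print(module_size_mb(8, 64))    # 8x64 168-pin DIMM        -> 64.0 MB
print(module_size_mb(256, 32))  # 256x32 72-pin SIMM       -> 1.0 MB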

MicroDIMM (Micro Dual In-line Memory Module)


Manufacturer: Hitachi    Year Introduced: 2000    CAS Latency: 2,3    Voltage: 3.3v
Speed: 7ns    Frequency: 100MHz    Pins: 144    Bandwidth: 2.1Gbs

144-pin MicroDIMMs are commonly found in sub-notebook computers. Each 144-pin MicroDIMM provides a 64-bit data path, so they are installed singly in 64-bit systems. 144-pin MicroDIMMs are available in PC100 SDRAM. When upgrading, be sure to match the memory technology that is already in your system.

External dimensions: 38.0 mm x 30.0 mm, x 3.8 mm

Operating Systems Minimum Memory Requirements


Windows 2000 OEM                        256 MB
Sun Solaris v2.5 (UNIX server)          128 MB
Dec Alpha OSF1 4.0d (UNIX server)       128 MB
Mac OS X (unix based)                   128 MB
SGI Irix 6.5 (UNIX server)               64 MB
Windows XP Home Edition                  64 MB
Windows XP Professional                  64 MB
Harris PowerMAX_OS 4.1                   32 MB
Hewlett Packard HP-UX 10.20              32 MB
IBM RS/6000 AIX 4.3 (UNIX server)        32 MB
Windows Millennium (ME)                  32 MB
Linux RedHat (UNIX clone)                32 MB
Mac OS 9.1                               32 MB
Windows NT OEM                           32 MB
Mac OS 8.6                               24 MB
Win-98 OEM                               24 MB
IBM OS/2                                 16 MB
Mac OS 7.6                                8 MB
Win-95 OEM                                8 MB
Amiga OS 3.5                              8 MB
Windows 3.11                              3 MB
DR DOS 7.03                             512 KB
MS DOS 6.22                             512 KB (ROM)
Windows CE                              300 KB
CP/M                                     20 KB

MRAM - Magnetic RAM


Manufacturer: IBM Year Introduced: 1974 Burst Timing: Voltage: 0.0v Speed: 10ns Frequency: Pins: Bandwidth:

There is great interest in the possibility of fabricating a dynamic random access memory (DRAM) which retains its memory even after removing power from the device. Such a nonvolatile memory has important military applications for missiles and satellites. Clearly such a device could also have important commercial applications if the non-volatility were accomplished without impacting other properties of the memory, notably density, read and write speed, and lifetime. IBM has recently begun a project with significant funding from DARPA to study the feasibility of a DRAM memory using memory cells based on magnetic tunnel junctions.

nDRAM, Next Generation DRAM


Manufacturer: Rambus/Intel Year Introduced: 2000 Burst Timing: Voltage: 3.3v Speed: Frequency: 1,600MHz Pins: Bandwidth: 1.6-3.0Gbps

To be released with the Intel Merced processor.
Expected transfer rates of 1.6 to 3.0 Gbps.
Being created by Intel and Rambus to be faster than RDRAM.
Availability projected for 1999-2001.

Packaging
The packaging is simply the entire physical makeup of a unit of memory, in most cases the SIMM. Since the memory chips themselves are far too small to handle individually, they must be combined and mounted on a medium that can be worked with and added to a system.

Package Terminology
BGA      Ball Grid Array
CBGA     Ceramic Ball Grid Array
CDIP     Glass-Sealed Ceramic Dual In-Line Pkg.
CDIP SB  Side-Braze Ceramic Dual In-Line Pkg.
CFP      Both Formed and Unformed CFP
CPGA     Ceramic Pin Grid Array
CZIP     Ceramic Zig-Zag Pkg.
DFP      Dual Flat Pkg.
DIMM     Dual-In-Line Memory Module
FC/CSP   Flip Chip / Chip Scale Pkg.
HLQFP    Thermally Enhanced Low Profile QFP
HQFP     Thermally Enhanced Quad Flat Pkg.
HSOP     Thermally Enhanced Small-Outline Pkg.
HSSOP    Thermally Enhanced Shrink Small-Outline Pkg.
HTQFP    Thermally Enhanced Thin Quad Flat Pack
HTSSOP   Thermally Enhanced Thin Shrink Small-Outline Pkg.
HVQFP    Thermally Enhanced Very Thin Quad Flat Pkg.
JLCC     J-Leaded Ceramic or Metal Chip Carrier
LCCC     Leadless Ceramic Chip Carrier
LGA      Land Grid Array
LPCC     Leadless Plastic Chip Carrier
LQFP     Low Profile Quad Flat Pack
MCM      Multi-Chip Module
MQFP     Metal Quad Flat Pkg.
OPTO     Light Sensor Pkg.
PDIP     Plastic Dual-In-Line Pkg.
PFM      Plastic Flange Mount Pkg.
PLCC     Plastic Leaded Chip Carrier
PPGA     Plastic Pin Grid Array
QFP      Quad Flat Pkg.
SDIP     Shrink Dual-In-Line Pkg.
SIMM     Single-In-Line Memory Module
SIP      Single-In-Line Pkg.
SODIMM   Small Outline Dual-In-Line Memory Module
SOJ      J-Leaded Small-Outline Pkg.
SOP      Small-Outline Pkg. (Japan)
SSOP     Shrink Small-Outline Pkg.
TFP      Triple Flat Pack
TO/SOT   Cylindrical Pkg.
TQFP     Thin Quad Flat Pkg.
TSOP     Thin Small-Outline Pkg.
TSSOP    Thin Shrink Small-Outline Pkg.
TVFLGA   Thin Very-Fine Land Grid Array
TVSOP    Very Thin Small-Outline Pkg.
VQFP     Very Thin Quad Flat Pkg.
VSOP     Very Small Outline Pkg.

When dealing with the "packaging" RAM comes in, you also must consider single- and double-sided modules, sometimes referred to as single- and double-RAS SIMMs/DIMMs. Here is a small chart which illustrates chip counts and basic memory configurations (* = double-sided):

Type          Pins   Chips   Double Sided   Size
1 x 32 SIMM    72      8                     4MB
2 x 32 SIMM    72     16         *           8MB
4 x 32 SIMM    72      8                    16MB
8 x 32 SIMM    72     16         *          32MB
16 x 32 SIMM   72      8                    64MB
32 x 32 SIMM   72     16         *         128MB
1 x 64 DIMM   168      8                     8MB
2 x 64 DIMM   168      8                    16MB
4 x 64 DIMM   168      8                    32MB
4 x 64 DIMM   168     16         *          32MB
8 x 64 DIMM   168     16         *          64MB
16 x 64 DIMM  168     16         *         128MB

Non-Parity vs. Parity


Parity: As data moves through your computer (e.g. from the Central Processing Unit to main memory), errors can occur, particularly in older 386 and 486 machines. Error detection was developed to notify the user of any data errors. A single bit is added to each byte of data, and this bit is responsible for checking the integrity of the other 8 bits while the byte is moved or stored. Once a single-bit error is detected, the user receives an error notification; however, parity checking only notifies, it does not correct a failed data bit. If your SIMM module has 3, 6, 9, 12, 18, or 36 chips, then it is more than likely parity.

Logic Parity: Also known as parity generators, or fake parity, these modules were produced by some manufacturers as a less expensive alternative to true parity. Fake-parity modules "fool" your system into thinking that parity checking is being done. This is accomplished by sending the parity signal that the machine looks for, rather than using an actual parity bit. With a module using fake parity, you will NOT be notified of a memory error, because memory is not really being checked. The result of these undetected errors can be corrupted files, wrong calculations, and even corruption of your hard disk. If you need quality modules, be cautious of suppliers with bargain prices; they may be substituting useless fake-parity modules.

Non-Parity: These modules are just like parity modules without the extra chips. There are no parity chips in Apple computers, later 486s, and most Pentium-class systems. The reason is simply that memory errors are rare, and a single-bit error will most likely be harmless. If your SIMM module has 2, 4, 8, 16, or 32 chips, then it is more than likely non-parity. Always match new memory with what is already in your system. To determine whether your system requires parity, count the number of small, black integrated circuit chips on one of your modules.
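A minimal Python sketch of the single parity bit described above (the even-parity convention and the helper name are choices made for the example):

def parity_bit(byte):
    """Even parity: 1 if the byte has an odd number of 1 bits, so stored bits always sum to even."""
    return bin(byte & 0xFF).count("1") % 2

data = 0b10110010
stored_parity = parity_bit(data)
corrupted = data ^ 0b00001000                    # a single bit flipped in transit
print(parity_bit(corrupted) != stored_parity)    # True: the single-bit error is detected
# Note: detection only - parity cannot tell which bit flipped, so it cannot correct it.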

PC66 SDRAM
(PC66 Synchronous Dynamic Random Access Memory)
Manufacturer: many    Year Introduced: 1997    Burst Timing: 5-1-1-1    Voltage: 3.3v
Speed:    Frequency: 66MHz    Pins: 168    Bandwidth:

Synchronous Dynamic Random Access Memory is the fastest DRAM technology available. It uses a clock to synchronize the signal input and output. The clock is coordinated with the Central Processing Unit clock so the two are in sync. The Central Processing Unit "knows" when operations are to be completed and data will become available, freeing the processor for other operations. The use of a clock allows for extremely fast consecutive read and write capability over FPM and EDO DRAMs. The clock is the main speed consideration with SDRAMs; therefore, SDRAMs are rated in megahertz (e.g. 66MHz or 100MHz). SDRAM increases the speed and performance of the system.

PC100 SDRAM (PC100 Synchronous DRAM)


Manufacturer: Intel    Year Introduced: 1998    Burst Timing: 4-1-1-1    Voltage: 3.3v
Speed: 8ns    Frequency: 100MHz    Pins: 168    Bandwidth: 800MBps

PC100 SDRAM is synchronous DRAM (SDRAM) that meets the PC100 specification from Intel. Intel created the specification to enable RAM manufacturers to make chips that would work with Intel's i440BX chipset, which was designed to achieve a 100MHz system bus speed. Ideally, PC100 SDRAM works at the 100MHz speed using a 4-1-1-1 access cycle. It is reported that PC100 SDRAM will improve performance by 10-15% in an Intel Socket 7 system (but not in a Pentium II, because its L2 cache runs at only half the processor speed). To develop this type of memory, a set of specifications was developed by Intel and endorsed by most of the memory manufacturers. Intel established a very precise set of specifications and guidelines to ensure compatibility between memory modules of any brand. The Intel PC100 compliance specifications ensure robust memory operation from suppliers that meet them, which is a great benefit to both the industry and end users. In addition to providing specs for PC100 devices and DIMMs, Intel has released module gerber (raw card) design files; vendors using these raw card design files will have much more consistency than those using their own. PC133 SDRAM is downward compatible with PC100 SDRAM memory.

PC133 SDRAM (PC133 Synchronous DRAM)


Manufacturer: Viking    Year Introduced: 1999    CAS Latency: 2-2-2    Voltage: 3.3v
Speed: 7.5ns    Frequency: 133MHz    Pins: 168    Bandwidth: 1.1 GBps

The PC133 specification details the requirements for SDRAM used on 133MHz Front Side Bus (FSB) motherboards. PC133 SDRAM can be used on 100MHz FSB motherboards but will not yield a performance advantage over PC100 memory at 100MHz. PC133 SDRAM is downward compatible with PC100 SDRAM memory. PC133 compliance by: Texas Instruments Inc.

External dimensions: 133.35mm x 34.93mm

PC150 SDRAM (PC150 Synchronous DRAM)


Manufacturer: Mushkin Year Introduced: 2000 CAS Latency: 2-3-2 Voltage: 3.3v Speed: 7.0ns Frequency: 150 MHz Pins: 168 Bandwidth: 1.28 GBps

With the memory bus quickly becoming the biggest bottleneck in today's computers, the guaranteed ability to run the memory bus at 150 MHz is important for improving the performance of 3D gaming, integrated UMA (Unified Memory Architecture) chipset-based computers, high-end desktop publishing, intensive graphic editing, servers, and workstations. This module is manufactured by Enhanced Memory Systems and is organized as 16Mx64 using 16 8x8-density chips. These memory modules contain an SPD (Serial Presence Detect) EEPROM programmed for 2-3-2 by Enhanced Memory Systems. The Serial Presence Detect also contains information on the module type, module organization, component speed, and other attributes relevant to your motherboard chipset's memory controller. The 150 MHz PC150 SDRAM Tiny BGA based DIMM delivers a peak bandwidth of up to 1.2 GBps, compared with the 0.8 GBps of PC100 SDRAM and 1.06 GBps of PC133 SDRAM, to satisfy the needs of performance-hungry users and die-hard overclockers alike.

PCMCIA
(Personal Computer Memory Card International Association)
http://www.pcmcia.org/ An international standards body and trade association with over 300 member companies, founded in 1989 to establish standards for integrated circuit cards and to promote interchangeability among mobile computers where ruggedness, low power, and small size are critical. As the needs of mobile computer users have changed, so has the PC Card Standard. By 1991, PCMCIA had defined an I/O interface for the same 68-pin connector initially used for memory cards. At the same time, the Socket Services Specification was added; it was soon followed by the Card Services Specification as developers realized that common software would be needed to enhance compatibility.

Types of PC Cards
The PCMCIA standards define three types of cards: Type I, Type II and Type III. All three have the same length and width and use the same 68-contact connector to attach to the computer; they differ only in thickness, as summarized in the table below.

The PCMCIA standards


Card      Description                                                            Interface  Substrate
Type I    Mainly used for various types of memory enhancements such as SRAM,     3.3mm      3.3mm
          flash memory, OTP, EPROM and EEPROM cards.
Type II   Typically used for memory enhancements and/or for I/O features such    3.3mm      5.0mm
          as data/fax modems, LANs and multimedia card applications.
Type III  Primarily used for memory enhancements and/or I/O that require more    3.3mm      10.5mm
          space for components, such as 1.8" hard disk drives.

Serial & Parallel Presence Detect


When a computer system boots up, it must "detect" the configuration of the memory modules in order to run properly. For a number of years, Parallel Presence Detect (PPD) was the traditional method of relaying the required information by using a number of resistors. PPD used a separate pin for each bit of information and was the method SIMMs and some DIMMs used to identify themselves. However, the parallel type of presence detect proved insufficiently flexible to support newer memory technologies. This led to JEDEC defining a new standard, Serial Presence Detect (SPD). SPD has been in use since the emergence of SDRAM technology. The Serial Presence Detect function is implemented using an 8-pin serial EEPROM chip. This stores information about the memory module's size, speed, voltage, drive strength, and number of row and column addresses; these parameters are read by the BIOS during POST. The SPD also contains manufacturer's data such as date codes and part numbers.
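To make the idea concrete, here is a minimal Python sketch that decodes a few fields from a raw SPD EEPROM dump. The byte offsets and type codes are assumptions based on the JEDEC SDRAM-era SPD layout and should be checked against the JEDEC documents listed in the next section; the dump itself is fabricated.

MEMORY_TYPES = {0x02: "EDO", 0x04: "SDRAM", 0x07: "DDR SDRAM"}  # assumed type codes

def decode_spd(spd: bytes) -> dict:
    """Pull a handful of module attributes out of a raw SPD dump (assumed offsets)."""
    return {
        "memory_type": MEMORY_TYPES.get(spd[2], f"unknown (0x{spd[2]:02X})"),
        "row_address_bits": spd[3],      # number of row addresses
        "column_address_bits": spd[4],   # number of column addresses
        "module_ranks": spd[5],          # physical banks on the module
    }

fake_spd = bytes([0x80, 0x08, 0x04, 0x0C, 0x0A, 0x01]) + bytes(250)  # fabricated dump
print(decode_spd(fake_spd))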

SPD Information

From JEDEC:
o SPD General Standard
o SPD - Table of Fundamental Memory Types (Appendix A)
o SPD - Table of Superset Memory Types (Appendix B)
o SPD - FPM and EDO DRAM (Appendix C)
o SPD - for SDRAM (Appendix E)
PC133 SPD Specifications from IBM:
o Complete original PC133 SPD specifications and comparison with the PC100 specifications.
PC133 SPD Specifications:
o PC133 Registered DIMM Specification by IBM. This is in revision stage at JEDEC.
o Unbuffered PC133 module specification by VIA. This has been proposed to JEDEC and is in the Preliminary stage.

RAM (Random Access Memory)


Manufacturer: Intel Year Introduced: 1968 Burst Timing: Voltage: 5.0v Speed: 200ns Frequency: 4.77MHz Pins: 16 Pin DIP Bandwidth:

Robert Dennard invented the single-transistor dynamic RAM (DRAM) cell, which he patented in 1968. RAM is a group of memory chips, typically of the dynamic RAM (DRAM) type, which functions as the computer's primary workspace. The "random" in RAM means that the contents of each byte can be accessed directly, without regard to the bytes before or after it. This is also true of other types of memory chips, including ROMs (Read-Only Memory) and PROMs (Programmable ROM). However, unlike non-volatile ROMs and PROMs, volatile RAM chips require power to maintain their data, which is why you must save your data to disk before you turn the computer off.

PC800 RDRAM (Rambus DRAM)


Manufacturer: Rambus Year Introduced: 1999 Latency: 1.25ns Voltage: 2.5v Speed: 8ns Frequency: 200-800MHz Pins: 184 ECC / 168 Non-ECC Bandwidth: 1.6 GBps

System memory bandwidth is more important now than ever before. With the increase in processor performance, multimedia and 3D graphics, high-bandwidth memory is essential to sustain system performance. The transition to Rambus DRAM (RDRAM) - with a memory performance gain of up to 300% over the current SDRAM technology - is nothing short of revolutionary! Memory compatibility for desktop PCs is one of the important features of RDRAM. It allows RDRAM RIMM modules to be interchangeable with full system compatibility. Validated RIMM modules can be placed in an RDRAM-compatible system without regard to memory size, speed, or organization. Mix and match RIMM modules from various suppliers and they will all work together within the system. No other technology available today can make that claim.

RIMM (Rambus In-line Memory Module): Rambus memory modules are called RIMMs. Because of the fast data transfer rate of these modules, a heat spreader (an aluminum plate covering) is used on each module. The heat spreader wicks heat away to keep the module from overheating.

CRIMM (Continuity Rambus In-line Memory Module): Since there cannot be any unused slots on the motherboard, a C-RIMM is a special module used to fill any unused RIMM slots. It is basically a RIMM module without any memory chips. Because the Rambus channel must be continuous, unused slots in a channel must be occupied by a CRIMM in order to complete the channel's connections. RDRAM must be used in pairs.

Refresh Rate
A memory module is made up of electrical cells. The refresh process recharges these cells, which are arranged on the chips in rows. The refresh rate refers to the number of rows that must be refreshed; the common refresh rates are 1K, 2K, 4K and 8K. Some specially designed DRAMs feature self-refresh technology, which enables the components to refresh themselves independently of the CPU or external refresh circuitry. Self refresh reduces power consumption and is commonly used in notebook and laptop computers.
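As a rough illustration of what these refresh counts mean in practice, the sketch below spreads the refresh work evenly over a 64 ms window (a typical figure, assumed here rather than taken from this document):

REFRESH_PERIOD_MS = 64.0  # assumed total refresh window for the whole array

for rows in (1024, 2048, 4096, 8192):  # the 1K/2K/4K/8K refresh rates above
    per_row_us = REFRESH_PERIOD_MS * 1000 / rows
    print(f"{rows // 1024}K refresh: one row every {per_row_us:.2f} us")
# e.g. 4K refresh: one row every 15.63 us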

Reserved Memory

Reserved memory is the address space between 640KB and 1MB and was designed to be reserved for system operations. The system BIOS ROM, video ROM, and the ROMs of some add-in cards reside in the reserved memory address space. Keep in mind that this is "address space", meaning that there does not have to be any physical memory located at all the addresses from 640K to 1 megabyte. The system BIOS ROM can be found at hex address F0000, or 960K. If you have a VGA or EGA video card you can almost always find video ROM at hex address C0000, or 768K. Between the video ROM and the system BIOS ROM, at hex address D0000 for example, there might not be anything at all. This "address space" is unused and available, and drivers such as Intel's EMM.SYS (Expanded Memory Manager) can "map" blocks of memory within the C000-DFFF address range (between 768K and 896K), which means that the memory EMM.SYS controls on an Intel memory board will appear within that address range.
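The hex addresses quoted above convert to kilobyte offsets as follows; this small Python sketch (added purely for illustration) just does the arithmetic:

for name, addr in [("Video ROM (C0000h)", 0xC0000),
                   ("Example free block (D0000h)", 0xD0000),
                   ("System BIOS ROM (F0000h)", 0xF0000)]:
    print(f"{name}: {addr // 1024}K")
# C0000h = 768K, D0000h = 832K, F0000h = 960K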

ROM (Read-Only Memory)


Manufacturer: Intel Year Introduced: 1971 Burst Timing: Voltage: 2.7v~5.5v Speed: 70ns Frequency: 4.77MHz Pins: 28-44 DIP Bandwidth:

Types of ROM
ROM: Read-only memory. The 1s and 0s are permanent and created at the silicon foundry. Only used for circuits needed in the tens of thousands and after many prototypes.
PROM: Programmable read-only memory. A hardware device connected to a PC can program the ROM, but it is write-once.
EPROM: Erasable programmable read-only memory. A PROM that can be erased by exposure to ultraviolet light.
EEPROM: Electrically erasable programmable read-only memory. A PROM that can be erased electrically by applying higher voltages.

ROM is semiconductor-based memory that contains instructions or data that can be read but not modified. (Generally, the term ROM often means any read-only device, as in CD-ROM for Compact Disc, Read-Only Memory.) Once data has been written onto a ROM chip, it cannot be removed and can only be read. Unlike main memory (RAM), ROM retains its contents even when the computer is turned off. ROM is referred to as being non-volatile, whereas RAM is volatile. Most personal computers contain a small amount of ROM that stores critical programs such as the program that boots the computer. In addition, ROMs are used extensively in calculators and in peripheral devices such as laser printers, whose fonts are often stored in ROMs.

Electrically Erasable Programmable Read-Only Memory (EEPROM): Machines with flash BIOS capability use a special type of BIOS ROM called an EEPROM, which stands for "Electrically Erasable Programmable Read-Only Memory". As you can probably tell by the name, this is a ROM that can be erased and re-written using a special program. This procedure is called flashing the BIOS, and a BIOS that can do this is called a flash BIOS. The advantages of this capability are obvious: no need to open the case to pull the chip, and much lower cost. EEPROM is similar to flash memory (sometimes called flash EEPROM). The principal difference is that EEPROM requires data to be written or erased one byte at a time, whereas flash memory allows data to be written or erased in blocks, usually 512 bytes in size. This makes flash memory much faster than traditional EEPROMs.
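The byte-versus-block difference explains the speed gap. This toy Python comparison (not a device driver; the 512-byte block size is the figure quoted above) counts the write operations needed to store 4 KB:

DATA_SIZE = 4 * 1024   # bytes to write
FLASH_BLOCK = 512      # flash write granularity mentioned above

eeprom_ops = DATA_SIZE                     # classic EEPROM: one operation per byte
flash_ops = -(-DATA_SIZE // FLASH_BLOCK)   # flash: one operation per block (ceiling division)

print(f"EEPROM byte writes: {eeprom_ops}")   # 4096
print(f"Flash block writes: {flash_ops}")    # 8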

SDRAM (Synchronous DRAM)


Manufacturer: Kingston Year Introduced: 1996 Burst Timing: 5-1-1-1 Voltage: 3.3v Speed: 6-10ns Frequency: 60-133MHz Pins: 100-278 Bandwidth: 2.1GBps

Short for Synchronous DRAM, a type of DRAM that can run at much higher clock speeds than conventional memory. SDRAM synchronizes itself with the CPU's bus and is capable of running at 133 MHz, about three times faster than conventional FPM RAM and about twice as fast as EDO DRAM and BEDO DRAM. SDRAM is replacing EDO DRAM in many newer computers.

High Density vs. Low Density SDRAM


High Density (32-bit) DRAM: will not work in systems that cannot accept 1.5GB or more maximum total system memory.
Low Density (16-bit) DRAM: for systems that max out at 512MB, 768MB, or 1024MB total system memory.

Cache Burst Timings at Various Bus Speeds


Bus Speed (MHz)        33       50       60       66       75       83       100      125
Async SRAM             2-1-1-1  3-2-2-2  3-2-2-2  3-2-2-2  3-2-2-2  3-2-2-2  3-2-2-2  3-2-2-2
Sync Burst SRAM        2-1-1-1  2-1-1-1  2-1-1-1  2-1-1-1  3-2-2-2  3-2-2-2  3-2-2-2  3-2-2-2
Pipelined Burst SRAM   3-1-1-1  3-1-1-1  3-1-1-1  3-1-1-1  3-1-1-1  3-1-1-1  3-1-1-1  3-1-1-1
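Each x-y-y-y pattern gives the clock cycles needed for the first access and the three following accesses of a four-beat burst, so the total burst time is the sum of the four numbers times the clock period. A small Python sketch, illustrative only:

def burst_time_ns(pattern, bus_mhz):
    cycles = sum(int(n) for n in pattern.split("-"))
    return cycles, cycles * 1000 / bus_mhz   # one clock period = 1000 / MHz nanoseconds

for pattern in ("2-1-1-1", "3-1-1-1", "3-2-2-2"):
    cycles, ns = burst_time_ns(pattern, 66)
    print(f"{pattern} at 66 MHz: {cycles} cycles = {ns:.1f} ns")
# 2-1-1-1: 5 cycles = 75.8 ns, 3-1-1-1: 6 cycles = 90.9 ns, 3-2-2-2: 9 cycles = 136.4 ns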

Timing Speed
The speed rating marked on each chip (10ns, 50ns, 60ns, 70ns, 80ns or 100ns) signifies how long it takes for the read/write to occur. A chip with a lower number is usually better because it is faster; however, early systems often need slower speeds. If you are upgrading memory in a computer, always match the speed of modules within the same bank. A nanosecond (ns or nsec) is 10^-9, one billionth of a second.

Important Timing Terms:
1. tRP - The time required to switch internal memory banks (RAS Precharge).
2. tRCD - The time required between /RAS (Row Address Select) and /CAS (Column Address Select) access.
3. tAC - The amount of time necessary to "prepare" for the next output in burst mode.
4. tCAC - The Column Access Time.
5. tCL (or CL) - CAS Latency.
6. tCLK - The length of a clock cycle.
7. RAS - Row Address Strobe or Row Address Select.
8. CAS - Column Address Strobe or Column Address Select.
9. Read Cycle Time - The time required to make data ready by the next clock cycle in burst mode.

Ultraviolet & EPROM's


This memory element was developed by Frohman-Bentchkowsky at Intel Corporation and was known as the Floating-Gate Avalanche-Injection MOS (FAMOS) transistor. It was essentially a silicon-gate MOS field effect transistor in which no connection was made to the gate. The gate was in fact electrically "floating" in an insulating layer of silicon dioxide. The devices have been fabricated in two structures: P-channel and N-channel. The P-channel devices were the first EPROMs available commercially, but many devices now use N-channel technology. N-channel MOS devices have the advantage of being able to function with a single power supply. By applying a sufficiently large potential difference between the source and drain, charge can be injected into the "floating" gate, which induces a charge in the substrate. The source-to-drain impedance changes and a "P-channel" or "N-channel" is created, depending upon the type of substrate. The presence or absence of conduction is the principle of data storage. Application of short-wave (254 nm) ultraviolet radiation causes the gate charge to leak away and restores the device to its original unprogrammed state. EPROM manufacturers specify a "nominal erasing energy" for their devices: the amount of UV energy required to erase the chip's memory. Erasing time can be calculated using the following formula:

Time (seconds) = (nominal erasing energy (W-sec/cm^2) x 1,000,000) / UV irradiance (uW/cm^2)

Most EPROMs have a nominal erasing energy of 15 W-sec/cm^2. Some chips, however, require as little as 6 or 10 W-sec/cm^2, or as much as 25 W-sec/cm^2, for complete erasure.
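The formula above is easy to apply directly. In this Python sketch the 12,000 uW/cm^2 lamp intensity is an assumed example value, not a figure from the text:

def erase_time_seconds(nominal_energy_ws_cm2, irradiance_uw_cm2):
    """Time = (nominal erasing energy in W-sec/cm^2 x 1,000,000) / irradiance in uW/cm^2."""
    return nominal_energy_ws_cm2 * 1_000_000 / irradiance_uw_cm2

seconds = erase_time_seconds(15, 12_000)   # typical 15 W-sec/cm^2 chip, assumed lamp intensity
print(f"{seconds:.0f} s (about {seconds / 60:.0f} minutes)")   # 1250 s, roughly 21 minutes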

UMB (Upper Memory Blocks)


Also known as High RAM and upper memory. Normally, expanded memory only requires a 64K page frame. However, many LIM 4.0-compatible expanded memory boards (such as Intel memory boards) automatically try to provide as large a page frame as possible within the C000-DFFF range, up to 128K (if there are no ROMs within this range also using some of this space). These additional pages of expanded memory in the page frame can be used to create Upper Memory Blocks: blocks of memory into which device drivers and TSRs can be loaded in order to free up more conventional memory for your applications. Basically, every extra page of expanded memory outside the first 64K of the page frame can be turned into a UMB. There are memory managers available that create these UMBs.

UMB providers in 8088, 8086, and 80286 based systems:
1. QRAM
2. MOVEM
These memory managers require that the available Upper Memory Blocks first be made mappable by the combination of a LIM 4.0 Expanded Memory Manager and a memory card, such as Intel's EMM.SYS device driver and an Intel Above Board. The Above Board Plus and Above Board Plus 8 can map extra 16K blocks of memory within the C000-DFFF address range beyond the first 64K (which has to remain intact so that applications that use expanded memory will still be able to use it).

UMB providers in i386 and i486 based systems:
1. EMM386.SYS provided with DOS 5.0
2. QEMM
3. 386MAX
These memory managers do not require a LIM 4.0 expanded memory board, because the i386 and i486 microprocessors have "mapping" capabilities built into them. 386 memory managers simply use extended memory to emulate expanded memory and also to provide UMBs. The arithmetic behind the page counts is sketched below.
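A quick back-of-the-envelope check of those numbers in Python (illustration only):

window_kb = (0xE0000 - 0xC0000) // 1024   # the C000-DFFF range covers 128K
page_frame_kb = 64                        # reserved for the EMS page frame
umb_page_kb = 16                          # mappable block size mentioned above

umb_kb = window_kb - page_frame_kb
print(f"{umb_kb}K available as UMBs = {umb_kb // umb_page_kb} pages of {umb_page_kb}K")
# 64K available as UMBs = 4 pages of 16K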

VCM (Virtual Channel Memory)


VCM is a memory architecture being developed by NEC. It allows different "blocks" of memory to interface separately with the controller, each with its own buffer. This way, different system tasks can be assigned their own virtual channels. Information related to one function does not share buffer space with other tasks being completed at the same time, making overall operations much more efficient.

VCPI (Virtual Control Program Interface)


Specification for managing memory beyond the first megabyte on PCs with 80386 or later processors. VCPI can allocate memory to an application as either expanded or extended memory, as required by the application design. The VCPI standard is supported by some memory managers and DOS extenders.

The Intel 80386 microprocessor has three fundamental operating modes. Real mode is provided for backward compatibility with existing 8086 programs. Protected mode allows programs written specifically for the 80386 to take advantage of the larger address space available. Virtual 8086 (V86) mode, like real mode, is used to run 8086 programs. However, V86 mode runs under the control of a protected mode operating environment. This provides certain advantages, chiefly the ability to enable the paging hardware of the 80386 and thus run multiple 8086 programs simultaneously, and also the ability to make arbitrary physical memory available within the V86 address space of an 8086 program. (The 80486 microprocessor provides the same architecture and operating modes as the 80386, so software written for the 80386 runs without modification on an 80486.)

These capabilities of the 80386 have spawned the creation of several new kinds of control programs that can run under MS-DOS on a 386 machine. To date, these programs fall into three basic categories: (1) protected mode run-time environments, which allow an application program to execute in protected mode under MS-DOS; (2) EMS emulators, which use V86 mode to make all the memory on the machine available to 8086 programs that use EMS (Lotus/Intel/Microsoft Expanded Memory Specification) memory; and (3) multitasking environments, which use V86 mode to multitask 8086 MS-DOS programs while still giving each 8086 program a full 640KB of physical memory (or more, if they use EMS memory). Such multitasking environments typically run in conjunction with a separate EMS emulator, or implement the EMS interface as part of the multitasker. For the remainder of this specification, the terms "DOS-Extender," "EMS emulator," and "multitasker," respectively, will be used to refer to these program categories.

Since these control programs run under MS-DOS, it is desirable to make them compatible with each other, so that users don't have to turn off one control program in order to run an application under another control program. The purpose of this document is to specify an interface that allows these classes of control programs to coexist successfully. This interface is called the Virtual Control Program Interface, or VCPI.

VCSDRAM (Virtual Channel SDRAM)


Manufacturer: NEC Year Introduced: 1997 Burst Timing: CAS2 Voltage: 3.3v Speed: Frequency: 143MHz Pins: 168 Bandwidth:

VCMemory is a memory core technology designed to improve memory data throughput efficiency and the initial latency of memories. Intended for use in next-generation memory systems, the VCMemory technology is an ideal memory for a wide range of applications such as multimedia PCs, game machines, Internet servers, and so on. Slow-core memories such as DRAM, flash memory and mask ROM can gain very significant performance improvements from VCMemory technology. VCSDRAM, the first product to utilize VCM, offers the following benefits to the end user:

o Multiplies the effective data throughput performance of a conventional DRAM core.
o Achieves close to full data bus bandwidth with low latency, with interleaved random-row, random-column Read/Write through the channels.
o Transparent DRAM bank operations through concurrent foreground and background operations.
o A very wide (256 bytes) internal data transfer bus between the channels and the memory core.
o The equivalent of tens of memory banks while using only a fraction of the Row Activate and Precharge frequency of a conventional DRAM core.

Operating System Memory Virtual Memory - Swap Files


Virtual memory provides applications with more RAM space than is physically installed in the computer. It is a technique that operating systems use to load more data into RAM than it can hold: part of the data is kept on disk and is constantly swapped back and forth into system memory - for instance, when you run a large application from CD. Whenever the operating system needs a part of memory that is currently not in physical memory, a VIRTUAL MEMORY MANAGER picks a part of physical RAM that hasn't been used recently, writes it to a SWAP FILE on the hard disk, and then reads the part of RAM that is needed from the swap file and stores it into real RAM in place of the old block. This is called SWAPPING. The blocks of RAM that are swapped around are called PAGES. Virtual memory allows for the multitasking (opening more than one program) that we do. When the amount of virtual memory in use greatly exceeds the amount of real memory, the operating system spends a lot of time swapping pages of RAM around, which greatly hampers performance. This is called THRASHING, and you can see it in your hard disk drive's LED. The hard disk is thousands of times slower than the system RAM, if not more. A system that is thrashing can be perceived as either a very slow system or one that has come to a halt. Hard disk access time is measured in thousandths of a second; RAM access time is measured in billionths of a second.
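Here is a minimal Python sketch of the swapping idea described above: a fixed number of physical page frames, with the least recently used page written out to a stand-in "swap file" (here just a dictionary) when a new page has to be brought in. This illustrates the concept only; it is not how any particular operating system implements it.

from collections import OrderedDict

class TinyPager:
    def __init__(self, frames: int):
        self.frames = frames
        self.ram = OrderedDict()   # page id -> data, ordered from least to most recently used
        self.swap = {}             # evicted pages end up here

    def access(self, page, data=None):
        if page in self.ram:
            self.ram.move_to_end(page)           # mark as recently used
        else:
            if len(self.ram) >= self.frames:     # RAM is full: swap out the LRU page
                victim, victim_data = self.ram.popitem(last=False)
                self.swap[victim] = victim_data
            self.ram[page] = self.swap.pop(page, data)   # bring the page in (or create it)
        return self.ram[page]

pager = TinyPager(frames=2)
pager.access("A", "a-data"); pager.access("B", "b-data")
pager.access("C", "c-data")    # evicts A; constant evictions like this are "thrashing"
print(pager.access("A"))       # A is swapped back in -> 'a-data'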

Volatile Memory
All RAM except the CMOS RAM used for the BIOS is volatile. Volatile memory loses its contents when the power is turned off. A computer's main memory, made up of dynamic RAM or static RAM chips, loses its contents immediately upon loss of power. Contrast this with ROM, which is non-volatile memory. Non-volatile memory, generally abbreviated as NVRAM, is an area of data storage where data is not lost when the power is turned off. Non-volatile memory areas include read-only memory (ROM) and flash memory.

VRAM (Video RAM)


Manufacturer: Kingston Year Introduced: Burst Timing: Voltage: 3.3v Speed: 60ns Frequency: Pins: 300-pin PLCC Bandwidth: 200MBps

Video RAM: DRAM with an on-board serial register/serial access memory designed for video applications. Graphics memory must work very quickly to update, or refresh, the screen (60-70 times a second) in order to prevent screen "flicker." At the same time, graphics memory must respond very quickly to the CPU or graphics controller in order to change the image on screen. With ordinary DRAM, the CRT and CPU must compete for a single data port, causing a bottleneck of data traffic. VRAM is a "dual-ported" memory that solves this problem by using two separate data ports. One port is dedicated to the CRT, for refreshing and updating the image on the screen. The second port is dedicated for use by the CPU or graphics controller, for changing the image data stored in memory. VRAM works much like a fast-food drive-through that uses two windows: after you place an order, you pay at one window, then drive up and get your food at the next window. This makes the process faster and more efficient. For a mode like 1024x768x256 with an 80 Hz refresh, the amount of bandwidth taken up by monitor refresh is approximately 1024 x 768 x 80 = 63 MB/s, which with overhead added works out to about 75 MB/s of video memory bandwidth.
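The refresh-bandwidth figure quoted above is straightforward to recompute; the one assumption made explicit here is that a 256-colour mode stores one byte per pixel:

width, height, bytes_per_pixel, refresh_hz = 1024, 768, 1, 80   # 1024x768x256 at 80 Hz
bandwidth = width * height * bytes_per_pixel * refresh_hz
print(f"{bandwidth / 1e6:.0f} MB/s just for monitor refresh")   # about 63 MB/s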

Freeing System Resources


By: Tom Anderson You've probably gotten the warning from Windows: "Ninety percent or more of your system resources are in use. Close programs now or your computer will explode." Your initial reaction, like mine, might have been, "I've got plenty of memory and hard drive space. What's going on?" Unfortunately, this somewhat cryptic message has very little to do with RAM and hard drive space. It refers to small areas of Windows memory that are used to keep track of open windows and other objects on the screen, like fonts, listboxes, timers, menus, and so forth. I learned far more about this subject than I wanted recently, when my system kept collapsing because the system resources kept disappearing. The Microsoft Web site has surprisingly little on the topic, but a search in newsgroups and another at Google yielded the information I needed.

I was surprised, though, to find an astonishing amount of misinformation as well. Far too many references, on supposedly well-informed sites, referred to system resources as RAM, and recommended closing applications to free up more RAM. While closing applications can help system resources, the problem is not with RAM.

What Causes the Problem


When Windows is running, what you see and do are built from a collection of objects that, together, make up the Windows experience. All these objects have to be tracked -- their location in memory, their status (open, checked, maximized, etc.), their menus, and much more -- so they can be displayed when necessary, closed, or restored without trampling over anything else in your Windows session. Windows 3.1 was notorious for running out of system resources. Windows 95 changed how these items are handled, and Windows 98 uses the same scheme.

Technical discussion
Briefly, Windows has five areas, or "heaps", that store information about system resources. User32.dll, which manages user interface functions like window creation and messages, uses a 16-bit heap and two 32-bit heaps. One of the 32-bit heaps stores a WND window structure for each window in the system. The other stores menus. The 16-bit heap stores message queues, window classes, etc. GDI32.dll, the graphical device interface, holds the functions for drawing graphic images and displaying text. It uses a 16-bit heap and a 32-bit heap.

Why You Have a Problem


The point here is that this space is limited, and everything you run on the computer uses some of it. When your system slows to a crawl, the odds are good that you have a system resources problem. My problems with system resources began with an update to Eudora Pro, my e-mail program. (This falls under the heading "Free Updates Aren't Always a Good Idea.") It took me a while to realize that I had upgraded Eudora about the time the problems started. When the light bulb lit, I went to a Eudora newsgroup to search for comments. Hint: it's easier to search for this kind of thing at Deja News. I quickly discovered complaints about the system resources used by Eudora, along with suggestions on increasing the resources available. A search on the Web turned up more suggestions. It quickly became clear that this is a common problem, since I found pages at Compaq, Adobe, PC Magazine, and other sites discussing how to cope.

In essence there are three steps in dealing with this problem. First you have to learn the extent of the problem. Windows 9x includes a tool, the Resource Meter, which shows the percentage of User, GDI, and System resources available (the system resource figure is a combination of the other two, although it always seems to match the lower of the other two numbers). Resource Meter is rsrcmtr.exe in the C:\Windows directory. You can open Windows Explorer, find the file, and drag it to your desktop to create a shortcut. In Win98, you can then drag it to the right side of the system tray to have it run every time you boot up. (Note that this uses some resources, too.) This will give you a constant check. The icon changes colors as the resources change: green if you've got plenty, yellow if you're getting low, and red if you're in the danger zone.

With this icon in the system tray, you can start checking which programs use the resources. Start with a clean boot, then float your mouse pointer over the resource meter and note the resources available. If you're under 75%, it's a good idea to start checking your system. Click on the Start button, then Run, and type in msconfig.exe. When the window opens, click on the Startup tab. You'll get a list of programs -- the checked items run every time you boot your computer and usually put an icon in the system tray. You should go through this list carefully and uncheck those you don't want or need. Many are added automatically when you install a program. Real Audio, for example, runs a program to help it start up quickly. WordPerfect adds several items to the tray when you install it. Virus programs and other utilities are usually running in the background. But, you might find programs you don't need, or that you no longer use. Sometimes you'll find duplicates, which are rarely necessary. If you uncheck these programs one at a time, then reboot, you can see what each individual program uses. Once you've cleared out this list, reboot and check your resources again.

Check Out Your Programs


Open a program, and when it's completely open, check the resources again. As you open more windows in the program, keep checking resources. You may be surprised at how much of your system is used. In my case, Eudora Pro was using 20% of my system resources, an astonishingly large amount. It turned out that Eudora opens the inbox, outbox and trash folders when it runs, and every message in these folders uses some resources. I have a bad habit of not cleaning out my inbox, which contributed to the problem. In the end, because the latest Eudora upgrade doesn't solve the resource problem and costs as much as some other programs, I tried alternatives, and changed to Calypso, an excellent shareware mail program which uses about 5% of resources.

Problems in Windows Itself


Finally, note that system resources are not always freed up when a program closes, due to apparently sloppy programming practices in Windows. Windows frequently puts off initializing things a program needs, like fonts, until they are requested. Once requested, those items stay available after the program is closed and the resources used by that item are not freed. Yes, this is the way they designed it. I suspect it's a crude hack to make programs appear to load faster. With 16-bit applications (anything that will run in Windows 3.1 or DOS), none of the system resources used are freed up until all such programs are closed. Microsoft says this is for compatibility purposes. My guess is that it's laziness. Rather than program in a way to tell if the resource is still needed, Microsoft leaves it open. Microsoft also says that closing a program before it has a chance to completely start up can strand resources and reduce the level of free system resources.

If you're finding that you have regular problems with low system resources, as I have, first check the programs that are always running; then check the programs you use regularly. Internet browsers are notorious for using and not freeing up system resources, but you might find that one browser works better for you than others. You can also try out replacements for your programs. For example, Calypso leaves me more free resources than Eudora Pro and offers better features in the bargain.

Copyright 2000 Tom Anderson, tanderson@sacpcug.org, Sacramento PC Users Group.

XMS (Extended Memory Specification)


Extended memory is RAM above the 1MB address boundary; XMS is the specification that governs how programs use it. Only 80286 (or greater) CPUs have the ability to address RAM above the 1MB boundary. The 80286 chip can access 16MB of total address space, and the i386 chip can access 4 gigabytes of total address space. The 8088 and 8086 microprocessors can only access 1MB of address space (640KB conventional and 384KB reserved memory). The DOS operating system was not designed to access RAM above the 1MB address boundary, which means that DOS programs do not automatically have access to extended memory. However, many DOS-based computers do have more than 1MB of RAM on the motherboard. Certain DOS applications (such as Windows 3.x) and the OS/2 operating system do use extended memory because they were specifically written to do so. No special drivers are needed to access extended memory, but only programs that are written to the XMS specification can utilize it. XMS was developed jointly by AST Research, Intel Corporation, Lotus Development, and Microsoft Corporation as a specification for using extended memory and DOS's high memory area.
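The address-space limits mentioned above follow directly from the width of each processor's address bus, as this small Python sketch shows (the bus widths are standard figures for these CPUs):

for cpu, bits in [("8086/8088", 20), ("80286", 24), ("80386/i486", 32)]:
    space = 2 ** bits
    if space >= 2 ** 30:
        print(f"{cpu}: {bits}-bit addresses -> {space // 2**30} GB")
    else:
        print(f"{cpu}: {bits}-bit addresses -> {space // 2**20} MB")
# 8086/8088: 1 MB, 80286: 16 MB, 80386/i486: 4 GB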
