Access Time
The average period of time (in nanoseconds) it takes for RAM to complete one access and begin another. Access time is composed of latency (the time it takes to initiate a request for data and prepare to access it) and transfer time. DRAM chips for personal computers have access times of 50 to 150 nanoseconds (billionths of a second); static RAM (SRAM) has access times as low as 10 nanoseconds. Ideally, the access time of memory should be fast enough to keep up with the CPU; if it is not, the CPU wastes a certain number of clock cycles waiting, which slows it down. A nanosecond (ns or nsec) is 10^-9 second, one billionth of a second.
Async SRAM
(Asynchronous Static Random Access Memory)
Manufacturer: many
Year Introduced: -
Burst Timing: 3-1-1-1
Voltage: 2.7-3.1v
Speed: 8.5ns
Frequency: 500 MHz
Pins: 85-ball PBGA
Bandwidth: -
Async SRAM has been with us since the days of the 386, and is still in place in the L2 cache of many PCs. It's called asynchronous because it's not in sync with the system clock, and therefore the CPU must wait for data requested from the L2 cache. The wait isn't as long as it is with DRAM, but it's still a wait.
Memory Bandwidth
Memory bandwidth refers to the rate at which data is transferred between the graphics processor and graphics RAM. Memory bandwidth limitations are one of the key bottlenecks that must be overcome to deliver truly realistic 3D environments. Truly stunning 3D requires high resolution and 32-bit color depth at high frame rates, with multi-pass texturing to reveal the detail found in real-world environments. The memory bandwidth is determined by the RAM chips used. SDRAM and SGRAM have a bus width of 64 bits; with a dual bus structure this becomes 128 bits (using two memory banks in parallel), and adding another bank in parallel raises it to 192 bits. The memory chips operate at a clock speed, usually 100 MHz, so the theoretical memory bandwidth is bus width x clock speed: 800 MB/s (64 bits), 1.6 GB/s (128 bits) and 2.4 GB/s (192 bits). Dual-bank SGRAM clocked at 200 MHz has 128 bits x 200 MHz = 3.2 GB/s of memory bandwidth.
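The bandwidth arithmetic above can be sketched in a few lines of Python (an illustrative helper, not from any vendor datasheet; the function name is ours):

```python
def memory_bandwidth_mbps(bus_width_bits, clock_mhz):
    """Theoretical bandwidth in MB/s: (bus width in bytes) x clock rate in MHz."""
    return (bus_width_bits // 8) * clock_mhz

# Figures from the text: 64-bit bus at 100 MHz -> 800 MB/s,
# dual-bank (128-bit) at 200 MHz -> 3,200 MB/s, triple-bank at 100 MHz -> 2,400 MB/s.
print(memory_bandwidth_mbps(64, 100))   # 800
print(memory_bandwidth_mbps(128, 200))  # 3200
print(memory_bandwidth_mbps(192, 100))  # 2400
```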
Burst Extended Data Output DRAM (Burst EDO, BEDO)
A variant of EDO DRAM in which read or write cycles are batched in bursts of four. The bursts wrap around on a four-address boundary, which means that only the two least significant bits of the CAS (Column Address Strobe) address are modified internally to produce each address of the burst sequence. Consequently, Burst EDO memory bus speeds range from 40 MHz to 66 MHz, well above the 33 MHz bus speeds that can be achieved with FPM (Fast Page Mode) or EDO DRAM. Burst EDO became available sometime before May 1995. BEDO DRAM can only stay synchronized with the CPU clock for short periods (bursts), and it cannot keep up with processors whose buses run faster than 66 MHz. Although BEDO arguably provides more improvement over EDO than EDO does over FPM, the standard lacked chipset support and never really caught on, losing out to Synchronous DRAM (SDRAM).
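The four-address wrap-around can be illustrated with a small Python sketch (the function name is ours; the two-LSB behavior follows the description above):

```python
def bedo_burst_addresses(start):
    """Generate a BEDO-style burst of four column addresses.
    The burst wraps on a four-address boundary: only the two
    least-significant address bits change during the burst."""
    base = start & ~0b11  # aligned start of the 4-address block
    return [base | ((start + i) & 0b11) for i in range(4)]

print(bedo_burst_addresses(28))  # [28, 29, 30, 31] - burst runs straight through
print(bedo_burst_addresses(30))  # [30, 31, 28, 29] - burst wraps within the block
```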
Buffered Memory
A term used to describe a memory module that contains buffers. The buffers re-drive the signals through the memory chips and allow the module to be built with a greater number of memory chips. Buffered and unbuffered RAM cannot be mixed. The design of the computer's memory controller dictates which type of RAM must be used.
Burst Mode
Bursting is a rapid data-transfer technique that automatically generates a block of data (a series of consecutive addresses) every time the processor requests a single address. The assumption is that the next data-address the processor will request will be sequential to the previous one. Bursting can be applied both to read operations (from memory) and write operations (to memory).
Cache DRAM (CDRAM) is a development that has a localized, on-chip cache with a wide internal bus composed of two sets of static data-transfer buffers between cache and DRAM. This architecture achieves concurrent operation of DRAM and SRAM synchronized with an external clock. Separate control and address input terminals for the two portions enable independent control of the DRAM and SRAM, so the system achieves continuous and concurrent operation of both. CDRAM can handle CPU, direct memory access (DMA) and video refresh at the same time, by utilizing half-time multiplexed interleaving through a high-speed video interface; the system transfers data from DRAM to SRAM during the CRT blanking period. Graphics memory, as well as main memory and cache memory, are unified in the CDRAM. As you can see, CDRAM can replace cache and main memory, and it has already been shown that a CDRAM-based system has a 10 to 50 percent performance advantage over a system based on a 256-Kbyte cache.
CAS Latency
A 6ns DIMM with CAS latency 2 is faster than a 6ns DIMM with CAS latency 3. A 100 MHz DIMM operating at CAS latency 2 performs comparably to a 125 MHz CL3 part, enhancing system performance. A CAS 2 DIMM operates 15-25% faster, depending on the system and application. It is recommended that Pentium II, Pentium III, and Athlon systems use CAS 2 DIMMs running at 100 or 133 MHz for extra speed and crash-free operation. What are RAS and CAS? They stand for Row Address Strobe (RAS) and Column Address Strobe (CAS). Each describes how long it takes to read a row or column of memory cells, known as the RAS/CAS latency. Each is described with a rating number, where lower numbers are better. The rating also depends on the front-side bus speed of your motherboard, so it may rise at higher speeds. A common marketing term attached to SDRAM modules is "CAS2" or "CAS3". Unfortunately, this is a bit misleading: they should be referred to as CL2 or CL3, since they refer to CAS latency timings (2 clocks vs. 3 clocks). The CAS latency of a chip is determined by the column access time (tCAC), which is the time it takes to transfer the data to the output buffers from the moment the /CAS line is activated. The rule for determining CAS latency timing is based on this relation: tCL x tCLK >= tCAC. In lay terms, the CAS latency times the system clock cycle length must be greater than or equal to the column access time. In other words, if tCLK is 10ns (100 MHz system clock) and tCAC is 20ns, the CAS latency (CL) can be 2; but if tCAC is 25ns, then CL must be 3. The SDRAM specification permits CAS latency values of 1, 2 or 3.
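The CL rule can be expressed as a short Python helper (an illustrative sketch of the tCL x tCLK >= tCAC relation above; names are ours):

```python
import math

def cas_latency(t_cac_ns, clock_mhz):
    """Smallest CL such that CL * tCLK >= tCAC.
    The SDRAM specification permits CL values of 1, 2 or 3."""
    t_clk_ns = 1000.0 / clock_mhz          # one clock period in ns
    cl = math.ceil(t_cac_ns / t_clk_ns)    # round up to whole clocks
    if cl not in (1, 2, 3):
        raise ValueError("outside the SDRAM specification")
    return cl

print(cas_latency(20, 100))  # 2  (tCLK = 10 ns, and 2 x 10 ns >= 20 ns)
print(cas_latency(25, 100))  # 3  (25 ns no longer fits in two clocks)
```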
CMOS RAM
(Complementary Metal Oxide Semiconductor Random Access Memory) CMOS (pronounced see-moss) stands for complementary metal-oxide semiconductor. This is a type of memory chip with very low power requirements, and in PCs it operates using small batteries. In PCs, CMOS is more specifically referred to as CMOS RAM. This is a tiny 64-byte region of memory that, thanks to the battery power, retains data when the PC is shut off. The function of CMOS RAM is to store information your computer needs when it boots up, such as hard disk types, keyboard and display type, chip set, and even the time and date. If the battery that powers your CMOS RAM dies, all this information is lost, and your PC will boot with the default information that shipped with the motherboard. In most cases, this means you'll have no access to your hard disks until you supply CMOS with the necessary information. Without access to your hard disks, you won't be able to boot your operating system. Fortunately, today's CMOS RAM is protected by nickel-cadmium batteries, which the computer's power supply recharges. Even so, it's an extremely good idea to keep a copy of all the information stored in CMOS, in case disaster strikes. The information stored in CMOS is required by your computer's Basic Input/Output System, or BIOS (pronounced bye-oss). Your PC contains several BIOSes--the video BIOS that interfaces your CPU and video card, for example--but the most fundamental is the system BIOS. The system BIOS is stored on a ROM (read-only memory) chip on the motherboard and is copied at boot time to a 64K segment of upper system RAM for faster access (RAM is faster than ROM). The role of the system BIOS is to boot the system, recognize the hardware devices, and locate and launch the operating system. Once the operating system is loaded, the BIOS then works with it to enable access to the hardware devices.
Compact Flash
A small flash memory module. The memory chips are enclosed in a plastic case and retain data after they are removed from the system. The most common uses for these are in pagers, handheld computers, cell phones, digital cameras, and audio players.
Type I: 36.4 x 42.8 x 3.3 mm, 50 pins, 8-160 MB capacity; transfer rate 8 MBps (burst); data read 1.3-1.5 MBps; data write (using 6-way interleave) 3 MBps
Type II: 36.4 x 42.8 x 5.0 mm, 50 pins, 160-512 MB capacity; transfer rate 8 MBps (burst); data read 1.3-1.5 MBps; data write (using 6-way interleave) 3 MBps
DDR SDRAM
Many other alternate methods of memory access are in development. One of the most promising is Double Data Rate (DDR) Synchronous DRAM. Like Synchronous DRAM before it, DDR SDRAM interleaves RAM accesses so that several can be performed simultaneously. DDR SDRAM transfers data twice for each tick of the memory bus clock, effectively doubling the memory bus speed. Currently, DDR RAM is used mainly in high-end graphics cards, but it will almost certainly make its way down to the main memory of the computer soon. Interleave: the process of taking data bits (singly or in bursts) alternately from two or more memory pages (on an SDRAM) or devices (on a memory card or subsystem).
PC4200: 533 MHz, 266 MHz FSB; (8 bytes) x (533 MHz) ~ 4,200 MBps, or 4.2 GBps
PC4000: 500 MHz, 250 MHz FSB; (8 bytes) x (500 MHz) = 4,000 MBps, or 4.0 GBps
PC3700: 466 MHz, 233 MHz FSB; (8 bytes) x (466 MHz) ~ 3,700 MBps, or 3.7 GBps
PC3500: 433 MHz, 217 MHz FSB; (8 bytes) x (433 MHz) ~ 3,500 MBps, or 3.5 GBps
PC3200: 400 MHz, 200 MHz FSB; (8 bytes) x (400 MHz) = 3,200 MBps, or 3.2 GBps
PC3000: 366 MHz, 183 MHz FSB; (8 bytes) x (366 MHz) ~ 3,000 MBps, or 3.0 GBps
PC2700: 333 MHz, 166 MHz FSB; (8 bytes) x (333 MHz) ~ 2,700 MBps, or 2.7 GBps
PC2400: 300 MHz, 150 MHz FSB; (8 bytes) x (300 MHz) = 2,400 MBps, or 2.4 GBps
PC2100: 266 MHz, 133 MHz FSB; (8 bytes) x (266 MHz) ~ 2,100 MBps, or 2.1 GBps
PC1600: 200 MHz, 100 MHz FSB; (8 bytes) x (200 MHz) = 1,600 MBps, or 1.6 GBps
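The pattern behind these ratings can be sketched in Python (an illustrative helper; note that marketing rounds some of the figures, as the PC2100 example shows):

```python
def ddr_module_rating(fsb_mhz, bus_bytes=8):
    """DDR transfers data on both clock edges, so the effective data
    rate is twice the FSB clock; peak bandwidth (MB/s) is the DDR
    'PC' number, give or take marketing rounding."""
    data_rate = 2 * fsb_mhz            # effective transfer rate, MT/s
    bandwidth = bus_bytes * data_rate  # peak bandwidth, MB/s
    return data_rate, bandwidth

print(ddr_module_rating(200))  # (400, 3200) -> sold as DDR400 / PC3200
print(ddr_module_rating(133))  # (266, 2128) -> rounded down and sold as PC2100
```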
DIMM (Dual In-Line Memory Module)
Memory-chip modules with 168 pins that connect the module to the PC motherboard. DIMMs are the most popular memory modules available today, and they support 64-bit data transfers. As operating-system memory demands increased, larger memory modules were required, yet motherboard space was ever more at a premium. To solve this problem, the 168-pin DIMM module (~5.375" x 1") was developed. These are installed singly in later Pentiums, Pentium Pros, and Power Macs, and are offered in non-parity Fast Page, EDO, ECC, or SDRAM modes; 3.3v or 5v; buffered or unbuffered; and 2-clock or 4-clock versions. Their capacities are 8MB, 16MB, 32MB, 64MB and 128MB. Choosing the right modules is critical, as most machines require specific types, sizes and upgrade configurations. The number of black components on a 184-pin DIMM may vary, but they always have 92 pins on the front and 92 on the back, for a total of 184. 184-pin DIMMs are approximately 5.375" long and 1.375" high, though heights may vary. While 184-pin and 168-pin DIMMs are approximately the same size, 184-pin DIMMs have only one notch within the row of pins.
All key locations are determined by their position relative to the center of the notch area. The left notch indicates whether the module uses a buffered or unbuffered assembly. The right notch indicates the voltage requirement. Currently the 5V and 3.3V assignments are the only settings used; however, it's expected that lower settings of 1.5-2.7V will be available in the future, assigned to the third voltage key setting.
DMA (Direct Memory Access)
The transfer of data between an attached device and memory without passing through the CPU. A capability provided by some computer bus architectures that allows data to be sent directly from an attached device (such as a disk drive) to the memory on the computer's motherboard. Usually a specified portion of memory is designated as an area to be used for direct memory access. In the ISA bus standard, up to 16 megabytes of memory can be addressed for DMA. The EISA and Micro Channel Architecture standards allow access to the full range of memory addresses (assuming they're addressable with 32 bits). Peripheral Component Interconnect (PCI) accomplishes DMA by using a bus master (with the microprocessor "delegating" I/O control to the PCI controller). An alternative to DMA is the Programmed Input/Output (PIO) interface, in which all data transmitted between devices goes through the processor. A newer protocol for the ATA/IDE interface is Ultra DMA, which provides a burst data transfer rate of up to 33 MB (megabytes) per second. Hard drives that come with Ultra DMA/33 also support PIO modes 1, 3, and 4, and multiword DMA mode 2 (at 16.6 megabytes per second).
Dynamic random access Memory (DRAM) is the most common kind of random access Memory (RAM) for personal computers and workstations. Memory is the network of electrically-charged points in which a computer stores quickly accessible data in the form of 0s and 1s. Random access means that the PC processor can access any part of the Memory or data storage space directly rather than having to proceed sequentially from some starting place. DRAM is dynamic in that, unlike static RAM (SRAM), it needs to have its storage cells refreshed or given a new electronic charge every few milliseconds. Static RAM does not need refreshing because it operates on the principle of moving current that is switched in one of two directions rather than a storage cell that holds a charge in place. Static RAM is generally used for cache Memory, which can be accessed more quickly than DRAM.
Direct Rambus DRAM: a totally new RAM architecture, complete with bus mastering (the Rambus Channel Master) and a new pathway (the Rambus Channel) between memory devices (the Rambus Channel Slaves). A single Rambus Channel has the potential to reach 500 MBps in burst mode, a 20-fold increase over conventional DRAM. DRDRAM is often called PC800, its 800MHz effective rate being double the Pentium 4's 400MHz bus.
Extended Data Output Dynamic Random Access Memory (DRAM) is an improvement over the Fast Page Mode design, used in non-parity configurations in Pentium-class machines or higher. If supported by your motherboard, EDO shortens the read cycle between memory and the CPU, dramatically increasing throughput; EDO chips allow the CPU to access memory 10 to 20 percent faster. EDO DRAMs hold the data valid even after the signal that "strobes" the column address goes inactive. This allows faster CPUs to manage time more efficiently: while the EDO DRAM is retrieving an instruction for the microprocessor, the CPU can perform other tasks without concern that the data will become invalid. Do not use EDO in systems that don't support it, or mix EDO with Fast Page Mode, as serious problems can result.
EDRAM (enhanced dynamic random access memory) is dynamic random access memory (dynamic, or power-refreshed, RAM) that includes a small amount of static RAM (SRAM) inside a larger amount of DRAM, so that many memory accesses will be to the faster SRAM. EDRAM is sometimes used as L1 and L2 cache memory and, together with Enhanced Synchronous Dynamic DRAM, is known as cached DRAM. Data that has been loaded into the SRAM part of the EDRAM can be accessed by the microprocessor in 15 ns (nanoseconds); if the data is not in the SRAM, it can be accessed in 35 ns.
Enhanced Synchronous DRAM, made by Enhanced Memory Systems, includes a small static RAM (SRAM) in the SDRAM chip, so that many accesses come from the faster SRAM. In case the SRAM doesn't have the data, there is a wide bus between the SRAM and the SDRAM, because they are on the same chip. ESDRAM is the synchronous version of Enhanced Memory Systems' EDRAM architecture. Both EDRAM and ESDRAM devices are in the category of cached DRAM and are used mainly for L2 cache memory.
Evolution of Memory
Year  Type         Volts          Freq        Timing
1968  RAM          5.0v           4.77MHz     -
1970  DRAM         +5v,-5v,+12v   4.77-40MHz  5-5-5-5
1971  ROM          3.3-5.0v       -           -
1974  MRAM         0.0v           -           -
1983  SIMM         5.0v           33MHz       -
1987  FPM DRAM     5.0v           16-66MHz    5-3-3-3
1995  EDO DRAM     5.0v           33-75MHz    5-2-2-2
1995  BEDO DRAM    5.0v           60-100MHz   5-1-1-1
1996  SDRAM        3.3v           60-133MHz   5-1-1-1
1996  SLDRAM       3.3v           800MHz      5-1-1-1
1997  DIMM         3.3v           66MHz       -
1997  PC66         3.3v           66MHz       5-1-1-1
1998  PC100        3.3v           100MHz      4-1-1-1
1999  DRDRAM       2.5v           800MHz      2-2-2
1999  PC800 RDRAM  3.3v           800MHz      2-3-2
1999  PC133        3.3v           133MHz      2,3
2000  PC150        3.3v           133MHz      -
2000  DDR SDRAM    2.5v           266MHz      CL=2.5
2000  MicroDIMM    3.3v           100MHz      -
2001  EDRAM        1.2v           450MHz      15-35ns
Fast Page Mode has traditionally been the most common DRAM. A "page" is the section of Memory available within a row address. Accessing Memory is like looking up information in a book. You choose the page, then FPM gets information from that page. FPM DRAMs need only to specify the row address once for accesses within the same page addresses. Successive accesses to the same page of Memory only require a column address to be selected, which saves time in accessing the Memory.
Visually inspect the sockets: if they are gold, buy SIMMs with gold contacts; if they are tin, buy SIMMs with tin-lead contacts. However, this is not always a critical issue, and either kind usually works. Most Pentium boards have tin contacts, and almost all SIMMs manufactured today use a tin-lead alloy instead of gold. Gold can be deposited on the connector fingers of memory modules in two ways. Boards manufactured using cheaper processes use immersion gold plating, which gives a very thin layer of gold over the entire surface of the board. The actual gold thickness is very difficult to control in the immersion process: depositing too much gold can cause the solder joints on the board to fail after becoming too brittle. Boards manufactured to the PC100 and PC133 specifications have thicker gold plating, deposited only on the connector fingers using an electroplating process. This process is more expensive, but it is easier to control and guarantees that the proper amount of gold is present on the surface of the connector fingers.
immersion plating [Metallurgy]. The process of applying a metallic coating to a part simply by immersing it in a solution; for instance, by electroless plating. An immersion plating process is not the same as an electroless process. In an immersion process you have a galvanic displacement in which a metal less noble, for example copper or nickel, is displaced by gold. As soon as the copper or nickel surface is no longer exposed, the process stops. A true immersion deposit is limited in thickness and typically does not adhere extremely well to the substrate. An electroless plating process is essentially an autocatalytic process. In a process of this type the plating process will continue once it has started and as long as the plating bath contains the proper components (metal ions, reducing agents, etc.). In theory, there is no limit to the thickness of metal that can be deposited.
Electroplating [Metallurgy]. The process of producing a metallic coating on a surface by electrodeposition - i.e., by the action of an electric current. The principle of electroplating is that the coating metal is deposited from an electrolyte - an aqueous acid or alkaline solution - onto the base, i.e., the metal to be coated. The latter forms the cathode (negative electrode). A low-voltage direct current is used; the anode is gradually consumed. Electroplating is normally done with direct current. However, particularly with cyanide copper baths, improved smoothness and uniformity of the coating can be obtained by means of the so-called periodic-reverse process, in which the polarity is periodically reversed, so that the metal is alternately plated and deplated.
Integrated circuits are used in virtually all electronic equipment today, including computers, communications equipment, and automobiles. They are often classified by the number of transistors and other electronic components they contain:
SSI (small-scale integration): up to 100 components
MSI (medium-scale integration): 100 to 3,000 components
LSI (large-scale integration): 3,000 to 100,000 components
VLSI (very-large-scale integration): 100,000 to 1,000,000 components
ULSI (ultra-large-scale integration): more than 1 million components
Intelligent DRAM (IRAM, sometimes IDRAM) merges processor and memory into a single chip in order to lower memory latency and increase bandwidth. It is a research model for the next generation of DRAM and has been evaluated against the Alpha 21164 processor. The reasoning behind placing a processor in DRAM, rather than increasing the on-processor SRAM, is that DRAM is approximately 25 to 50 times denser than the cache memory in a microprocessor. Merging a microprocessor and DRAM on the same chip provides some rather obvious opportunities in performance, energy efficiency, and cost: a reduction in latency by a factor of 5 to 10, an increase in bandwidth by a factor of 50 to 100, and an energy-efficiency advantage of a factor of 2 to 4. Add to this a qualified cost saving from removing superfluous memory and reducing board area. Although these figures are estimates based on early testing and present technology, IRAM appears to hold a lot of promise.
Interleaving Memory
Memory interleaving is a way to get your machine to access your memory banks simultaneously, rather than sequentially. Interleaving allows a system to use multiple memory
modules as one. Interleaving can only take place between identical memory modules. Theoretically, system performance is enhanced because read and write activity occurs nearly simultaneously across the multiple modules, in a similar fashion to hard-drive striping. Many Apple Power Macintosh and clone systems support interleaving. Although many systems require more than one module at a time, most often they are NOT interleaving: the multiple modules are addressed as one large module, but they're not interleaved. These systems address memory with a data path that exceeds the width of the memory module, so the designers use older technology to conserve cost and enhance the availability of parts.
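A minimal sketch of low-order interleaving (the bank-selection rule here is the usual address-modulo scheme; the parameter names are ours):

```python
def bank_for_address(addr, n_banks=2, word_bytes=8):
    """Low-order interleaving: consecutive words land in alternating
    banks, so a sequential access stream keeps every bank busy."""
    return (addr // word_bytes) % n_banks

# Sequential 8-byte accesses ping-pong between the two banks:
print([bank_for_address(a) for a in range(0, 64, 8)])  # [0, 1, 0, 1, 0, 1, 0, 1]
```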
L1 Cache
Pronounced cash, a special high-speed storage mechanism. It can be either a reserved section of main memory or an independent high-speed storage device. Two types of caching are commonly used in personal computers: memory caching and disk caching. A memory cache, sometimes called a cache store or RAM cache, is a portion of memory made of high-speed static RAM (SRAM) instead of the slower and cheaper dynamic RAM (DRAM) used for main memory. Memory caching is effective because most programs access the same data or instructions over and over. By keeping as much of this information as possible in SRAM, the computer avoids accessing the slower DRAM. Some memory caches are built into the architecture of microprocessors. The Intel 80486 microprocessor, for example, contains an 8K memory cache, and the Pentium has a 16K cache. Such internal caches are often called Level 1 (L1) caches. Most modern PCs also come with external cache memory, called Level 2 (L2) caches. These caches sit between the CPU and the DRAM. Like L1 caches, L2 caches are composed of SRAM, but they are much larger. Disk caching works under the same principle as memory caching, but instead of using high-speed SRAM, a disk cache uses conventional main memory. The most recently accessed data from the disk (as well as adjacent sectors) is stored in a memory buffer. When a program needs to access data from the disk, it first checks the disk cache to see if the data is there. Disk caching can dramatically improve the performance of applications, because accessing a byte of data in RAM can be thousands of times faster than accessing a byte on a hard disk. When data is found in the cache, it is called a cache hit, and the effectiveness of a cache is judged by its hit rate. Many cache systems use a technique known as smart caching, in which the system can recognize certain types of frequently used data.
The strategies for determining which information should be kept in the cache constitute some of the more interesting problems in computer science.
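The hit/miss behavior that makes caching effective can be sketched with a toy direct-mapped cache (an illustrative model only; real caches add associativity and replacement policies):

```python
class DirectMappedCache:
    """Toy direct-mapped cache: each block of memory maps to exactly
    one cache line, chosen by (block number mod number of lines)."""
    def __init__(self, n_lines=4, line_size=16):
        self.n_lines, self.line_size = n_lines, line_size
        self.tags = [None] * n_lines   # which block each line holds
        self.hits = self.misses = 0

    def access(self, addr):
        block = addr // self.line_size
        line = block % self.n_lines
        if self.tags[line] == block:
            self.hits += 1             # data already cached
            return True
        self.tags[line] = block        # fetch the line on a miss
        self.misses += 1
        return False

cache = DirectMappedCache()
# A loop re-reading the same few bytes hits after the first fetch:
for _ in range(3):
    for addr in (0, 4, 8):
        cache.access(addr)
print(cache.hits, cache.misses)  # 8 1
```

The first access to the line is a compulsory miss; every later access to the same 16-byte line is a hit, which is exactly why repeated access patterns cache so well.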
L2 Cache
Short for Level 2 cache, memory that is external to the microprocessor. In general, L2 cache memory, also called the secondary cache, resides on a separate chip from the microprocessor, although more and more microprocessors are incorporating L2 caches into their architectures.
L3 Cache
As more and more processors begin to include L2 Cache into their architectures, Level 3 Cache is now the name for the extra memory built into motherboards between the microprocessor and the main memory. Quite simply, what was once L2 Cache on motherboards now becomes L3 Cache when used with microprocessors containing the built-in memory.
Memory Latency
Based on experience with current prototype systems, a system for processing structured video requires on the order of 3 to 20 frames of memory. A single device targeted for broadcast resolution will require from 20 to 150 Mbits of memory, precluding the sole use of on-chip memory. Even if wafer-scale integration or multi-chip modules are used, the device memory would be physically separate from the processor, presenting a large access latency. This is a serious problem in any processing system -- the speed of a processor is irrelevant if it is stalled waiting for data. This latency problem is greatly exacerbated by parallel processing, where it is more likely that the data being read is located remotely from the processor, and accesses may conflict with those made by other processors. The typical approach to reducing latency is the use of a cache -- a relatively small amount of lower-latency memory containing a copy of sections of the slower memory that the processor is using or likely to use. Although this approach can be very effective in single-processor systems, given typical application memory access patterns its performance on media processing is less than optimal. There are two reasons for this: the large amount of data being processed, and atypical data access patterns. The amount of data accessed by a media algorithm typically exceeds the size of the cache, diminishing the likelihood that desired data will be found there. But a cache improves performance not only by maintaining a local copy, but also by prefetching data at addresses linearly adjacent to the requested one. Unfortunately, this automatic prefetching can degrade performance when accessing data sparsely (with a step between samples greater than the size of a cache line). When using the cache mechanism in a multiprocessor system, great care must be exercised to ensure the validity of the data in the cache and the memory.
The overhead of maintaining coherence does not scale well, requiring either that all processors monitor all memory accesses, or that lists of data copies be maintained and used. Cache-coherence schemes have been shown useful in systems of 2-4 processors.
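The sparse-access penalty described above can be quantified with a small sketch (assuming 32-byte cache lines and 4-byte elements for illustration; real line sizes vary):

```python
def line_utilization(stride_bytes, elem_bytes=4, line_bytes=32):
    """Fraction of each fetched cache line the program actually uses
    when reading one element every `stride_bytes`."""
    if stride_bytes <= elem_bytes:
        return 1.0  # sequential access touches the whole line
    elems_per_line = max(1, line_bytes // stride_bytes)
    return min(1.0, elems_per_line * elem_bytes / line_bytes)

print(line_utilization(4))   # 1.0   sequential: every fetched byte is used
print(line_utilization(8))   # 0.5   every other element: half the line wasted
print(line_utilization(32))  # 0.125 one 4-byte element per 32-byte line
```

This is why strided (sparse) access patterns defeat the prefetching that makes caches effective on sequential data.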
Manufacturer directory (sales and technical-support contacts): NPNX, Oki Data Americas, Inc., Panasonic, PDP Systems, Inc., PNY Technologies, Inc., Rambus, Ramtron Intl. Corp., Samsung, Sharp Corp., Simple Tech., Siemens, SpecTek, Texas Instruments (TI, TMS), Toshiba America (TC), Vanguard Microelectronics (VML Ltd), Viking Components, Inc. (VCMM, VCCF), VisionTek, Inc., Mosel Vitelic, Corp.
Memory Controller
An essential component in any computer. Its function is to oversee the movement of data into and out of main memory. It also determines what type of data integrity checking, if any, is supported. The chipset supports the CPU. It usually contains several "controllers" which govern how information travels between the processor and other components in the system. Some systems have more than one chipset. The memory controller is part of the chipset, and this controller establishes the information flow between memory and the CPU.
Common module organizations (D x W): 256x32, 512x32, 1x32, 2x32, 256x36, 512x36, 1x36, 2x36, 1x64, 1x72
D: This is the depth of the module in millions. For each bit of width, there are this many megabits (not bytes) of storage. This number is usually 1, 2, 4 or 8. For smaller SIMMs, it can be 256 or 512, in which case it represents the number of kilobits of depth instead of megabits. W: This is the width of the module in bits. Each SIMM or DIMM type has the same width. This number is usually 8, 32 or 64 for non-parity modules, or 9, 36 or 72 for parity or ECC modules.
To find the size in megabytes of any module from its "DxW" specification: multiply D by W (if D is 256 or 512, use 0.25 or 0.5 instead), then divide the product by 8 (for non-parity memory) or 9 (for parity). The result is the size in megabytes.
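The DxW rule can be expressed as a short Python helper (an illustrative sketch; the 256/512 -> 0.25/0.5 substitution follows the rule above):

```python
def module_size_mb(d, w):
    """Module size in MB from a "D x W" spec: depth times width in bits,
    divided by 8 (non-parity) or 9 (parity/ECC)."""
    depth = d / 1024 if d in (256, 512) else d   # 256/512 mean kilobits of depth
    divisor = 9 if w in (9, 36, 72) else 8       # parity widths carry an extra bit per byte
    return depth * w / divisor

print(module_size_mb(1, 32))    # 4.0   a 1x32 SIMM is 4 MB
print(module_size_mb(4, 36))    # 16.0  a 4x36 parity SIMM is 16 MB
print(module_size_mb(256, 36))  # 1.0   a 256x36 parity SIMM is 1 MB
```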
There is great interest in the possibility of fabricating a dynamic random access memory (DRAM) which retains its memory even after power is removed from the device. Such a nonvolatile memory has important military applications for missiles and satellites. Clearly such a device could also have important commercial applications if the non-volatility were accomplished without impacting other properties of the memory, notably density, read and write speed, and lifetime. IBM has recently begun a project, with significant funding from DARPA, to study the feasibility of a DRAM memory using memory cells based on magnetic tunnel junctions.
To be released with the Intel Merced processor. Expected transfer rates of 1.6 GBps to 3.0 GBps. Being created by Intel and Rambus to be faster than RDRAM. Available 1999-2001.
Packaging
The packaging is simply the entire makeup of a unit of memory, in most cases the SIMM. Since the memory chips themselves are far too small to handle individually, they must be combined on a medium that can be worked with and added to a system.
Package Terminology
BGA: Ball Grid Array
CBGA: Ceramic Ball Grid Array
CDIP: Glass-Sealed Ceramic Dual In-Line Pkg.
CDIP SB: Side-Braze Ceramic Dual In-Line Pkg.
CFP: Ceramic Flat Pkg. (both formed and unformed)
CPGA: Ceramic Pin Grid Array
CZIP: Ceramic Zig-Zag Pkg.
DFP: Dual Flat Pkg.
DIMM: Dual-In-Line Memory Module
FC/CSP: Flip Chip / Chip Scale Pkg.
HLQFP: Thermally Enhanced Low Profile QFP
HQFP: Thermally Enhanced Quad Flat Pkg.
HSOP: Thermally Enhanced Small-Outline Pkg.
HSSOP: Thermally Enhanced Shrink Small-Outline Pkg.
HTQFP: Thermally Enhanced Thin Quad Flat Pack
HTSSOP: Thermally Enhanced Thin Shrink Small-Outline Pkg.
HVQFP: Thermally Enhanced Very Thin Quad Flat Pkg.
JLCC: J-Leaded Ceramic or Metal Chip Carrier
LCCC: Leadless Ceramic Chip Carrier
LGA: Land Grid Array
LPCC: Leadless Plastic Chip Carrier
LQFP: Low Profile Quad Flat Pack
MCM: Multi-Chip Module
MQFP: Metal Quad Flat Pkg.
OPTO: Light Sensor Pkg.
PDIP: Plastic Dual-In-Line Pkg.
PFM: Plastic Flange Mount Pkg.
PLCC: Plastic Leaded Chip Carrier
PPGA: Plastic Pin Grid Array
QFP: Quad Flat Pkg.
SDIP: Shrink Dual-In-Line Pkg.
SIMM: Single-In-Line Memory Module
SIP: Single-In-Line Pkg.
SODIMM: Small Outline Dual-In-Line Memory Module
SOJ: J-Leaded Small-Outline Pkg.
SOP: Small-Outline Pkg. (Japan)
SSOP: Shrink Small-Outline Pkg.
TFP: Triple Flat Pack
TO/SOT: Cylindrical Pkg.
TQFP: Thin Quad Flat Pkg.
TSOP: Thin Small-Outline Pkg.
TSSOP: Thin Shrink Small-Outline Pkg.
TVFLGA: Thin Very-Fine Land Grid Array
TVSOP: Very Thin Small-Outline Pkg.
VQFP: Very Thin Quad Flat Pkg.
VSOP: Very Small Outline Pkg.
When dealing with the "packaging" RAM comes in, you must also consider single- and double-sided modules, sometimes referred to as single- and double-RAS SIMMs/DIMMs. Here is a small chart which illustrates chip counts and basic memory configurations (an asterisk marks a double-sided module):

Type            Pins   Chips   Size     Double-Sided
1 x 32 SIMM      72      8      4MB
2 x 32 SIMM      72     16      8MB     *
4 x 32 SIMM      72      8     16MB
8 x 32 SIMM      72     16     32MB     *
16 x 32 SIMM     72      8     64MB
32 x 32 SIMM     72     16    128MB     *
1 x 64 DIMM     168      8      8MB
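The module sizes in the chart follow directly from the "depth x width" naming: capacity is the depth (in mega-units) times the data width in bits, divided by 8 bits per byte. A minimal sketch:

```python
# Module capacity = depth (in "mega" units, 2**20) x data width (bits) / 8.
# The depths here match the "n x 32" / "n x 64" naming used in the chart.
def module_size_mb(depth_m, width_bits):
    """Capacity in megabytes of a 'depth x width' memory module."""
    return depth_m * width_bits // 8

for depth, width, kind in [(1, 32, "SIMM"), (4, 32, "SIMM"),
                           (32, 32, "SIMM"), (1, 64, "DIMM")]:
    print(f"{depth} x {width} {kind}: {module_size_mb(depth, width)}MB")
```

This reproduces the chart's figures, e.g. a 4 x 32 SIMM is 4M x 32 bits = 16MB.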
PC66 SDRAM
(PC66 Synchronous Dynamic Random Access Memory)
Manufacturer: many  Year Introduced: 1997  Burst Timing: 5-1-1-1  Frequency: 66MHz  Pins: 168  Voltage: 3.3v  Speed:  Bandwidth:
Synchronous Dynamic Random Access Memory is the fastest DRAM technology in common use. It uses a clock to synchronize signal input and output, and that clock is coordinated with the Central Processing Unit clock so both are in sync. The CPU therefore "knows" when operations will complete and when data will become available, freeing the processor for other operations. The use of a clock allows for extremely fast consecutive read and write capability compared with FPM and EDO DRAM. The clock is the main speed consideration with SDRAM; therefore, SDRAM is rated in megahertz (e.g. 66MHz or 100MHz) rather than nanoseconds. SDRAM increases the speed and performance of the system.
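A burst timing such as 5-1-1-1 gives the clock cycles consumed by each of the four transfers in a burst, so the whole burst takes the sum of those cycles divided by the clock frequency. A small illustration (the 4-1-1-1 figure is the PC100 access cycle mentioned later in this document):

```python
# Burst timing like 5-1-1-1: cycles consumed by each of four transfers.
# Total burst time = sum of cycles x clock period (1000/freq_mhz ns).
def burst_time_ns(timing, freq_mhz):
    cycles = sum(timing)
    return cycles * 1000.0 / freq_mhz

pc66 = burst_time_ns((5, 1, 1, 1), 66)    # PC66 SDRAM
pc100 = burst_time_ns((4, 1, 1, 1), 100)  # PC100 SDRAM
print(f"PC66 burst: {pc66:.0f} ns, PC100 burst: {pc100:.0f} ns")
```

This is a rough model of peak behavior only; real access times also depend on precharge and row activation.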
PC100 SDRAM is synchronous DRAM (SDRAM) that states that it meets the PC100 specification from Intel. Intel created the specification to enable RAM manufacturers to make chips that would work with Intel's i440BX chipset, which was designed to achieve a 100MHz system bus speed. Ideally, PC100 SDRAM works at the full 100MHz speed, using a 4-1-1-1 access cycle. It's reported that PC100 SDRAM will improve performance by 10-15% in an Intel Socket 7 system (but not in a Pentium II system, because its L2 cache runs at only half the processor speed). To develop this type of Memory, Intel established a very precise set of specifications and guidelines, endorsed by most Memory manufacturers, to ensure compatibility between Memory modules of any brand. The Intel PC100 compliance specifications ensure robust Memory operation from suppliers that meet them, which is a great benefit to both the industry and end users. In addition to providing specs for PC100 devices and DIMMs, Intel has released module gerber (raw card) design files. Vendors using these raw card design files will have much more consistency than those using their own.
The PC133 specification details the requirements for SDRAM used on 133MHz Front Side Bus (FSB) motherboards. PC133 SDRAM can be used on 100MHz FSB motherboards, but will not yield a performance advantage over PC100 memory at 100MHz. PC133 SDRAM is downward compatible with PC100 SDRAM memory. PC133 Compliance by: Texas Instruments Inc.
With the memory bus quickly becoming the biggest bottleneck in today's computers, the guaranteed ability to run the memory bus at 150 MHz is important for improving the performance of 3D gaming, integrated UMA (Unified Memory Architecture) chipset-based computers, high-end desktop publishing, intensive graphic editing, servers, and workstations. This module is manufactured by Enhanced Memory Systems and is organized as 16Mx64 using 16 8x8-density chips. These memory modules contain an SPD (Serial Presence Detect) EEPROM programmed for 2-3-2 timing by Enhanced Memory Systems. The Serial Presence Detect also contains information on the module type, module organization, component speed, and other attributes relevant to your motherboard chipset's memory controller. The 150 MHz PC150 SDRAM Tiny BGA-based DIMM delivers a peak bandwidth of up to 1.2 GBps, compared with 0.8 GBps for PC100 SDRAM and 1.06 GBps for PC133 SDRAM, to satisfy the needs of performance-hungry users and die-hard overclockers alike.
Many other alternative methods of Memory access are in development. One of the most promising is Double Data Rate (DDR) SDRAM. Like SDRAM before it, DDR SDRAM interleaves RAM accesses so that several RAM accesses can be performed simultaneously. DDR SDRAM transfers data twice for each tick of the Memory bus clock, once on the rising edge and once on the falling edge, effectively doubling the data rate of the memory bus. Currently, DDR RAM is used mainly in high-end graphics cards, but it will almost certainly make its way to the main Memory of the computer soon. Interleave: the process of taking data bits (singly or in bursts) alternately from two or more memory pages (on an SDRAM) or devices (on a memory card or subsystem).
Common DDR module ratings. Peak bandwidth is (8 bytes per transfer) x (effective data rate), rounded into the module name; for example, (8 Bytes) x (200 MHz) = 1,600 MBps or 1.6 GBps.

Name     Data Rate   Bandwidth   FSB
PC4200   533 MHz     4.2 GBps    266 MHz
PC4000   500 MHz     4.0 GBps    250 MHz
PC3700   466 MHz     3.7 GBps    233 MHz
PC3500   433 MHz     3.5 GBps    217 MHz
PC3200   400 MHz     3.2 GBps    200 MHz
PC3000   366 MHz     3.0 GBps    183 MHz
PC2700   333 MHz     2.7 GBps    166 MHz
PC2400   300 MHz     2.4 GBps    150 MHz
PC2100   266 MHz     2.1 GBps    133 MHz
PC1600   200 MHz     1.6 GBps    100 MHz
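Every row of the ratings list follows the same formula, 8 bytes per transfer times the effective data rate. A short sketch that reproduces a few of the figures:

```python
# DDR peak bandwidth: 8 bytes per transfer (64-bit bus) x effective data
# rate in MHz. The data rate is twice the FSB clock for DDR memory.
def ddr_bandwidth_mbps(data_rate_mhz):
    return 8 * data_rate_mhz  # result in MBps

for name, rate in [("PC1600", 200), ("PC2100", 266),
                   ("PC2700", 333), ("PC3200", 400)]:
    mbps = ddr_bandwidth_mbps(rate)
    print(f"{name}: {rate} MHz x 8 B = {mbps} MBps ({mbps / 1000:.1f} GBps)")
```

Note that module names round the result (e.g. 8 x 266 = 2,128 MBps is marketed as PC2100).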
PCMCIA
(Personal Computer Memory Card International Association)
http://www.pcmcia.org/ An international standards body and trade association with over 300 member companies, founded in 1989 to establish standards for Integrated Circuit cards and to promote interchangeability among mobile computers, where ruggedness, low power, and small size are critical. As the needs of mobile computer users have changed, so has the PC Card Standard. By 1991, PCMCIA had defined an I/O interface for the same 68-pin connector initially used for Memory cards. At the same time, the Socket Services Specification was added, soon followed by the Card Services Specification, as developers realized that common software would be needed to enhance compatibility.
Types of PC Cards
The PCMCIA standards define three types of cards: Type I, Type II and Type III. All have the same length and width and use the same 68-contact connector to attach to the computer; they differ only in thickness.
Type I (3.3mm thick): primarily used for memory enhancements.
Type II (5.0mm thick): typically used for memory enhancements and/or for I/O features such as data/fax modems, LANs and multimedia card applications.
Type III (10.5mm thick): used for memory and/or I/O devices whose components require more space, such as a 1.8" hard disk drive.
SPD Information
From JEDEC:
o SPD General Standard
o SPD - Table of Fundamental Memory Types (Appendix A)
o SPD - Table of Superset Memory Types (Appendix B)
o SPD - FPM and EDO DRAM (Appendix C)
o SPD - for SDRAM (Appendix E)
PC133 SPD Specifications from IBM:
o Complete original PC133 SPD specifications and comparison with PC100 specifications.
PC133 SPD Specifications:
o PC133 Registered DIMM Specification by IBM. This is in revision stage at JEDEC.
o Unbuffered PC133 module specifications by VIA. This has been proposed to JEDEC and is in the Preliminary stage.
Robert Dennard invented the one-transistor DRAM cell; the device was patented in 1968. A group of Memory chips, typically of the dynamic RAM (DRAM) type, functions as the computer's primary workspace. The "random" in RAM means that the contents of each byte can be accessed directly, without regard to the bytes before or after it. This is also true of other types of Memory chips, including ROMs (Read-Only Memory) and PROMs (Programmable ROM). However, unlike nonvolatile ROMs and PROMs, volatile RAM chips require power to maintain their data, which is why you must save your work to disk before you turn the computer off.
System Memory bandwidth is more important now than ever before. With the increase in processor performance, multimedia and 3D graphics, high-bandwidth Memory is essential to sustain system performance. The transition to Rambus DRAM (RDRAM), with a Memory performance gain of up to 300% over current SDRAM technology, is nothing short of revolutionary. Memory compatibility for desktop PCs is one of the important features of RDRAM: validated RIMM modules can be placed in an RDRAM-compatible system without regard to memory size, speed, or organization. Mix and match RIMM modules from various suppliers and they will all work together within the system. No other technology available today can make that claim. RIMM (Rambus In-line Memory Module): Rambus memory modules are called RIMMs. Because of the fast data transfer rate of these modules, a heat spreader (an aluminum plate covering) is used on each module; it wicks heat away to keep the module from overheating. CRIMM (Continuity Rambus In-line Memory Module): because the Rambus channel must be continuous, there cannot be any unused slots in a channel, so a C-RIMM, which is essentially a RIMM module without any memory chips, is used to fill any unused RIMM slots and complete the channel's connections. RDRAM must be used in pairs.
Refresh Rate
A memory module is made up of electrical cells. The refresh process recharges these cells, which are arranged on the chips in rows. The refresh rate refers to the number of rows that must be refreshed; the common refresh rates are 1K, 2K, 4K and 8K. Some specially designed DRAMs feature self-refresh technology, which enables the components to refresh on their own, independent of the CPU or external refresh circuitry. Because of its low power consumption, self refresh is commonly used in notebook and laptop computers.
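The refresh rate (the number of rows) determines how often each individual row gets recharged. As a rough illustration, assuming a typical 64 ms total refresh interval (a common JEDEC figure, not stated in the text):

```python
# Sketch: how often a single row must be refreshed. The 64 ms total
# refresh interval is an assumed typical figure, not from the text.
def per_row_refresh_us(rows, interval_ms=64):
    """Microseconds between consecutive row refreshes."""
    return interval_ms * 1000.0 / rows

for rows in (1024, 2048, 4096, 8192):  # the 1K/2K/4K/8K refresh rates
    print(f"{rows // 1024}K rows: one row every "
          f"{per_row_refresh_us(rows):.2f} us")
```

More rows to refresh in the same interval means the controller must issue refresh cycles more frequently.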
Reserved Memory
Reserved Memory is the address space between 640KB and 1MB, designed to be reserved for system operations. The system BIOS ROM, video ROM, and the ROMs of some add-in cards reside in this reserved address space. Keep in mind that this is "address space," meaning that there doesn't have to be physical memory at every address from 640KB to 1MB. The system BIOS ROM can be found at hex address F0000, or 960K. If you have a VGA or EGA video card you can almost always find video ROM at hex address C0000, or 768K. Between the video ROM and the system BIOS ROM, at hex address D0000 for example, there might not be anything at all. This address space is unused and available, and drivers such as Intel's EMM.SYS (Expanded Memory Manager) can "map" blocks of memory within the C000-DFFF segment range (between 768K and 896K), which means the memory EMM.SYS controls on an Intel memory board will appear within that address range.
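The hex-address/"K" correspondences above are just a units conversion: divide the linear address by 1024 to get the familiar "K" figure.

```python
# Reserved-memory landmarks: a linear address divided by 1024 gives the
# "K" figure quoted in the text (0xF0000 -> 960K, 0xC0000 -> 768K).
def addr_to_k(linear_addr):
    return linear_addr // 1024

landmarks = {"video ROM": 0xC0000,
             "example unused area": 0xD0000,
             "system BIOS ROM": 0xF0000}
for name, addr in landmarks.items():
    print(f"{name}: {addr:05X}h = {addr_to_k(addr)}K")
```

The same arithmetic shows the whole reserved region spans 1MB - 640KB = 384KB of address space.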
Types of ROM
ROM: Read-only memory. The 1s and 0s are permanent, created at the silicon foundry. Used only for circuits needed in the tens of thousands and after many prototypes.
PROM: Programmable read-only memory. A hardware device connected to a PC can program the ROM, but it is write-once.
EPROM: Erasable programmable read-only memory. A PROM that can be erased by exposure to ultraviolet light.
EEPROM: Electrically erasable programmable read-only memory. A PROM that can be erased electrically, using higher-than-normal voltages.
ROM is semiconductor-based Memory that contains instructions or data that can be read but not modified. (The term ROM is often applied to any read-only device, as in CD-ROM for Compact Disc, Read-Only Memory.) Once data has been written onto a ROM chip, it cannot be removed and can only be read. Unlike main Memory (RAM), ROM retains its contents even when the computer is turned off: ROM is referred to as nonvolatile, whereas RAM is volatile. Most personal computers contain a small amount of ROM that stores critical programs such as the program that boots the computer. In addition, ROMs are used extensively in calculators and peripheral devices such as laser printers, whose fonts are often stored in ROM.
Electrically Erasable Programmable Read-Only Memory (EEPROM): machines with flash BIOS capability use a special type of BIOS ROM called an EEPROM, which stands for "Electrically Erasable Programmable Read-Only Memory." As the name suggests, this is a ROM that can be erased and rewritten using a special program. The procedure is called flashing the BIOS, and a BIOS that can do this is called a flash BIOS. The advantages of this capability are obvious: no need to open the case to pull the chip, and much lower cost. EEPROM is similar to flash memory (sometimes called flash EEPROM). The principal difference is that EEPROM requires data to be written or erased one byte at a time, whereas flash memory writes and erases data in blocks, usually 512 bytes in size, which makes flash memory much faster.
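The byte-versus-block difference is easy to quantify: the number of write operations needed to update a region shrinks by the block size. A rough sketch:

```python
# Why block writes are faster: count the write operations needed to
# update a region byte-at-a-time (EEPROM) vs in 512-byte blocks (flash).
def write_ops(total_bytes, chunk_size):
    """Number of write operations, rounding up for a partial last chunk."""
    return -(-total_bytes // chunk_size)  # ceiling division

region = 64 * 1024  # update a hypothetical 64KB region
print("EEPROM (1-byte writes):  ", write_ops(region, 1))
print("Flash (512-byte blocks): ", write_ops(region, 512))
```

The 512-fold reduction in operations is only a first-order model; real devices also differ in per-operation latency.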
Short for Synchronous DRAM, a type of DRAM that can run at much higher clock speeds than conventional memory. SDRAM synchronizes itself with the CPU's bus and is capable of running at 133 MHz, about three times faster than conventional FPM RAM and about twice as fast as EDO DRAM and BEDO DRAM. SDRAM is replacing EDO DRAM in many newer computers.
Timing Speed
The speed rating marked on each chip (10ns, 50ns, 60ns, 70ns, 80ns or 100ns) signifies how long a read/write access takes. A chip with a lower number is usually better because it is faster; however, early systems often need slower speeds. If you are upgrading Memory in a computer, always match the speed of modules within the same bank. A nanosecond (ns or nsec) is 10^-9, one billionth of a second.
Important Timing Terms:
1. tRP - RAS Precharge: the time required to switch internal memory banks.
2. tRCD - the time required between /RAS (Row Address Select) and /CAS (Column Address Select) access.
3. tAC - the amount of time necessary to "prepare" for the next output in burst mode.
4. tCAC - the Column Access Time.
5. tCL (or CL) - CAS Latency.
6. tCLK - the length of a clock cycle.
7. RAS - Row Address Strobe or Row Address Select.
8. CAS - Column Address Strobe or Column Address Select.
9. Read Cycle Time - the time required to make data ready by the next clock cycle in burst mode.
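Because CAS Latency (tCL) is specified in clock cycles, its real-world delay depends on the clock: latency in nanoseconds is CL times the clock period tCLK. A minimal sketch:

```python
# CAS Latency expressed in time: CL clock cycles at the module's clock.
# tCLK (ns) = 1000 / frequency (MHz); latency (ns) = CL x tCLK.
def cas_latency_ns(cl, freq_mhz):
    t_clk = 1000.0 / freq_mhz
    return cl * t_clk

print(f"CL2 @ 100 MHz: {cas_latency_ns(2, 100):.1f} ns")
print(f"CL3 @ 133 MHz: {cas_latency_ns(3, 133):.1f} ns")
```

This is why a higher CL at a faster clock can still mean a similar absolute delay.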
The erasing time of an EPROM depends on the amount of UV energy required to erase the chip's memory. Erasing time can be calculated using the following formula:
Time (seconds) = (nominal erasing energy (W·sec/cm^2) x 1,000,000) / UV irradiance (uW/cm^2)
Most EPROMs have a nominal erasing energy of 15 W·sec/cm^2. Some chips, however, require as little as 6 or 10 W·sec/cm^2, or as much as 25 W·sec/cm^2, for complete erasure.
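Plugging numbers into the formula above (the 12,000 uW/cm^2 lamp irradiance is an assumed example value, not from the text):

```python
# EPROM erase time from the formula: time (s) =
# (nominal erasing energy in W.sec/cm^2 * 1,000,000) / irradiance in uW/cm^2.
# The factor of 1,000,000 converts the W.sec energy figure to uW.sec.
def erase_time_s(energy_w_sec_cm2, irradiance_uw_cm2):
    return energy_w_sec_cm2 * 1_000_000 / irradiance_uw_cm2

# A typical 15 W.sec/cm^2 chip under an assumed 12,000 uW/cm^2 UV lamp:
seconds = erase_time_s(15, 12_000)
print(f"Erase time: {seconds:.0f} s ({seconds / 60:.1f} minutes)")
```

A stronger lamp erases proportionally faster; a 25 W·sec/cm^2 chip under the same lamp would take two thirds longer again.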
emulator, or implement the EMS interface as part of the multitasker. For the remainder of this specification, the terms "DOS-Extender," "EMS emulator," and "multitasker," respectively, will be used to refer to these program categories. Since these control programs run under MS-DOS, it is desirable to make them compatible with each other, so that users don't have to turn off one control program in order to run an application under another. The purpose of this document is to specify an interface that allows these classes of control programs to coexist successfully. This interface is called the Virtual Control Program Interface, or VCPI.
VCMemory is a memory core technology designed to improve memory data throughput efficiency and the initial latency of memories. Intended for use in next-generation memory systems, VCMemory technology is ideal for a wide range of applications such as multimedia PCs, game machines, Internet servers, etc. Slow-core memories such as DRAM, Flash Memory and Mask ROM can gain very significant performance improvements from VCMemory technology. VCSDRAM, the first product to utilize VCM, offers the following benefits to the end user:
o Multiplies the effective data throughput performance of a conventional DRAM core.
o Achieves close to full data bus bandwidth, with low latency, for interleaved random-row, random-column Read/Write through the channels.
o Transparent DRAM bank operations through concurrent foreground and background operations.
o Very wide (256-byte) internal data transfer bus between channel and memory core.
o The equivalent of tens of memory banks while using only a fraction of the Row Activate and Precharge frequency of a conventional DRAM core.
This condition is called THRASHING, and you can see it happening by watching your hard disk drive's LED activity light. The hard disk is thousands of times slower than system RAM, if not more: hard disk access time is measured in thousandths of a second, while RAM access time is measured in billionths of a second. A system that is thrashing can be perceived as either very slow or as one that has come to a halt.
Volatile Memory
All RAM except the CMOS RAM used for the BIOS is volatile: memory that loses its contents when the power is turned off. A computer's main memory, made up of dynamic RAM or static RAM chips, loses its contents immediately upon loss of power. Contrast ROM, which is non-volatile memory. Non-volatile memory, generally abbreviated NVRAM, is an area of data storage whose contents are not lost when the power is turned off. Non-volatile memory includes read-only memory (ROM) and flash memory.
I was surprised, though, to find an astonishing amount of misinformation as well. Far too many references, on supposedly well-informed sites, referred to system resources as RAM and recommended closing applications to free up more RAM. While closing applications can help free system resources, the problem is not with RAM.
Technical discussion
Briefly, Windows has five areas, or "heaps," that store information about system resources. User32.dll, which manages user-interface functions like window creation and messages, uses a 16-bit heap and two 32-bit heaps. One of the 32-bit heaps stores a WND window structure for each window in the system; the other stores menus. The 16-bit heap stores message queues, window classes, etc. GDI32.dll, the graphical device interface, holds the functions for drawing graphic images and displaying text. It uses a 16-bit heap and a 32-bit heap.
no longer use. Sometimes you'll find duplicates, which are rarely necessary. If you uncheck these programs one at a time, then reboot, you can see what each individual program uses. Once you've cleared out this list, reboot and check your resources again.