System on Chip Interfaces for Low Power Design

Ebook · 655 pages

About this ebook

System on Chip Interfaces for Low Power Design provides a top-down understanding of interfaces available to SoC developers, not only the underlying protocols and architecture of each, but also how they interact and the tradeoffs involved. The book offers a common context to help understand the variety of available interfaces and make sense of technology from different vendors aligned with multiple standards. With particular emphasis on power as a factor, the authors explain how each interface performs in various usage scenarios and discuss their advantages and disadvantages. Readers learn to make educated decisions on what interfaces to use when designing systems and gain insight for innovating new/custom interfaces for a subsystem and their potential impact.

  • Provides a top-down guide to SoC interfaces for memory, multimedia, sensors, display, and communication
  • Explores the underlying protocols and architecture of each interface with multiple examples
  • Guides through competing standards and explains how different interfaces might interact or interfere with each other
  • Explains challenges in system design, validation, debugging and their impact on development
Language: English
Release date: November 17, 2015
ISBN: 9780128017906
Author

Sanjeeb Mishra

Sanjeeb Mishra is a Validation Architect with Intel. He has 15 years of experience ranging from hardware system design to SoC validation for telecom, consumer electronics, PC, and mobility products, and has specific expertise in SoC architecture for mobile devices.


    Book preview

    System on Chip Interfaces for Low Power Design - Sanjeeb Mishra


    Chapter 1

    SoC Design Fundamentals and Evolution

    Abstract

    This chapter discusses various system design integration methodologies along with their advantages and disadvantages. The chapter also explains the motivation for current system designs to move from system on board designs toward system on chip (SoC) designs. In discussing the motivation for the move toward SoC design, the chapter also discusses the typical chip design flow tradeoffs as well as how they influence the design choices.

    Keywords

    SoC design

    Hardware software co-design

    System on board

    System on chip

    System in a package

    ASIC

    System on programmable chip

    VLSI

    Moore’s law

    Chip design


    Introduction

    A system is a set of components that together achieve a meaningful purpose; what exactly constitutes a system depends on the context. A computer system has hardware components (the actual machinery) and software components, which drive the hardware to achieve that purpose. In a personal computer (commonly known as a PC), for example, all the electronics are hardware, and the operating system plus the applications you use are software.

    However, in the context of this book, by a system we mean the hardware part of the system alone. Figure 1.1 shows a rough block diagram of a system. The system in the diagram consists of a processing unit along with the input/output devices, memory, and storage components.

    ■ Figure 1.1 A system with memory, processor, input/output, and interconnects.

    Typical system components

    Roughly speaking, a typical system has a processor to do the real processing, a memory component to store data and code, some kind of input system to receive input, and a unit for output. In addition, an interconnection network connects the various components so that they work in a coherent manner. Note that, based on the usage model and applicability of the system, the various components may come in differing formats. For example, in a PC environment a keyboard and mouse may form the input subsystem, whereas in a tablet they may be replaced by a touch screen, and in a digital health monitoring system the input may come from a group of sensors. Beyond the bare essentials, there may be other subsystems like imaging, audio, and communication. In Chapter 3 we’ll talk about the various subsystems in general, including:

    1. Processor

    2. Memory

    3. Input and output

    4. Interconnects

    5. Domain-specific subsystems (camera, audio, communication, and so on)

    Categorization of computer systems

    Computer systems are broadly categorized as: general-purpose computer systems like personal computers, embedded systems like temperature control systems, and real-time systems like braking control systems. General-purpose computing systems are designed to be more flexible so that they can be used for different types of functions, whereas embedded systems are designed to address a specific function and are not meant to be generic. They are usually embedded as part of a larger device, and the user seldom directly interacts with such a system. Real-time systems are embedded systems with stringent response time requirements. All these computing systems are built using the same basic building blocks as shown in Figure 1.1. The flavor of the building blocks may vary from system to system because the design parameters and design requirements are different. For example, since embedded systems have a fixed known usage, the components can optimally be chosen to meet that functional requirement. The general-purpose system, on the other hand, might have to support a range of functionality and workloads, and therefore components need to be chosen keeping in mind the cost and user experience for the range of applications. Similarly the components for real-time systems need to be chosen such that they can meet the response time requirement.

    System Approach to Design

    Due to the tighter budgets on cost, power, and performance discussed in the previous section, the whole system is now conceived and designed as a complete unit rather than as an assembly of discrete pieces. This philosophy of system design brings the opportunity to optimize the system for a particular usage. There is no real change in system functionality; it is simply a different way of thinking about system design. We have already covered the typical system components; next we discuss hardware software co-design, followed by the various system design methodologies.

    Hardware software co-design

    As discussed earlier, a system in general has some hardware accompanied by some software to achieve the purpose. Generally, the system’s functionality as a whole is specified by the bare definition of the system. However, what part of the system should be dedicated hardware and what should be software is a decision made by the system architect and designer. The process of looking at the system as a whole and making decisions as to what becomes hardware and what becomes a software component is called hardware software co-design. Typically there are three factors that influence the decision:

    ■ Input, output, memory, and interconnects need to have hardware (electronics) to do the fundamental part expected from them. However, each of these blocks typically requires some processing; for example, touch data received from the input block needs to be processed to detect gestures, or the output data needs to be formatted specifically to suit the display. These processing parts, generally speaking, are part of the debate as to whether a dedicated hardware piece should do the processing or whether the general-purpose processor should be able to take care of the processing in part or full.

    ■ The second factor is the experience we want to deliver to the user. Depending on the amount of data to be processed, the expected output quality, the required response time to input, and so on, we decide the capability of the dedicated hardware to be used; this also helps decide which processing should be done by dedicated hardware and which by software running on the CPU. The assumption here is that hardware dedicated to specific processing will be faster and more efficient, so wherever we need faster processing we dedicate hardware to it; for example, graphics processing is handled by a graphics processing unit.

    ■ The third factor is optimality. Certain types of processing take far more time and energy on a general-purpose processing unit than on a specialized custom processor. Digital signal processing and floating point computation, for example, have dedicated hardware (a DSP unit and a floating point unit, respectively) because they are done optimally in hardware.

    System design methodologies

    Early on, the scale of integration was low, and to create a system it was necessary to put multiple chips, or integrated circuits (ICs), together. Today, with very-large-scale integration (VLSI), designing a system on a single chip is possible. So, like any other field, system design has evolved with the technological possibilities of each generation. Even though a system on a single chip is possible, however, no one design fits all. In certain cases the design is so complex that it may not fit on a single chip. Why? The transistor size and the die size (both limited by the process technology) cap the number of transistors that can be placed on a chip. If the functionality is complex and cannot be implemented within that limit, the design has to be split across multiple chips. There are also scalability and modularity reasons for not designing the whole system on one single chip. In the following sections we’ll discuss the three major system design approaches: system on board (SoB), system on chip (SoC), and system in a package (SiP) or on a package (SoP).

    System on board

    SoB stands for system on board. This is the earliest evolution of system design. Back in the 1970s and 1980s when a single chip could do only so much, the system was divided into multiple chips and all these chips were connected via external interconnect interfaces over a printed circuit board. SoB designs are still applicable today for large system designs and system designs in which disparate components need to be put together to work as a system.

    Advantages of SoB

    Although this is the earliest approach to system design, and in the early days the only feasible way to do anything meaningful, the SoB design approach is prevalent even today and has a lot of advantages over other design approaches:

    ■ It is quick and easy to do design space exploration with different components.

    ■ Proven (prevalidated and used) components can be put together easily.

    ■ Design complexity for individual chips is divided, so the risk of a bug is less.

    ■ The debugging of issues between two components is easier because the external interfaces can be probed easily.

    ■ Individual components can be designed, manufactured, and debugged separately.

    Disadvantages of SoB

    Since there is a move toward SiP/SoP and SoC, there must be some disadvantages to the classical SoB design approach; these can be summarized as follows:

    ■ Because of long connectivity/interconnects, the system consumes more power and provides less performance when compared to SoC/SiP/SoP designs.

    ■ Overall system cost is greater because of larger size, more materials required in manufacturing, higher integration cost, and so on.

    ■ Since individual components are made and validated separately, they cannot be customized or optimized to a particular system requirement or design.

    System on chip

    By definition, SoC means a complete system on a single chip with no auxiliary components outside it. The current trend is that semiconductor companies are moving toward SoC designs by integrating more and more components of the system onto a single chip. In practice, however, there is hardly a single example of a pure SoC design.

    Advantages of SoC

    Some of the advantages of SoC design are

    ■ lower system cost,

    ■ compact system size,

    ■ reduced system power consumption,

    ■ increased system performance, and

    ■ intellectual property blocks (IPs) used in the design can be customized and optimized.

    Disadvantages of SoC

    Even though it looks as though SoC design is very appealing, there are limitations, challenges, and reasons that not everything has moved to SoC. Some of the reasons are outlined below:

    ■ For big designs, fitting the whole logic on a single chip may not be possible.

    ■ Silicon manufacturing yield may not be as good because of the big die size required.

    ■ There can be IP library/resource provider and legal issues.

    ■ Chip integration: Components designed with different manufacturer processes need to be integrated and manufactured on one process technology.

    ■ Chip design verification is a challenge because of the huge monolithic design.

    ■ Chip validation is a challenge, also because of the monolithic design.
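    The yield concern above can be made concrete with the simple Poisson defect model, a standard first-order approximation (not from the text): the fraction of defect-free dies falls exponentially with die area, which is why one big monolithic SoC die yields worse than several small dies. The defect density below is an assumed figure.

```python
import math

# Poisson yield model: Y = exp(-D * A), where D is the defect density
# (defects per cm^2) and A is the die area (cm^2). Values are illustrative.

def poisson_yield(defect_density, die_area):
    """Fraction of dies expected to be defect-free."""
    return math.exp(-defect_density * die_area)

D = 0.5  # defects per cm^2 (assumed)

big_soc   = poisson_yield(D, die_area=4.0)  # one 4 cm^2 monolithic die
small_die = poisson_yield(D, die_area=1.0)  # one of four 1 cm^2 dies

print(f"4 cm^2 SoC die yield: {big_soc:.1%}")    # ~13.5%
print(f"1 cm^2 die yield:     {small_die:.1%}")  # ~60.7%
```

    Under these assumed numbers, splitting the design into four small dies more than quadruples the per-die yield, which is exactly the motivation behind the SiP approach described next.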

    System in a package

    SiP or SoP design is a practical alternative that counters the challenges posed by the SoC approach. In this approach the various chips are manufactured separately but packaged so that they sit very close together. This is also called a multichip module (MCM) or multichip package (MCP), and it is a middle ground between the SoB and SoC design methodologies.

    Advantages of SiP

    In this approach the chips are placed close enough to give compact size, reduced system power consumption, and increased system performance. In addition:

    ■ IPs based on different manufacturing technologies can be manufactured on their own technologies and packaged as a system.

    ■ Because of smaller sizes of individual chips, the manufacturing yield is better.

    ■ Development complexity is less because of division of design into multiple parts.

    ■ Big designs that cannot be manufactured as a single chip can be made as a SiP/SoP.

    Disadvantages of SiP

    Despite the fact that the different chips are placed very closely to minimize the transmission latency, the SiP design is less than optimal in terms of power and performance efficiency when compared to SoC designs. In addition, the packaging technology for the MCM/MCP system is more complex and more costly.

    In most of the literature, SiP and SoP are used interchangeably; however, they sometimes have different meanings: SiP refers to vertical stacking of multiple chips in a package, while SoP refers to planar placement of more than one chip in a package. Either can contain multiple components like a processor, main memory, and flash memory, along with the interconnects and auxiliary components like resistors and capacitors, on the same substrate.

    Application-specific integrated circuit

    An application-specific integrated circuit (ASIC) is a functional block that does a specific job and is not meant to be a general-purpose processing unit. ASIC designs are customized for a specific purpose or functionality and therefore yield much better performance than general-purpose processing units. ASIC design is not a competing methodology to SoC, but rather complementary: when designing an SoC, the designer decides which IPs or functional blocks to integrate. That decision is based on whether the SoC is meant to be general purpose, catering to various application needs (like a tablet SoC that can be used with different operating systems and then customized to serve as a router, digital TV, or GPS system), or for a specific purpose that caters to only one application (e.g., a GPS navigator).

    Advantages of ASIC

    So, one might think that it is always better to make a general-purpose SoC, which can cater to more than just one application. However, there are significant reasons to choose to make an ASIC over a general-purpose SoC:

    ■ Cost: When we make a general-purpose SoC and it is customized for a specific purpose, a good deal of logic is wasted because it is not used by that specific application. In the case of an ASIC, the system or SoC is made to suit; there is no redundant functionality, and therefore the die area of the system is smaller.

    ■ Validation: Validation of an ASIC is much easier than the general-purpose SoC. Why? Because when a vendor creates a general-purpose SoC and markets it as such, there are an infinite number of possibilities for which that SoC can be used, and therefore the vendor needs to validate to its specification to perfection. On the other hand, when one creates an ASIC, that piece is supposed to be used for that specific purpose. Therefore, the vendor can live with validation of the ASIC for that targeted application.

    ■ Optimization: Since it’s known that the ASIC will be used for the specific application, the design choices can be made more intelligently and optimally; for example, how much memory, how much should be memory throughput, how much of processing power is needed, and so on.

    Disadvantages of ASIC

    There are always tradeoffs. Of course, there are some disadvantages to the ASIC design approach:

    ■ Hardware design and manufacturing cycles are long and intensive in both effort and cost. So making an ASIC for every possible application is not going to be cost effective, unless we can guarantee that the volume of each such ASIC will be huge.

    ■ Customers want one system to be able to do multiple things, rather than carrying one device for GPS, one for phone calls, another for Internet browsing, another one for entertainment (media playback), and yet another one for imaging. Also, since there are common function blocks in each of these systems, it is much cheaper to make one system to do it all, when compared with amortized cost of all the different systems, each dedicated for one functionality.
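    The volume argument above can be sketched with simple cost amortization: the per-unit cost is the one-time engineering (NRE) cost spread over volume, plus the per-die manufacturing cost. All figures below are hypothetical, chosen only to show the shape of the tradeoff.

```python
# Per-unit cost = NRE / volume + unit manufacturing cost.
# The numbers below are illustrative assumptions, not industry data.

def per_unit_cost(nre, volume, unit_cost):
    return nre / volume + unit_cost

NRE = 50_000_000  # $50M one-time design/mask cost (assumed)

# At low volume the NRE dominates; at high volume it amortizes away.
low  = per_unit_cost(NRE, volume=100_000,    unit_cost=5.0)  # $505.00
high = per_unit_cost(NRE, volume=50_000_000, unit_cost=5.0)  # $6.00

print(f"100K units: ${low:.2f} each")
print(f"50M units:  ${high:.2f} each")
```

    At low volume the fixed cost swamps everything else, which is why a dedicated ASIC only makes sense when huge volume is assured.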

    System on programmable chip

    Because of the need for fast design space exploration, a new trend is fast gaining popularity: the system on a programmable chip, or SoPC. An SoPC solution has an embedded processor with on-chip peripherals and memory, along with a large number of gates in a field-programmable gate array (FPGA). The FPGA can be programmed with the design logic to emulate, and the system behavior, or functionality, can be verified.

    Advantage of SoPC

    SoPC designs are reconfigurable and can therefore be used for prototyping and validating the system. Bug fixes are much easier to make in this environment than in an SoC design, where one needs to spin another version of silicon to fix and verify a bug, which has a significant cost.

    Disadvantage of SoPC

    The SoPC design models the functionality in an FPGA, which is not as fast as real silicon. It is therefore best suited for system prototyping and validation, not for the final product.

    System design trends

    As we see from the preceding discussion, there are many approaches to system design, each more suitable for some scenarios than others. It should, however, be noted that the SoC approach, wherever possible, brings many advantages to a design, and therefore, not surprisingly, SoC is the trend. For various reasons, though, a pure SoC in ideal terms is not possible for a real system. Initially, only smaller embedded devices could be designed as SoCs due to the limited number of transistors on a chip; Moore’s law has since allowed enough transistors on a single chip to integrate even a general-purpose computing device. SoCs for general-purpose computing devices like tablets, netbooks, ultrabooks, and smartphones are possible these days. Given the advantages of SoC design, the level of integration in a chip is going to decide the fate of one corporation versus another.

    Hardware IC Design Fundamentals

    In the previous section we talked about the various system design approaches and the concept of hardware software co-design. Irrespective of the system design methodology, the computer system is made of ICs. Integrated circuit design is a complex pipeline of processes culminating in a manufactured IC. In this section we talk a little about that pipeline.

    The basic building block of any IC is the transistor, and multiple transistors are connected together in a specific way to implement the behavior we want from the system. Since the advent of the transistor just a few decades ago, transistor size has shrunk exponentially, and the number of transistors integrated on a chip has grown correspondingly. To put this in perspective, a chip in 1966 held about 10 transistors, compared to billions of transistors on the latest chips in 2014.
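    As a back-of-envelope check on that growth, a sketch assuming transistor counts double at a fixed cadence (Moore’s law is often quoted as a doubling every 18 to 24 months; both cadences and the projection function are illustrative assumptions):

```python
# Rough Moore's-law projection: transistor count doubling at a fixed cadence.
# The 1966 starting point (~10 transistors) comes from the text; the
# doubling periods are illustrative assumptions, not exact historical data.

def projected_transistors(start_count, start_year, end_year, doubling_years):
    """Project a transistor count assuming exponential doubling."""
    doublings = (end_year - start_year) / doubling_years
    return start_count * 2 ** doublings

# Doubling every 2 years: 10 * 2^24, i.e., hundreds of millions.
two_year = projected_transistors(10, 1966, 2014, 2.0)

# Doubling every 18 months: 10 * 2^32, i.e., tens of billions.
eighteen_month = projected_transistors(10, 1966, 2014, 1.5)

print(f"2-year doubling:   {two_year:.2e}")
print(f"18-month doubling: {eighteen_month:.2e}")
```

    Even this crude projection spans the range from hundreds of millions to tens of billions of transistors, consistent with the "about 10 in 1966, billions in 2014" figures above.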

    The minimum width of the transistor is defined by the manufacturing process technology. For academic purposes, the level of integration has been classified based on its evolution:

    1. SSI = small-scale integration (up to 10 gates)

    2. MSI = medium-scale integration (up to 1000 gates)

    3. LSI = large-scale integration (up to 10,000 gates)

    4. VLSI = very-large-scale integration (over 10,000 gates)
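    As a toy illustration, the classification above maps directly onto a small lookup. The function name is mine, and treating each upper bound as inclusive is my assumption; the gate-count boundaries follow the list above.

```python
# Classify an IC by gate count into the integration scales listed above.
# Boundaries follow the text: SSI up to 10 gates, MSI up to 1,000,
# LSI up to 10,000, and VLSI beyond that.

def integration_scale(gate_count):
    if gate_count <= 10:
        return "SSI"
    elif gate_count <= 1_000:
        return "MSI"
    elif gate_count <= 10_000:
        return "LSI"
    else:
        return "VLSI"

print(integration_scale(8))          # SSI
print(integration_scale(500))        # MSI
print(integration_scale(5_000))      # LSI
print(integration_scale(1_000_000))  # VLSI
```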

    Given the complexity of the designs today, the IC design follows a very detailed and established process from specification to manufacturing the IC. Figure 1.2 illustrates the process.

    ■ Figure 1.2 High-level flow of chip design.

    Chip Design Tradeoff

    Tradeoffs are a way of life, and the tradeoff between cost and performance is fundamental to any system design. The cost of silicon is a direct function of the die area used, discounting the other one-time expenses of designing the IC. But the changing usage models of, and expectations from, computer systems have brought in two other major design tradeoffs: power, and configurability and modularity.

    Until a few years ago, power was a concern only for mobile devices. It was, and is, important there because, with the small battery sizes that portability demands, power consumption must be optimal for the functionality delivered. However, as hardware designs grew more complex and the number of systems deployed in enterprises grew exponentially to handle exponentially growing workloads, enterprises realized that the electricity bill (the running cost of their computer systems) was equally important as, or maybe more important than, the one-time system cost. So chip vendors started to quote power efficiency in terms of performance per watt, and buyers will pay a premium for power-efficient chips.

    The other parameters, configurability and modularity, have gained importance because of the incessant pursuit of shorter time to market (TTM). The time it takes to design (and validate) a functional block of a chip from scratch is significant, yet new products (or systems) are launched rapidly. So chip vendors need to design a base product and be able to configure that same product to cater to different market segments with varying constraints. Modularity matters for a related reason: because the TTM from conception to launch of a product is short, and the functional blocks of a system are complex and time consuming to design, system development companies increasingly source functional blocks from other designers (vendors) as IP and integrate them into their products. The fundamental requirement for such stitching together is that both the system design and the sourced IP design be modular so they can work with each other seamlessly. The approach helps both parties: the system designer reduces TTM, and the IP designer can specialize and sell the IP to as many system vendors as possible.

    Chapter 2

    Understanding Power Consumption Fundamentals

    Abstract

    This chapter starts by explaining why power optimization is important, then helps the reader understand the sources of power consumption and how to measure or monitor it, and finally discusses the strategies applied to reduce power consumption at the individual IC and system level. Before we delve into the details, a few words about why it is important.

    Keywords

    Power efficiency

    Static power

    Dynamic power

    Active power management

    Standby power management

    Idle power management

    Connected standby

    ACPI states

    Device states

    Processor states

    System states

    Power optimization


    Why Power Optimization is Important

    Saving energy is beneficial for the environment and also for the user. There is a lot of literature that discusses the benefits in detail, but to give just a few obvious examples, the benefits include lower electric bills for consumers, longer uptime of the devices when running on battery power, and sleeker mobile system design made possible by smaller batteries due to energy efficiency.

    Knowing that power conservation is important, next we should discuss and understand the fundamentals of power consumption, its causes, and types. Once we understand them, we can better investigate ways to conserve power.

    Since the use of electronic devices is prevalent across every aspect of our lives, reducing power consumption must start at the semiconductor level. The power-saving techniques that are designed in at the chip level have a far-reaching impact.

    In the following section we will categorize power consumption in two ways: power consumption at the IC level and power consumption at the system level.

    Power consumption in IC

    Digital logic is made up of flip-flops and logic gates, which in turn are made up of transistors. The current drawn by these transistors results in the power being consumed.

    Figure 2.1 shows a transistor, the voltage, and the current components involved while the transistor is switching. From the diagram, the energy required for a transition is CL*Vdd². The power (energy × frequency) consumption can therefore be expressed as CL*Vdd²*f. Going further, the power consumed by digital logic has two major components: static and dynamic.

    ■ Figure 2.1 Diagram of a transistor depicting voltage and current flow.
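    Plugging representative numbers into the expression CL*Vdd²*f gives a feel for the magnitudes involved. The activity factor α below is an addition of mine (the fraction of capacitance actually switched per cycle; the text’s formula corresponds to α = 1), and all component values are illustrative, not from the text.

```python
# Dynamic power P = alpha * C_L * Vdd^2 * f, where alpha is the activity
# factor (fraction of the capacitance switched each cycle). The text's
# CL*Vdd^2*f corresponds to alpha = 1; real designs switch only a
# fraction of their nodes per cycle.

def dynamic_power(c_load, vdd, freq, alpha=1.0):
    """Dynamic (switching) power in watts."""
    return alpha * c_load * vdd ** 2 * freq

# Illustrative values: 1 nF total switched capacitance, 1.0 V supply,
# 1 GHz clock, 20% activity factor.
p = dynamic_power(c_load=1e-9, vdd=1.0, freq=1e9, alpha=0.2)
print(f"{p:.2f} W")  # 0.20 W
```

    Note the quadratic dependence on Vdd: lowering the supply voltage is the single most effective lever on dynamic power, which is why the power-management techniques discussed later lean on it so heavily.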

    Static power

    Static power is the part of power consumption that is independent of activity. It comprises leakage power and standby power. Leakage power is consumed by transistors in the off state due to reverse bias current. The other part of static power, standby power, is due to constant current flowing from Vdd to ground. In the following sections we discuss leakage power, standby power, and dynamic power.

    Leakage power

    When transistors are in the off state, they are ideally not supposed to draw any current. In reality some current is drawn even in the off state, due to reverse bias current in the source and drain diffusions as well as the subthreshold current caused by the inversion charge that exists at gate voltages below the threshold voltage. All of this is collectively referred to as leakage current. This current is very small for a single transistor; however, an IC contains millions to billions of transistors, so it becomes significant at the IC level. The power dissipated by this current is called leakage power. It depends primarily on the manufacturing process and technology with which the transistors are made and does not depend on the operating frequency of the flip-flops.

    Standby power

    Standby power consumption is due to standby current, which is DC current drawn continuously from positive supply voltage (Vdd) to ground.

    Dynamic power

    Dynamic power, due to dynamic current, depends on the frequency at which the transistor operates and is the dominant part of total power consumption. Dynamic current has two contributors:

    1. Short circuit current, which is due to the DC path between the supplies during output transition.

    2. The capacitance current, which flows to charge/discharge capacitive loads during logic changes.

    The dominant source of power dissipation in complementary metal-oxide semiconductor (CMOS) circuits is this charging and discharging. The rate at which the capacitive load is charged and discharged during logic level transitions determines the dynamic power. As per the preceding equation, dynamic power increases with the frequency of operation, unlike leakage power, which is frequency independent.
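    The frequency dependence can be made concrete: halving the clock halves dynamic power but leaves leakage untouched, and lowering Vdd along with frequency compounds the saving quadratically. A sketch under assumed values, modeling leakage as a frequency-independent constant (all numbers are illustrative):

```python
# Compare total power at two operating points. Dynamic power scales as
# C*Vdd^2*f; leakage is modeled here as a frequency-independent constant.
# All numbers are illustrative assumptions.

def total_power(c_load, vdd, freq, leakage):
    return c_load * vdd ** 2 * freq + leakage

C = 1e-9     # 1 nF switched capacitance (assumed)
LEAK = 0.05  # 50 mW leakage, independent of f (assumed)

fast = total_power(C, vdd=1.0, freq=1e9, leakage=LEAK)  # ~1.05 W
slow = total_power(C, vdd=0.8, freq=5e8, leakage=LEAK)  # ~0.37 W

print(f"1.0 V @ 1 GHz:   {fast:.2f} W")
print(f"0.8 V @ 500 MHz: {slow:.2f} W")
```

    Notice also that as dynamic power shrinks, the fixed leakage term becomes a larger fraction of the total, which is why leakage matters so much in modern low-power designs.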
