Advanced Memory Optimization Techniques for Low-Power Embedded Processors

Author: Manish Verma
Publisher: Springer Science & Business Media
Total Pages: 192
Release: 2007-06-20
Genre: Technology & Engineering
ISBN: 1402058977


This book proposes novel memory hierarchies and the software optimization techniques needed to use them optimally. It presents a wide range of optimizations, progressively increasing in the complexity of the analysis and of the memory hierarchies. The final chapter covers optimization techniques for applications consisting of multiple processes, which are found in most modern embedded devices.

Energy-Aware Memory Management for Embedded Multimedia Systems

Author: Florin Balasa
Publisher: CRC Press
Total Pages: 352
Release: 2011-11-16
Genre: Computers
ISBN: 1439814015


Energy-Aware Memory Management for Embedded Multimedia Systems: A Computer-Aided Design Approach presents recent computer-aided design (CAD) ideas that address memory management tasks, particularly the optimization of energy consumption in the memory subsystem. It explains how to efficiently implement CAD solutions, including theoretical methods.

Ultra-Low Energy Domain-Specific Instruction-Set Processors

Author: Francky Catthoor
Publisher: Springer Science & Business Media
Total Pages: 416
Release: 2010-08-05
Genre: Technology & Engineering
ISBN: 9048195284


Modern consumers carry many electronic devices, such as a mobile phone, digital camera, GPS, PDA and MP3 player. The functionality of each of these devices has gone through an important evolution over recent years, with a steep increase both in the number of features and in the quality of the services they provide. However, providing the compute power required to support (an uncompromised combination of) all this functionality is highly non-trivial. Designing processors that meet the demanding requirements of future mobile devices requires the optimization of the embedded system in general and of the embedded processors in particular, as they should strike the correct balance between flexibility, energy efficiency and performance. In general, a designer will try to minimize the energy consumption (as far as needed) for a given performance, with sufficient flexibility. Achieving this goal is already complex when looking at the processor in isolation, but in reality the processor is a single component in a more complex system. To design such a complex system successfully, critical decisions during the design of each individual component should take into account their effect on the other parts, with the clear goal of moving toward a global Pareto optimum in the complete multi-dimensional exploration space.

Within this complex, global design of battery-operated embedded systems, the focus of Ultra-Low Energy Domain-Specific Instruction-Set Processors is on the energy-aware architecture exploration of domain-specific instruction-set processors and the co-optimization of the datapath architecture, foreground memory, and instruction memory organisation, with a link to the required mapping techniques or compiler steps at the early stages of the design. An extensive energy breakdown experiment for a complete embedded platform identifies both energy and performance bottlenecks, together with the important relations between the different components. Based on this knowledge, architecture extensions are proposed for all of the bottlenecks.
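
The multi-objective exploration described above hinges on keeping only Pareto-optimal design points. As a rough illustration (not the book's tool flow), the following minimal Python sketch filters hypothetical (energy, latency, area) design points down to their Pareto front, with all metrics minimized.

```python
# Minimal sketch of a Pareto filter over design points, as used in
# multi-dimensional architecture exploration. The metrics and points are
# hypothetical; all objectives are minimized.

def dominates(a, b):
    """True if point a is no worse than b in every metric and strictly
    better in at least one (all metrics minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the subset of points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Example: (energy in mJ, latency in ms, area in mm^2) per candidate architecture.
candidates = [(12.0, 3.1, 2.0), (9.5, 4.0, 2.2), (14.0, 2.5, 1.8), (13.0, 3.6, 2.5)]
print(pareto_front(candidates))  # the last point is dominated and is filtered out
```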

Worst-Case Execution Time Aware Compilation Techniques for Real-Time Systems

Author: Paul Lokuciejewski
Publisher: Springer Science & Business Media
Total Pages: 268
Release: 2010-09-24
Genre: Technology & Engineering
ISBN: 9048199298


For real-time systems, the worst-case execution time (WCET) is the key objective to be considered. Traditionally, code for real-time systems is generated without taking this objective into account, and the WCET is computed only after code generation. Worst-Case Execution Time Aware Compilation Techniques for Real-Time Systems presents the first comprehensive approach integrating WCET considerations into the code generation process. Based on the proposed reconciliation between a compiler and a timing analyzer, a wide range of novel optimization techniques is provided. Among others, the techniques cover source-code and assembly-level optimizations, exploit machine learning, and address the design of modern systems that have to meet multiple objectives. Using these optimizations, the WCET of real-time applications can be reduced by about 30% to 45% on average. This opens opportunities for decreasing clock speeds, costs and energy consumption of embedded processors. The proposed techniques can be used for all types of real-time systems, including automotive and avionics IT systems.
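
To make the role of the timing analyzer concrete, here is a minimal, hypothetical sketch of a static WCET estimate: a longest-path computation over an acyclic control-flow graph with per-block worst-case cycle counts. It is not the book's framework, which additionally handles loops, pipelines, caches and the integration with the compiler.

```python
# Minimal sketch of a static WCET estimate: the longest path through an
# acyclic control-flow graph, weighted by per-block worst-case cycle counts.
# The CFG and cycle costs below are hypothetical.

from functools import lru_cache

# Basic block -> (worst-case cycles, successor blocks)
cfg = {
    "entry": (10, ["cond"]),
    "cond":  (4,  ["then", "else"]),
    "then":  (25, ["join"]),
    "else":  (12, ["join"]),
    "join":  (8,  []),
}

@lru_cache(maxsize=None)
def wcet(block):
    """Worst-case cycles from 'block' to the end of the CFG."""
    cycles, succs = cfg[block]
    return cycles + max((wcet(s) for s in succs), default=0)

print(wcet("entry"))  # 10 + 4 + 25 + 8 = 47 cycles along the 'then' path
```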

Integrated Circuit and System Design. Power and Timing Modeling, Optimization and Simulation

Author: Jose L. Ayala
Publisher: Springer
Total Pages: 362
Release: 2011-09-25
Genre: Computers
ISBN: 3642241549


This book constitutes the refereed proceedings of the 21st International Conference on Integrated Circuit and System Design, PATMOS 2011, held in Madrid, Spain, in September 2011. The 34 revised full papers presented were carefully reviewed and selected from numerous submissions. The papers address emerging challenges in methodologies and tools for the design of upcoming generations of integrated circuits and systems, focusing especially on timing, performance and power consumption as well as architectural aspects, with particular emphasis on modeling, design, characterization, analysis and optimization.

System-level Memory Power and Performance Optimization for System-on-a-chip Embedded Systems

Author:
Publisher:
Total Pages: 159
Release: 2008
Genre: Electronic dissertations
ISBN:


Power has become a first-class design issue in microprocessor design. Power efficiency is especially critical for battery-powered embedded systems. Technology trends are making data communication, both on-chip and off-chip, more expensive relative to computation, and evaluating power-performance design trade-offs at the architectural level still requires further research. In this dissertation, we show how microprocessor power, especially in the memory subsystem, is consumed during program execution. We also show that the external memory system in a low-power System-on-a-Chip (SoC) embedded system has a significant impact on overall system power; memory power consumption stems from data transmission, bandwidth limitations, and memory access overhead. We review and summarize current research on low-power microprocessor architecture design in academia and industry, including power modeling, power estimation tools and power optimization techniques, and we group the power optimizations into five categories and compare their effects on the overall system.

Two solutions are proposed to reduce data bandwidth and improve power efficiency on the external memory bus. We first propose an external bus arbitrator that schedules external bus requests for better bus utilization, along with a series of power-aware arbitration schemes; on average, we observe a 22 percent performance speedup and 13 percent power savings compared to traditional arbitration schemes. In our second approach, we present a hardware-based, programmable external memory page remapping mechanism that can significantly improve system performance and reduce the power consumed by external memory bus accesses. We employ graph-coloring techniques to guide the page mapping procedure; our algorithm reduces the memory page miss rate by 70-80 percent on average. For a 4-bank SDRAM memory system, we reduce external memory access time by 11 percent, while also reducing the associated power consumption by 11 percent.
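
As a rough illustration of the graph-coloring idea behind the page remapping mechanism (the actual mechanism is hardware-based and programmable), the sketch below builds a conflict graph from a hypothetical page-access trace and greedily colors it with one color per SDRAM bank, so that pages accessed close together tend to land in different banks. The trace, window size and bank count are illustrative assumptions.

```python
# Simplified sketch of graph-coloring-guided page-to-bank remapping.
# Pages accessed near each other in the trace conflict (an open-page policy
# would thrash if they shared a bank); a greedy coloring with one color per
# bank tries to place conflicting pages in different banks.

from collections import defaultdict

NUM_BANKS = 4
WINDOW = 2  # accesses closer than this are considered conflicting

def build_conflict_graph(page_trace):
    graph = defaultdict(set)
    for i, p in enumerate(page_trace):
        for q in page_trace[i + 1 : i + 1 + WINDOW]:
            if p != q:
                graph[p].add(q)
                graph[q].add(p)
    return graph

def color_pages(graph):
    """Greedy coloring, most-conflicted pages first; color = SDRAM bank."""
    bank_of = {}
    for page in sorted(graph, key=lambda p: len(graph[p]), reverse=True):
        used = {bank_of[n] for n in graph[page] if n in bank_of}
        free = [b for b in range(NUM_BANKS) if b not in used]
        bank_of[page] = free[0] if free else 0  # fall back if conflicts exceed banks
    return bank_of

trace = [0x10, 0x20, 0x10, 0x30, 0x20, 0x40, 0x10, 0x40]
print(color_pages(build_conflict_graph(trace)))
```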

Designing Embedded Processors

Author: Jörg Henkel
Publisher: Springer Science & Business Media
Total Pages: 551
Release: 2007-07-27
Genre: Technology & Engineering
ISBN: 1402058691


To the hard-pressed systems designer this book will come as a godsend. It is a hands-on guide to the many ways in which processor-based systems are designed for low-power operation. Covering a huge range of topics, and co-authored by some of the field’s top practitioners, the book provides a good starting point for engineers in the area and for research students embarking upon work on embedded systems and architectures.

Fast, Efficient and Predictable Memory Accesses

Author: Lars Wehmeyer
Publisher: Springer Science & Business Media
Total Pages: 263
Release: 2006-09-08
Genre: Technology & Engineering
ISBN: 140204822X


Speed improvements in memory systems have not kept pace with the speed improvements of processors, leading to embedded systems whose performance is limited by the memory. This book presents design techniques for fast, energy-efficient and timing-predictable memory systems. In particular, the use of scratchpad memories significantly improves the timing predictability of the entire system, leading to tighter worst-case execution time (WCET) bounds.
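
A common way to decide what to place in a scratchpad, sketched below under simplified assumptions (and not necessarily the book's exact algorithm), is a 0/1 knapsack: choose the set of code and data objects that maximizes the energy saved without exceeding the scratchpad capacity. The object names, sizes and energy figures are hypothetical.

```python
# Sketch of scratchpad allocation as a 0/1 knapsack: choose which memory
# objects (functions, arrays) to place in the scratchpad so that the saved
# energy is maximized without exceeding the scratchpad capacity.

def allocate_scratchpad(objects, capacity):
    """objects: list of (name, size_bytes, energy_saving); returns (saving, names)."""
    # dp[c] = (best saving using at most c bytes, chosen object names)
    dp = [(0, [])] * (capacity + 1)
    for name, size, saving in objects:
        for c in range(capacity, size - 1, -1):      # iterate downward: 0/1 knapsack
            cand = dp[c - size][0] + saving
            if cand > dp[c][0]:
                dp[c] = (cand, dp[c - size][1] + [name])
    return dp[capacity]

objects = [("fir_coeffs", 512, 40), ("main_loop", 1024, 90),
           ("lut", 2048, 120), ("stack_hot", 256, 35)]
print(allocate_scratchpad(objects, capacity=2048))
# -> (165, ['fir_coeffs', 'main_loop', 'stack_hot'])
```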

Memory Optimizations of Embedded Applications for Energy Efficiency

Author: Jong Soo Park
Publisher: Stanford University
Total Pages: 177
Release: 2011
Genre:
ISBN:


Current embedded processors often cannot meet the increasingly demanding computation requirements of embedded applications at acceptable energy efficiency, whereas application-specific integrated circuits (ASICs) incur excessive design costs. The Stanford Elm project identified that instruction and data delivery, not computation, dominate the energy consumption of embedded processors. Consequently, the energy efficiency of delivering instructions and data must be improved sufficiently to close the efficiency gap between ASICs and programmable embedded processors. This dissertation demonstrates that the compiler and run-time system can play a crucial role in improving the energy efficiency of delivering instructions and data.

Regarding instruction delivery, I present a compiler algorithm that manages L0 instruction scratch-pad memories residing between processor cores and L1 caches. Despite the lack of tags, scratch-pad memories managed by this algorithm can achieve lower miss rates than caches of the same capacity, saving significant instruction-delivery energy.

Regarding data delivery, I present methods that minimize the memory-space requirements of parallelizing stream applications, which are common in the embedded domain. When stream applications are parallelized as pipelines, sufficiently large buffers are required between pipeline stages to sustain the throughput (e.g., double buffering). For static stream applications, where the production and consumption rates of stages are close to compile-time constants, a compiler analysis is presented that computes the minimum buffer capacity which maximizes the throughput. Based on this analysis, a new static stream-scheduling algorithm is developed that yields considerable speedup and data-delivery energy savings compared to a previous algorithm. For dynamic stream applications, I present a dynamically-sized, array-based queue design that achieves speedup and data-delivery energy savings compared to a linked-list-based queue design.
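
As an illustration of the array-based queue idea (the dissertation's design targets inter-stage stream communication; the class below is only a generic sketch with an illustrative growth policy), a circular buffer that doubles its backing array when full provides FIFO behavior with contiguous storage and few allocations, which is the locality advantage over a linked-list queue.

```python
# Sketch of a dynamically-sized, array-based FIFO queue: a circular buffer
# that doubles its backing array when full. Compared to a linked list, the
# contiguous storage gives better locality and fewer per-element allocations.

class ArrayQueue:
    def __init__(self, capacity=8):
        self._buf = [None] * capacity
        self._head = 0          # index of the oldest element
        self._size = 0

    def push(self, item):
        if self._size == len(self._buf):
            self._grow()
        tail = (self._head + self._size) % len(self._buf)
        self._buf[tail] = item
        self._size += 1

    def pop(self):
        if self._size == 0:
            raise IndexError("pop from empty queue")
        item = self._buf[self._head]
        self._buf[self._head] = None
        self._head = (self._head + 1) % len(self._buf)
        self._size -= 1
        return item

    def _grow(self):
        # Copy elements in FIFO order into a buffer of twice the capacity.
        new_buf = [None] * (2 * len(self._buf))
        for i in range(self._size):
            new_buf[i] = self._buf[(self._head + i) % len(self._buf)]
        self._buf, self._head = new_buf, 0

q = ArrayQueue(capacity=2)
for token in range(5):              # forces the buffer to grow
    q.push(token)
print([q.pop() for _ in range(5)])  # [0, 1, 2, 3, 4]
```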