Computer Architecture: A Quantitative Approach, Sixth Edition (English reprint, Classic Original Edition Series), by John L. Hennessy [US]


Copyright Information

  • Publisher: China Machine Press
  • Publication date: 2019-07-01
  • ISBN: 9787111631101
  • Barcode: 9787111631101; 978-7-111-63110-1

Highlights

A classic by the Turing Award winners that, just as Moore's Law reaches its end, foretells a rebirth of computer architecture. The new edition adopts RISC-V and adds coverage of domain-specific architectures.

About the Book

For more than 20 years, this book has been recommended reading for teachers, students, and computer architects. Its authors, Hennessy and Patterson, received the 2017 Turing Award in recognition of their lasting and significant technical contributions to the field. The sixth edition has been fully revised to reflect the latest developments in processor and system architecture. It adopts the RISC-V instruction set architecture, a modern RISC ISA designed to be a free, openly adoptable standard. It also adds a new chapter on domain-specific architectures and updates the chapter on warehouse-scale computing, which now describes Google's newest WSC. As in previous editions, the goal is to demystify computer architecture, highlighting exciting technical innovations while emphasizing good engineering design.

Table of Contents

Chapter 1 Fundamentals of Quantitative Design and Analysis
1.1 Introduction 2
1.2 Classes of Computers 6
1.3 Defining Computer Architecture 11
1.4 Trends in Technology 18
1.5 Trends in Power and Energy in Integrated Circuits 23
1.6 Trends in Cost 29
1.7 Dependability 36
1.8 Measuring, Reporting, and Summarizing Performance 39
1.9 Quantitative Principles of Computer Design 48
1.10 Putting It All Together: Performance, Price, and Power 55
1.11 Fallacies and Pitfalls 58
1.12 Concluding Remarks 64
1.13 Historical Perspectives and References 67
Case Studies and Exercises by Diana Franklin 67
Chapter 2 Memory Hierarchy Design
2.1 Introduction 78
2.2 Memory Technology and Optimizations 84
2.3 Ten Advanced Optimizations of Cache Performance 94
2.4 Virtual Memory and Virtual Machines 118
2.5 Cross-Cutting Issues: The Design of Memory Hierarchies 126
2.6 Putting It All Together: Memory Hierarchies in the ARM Cortex-A53 and Intel Core i7 6700 129
2.7 Fallacies and Pitfalls 142
2.8 Concluding Remarks: Looking Ahead 146
2.9 Historical Perspectives and References 148
Case Studies and Exercises by Norman P. Jouppi, Rajeev Balasubramonian, Naveen Muralimanohar, and Sheng Li

Chapter 3 Instruction-Level Parallelism and Its Exploitation
3.1 Instruction-Level Parallelism: Concepts and Challenges 168
3.2 Basic Compiler Techniques for Exposing ILP 176
3.3 Reducing Branch Costs With Advanced Branch Prediction 182
3.4 Overcoming Data Hazards With Dynamic Scheduling 191
3.5 Dynamic Scheduling: Examples and the Algorithm 201
3.6 Hardware-Based Speculation 208
3.7 Exploiting ILP Using Multiple Issue and Static Scheduling 218
3.8 Exploiting ILP Using Dynamic Scheduling, Multiple Issue, and Speculation 222
3.9 Advanced Techniques for Instruction Delivery and Speculation 228
3.10 Cross-Cutting Issues 240
3.11 Multithreading: Exploiting Thread-Level Parallelism to Improve Uniprocessor Throughput 242
3.12 Putting It All Together: The Intel Core i7 6700 and ARM Cortex-A53 247
3.13 Fallacies and Pitfalls 258
3.14 Concluding Remarks: What’s Ahead? 264
3.15 Historical Perspective and References 266
Case Studies and Exercises by Jason D. Bakos and Robert P. Colwell 266
Chapter 4 Data-Level Parallelism in Vector, SIMD, and GPU Architectures
4.1 Introduction 282
4.2 Vector Architecture 283
4.3 SIMD Instruction Set Extensions for Multimedia 304
4.4 Graphics Processing Units 310
4.5 Detecting and Enhancing Loop-Level Parallelism 336
4.6 Cross-Cutting Issues 345
4.7 Putting It All Together: Embedded Versus Server GPUs and Tesla Versus Core i7 346
4.8 Fallacies and Pitfalls 353
4.9 Concluding Remarks 357
4.10 Historical Perspective and References 357
Case Study and Exercises by Jason D. Bakos 357
Chapter 5 Thread-Level Parallelism
5.1 Introduction 368
5.2 Centralized Shared-Memory Architectures 377
5.3 Performance of Symmetric Shared-Memory Multiprocessors 393
5.4 Distributed Shared-Memory and Directory-Based Coherence 404
5.5 Synchronization: The Basics 412
5.6 Models of Memory Consistency: An Introduction 417
5.7 Cross-Cutting Issues 422
5.8 Putting It All Together: Multicore Processors and Their Performance 426
5.9 Fallacies and Pitfalls 438
5.10 The Future of Multicore Scaling 442
5.11 Concluding Remarks 444
5.12 Historical Perspectives and References 445
Case Studies and Exercises by Amr Zaky and David A. Wood 446
Chapter 6 Warehouse-Scale Computers to Exploit Request-Level and Data-Level Parallelism
6.1 Introduction 466
6.2 Programming Models and Workloads for Warehouse-Scale Computers 471
6.3 Computer Architecture of Warehouse-Scale Computers 477
6.4 The Efficiency and Cost of Warehouse-Scale Computers 482
6.5 Cloud Computing: The Return of Utility Computing 490
6.6 Cross-Cutting Issues 501
6.7 Putting It All Together: A Google Warehouse-Scale Computer 503
6.8 Fallacies and Pitfalls 514
6.9 Concluding Remarks 518
6.10 Historical Perspectives and References 519
Case Studies and Exercises by Par

About the Authors

John L. Hennessy: Hennessy and Patterson jointly received the 2017 Turing Award in recognition of their pioneering contributions to the field of computer architecture. Hennessy is chairman of Alphabet, Google's parent company, and previously served as the tenth president of Stanford University. He is a Fellow of the IEEE and ACM and a member of the National Academy of Engineering, the National Academy of Sciences, the American Philosophical Society, and the American Academy of Arts and Sciences. He began the MIPS project in 1981 and later founded MIPS Computer Systems, which developed one of the first commercial RISC microprocessors. He also led the DASH project, which built a prototype scalable cache-coherent multiprocessor.

David A. Patterson: Patterson shared the 2017 Turing Award with Hennessy. He is a Distinguished Engineer at Google and was previously a professor at the University of California, Berkeley. He has served as president of the ACM, is a Fellow of the ACM and IEEE, a Fellow of the American Academy of Arts and Sciences and the Computer History Museum, and has been elected to the National Academy of Engineering, the National Academy of Sciences, and the Silicon Valley Engineering Hall of Fame. He led the design and implementation of RISC I and was a leader of the RAID project.
