Creativity in Computing and DataFlow SuperComputing (Volume 104)

Publication series: Advances in Computers, Volume 104

Author: Hurson, Ali R.; Milutinovic, Veljko

Publisher: Elsevier Science

Publication year: 2017

E-ISBN: 9780128119563

P-ISBN(Paperback): 9780128119556

Subject: TP1 Fundamental theory of automation; TP301.6 Algorithm theory; TP31 Computer software

Keyword: Mathematics, Computer software, Algorithm theory, Automation technology, Computer technology

Language: ENG

Description

Creativity in Computing and DataFlow Supercomputing, the latest release in the Advances in Computers series published since 1960, presents detailed coverage of innovations in computer hardware, software, theory, design, and applications. In addition, it provides contributors with a medium in which they can explore topics in greater depth and breadth than journal articles typically allow. As a result, many articles have become standard references that continue to be of significant, lasting value in this rapidly expanding field.

  • Provides in-depth surveys and tutorials on new computer technology
  • Presents well-known authors and researchers in the field
  • Includes extensive bibliographies with most chapters
  • Contains extensive chapter coverage that is devoted to single themes or subfields of computer science

Chapter One: A Systematic Approach to Generation of New Ideas for PhD Research in Computing

1. Introduction

2. Classification of Innovation Methods

2.1. Mendeleyevization (M)

2.2. Generalization (G)

2.3. Specialization (S)

2.4. Revitalization (R)

2.5. Crossdisciplinarization (C)

2.6. Implantation (I)

2.7. Adaptation (A)

2.8. Hybridization (H)

2.9. Transgranularization (T)

2.10. Extraparameterization (E)

3. Representative Examples From the Authors' PhD Theses

4. Conclusions

Acknowledgments

References

Methodology-Related References

Common Examples

Authors' PhD-Related References

Selected References of Young Researchers on the Faculty of the Department of Computer Engineering and Informatics, School o ...

Chapter Two: Exploring Future Many-Core Architectures: The TERAFLUX Evaluation Framework

1. Introduction

2. Terminology and Related Work

2.1. The Trade-off Between Simulation Accuracy and Speed

2.2. Simulation vs Emulation

2.3. The "Functional-Directed" Simulation Technique

2.4. Using Sampling and FPGAs to Accelerate Simulation of Large Systems

2.5. Other Relevant Simulator Features

3. COTSon Framework Organization

4. Targeting a 1000-Core Simulation

4.1. Comparison Among Approaches to Evaluate Novel 1000-Core Architectures

4.2. Notes on the Evaluations Based on Physical Machines

5. How to Simulate 1000 Cores

5.1. Setup #1: Physical Machines, MPI Programming Model

5.2. Setup #2: Virtual Machines Running on Several Physical Machines, MPI Programming Model

5.3. Setup #3: Virtual Machines Running on a Single Physical Computer, MPI Programming Model

5.4. Setup #4: Virtual Machines Running on a Single Physical Computer, Flexible Programming Model on Top of a Distributed ...

5.5. Setup #5: Virtual Machines Running on a Single Physical Computer, Flexible Programming Model on Top of a Shared-Memo ...

5.6. Setup #6: Single Virtual Machine Running on a Single Physical Computer, Flexible Programming Model on Top of a Share ...

6. The Search for "Efficient Benchmarks"

7. Simulation Experiments

7.1. TERAFLUX Basic Node With up to 32 Cores

7.2. TERAFLUX Basic Communication Case With Two Nodes

7.3. TERAFLUX 1024-Core Machine (32 Nodes by 32 Cores)

8. Conclusions

Acknowledgments

References

Chapter Three: Dataflow-Based Parallelization of Control-Flow Algorithms

1. Introduction

2. Problem Statement

3. Dataflow Approaches and the Feynman Paradigm

4. Existing Solutions and Their Criticism

4.1. Methods Inherited From the Theory of Systolic Arrays

4.2. Methods Inherited From the Theory of Dataflow Analysis in Compilers

4.3. Methods Inherited From the Theory of Dataflow Programming Tools

5. Exploring Dataflow Potentials

5.1. The LBM

5.2. A Lattice–Boltzmann Implementation in the C Programming Language

5.3. Analytical Analysis of Potentials

5.4. A Lattice–Boltzmann Implementation for the Maxeler Dataflow Architecture

6. Performance Evaluation

6.1. Case Study: A Control-Flow and a Dataflow Implementation of the LBM

6.2. Dataflow Acceleration for Other Algorithms

6.3. Threat to Validity

7. Conclusions

Acknowledgments

Appendix

References

Chapter Four: Data Flow Computing in Geoscience Applications

1. Introduction

2. Data Flow Computing in HPC

2.1. Brief Summary of Data Flow Computing Model

2.2. Maxeler DFE

3. Geoscience Applications in HPC

3.1. Brief Summary

3.2. Climate Modeling

3.3. Exploration Geophysics

4. Case Study 1: Global Shallow Water Equations

4.1. Problem Description

4.1.1. Equations and Discretization

4.1.2. SWE Algorithm and Challenges

4.2. Hybrid Domain Partition Scheme

4.2.1. Hybrid Domain Decomposition Methodology

4.2.2. Adjustable Task Partition

4.3. Mixed Precision Arithmetic

4.3.1. Range Analysis

4.3.2. Precision Analysis

4.4. Performance and Power Efficiency

5. Case Study 2: Euler Atmospheric Equations

5.1. Problem Description

5.2. Algorithmic Offsetting

5.3. Fast Memory Table and Mixed Precision Arithmetic

5.4. Performance and Power Efficiency

6. Case Study 3: Reverse Time Migration

6.1. Problem Description

6.1.1. The Reverse Time Migration Algorithm

6.1.2. Computational Challenges in Reverse Time Migration

6.2. Random Boundary

6.3. A Customized Window Buffer

6.4. Cascading Multiple Computations

6.5. Number Representations

6.6. Hardware (De)compression Scheme

6.7. Performance and Power Efficiency

7. Summary and Concluding Remarks

Acknowledgments

Appendix

References

Chapter Five: A Streaming Dataflow Implementation of Parallel Cocke-Younger-Kasami Parser

1. Introduction

2. Problem Statement

2.1. Context-Free Languages

2.2. The CYK Algorithm

2.3. Modifications to the CYK Algorithm

2.4. Parallelizing CYK Parsing

3. Existing Solutions and Their Criticism

3.1. Existing Solutions for Shared Memory Multicore Systems

3.2. Existing Solutions for Distributed Memory Systems

3.3. Existing Solutions for Reconfigurable Hardware Systems

3.4. Existing Solutions for Many-Core (GPU) Systems

3.5. Summary of Presented Solutions

4. A Dataflow Implementation of a CYK Parser

5. Performance Analysis

5.1. Modeling Space and Time Requirements

5.2. Experimental Analysis

6. Conclusion

Acknowledgment

Appendix

References

Author Index

Subject Index

Contents of Volumes in this Series

Back Cover
