Coming soon

Ryoma Sato, National Institute of Informatics, Chiyoda, Japan

Publisher: Cambridge University Press
ISBN: 9781009687119, 9781009687089
Pages: 311

Book description

Deep learning models are powerful, but often large, slow, and expensive to run. This book is a practical guide to accelerating and compressing neural networks using proven techniques such as quantization, pruning, distillation, and fast architectures. It explains how and why these methods work, building a comprehensive understanding. Written for engineers, researchers, and advanced students, the book combines clear theoretical insights with hands-on PyTorch implementations and numerical results. Readers will learn how to reduce inference time and memory usage, lower deployment costs, and select the right acceleration strategy for their task. Whether you are working with large language models, vision systems, or edge devices, this book gives you the tools and intuition needed to build faster, leaner AI systems without sacrificing performance. It is ideal for anyone who wants to go beyond intuition and take a principled approach to optimizing AI systems.
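To give a flavor of one technique the description mentions, here is a minimal sketch of symmetric per-tensor int8 quantization in plain Python. This is illustrative only: it is not taken from the book (whose examples use PyTorch), and the function names are hypothetical.

```python
def quantize_int8(weights):
    """Symmetric per-tensor quantization: approximate each float weight w
    as scale * q, where q is an integer in [-127, 127]."""
    # Choose the scale so the largest-magnitude weight maps to +/-127.
    scale = max(abs(w) for w in weights) / 127
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return [qi * scale for qi in q]

weights = [0.5, -1.27, 0.02, 1.0]
q, scale = quantize_int8(weights)      # q == [50, -127, 2, 100]
approx = dequantize_int8(q, scale)     # each entry within scale/2 of the original
```

Storing `q` as int8 cuts memory fourfold versus float32, at the cost of a rounding error of at most half the scale per weight; production frameworks apply the same idea per channel and with calibrated activation ranges.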

