Neural Networks with Model Compression (1st ed. 2023)


Price: R4,827


Product Description

Deep learning has achieved impressive results in image classification, computer vision and natural language processing. To achieve better performance, deeper and wider networks have been designed, which increases the demand for computational resources. The number of floating-point operations (FLOPs) has grown dramatically with larger networks, and this has become an obstacle to deploying convolutional neural networks (CNNs) on mobile and embedded devices. In this context, our book focuses on CNN compression and acceleration, which are important topics for the research community. We describe numerous methods, including parameter quantization, network pruning, low-rank decomposition and knowledge distillation. More recently, to reduce the burden of handcrafted architecture design, neural architecture search (NAS) has been used to build neural networks automatically by searching over a vast architecture space. Our book also introduces NAS, owing to its state-of-the-art performance in applications such as image classification and object detection. We also describe extensive applications of compressed deep models to image classification, speech recognition, object detection and tracking. These topics can help researchers better understand the usefulness and potential of network compression in practical applications. Readers should have a basic knowledge of machine learning and deep learning to follow the methods described in this book.
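
To give a concrete flavour of two of the techniques named above, the short sketch below applies magnitude-based weight pruning and symmetric 8-bit quantization to a random weight matrix in NumPy. It is an illustrative example only, not code from the book; the pruning ratio, array shapes and variable names are assumptions made for illustration.

```python
# Illustrative sketch (not from the book): magnitude pruning followed by
# symmetric 8-bit quantization of a single weight matrix.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(64, 64)).astype(np.float32)

# Network pruning: zero out the 50% of weights with the smallest magnitude.
sparsity = 0.5                                      # assumed pruning ratio
threshold = np.quantile(np.abs(weights), sparsity)
pruned = np.where(np.abs(weights) < threshold, 0.0, weights)

# Parameter quantization: map float weights to signed 8-bit integers with a
# single per-tensor scale, then dequantize to measure the rounding error.
scale = np.abs(pruned).max() / 127.0
q = np.clip(np.round(pruned / scale), -127, 127).astype(np.int8)
dequantized = q.astype(np.float32) * scale

print(f"achieved sparsity: {np.mean(pruned == 0.0):.2f}")
print(f"max quantization error: {np.abs(dequantized - pruned).max():.4f}")
```

Stored this way, each weight takes one byte instead of four, and the zeroed entries can additionally be skipped or kept in a sparse format.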


Product Details

General

Imprint: Springer Verlag, Singapore
Country of origin: Singapore
Series: Computational Intelligence Methods and Applications
Release date: October 2023
Availability: Expected to ship within 10 - 15 working days
First published: 2023
Dimensions: 235 x 155mm (L x W)
Edition: 1st ed. 2023
ISBN-13: 978-981-9950-67-6
Barcode: 9789819950676
LSN: 981-9950-67-8


