A pruning approach for neural network design optimized for specific hardware configurations


Multi-objective evolutionary optimization for hardware-aware neural network pruning. Credit: Wenjing Hong et al.

Neural network pruning is a key technique for deploying artificial intelligence (AI) models based on deep neural networks (DNNs) on resource-constrained platforms, such as mobile devices. However, hardware conditions and resource availability vary greatly across different platforms, making it essential to design pruned models optimally suited to specific hardware configurations.

Hardware-aware neural network pruning offers an effective way to automate this process, but it requires balancing multiple conflicting objectives, such as network accuracy, inference latency, and memory usage, a trade-off that traditional mathematical optimization methods struggle to handle.

In a study published in the journal Fundamental Research, a group of researchers from Shenzhen, China, present a novel hardware-aware neural network pruning approach based on multi-objective evolutionary optimization.

"We propose to employ Multi-Objective Evolutionary Algorithms (MOEAs) to solve the hardware neural network pruning problem," says Ke Tang, senior and corresponding author ...
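The general idea of applying an MOEA to pruning can be illustrated with a toy sketch: a population of binary pruning masks is evolved under two competing objectives, and Pareto-dominated masks are discarded each generation. The importance scores, objective proxies, and selection scheme below are illustrative assumptions, not the authors' actual implementation:

```python
import random

random.seed(0)

N_WEIGHTS = 32      # toy layer size
POP_SIZE = 20
GENERATIONS = 30

# Hypothetical per-weight importance scores; in practice these would
# come from the trained network (e.g. weight magnitudes).
IMPORTANCE = [random.random() for _ in range(N_WEIGHTS)]

def objectives(mask):
    """Return (accuracy_loss_proxy, latency_proxy), both minimized.
    Pruning an important weight raises the accuracy-loss proxy;
    keeping a weight raises the latency proxy (more computation)."""
    acc_loss = sum(imp for imp, keep in zip(IMPORTANCE, mask) if not keep)
    latency = sum(mask) / N_WEIGHTS
    return acc_loss, latency

def dominates(a, b):
    """Pareto dominance for minimization: a is no worse everywhere
    and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def mutate(mask, rate=0.05):
    # Flip each keep/prune bit with small probability.
    return [1 - bit if random.random() < rate else bit for bit in mask]

def pareto_front(pop):
    scored = [(ind, objectives(ind)) for ind in pop]
    return [ind for ind, f in scored
            if not any(dominates(g, f) for _, g in scored)]

# Evolve random masks by mutation plus non-dominated survival.
population = [[random.randint(0, 1) for _ in range(N_WEIGHTS)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    offspring = [mutate(ind) for ind in population]
    combined = population + offspring
    # Non-dominated masks survive, topped up with random individuals.
    population = (pareto_front(combined)
                  + random.sample(combined, POP_SIZE))[:POP_SIZE]

front = pareto_front(population)
print(f"{len(front)} non-dominated pruning masks found")
```

The result is not a single pruned network but a Pareto front of masks trading accuracy against latency, from which a practitioner can pick the one matching a given hardware budget.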


Copyright of this story solely belongs to phys.org.