XIE Xiaoyan(谢晓燕)*, YANG Tianjiao*, ZHU Yun**, LUO Xing*, JIN Luochen*, YU Jinhao*, REN Xun*. [J]. High Technology Letters, 2025, 31(3): 266-272
|
Computation graph pruning based on critical path retention in evolvable networks
DOI: 10.3772/j.issn.1006-6748.2025.03.006
Keywords: evolvable network, computation graph traversing, dynamic routing, critical path retention pruning
Authors: XIE Xiaoyan(谢晓燕)*, YANG Tianjiao*, ZHU Yun**, LUO Xing*, JIN Luochen*, YU Jinhao*, REN Xun*
(* School of Computer, Xi'an University of Posts and Telecommunications, Xi'an 710121, P. R. China)
(** School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an 710121, P. R. China)
Abstract:
The dynamic routing mechanism in evolvable networks enables adaptive reconfiguration of topological structures and transmission pathways based on real-time task requirements and data characteristics. However, the heightened architectural complexity and expanded parameter dimensionality of evolvable networks pose significant implementation challenges in resource-constrained environments. Because they ignore critical paths, traditional pruning strategies cannot achieve a desirable trade-off between accuracy and efficiency. To address this, a critical path retention pruning (CPRP) method is proposed. By deeply traversing the computation graph, the dependency relationships among nodes are derived. The nodes are then grouped and sorted according to their contribution values, and redundant operations are removed as far as possible while ensuring that the critical path is unaffected. As a result, computational efficiency is improved while higher accuracy is maintained. Experimental results on the CIFAR benchmark demonstrate that CPRP-induced pruning incurs accuracy degradation below 4.00%, while outperforming traditional feature-agnostic grouping methods by an average accuracy improvement of 8.98%. Simultaneously, the pruned model attains a 2.41x inference acceleration while achieving 48.92% parameter compression and a 53.40% reduction in floating-point operations (FLOPs).
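The pruning pipeline the abstract outlines (traverse the computation graph, derive node dependencies, rank nodes by contribution, and drop redundant operations while keeping the critical path intact) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the toy graph, the contribution values, and the `keep_ratio` threshold are all assumptions introduced for the example.

```python
from collections import deque

def critical_path(graph, contribution):
    """Return the set of nodes on the maximum-contribution source-to-sink path.

    graph: dict mapping each node to its list of successors (a DAG).
    contribution: dict mapping each node to an illustrative contribution score.
    """
    # Derive dependency relationships: predecessors of every node.
    preds = {n: [] for n in graph}
    for n, succs in graph.items():
        for m in succs:
            preds[m].append(n)

    # Topological order via Kahn's algorithm (a deep traversal of the DAG).
    indeg = {n: len(preds[n]) for n in graph}
    queue = deque(n for n in graph if indeg[n] == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in graph[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)

    # Dynamic program: best cumulative contribution of any path ending at n.
    best, back = {}, {}
    for n in order:
        best_pred = max(preds[n], key=lambda p: best[p], default=None)
        best[n] = contribution[n] + (best[best_pred] if best_pred else 0.0)
        back[n] = best_pred

    # Backtrack from the highest-scoring sink to recover the critical path.
    node = max((n for n in graph if not graph[n]), key=lambda n: best[n])
    path = set()
    while node is not None:
        path.add(node)
        node = back[node]
    return path

def cprp_prune(graph, contribution, keep_ratio=0.5):
    """Keep every critical-path node; of the rest, keep only the top fraction
    by contribution and treat the remainder as prunable redundancy."""
    crit = critical_path(graph, contribution)
    others = sorted((n for n in contribution if n not in crit),
                    key=contribution.get, reverse=True)
    n_keep = int(len(others) * keep_ratio)
    return crit | set(others[:n_keep])
```

For instance, in the diamond graph `{'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}` with contributions `{'a': 1.0, 'b': 0.9, 'c': 0.1, 'd': 1.0}`, the critical path is `{'a', 'b', 'd'}`, so the low-contribution branch `'c'` is the first candidate for removal.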
|
|
|
|