WU Yuzhang(吴豫章)*, ZHI Tian**, SONG Xinkai**, LI Xi*. Design space exploration of neural network accelerator based on transfer learning[J]. High Technology Letters, 2023, 29(4): 416-426
|
Design space exploration of neural network accelerator based on transfer learning
|
DOI: |
Chinese keywords:
English keywords: design space exploration (DSE), transfer learning, neural network accelerator, multi-task learning
Funding:
Author Name | Affiliation
WU Yuzhang(吴豫章)* | *School of Computer Science, University of Science and Technology of China, Hefei 230027, P. R. China
ZHI Tian** | **State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, P. R. China
SONG Xinkai** | **State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, P. R. China
LI Xi* | *School of Computer Science, University of Science and Technology of China, Hefei 230027, P. R. China
|
Chinese abstract:
|
English abstract:
With the increasing demand for computational power in artificial intelligence (AI) algorithms, dedicated accelerators have become a necessity. However, the complexity of hardware architectures, the vast design search space, and the complex tasks that accelerators must handle pose significant challenges. Traditional search methods become prohibitively slow as the search space keeps expanding. A design space exploration (DSE) method based on transfer learning is proposed, which reduces the time spent on repeated training and uses multi-task models for different tasks on the same processor. The proposed method accurately predicts the latency and energy consumption associated with neural network accelerator design parameters, enabling faster identification of optimal configurations than traditional methods. It also requires less training time than other DSE methods that use a multilayer perceptron (MLP). Comparative experiments demonstrate that the proposed method improves the efficiency of DSE without compromising the accuracy of the results.
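
To make the approach above concrete, the following sketch (in Python with PyTorch, which the paper does not necessarily use) shows one way a multi-task predictor with a shared trunk and separate latency and energy heads could be built, together with a transfer step that freezes the trunk and retrains only the heads for a new task on the same processor. All layer sizes, the number of design parameters, and the training settings are illustrative assumptions, not the authors' implementation.

# Minimal sketch, assuming PyTorch; not the paper's code.
import torch
import torch.nn as nn

class MultiTaskPredictor(nn.Module):
    """MLP with a shared trunk and two heads predicting latency and energy
    from a vector of accelerator design parameters."""
    def __init__(self, n_params: int, hidden: int = 64):
        super().__init__()
        # Shared trunk: encodes design parameters (e.g. PE-array size,
        # buffer sizes, dataflow choice) into a common representation.
        self.trunk = nn.Sequential(
            nn.Linear(n_params, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Task-specific heads, one per predicted metric.
        self.latency_head = nn.Linear(hidden, 1)
        self.energy_head = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.trunk(x)
        return self.latency_head(h), self.energy_head(h)

def train(model, x, y_latency, y_energy, epochs=200, lr=1e-3, freeze_trunk=False):
    # With freeze_trunk=True only the heads are updated: this is the
    # transfer step when moving to a new task on the same processor.
    if freeze_trunk:
        for p in model.trunk.parameters():
            p.requires_grad = False
    params = [p for p in model.parameters() if p.requires_grad]
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        lat_pred, eng_pred = model(x)
        loss = loss_fn(lat_pred, y_latency) + loss_fn(eng_pred, y_energy)
        loss.backward()
        opt.step()
    return model

# Usage sketch with random stand-in data (8 hypothetical design parameters).
if __name__ == "__main__":
    x_src = torch.rand(512, 8)                        # source-task design points
    y_lat, y_eng = torch.rand(512, 1), torch.rand(512, 1)
    model = MultiTaskPredictor(n_params=8)
    train(model, x_src, y_lat, y_eng)                 # pre-train on the source task
    x_new = torch.rand(64, 8)                         # few samples from a new task
    y_lat_new, y_eng_new = torch.rand(64, 1), torch.rand(64, 1)
    train(model, x_new, y_lat_new, y_eng_new, epochs=50, freeze_trunk=True)

Sharing the trunk across tasks and freezing it during transfer is one simple way to realize the reduced retraining time described in the abstract; the paper's actual network structure, input features, and training schedule may differ.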
|
|
|