QI Donglin (亓东林), CHEN Shudong, DU Rong, TONG Da, YU Yong. SHEL: a semantically enhanced hardware-friendly entity linking method[J]. High Technology Letters, 2024, 30(1): 13-22
|
SHEL: a semantically enhanced hardware-friendly entity linking method |
|
DOI: 10.3772/j.issn.1006-6748.2024.01.002
Keywords: entity linking (EL), pre-trained models, knowledge graph, text summarization, semantic enhancement
Author Name | Affiliation
QI Donglin (亓东林) | Institute of Microelectronics of Chinese Academy of Sciences, Beijing 100029, P. R. China; University of Chinese Academy of Sciences, Beijing 100190, P. R. China
CHEN Shudong |
DU Rong |
TONG Da |
YU Yong |
|
|
Abstract:
With the help of pre-trained language models, the accuracy of the entity linking task has made great strides in recent years. However, most models with excellent performance require fine-tuning a large pre-trained language model on a large amount of training data, which imposes a hardware threshold on this task. Some researchers have achieved competitive results with less training data through ingenious methods, such as utilizing information provided by a named entity recognition model. This paper presents a novel semantic-enhancement-based entity linking approach, named semantically enhanced hardware-friendly entity linking (SHEL), which is designed to be hardware friendly and efficient while maintaining good performance. Specifically, SHEL's semantic enhancement consists of three aspects: (1) semantic compression of entity descriptions using a text summarization model; (2) maximizing the capture of mention contexts using asymmetric heuristics; (3) calculating a fixed-size mention representation through pooling operations. Together, these semantic enhancement methods effectively improve the model's ability to capture semantic information under hardware constraints, and improve the model's convergence speed by more than 50% compared with the strong baseline model proposed in this paper. In terms of performance, SHEL is comparable to the previous method with superior performance on six well-established datasets, even though SHEL is trained with a smaller pre-trained language model as the encoder.
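The abstract only names the three enhancement steps; the paper's exact procedures are not given here. As a rough illustration of steps (2) and (3), the sketch below shows one plausible reading: an asymmetric context window that reallocates unused budget from the shorter side of a mention to the longer side, followed by mean pooling of the mention's token vectors into one fixed-size representation. The heuristic, the pooling choice, and all names are assumptions for illustration, not the authors' published implementation.

```python
# Hypothetical sketch of two of SHEL's enhancement ideas as described in the
# abstract; the window heuristic and mean pooling are assumed, not taken from
# the paper.
import numpy as np

def asymmetric_context(tokens, start, end, budget=128):
    """Select up to `budget` context tokens around the mention tokens[start:end].

    Assumed asymmetric heuristic: split the budget evenly between the two
    sides, then hand any tokens one side cannot use over to the other side,
    so a mention near the edge of a document does not waste window space.
    """
    left_avail = start                # tokens available before the mention
    right_avail = len(tokens) - end   # tokens available after the mention
    half = budget // 2
    left_take = min(left_avail, half + max(0, half - right_avail))
    right_take = min(right_avail, budget - left_take)
    return tokens[start - left_take:end + right_take]

def fixed_size_mention(embeddings, start, end):
    """Mean-pool the mention's token embeddings into one fixed-size vector."""
    return np.asarray(embeddings[start:end]).mean(axis=0)

# Toy usage: the mention sits near the left edge, so the unused left budget
# is shifted to the right side of the window.
tokens = ["[CLS]", "Paris", "is", "the", "capital", "of", "France", "[SEP]"]
print(asymmetric_context(tokens, start=1, end=2, budget=4))
embs = np.random.rand(len(tokens), 768)       # stand-in for encoder outputs
print(fixed_size_mention(embs, 1, 2).shape)   # (768,) regardless of mention length
```

Step (1) would precede this: an off-the-shelf text summarization model compresses each entity description before encoding, which keeps the encoder input short and is one source of the hardware friendliness the abstract claims.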
|
|
|