
Machine learning-based named entity recognition via effective integration of various evidences

Published online by Cambridge University Press:  19 May 2005

GUODONG ZHOU
Affiliation:
Institute for Infocomm Research, 21 Heng Mui Keng Terrace, Singapore 119613. e-mail: zhougd@i2r.a-star.edu.sg, sujian@i2r.a-star.edu.sg
JIAN SU
Affiliation:
Institute for Infocomm Research, 21 Heng Mui Keng Terrace, Singapore 119613. e-mail: zhougd@i2r.a-star.edu.sg, sujian@i2r.a-star.edu.sg

Abstract

Named entity recognition identifies and classifies entity names in a text document into predefined categories. It resolves the “who”, “where” and “how much” problems in information extraction and leads to the resolution of the “what” and “how” problems in further processing. This paper presents a Hidden Markov Model (HMM) and proposes an HMM-based named entity recognizer, implemented as the system PowerNE. Through the HMM and an effective constraint relaxation algorithm that deals with the data sparseness problem, PowerNE is able to apply and integrate various internal and external evidences of entity names. Currently, four evidences are included: (1) a simple deterministic internal feature of the words, such as capitalization and digitalization; (2) an internal semantic feature of important triggers; (3) an internal gazetteer feature, which indicates whether the current word string appears in the provided gazetteer list; and (4) an external macro context feature, which deals with the name alias phenomenon. In this way, the named entity recognition problem is resolved effectively. PowerNE has been benchmarked with the Message Understanding Conferences (MUC) data. The evaluation shows that, using the formal training and test data of the MUC-6 and MUC-7 English named entity tasks, it achieves F-measures of 96.6 and 94.1, respectively. Compared with the best reported machine learning system, it achieves a 1.7 higher F-measure with one quarter of the training data on MUC-6, and a 3.6 higher F-measure with one ninth of the training data on MUC-7. In addition, it performs slightly better than the best reported handcrafted rule-based systems on MUC-6 and MUC-7.
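
The first evidence type, simple deterministic word features based on capitalization and digit patterns, lends itself to a compact illustration. The Python sketch below is an assumption-laden illustration of that idea only: the function name and the specific feature categories are hypothetical and are not the exact feature set or code used in PowerNE.

```python
import re

def simple_word_feature(word: str) -> str:
    """Illustrative sketch: map a word to a coarse surface-form category
    of the kind described as evidence (1) (capitalization and digitalization).
    Category names here are assumptions for illustration, not PowerNE's features."""
    if re.fullmatch(r"\d{2}", word):
        return "TwoDigitNum"            # e.g. a two-digit year such as "99"
    if re.fullmatch(r"\d{4}", word):
        return "FourDigitNum"           # e.g. a four-digit year such as "1997"
    if re.fullmatch(r"\d+", word):
        return "OtherNum"
    if re.search(r"\d", word) and re.search(r"[A-Za-z]", word):
        return "ContainsDigitAndAlpha"  # e.g. "A320"
    if word.isupper():
        return "AllCaps"                # e.g. "IBM"
    if word[:1].isupper():
        return "InitCap"                # e.g. "Singapore"
    if word.islower():
        return "Lowercase"
    return "Other"

if __name__ == "__main__":
    for w in ["1997", "A320", "IBM", "Singapore", "profits"]:
        print(w, "->", simple_word_feature(w))
```

Such deterministic categories can be computed without any annotated data, which is why, in the approach described in the abstract, they are combined with the trigger, gazetteer and macro-context evidences inside the HMM rather than used alone.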

Type
Papers
Copyright
© 2005 Cambridge University Press


Footnotes

A previous version of this paper appeared in ACL'2002 (Zhou and Su 2002).