Published online by Cambridge University Press: 13 September 2018
Summarizing content contributed by individuals can be challenging, because people make different lexical choices even when describing the same events. However, there remains a significant need to summarize such content. Examples include student responses to post-class reflective questions, product reviews, and news articles published by different agencies covering the same events. The high lexical diversity of these documents hinders a summarization system's ability to identify salient content and reduce summary redundancy. In this paper, we address this issue by introducing an integer linear programming-based summarization framework that incorporates a low-rank approximation of the sentence-word co-occurrence matrix to intrinsically group semantically similar lexical items. We conduct extensive experiments on datasets of student responses, product reviews, and news documents. Our approach compares favorably to a number of extractive baselines as well as a neural abstractive summarization system. Finally, the paper sheds light on when and why the proposed framework is effective at summarizing content with high lexical variety.
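The core idea of grouping semantically similar lexical items via a low-rank approximation can be illustrated with a minimal sketch. This is not the authors' exact formulation: it simply shows a rank-k factorization of a toy sentence-word co-occurrence matrix via truncated SVD, so that words used in the same sentences come to share latent dimensions. The matrix values and the function name `low_rank_approx` are illustrative assumptions.

```python
import numpy as np

def low_rank_approx(A, k):
    """Best rank-k approximation of A in the least-squares sense
    (Eckart-Young theorem), computed via truncated SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Toy sentence-word co-occurrence matrix: rows are sentences, columns
# are words. The first two sentences use one word group, the third
# sentence uses another, so the matrix has rank 2.
A = np.array([
    [1.0, 1.0, 0.0, 0.0],
    [1.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 1.0],
])

# A rank-2 factorization recovers this particular matrix almost exactly;
# on real data, the low-rank structure smooths over lexical variation.
A2 = low_rank_approx(A, 2)
print(np.round(A2, 2))
```

In the low-rank factors, sentences that use different but co-occurring words receive similar latent representations, which is the mechanism the abstract refers to as intrinsically grouping semantically similar lexical items.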
*This research is supported by an internal grant from the Learning Research and Development Center at the University of Pittsburgh as well as by an Andrew Mellon Predoctoral Fellowship to the first author. We are grateful to Logan Lebanoff for helping with the experiments. We also thank Muhsin Menekse, the CourseMIRROR team, and Wenting Xiong for providing or helping to collect some of our datasets. We thank Jingtao Wang, Fan Zhang, Huy Nguyen, and Zahra Rahimi for valuable suggestions about the proposed summarization algorithm.