
Imparting interpretability to word embeddings while preserving semantic structure

Published online by Cambridge University Press: 09 June 2020

Lütfi Kerem Şenel
Affiliation:
Center for Information and Language Processing (CIS), Ludwig Maximilian University (LMU), Munich, Germany
İhsan Utlu
Affiliation:
Electrical and Electronics Engineering Department, Bilkent University, Ankara, Turkey; ASELSAN Research Center, Ankara, Turkey
Furkan Şahinuç
Affiliation:
Electrical and Electronics Engineering Department, Bilkent University, Ankara, Turkey; ASELSAN Research Center, Ankara, Turkey
Haldun M. Ozaktas
Affiliation:
Electrical and Electronics Engineering Department, Bilkent University, Ankara, Turkey
Aykut Koç*
Affiliation:
Electrical and Electronics Engineering Department, Bilkent University, Ankara, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara, Turkey
*Corresponding author. Email: aykut.koc@bilkent.edu.tr

Abstract

As a ubiquitous method in natural language processing, word embeddings are extensively employed to map the semantic properties of words into dense vector representations. They capture semantic and syntactic relations among words, but the vectors are only meaningful relative to each other: neither the vector nor its individual dimensions have any absolute, interpretable meaning. We introduce an additive modification to the objective function of the embedding learning algorithm that encourages the embedding vectors of words semantically related to a predefined concept to take larger values along a specified dimension, while leaving the original semantic learning mechanism largely unaffected. In other words, we align words that are already determined to be related along predefined concepts, thereby imparting interpretability to the word embedding by assigning meaning to its vector dimensions. The predefined concepts are derived from an external lexical resource, chosen here as Roget’s Thesaurus. We observe that alignment along the chosen concepts is not limited to words in the thesaurus and extends to other related words as well. We quantify the extent of interpretability and the assignment of meaning from our experimental results, and present human evaluation results that further verify the increase in interpretability. We also demonstrate, using word-analogy and word-similarity tests and a downstream task, that the semantic coherence of the resulting vector space is preserved: the interpretability-imparted word embeddings obtained by the proposed framework do not sacrifice performance on common benchmarks.
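For concreteness, the sketch below illustrates one plausible form of such an additive term on top of a GloVe-style co-occurrence objective (Pennington et al. 2014). This is a minimal illustration under stated assumptions, not the authors' published formulation: the penalty shape (a squared shortfall below a target value tau), the weight lam, and the random concept-to-dimension assignment concept_dim are all hypothetical choices introduced here.

```python
import numpy as np

# A minimal sketch (assumed form, not the authors' exact objective):
# a GloVe-style weighted least-squares loss over co-occurrence counts,
# augmented with an additive term that encourages words belonging to a
# predefined concept group to take larger values along the dimension
# assigned to that concept.

rng = np.random.default_rng(0)

V, D = 200, 20                                   # toy vocabulary size, embedding dimension
X = rng.poisson(1.0, size=(V, V)).astype(float)  # toy co-occurrence counts
W = rng.normal(scale=0.1, size=(V, D))           # word vectors
W_c = rng.normal(scale=0.1, size=(V, D))         # context vectors
b = np.zeros(V)                                  # word biases
b_c = np.zeros(V)                                # context biases

# concept_dim[i] = dimension assigned to word i's concept group (taken,
# in the paper, from an external lexical resource such as Roget's
# Thesaurus), or -1 if word i belongs to no group. The assignment below
# is random, purely for the demo.
concept_dim = np.full(V, -1)
members = rng.choice(V, size=40, replace=False)
concept_dim[members] = rng.integers(0, D, size=40)

def total_loss(lam=0.5, tau=1.0, x_max=100.0, alpha=0.75):
    """GloVe-style loss plus an additive interpretability term (a sketch)."""
    mask = X > 0
    f = np.minimum((X / x_max) ** alpha, 1.0)    # GloVe weighting f(X_ij)
    err = W @ W_c.T + b[:, None] + b_c[None, :] - np.log(np.where(mask, X, 1.0))
    glove = np.sum(f * mask * err ** 2)

    # Additive term (assumed form): penalize the squared shortfall of the
    # concept-aligned coordinate below a target value tau, nudging that
    # coordinate upward while leaving the remaining coordinates to be
    # shaped by the co-occurrence objective.
    idx = concept_dim >= 0
    aligned = W[idx, concept_dim[idx]]
    interp = lam * np.sum(np.maximum(0.0, tau - aligned) ** 2)
    return glove + interp

print(f"initial loss: {total_loss():.1f}")
```

In a full implementation, the gradient of this combined loss would be minimized exactly as in standard GloVe training (e.g., with AdaGrad); only the extra term differs, which is consistent with the abstract's claim that the original semantic learning mechanism is left largely unaffected.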

Type
Article
Copyright
© The Author(s), 2020. Published by Cambridge University Press

References

Arora, S., Li, Y., Liang, Y., Ma, T. and Risteski, A. (2018). Linear algebraic structure of word senses, with applications to polysemy. Transactions of the Association for Computational Linguistics 6, 483–495.
Bojanowski, P., Grave, E., Joulin, A. and Mikolov, T. (2017). Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics 5, 135–146.
Bollegala, D., Alsuhaibani, M., Maehara, T. and Kawarabayashi, K. (2016). Joint word representation learning using a corpus and a semantic lexicon. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence. Phoenix, AZ, USA: Association for the Advancement of Artificial Intelligence (AAAI), pp. 2690–2696.
Camacho-Collados, J. and Pilehvar, M.T. (2018). From word to sense embeddings: a survey on vector representations of meaning. Journal of Artificial Intelligence Research 63(1), 743–788.
Chang, J., Gerrish, S., Wang, C., Boyd-Graber, J.L. and Blei, D.M. (2009). Reading tea leaves: how humans interpret topic models. In Bengio Y., Schuurmans D., Lafferty J.D., Williams C.K.I. and Culotta A. (eds.), Advances in Neural Information Processing Systems, pp. 288–296. Curran Associates, Inc.
Chen, D. and Manning, C. (2014). A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Doha, Qatar: Association for Computational Linguistics, pp. 740–750.
Chen, X., Duan, Y., Houthooft, R., Schulman, J., Sutskever, I. and Abbeel, P. (2016). InfoGAN: interpretable representation learning by information maximizing generative adversarial nets. In Lee D.D., Sugiyama M., Luxburg U.V., Guyon I. and Garnett R. (eds.), Advances in Neural Information Processing Systems, pp. 2172–2180. Curran Associates, Inc.
Das, R., Zaheer, M. and Dyer, C. (2015). Gaussian LDA for topic models with word embeddings. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Beijing, China: Association for Computational Linguistics, pp. 795–804.
De Vine, L., Kholgi, M., Zuccon, G., Sitbon, L. and Nguyen, A. (2015). Analysis of word embeddings and sequence features for clinical information extraction. In Proceedings of the Australasian Language Technology Association Workshop 2015. Parramatta, Australia, pp. 21–30.
Dufter, P. and Schütze, H. (2019). Analytical methods for interpretable ultradense word embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Hong Kong, China: Association for Computational Linguistics, pp. 1185–1191.
Faruqui, M. and Dyer, C. (2014). Community evaluation and exchange of word vectors at wordvectors.org. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations. Baltimore, MD, USA: Association for Computational Linguistics, pp. 19–24.
Faruqui, M., Tsvetkov, Y., Yogatama, D., Dyer, C. and Smith, N.A. (2015a). Sparse overcomplete word vector representations. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Beijing, China: Association for Computational Linguistics, pp. 1491–1500.
Faruqui, M., Dodge, J., Jauhar, S.K., Dyer, C., Hovy, E. and Smith, N.A. (2015b). Retrofitting word vectors to semantic lexicons. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL). Denver, CO, USA: Association for Computational Linguistics, pp. 1606–1615.
Firth, J.R. (1957a). Papers in Linguistics, 1934–1951. London: Oxford University Press.
Firth, J.R. (1957b). A synopsis of linguistic theory, 1930–1955. In Philological Society (Great Britain) (ed.), Studies in Linguistic Analysis. Oxford: Blackwell.
Fyshe, A., Talukdar, P.P., Murphy, B. and Mitchell, T.M. (2014). Interpretable semantic vectors from a joint model of brain- and text-based meaning. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Baltimore, MD, USA: Association for Computational Linguistics, pp. 489–499.
Glavaš, G. and Vulić, I. (2018). Explicit retrofitting of distributional word vectors. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Melbourne, Australia: Association for Computational Linguistics, pp. 34–45.
Goldberg, Y. and Hirst, G. (2017). Neural Network Methods in Natural Language Processing. Synthesis Lectures on Human Language Technologies. Morgan & Claypool Publishers.
Goodman, B. and Flaxman, S. (2017). European Union regulations on algorithmic decision-making and a “right to explanation”. AI Magazine 38(3), 50–57.
Harris, Z.S. (1954). Distributional structure. Word 10(2–3), 146–162.
Herbelot, A. and Vecchi, E.M. (2015). Building a shared world: mapping distributional to model-theoretic semantic spaces. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). Lisbon, Portugal: Association for Computational Linguistics, pp. 22–32.
Jang, K.-R., Myaeng, S.-H. and Kim, S.-B. (2018). Interpretable word embedding contextualization. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. Brussels, Belgium: Association for Computational Linguistics, pp. 341–343.
Jauhar, S.K., Dyer, C. and Hovy, E. (2015). Ontologically grounded multi-sense representation learning for semantic vector space models. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL). Denver, CO, USA: Association for Computational Linguistics, pp. 683–693.
Johansson, R. and Nieto, P.L. (2015). Embedding a semantic network in a word space. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL). Denver, CO, USA: Association for Computational Linguistics, pp. 1428–1433.
Iacobacci, I., Pilehvar, M.T. and Navigli, R. (2016). Embeddings for word sense disambiguation: an evaluation study. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Berlin, Germany: Association for Computational Linguistics, pp. 897–907.
Joshi, A., Tripathi, V., Patel, K., Bhattacharyya, P. and Carman, M. (2016). Are word embedding-based features useful for sarcasm detection? In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP). Austin, TX, USA: Association for Computational Linguistics, pp. 1006–1011.
Levy, O. and Goldberg, Y. (2014). Dependency-based word embeddings. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Baltimore, MD, USA: Association for Computational Linguistics, pp. 302–308.
Liu, Y., Liu, Z., Chua, T.-S. and Sun, M. (2015). Topical word embeddings. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence. Austin, TX, USA: Association for the Advancement of Artificial Intelligence (AAAI), pp. 2418–2424.
Liu, Q., Jiang, H., Wei, S., Ling, Z.-H. and Hu, Y. (2015). Learning semantic word embeddings based on ordinal knowledge constraints. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Beijing, China: Association for Computational Linguistics, pp. 1501–1511.
Luo, H., Liu, Z., Luan, H.-B. and Sun, M. (2015). Online learning of interpretable word embeddings. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). Lisbon, Portugal: Association for Computational Linguistics, pp. 1687–1692.
Mikolov, T., Chen, K., Corrado, G. and Dean, J. (2013a). Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
Mikolov, T., Le, Q.V. and Sutskever, I. (2013b). Exploiting similarities among languages for machine translation. arXiv preprint arXiv:1309.4168.
Mikolov, T., Sutskever, I., Chen, K., Corrado, G.S. and Dean, J. (2013c). Distributed representations of words and phrases and their compositionality. In Burges C.J.C., Bottou L., Welling M., Ghahramani Z. and Weinberger K.Q. (eds.), Advances in Neural Information Processing Systems, pp. 3111–3119. Curran Associates, Inc.
Miller, G.A. (1995). WordNet: a lexical database for English. Communications of the ACM 38(11), 39–41.
Moody, C.E. (2016). Mixing Dirichlet topic models and word embeddings to make lda2vec. arXiv preprint arXiv:1605.02019.
Mrkšić, N., Ó Séaghdha, D., Thomson, B., Gašić, M., Rojas-Barahona, L.M., Su, P.-H., Vandyke, D., Wen, T.-H. and Young, S. (2016). Counter-fitting word vectors to linguistic constraints. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL). San Diego, CA, USA: Association for Computational Linguistics, pp. 142–148.
Mrkšić, N., Vulić, I., Ó Séaghdha, D., Leviant, I., Reichart, R., Gašić, M., Korhonen, A. and Young, S. (2017). Semantic specialization of distributional word vector spaces using monolingual and cross-lingual constraints. Transactions of the Association for Computational Linguistics 5, 309–324.
Murphy, B., Talukdar, P.P. and Mitchell, T.M. (2012). Learning effective and interpretable semantic models using non-negative sparse embedding. In Proceedings of COLING 2012. Mumbai, India: The COLING 2012 Organizing Committee, pp. 1933–1950.
Panigrahi, A., Simhadri, H.V. and Bhattacharyya, C. (2019). Word2Sense: sparse interpretable word embeddings. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Florence, Italy: Association for Computational Linguistics, pp. 5692–5705.
Park, S., Bak, J. and Oh, A. (2017). Rotated word vector representations and their interpretability. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP). Copenhagen, Denmark: Association for Computational Linguistics, pp. 401–411.
Pennington, J., Socher, R. and Manning, C. (2014). GloVe: global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Doha, Qatar: Association for Computational Linguistics, pp. 1532–1543.
Ponti, E.M., Vulić, I., Glavaš, G., Mrkšić, N. and Korhonen, A. (2018). Adversarial propagation and zero-shot cross-lingual transfer of word vector specialization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP). Brussels, Belgium: Association for Computational Linguistics, pp. 282–293.
Roget, P.M. (1911). Roget’s Thesaurus of English Words and Phrases. New York: T.Y. Crowell Company.
Roget, P.M. (2008). Roget’s International Thesaurus, 3/E. New Delhi: Oxford & IBH Publishing Company Pvt. Limited.
Senel, L.K., Utlu, I., Yucesoy, V., Koç, A. and Cukur, T. (2018). Semantic structure and interpretability of word embeddings. IEEE/ACM Transactions on Audio, Speech, and Language Processing 26(10), 1769–1779.
Senel, L.K., Yucesoy, V., Koç, A. and Cukur, T. (2018). Interpretability analysis for Turkish word embeddings. In 26th Signal Processing and Communications Applications Conference (SIU). Izmir, Turkey: IEEE, pp. 1–4.
Senel, L.K., Yucesoy, V., Koç, A. and Cukur, T. (2017). Measuring cross-lingual semantic similarity across European languages. In 40th International Conference on Telecommunications and Signal Processing (TSP). Barcelona, Spain: IEEE, pp. 359–363.
Shi, B., Lam, W., Jameel, S., Schockaert, S. and Lai, K.P. (2017). Jointly learning word embeddings and latent topics. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval. Shinjuku, Tokyo, Japan: ACM, pp. 375–384.
Sienčnik, S.K. (2015). Adapting word2vec to named entity recognition. In Proceedings of the 20th Nordic Conference of Computational Linguistics (NODALIDA 2015). Vilnius, Lithuania: Linköping University Electronic Press, Sweden, pp. 239–243.
Socher, R., Pennington, J., Huang, E.H., Ng, A.Y. and Manning, C.D. (2011). Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing (EMNLP). Edinburgh, Scotland, UK: Association for Computational Linguistics, pp. 151–161.
Socher, R., Perelygin, A., Wu, J., Chuang, J., Manning, C.D., Ng, A.Y. and Potts, C. (2013). Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP). Seattle, WA, USA: Association for Computational Linguistics, pp. 1631–1642.
Subramanian, A., Pruthi, D., Jhamtani, H., Berg-Kirkpatrick, T. and Hovy, E. (2018). SPINE: sparse interpretable neural embeddings. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence. New Orleans, LA, USA: Association for the Advancement of Artificial Intelligence (AAAI), pp. 4921–4928.
Turian, J., Ratinov, L.-A. and Bengio, Y. (2010). Word representations: a simple and general method for semi-supervised learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Uppsala, Sweden: Association for Computational Linguistics, pp. 384–394.
Xu, C., Bai, Y., Bian, J., Gao, B., Wang, G., Liu, X. and Liu, T.-Y. (2014). RC-NET: a general framework for incorporating knowledge into word representations. In Proceedings of the 23rd ACM International Conference on Information and Knowledge Management. Shanghai, China: ACM, pp. 1219–1228.
Yu, L.-C., Wang, J., Lai, K.R. and Zhang, X. (2017). Refining word embeddings for sentiment analysis. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP). Copenhagen, Denmark: Association for Computational Linguistics, pp. 545–550.
Yu, M. and Dredze, M. (2014). Improving lexical embeddings with semantic knowledge. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Baltimore, MD, USA: Association for Computational Linguistics, pp. 545–550.
Zobnin, A. (2017). Rotations and interpretability of word embeddings: the case of the Russian language. In International Conference on Analysis of Images, Social Networks and Texts. Moscow, Russia: Springer International Publishing, pp. 116–128.