Book contents
- Frontmatter
- Contents
- List of Figures
- List of Tables
- Preface
- 1 Introduction
- 2 The Perceptron
- 3 Logistic Regression
- 4 Implementing Text Classification Using Perceptron and Logistic Regression
- 5 Feed-Forward Neural Networks
- 6 Best Practices in Deep Learning
- 7 Implementing Text Classification with Feed-Forward Networks
- 8 Distributional Hypothesis and Representation Learning
- 9 Implementing Text Classification Using Word Embeddings
- 10 Recurrent Neural Networks
- 11 Implementing Part-of-Speech Tagging Using Recurrent Neural Networks
- 12 Contextualized Embeddings and Transformer Networks
- 13 Using Transformers with the Hugging Face Library
- 14 Encoder-Decoder Methods
- 15 Implementing Encoder-Decoder Methods
- 16 Neural Architectures for Natural Language Processing Applications
- Appendix A Overview of the Python Language and Key Libraries
- Appendix B Character Encodings: ASCII and Unicode
- References
- Index
9 - Implementing Text Classification Using Word Embeddings
Published online by Cambridge University Press: 01 February 2024
Summary
In the previous chapter, we introduced word embeddings, which are real-valued vectors that encode the semantic representations of words. We discussed how to learn them and how they capture semantic information that makes them useful for downstream tasks. In this chapter, we show how to use word embeddings that have been pretrained with a variant of the algorithm discussed in the previous chapter. We show how to load them, explore some of their characteristics, and apply them to a text classification task.
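For example, pretrained embeddings such as GloVe can be loaded and explored along these lines. This is a minimal sketch assuming the gensim and NumPy libraries and gensim's downloader module; the chapter's own implementation may differ in details:

```python
# Minimal sketch: load pretrained GloVe embeddings with gensim, explore the
# embedding space, and average word vectors into a document feature vector
# that could feed a text classifier. Library choices are assumptions, not
# necessarily the chapter's own code.
import numpy as np
import gensim.downloader as api

# Download and load 300-dimensional GloVe vectors (Wikipedia + Gigaword).
glove = api.load("glove-wiki-gigaword-300")

# Explore the embedding space: words closest to "cactus" by cosine similarity.
print(glove.most_similar("cactus", topn=5))

def doc_vector(tokens, model):
    """Represent a document as the average of its word embeddings,
    skipping out-of-vocabulary tokens."""
    vectors = [model[t] for t in tokens if t in model]
    if not vectors:
        return np.zeros(model.vector_size)
    return np.mean(vectors, axis=0)

features = doc_vector("the hike passed a tall cactus".split(), glove)
print(features.shape)  # (300,)
```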
Information
- Type: Chapter
- Book: Deep Learning for Natural Language Processing: A Gentle Introduction, pp. 132–146
- Publisher: Cambridge University Press
- Print publication year: 2024