Replicating text2vec's GloVe example
Source: vignettes/pkgdown/replication/text2vec.Rmd
This vignette shows how quanteda can be used with the text2vec package to replicate its GloVe example.
Download a corpus comprising the texts used in the text2vec vignette:
wiki_corp <- quanteda.corpora::download(url = "https://www.dropbox.com/s/9mubqwpgls3qi9t/data_corpus_wiki.rds?dl=1")
First, we tokenize the corpus and get the names of the features that occur five times or more, so that we can trim the feature set before constructing the fcm:
library("quanteda")

wiki_toks <- tokens(wiki_corp)
feats <- dfm(wiki_toks, verbose = TRUE) |>
    dfm_trim(min_termfreq = 5) |>
    featnames()
## Creating a dfm from a tokens object...
## ...complete, elapsed time: 1.53 seconds.
## Finished constructing a 1 x 253,854 sparse dfm.
# keep the pads so that words that were not adjacent do not become adjacent
wiki_toks <- tokens_select(wiki_toks, feats, padding = TRUE)
wiki_fcm <- fcm(wiki_toks, context = "window", count = "weighted",
                weights = 1 / (1:5), tri = TRUE)
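To see why the padding matters, here is a small aside (not in the original vignette) on a toy example: removed tokens are kept as empty-string pads, so the surviving tokens do not become neighbours in the fcm window.
# toy example: "quick" and "fox" stay two positions apart because of the pad
toy <- tokens("the quick brown fox")
tokens_select(toy, c("quick", "fox"), padding = TRUE)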
Fit the GloVe model using text2vec (whose implementation lives in the rsparse package).
GloVe is an unsupervised learning algorithm for obtaining vector representations for words. Training is performed on aggregated global word-word co-occurrence statistics from a corpus, and the resulting representations showcase interesting linear substructures of the word vector space.
GloVe encodes the ratios of word-word co-occurrence probabilities as vector differences, which is thought to capture a crude form of the meaning associated with a word. The training objective of GloVe is to learn word vectors such that their dot product equals the logarithm of the words' probability of co-occurrence.
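For reference (this equation is not in the original vignette), the weighted least-squares objective from the GloVe paper (Pennington, Socher, and Manning 2014) is

$$ J = \sum_{i,j=1}^{V} f(X_{ij}) \left( w_i^\top \tilde{w}_j + b_i + \tilde{b}_j - \log X_{ij} \right)^2 $$

where $X_{ij}$ is the co-occurrence count of words $i$ and $j$, $w$ and $\tilde{w}$ are the main and context vectors, $b$ and $\tilde{b}$ are bias terms, and $f$ is a weighting function that caps the influence of very frequent pairs at x_max (set to 10 below).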
library("text2vec")

glove <- GlobalVectors$new(rank = 50, x_max = 10)
wv_main <- glove$fit_transform(wiki_fcm, n_iter = 10,
                               convergence_tol = 0.01, n_threads = 8)
## INFO [15:30:45.418] epoch 1, loss 0.1697
## INFO [15:30:49.505] epoch 2, loss 0.1232
## INFO [15:30:53.550] epoch 3, loss 0.1086
## INFO [15:30:57.605] epoch 4, loss 0.1008
## INFO [15:31:01.690] epoch 5, loss 0.0957
## INFO [15:31:05.764] epoch 6, loss 0.0921
## INFO [15:31:09.868] epoch 7, loss 0.0893
## INFO [15:31:13.924] epoch 8, loss 0.0871
## INFO [15:31:17.966] epoch 9, loss 0.0853
## INFO [15:31:22.033] epoch 10, loss 0.0838
dim(wv_main)
## [1] 71290 50
The model learns two sets of word vectors: main and context. Following the GloVe paper, we take the sum of the two sets of vectors, which tends to give a more accurate representation.
wv_context <- glove$components
dim(wv_context)
## [1] 50 71290
word_vectors <- wv_main + t(wv_context)
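Since wv_context is stored as dimensions x words, it must be transposed before it can be added to wv_main. A quick sanity check (an added aside, not in the original vignette):
# both matrices should index the same vocabulary in the same order
stopifnot(identical(rownames(wv_main), colnames(wv_context)))
dim(word_vectors)  # words x dimensions, same as wv_main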
Now we can find the closest word vectors for paris - france + germany:
berlin <- word_vectors["paris", , drop = FALSE] -
    word_vectors["france", , drop = FALSE] +
    word_vectors["germany", , drop = FALSE]
library("quanteda.textstats")
cos_sim <- textstat_simil(x = as.dfm(word_vectors), y = as.dfm(berlin),
                          method = "cosine")
head(sort(cos_sim[, 1], decreasing = TRUE), 5)
## paris berlin munich germany vienna
## 0.7730811 0.7067626 0.6915460 0.6726788 0.6686973
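The same ranking can be reproduced without quanteda.textstats. As a cross-check (an added aside, not in the original vignette), here is the cosine similarity computed directly with base R matrix operations:
# cosine similarity of every row of m against a single query row vector q
cosine_to_query <- function(m, q) {
    drop(m %*% t(q)) / (sqrt(rowSums(m^2)) * sqrt(sum(q^2)))
}
head(sort(cosine_to_query(word_vectors, berlin), decreasing = TRUE), 5)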
Here is another example, for london = paris - france + uk + england:
london <- word_vectors["paris", , drop = FALSE] -
    word_vectors["france", , drop = FALSE] +
    word_vectors["uk", , drop = FALSE] +
    word_vectors["england", , drop = FALSE]
cos_sim <- textstat_simil(as.dfm(word_vectors), y = as.dfm(london),
                          margin = "documents", method = "cosine")
head(sort(cos_sim[, 1], decreasing = TRUE), 5)
## uk england london at york
## 0.7649296 0.7557188 0.7450707 0.7435699 0.7427765
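To make further experiments easier, the analogy arithmetic and similarity lookup can be wrapped in a small helper function (hypothetical, not part of the original vignette):
# hypothetical helper: answer "a is to b as c is to ?" analogies
analogy <- function(a, b, c, wv = word_vectors, n = 5) {
    query <- wv[b, , drop = FALSE] - wv[a, , drop = FALSE] +
        wv[c, , drop = FALSE]
    sims <- textstat_simil(as.dfm(wv), y = as.dfm(query), method = "cosine")
    head(sort(sims[, 1], decreasing = TRUE), n)
}
analogy("france", "paris", "germany")  # "berlin" should rank near the top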