How to create a word cloud for each text file in a directory in R


I am trying to create a word cloud for each text file in a directory. They are four 2015 presidential campaign announcement speeches. I keep getting the following message:

> cname <- file.path("C:", "texts")
> cname
[1] "C:/texts"

> cname <- file.path("C:\\Users\\BonitaW\\Documents\\DATA630\\texts")
> dir(cname)
[1] "berniesandersspeechtranscript20115.txt"
[2] "hillaryclintonspeechtranscript2015.txt"
[3] "jebbushspeechtranscript2015.txt"       
[4] "randpaulspeechtranscript2015.txt"      
> library(tm)
> docs <- Corpus(DirSource(cname)) 
> summary (docs)
                                   Length
berniesandersspeechtranscript20115.txt 2     
hillaryclintonspeechtranscript2015.txt 2     
jebbushspeechtranscript2015.txt        2     
randpaulspeechtranscript2015.txt       2     
                                   Class            
berniesandersspeechtranscript20115.txt PlainTextDocument
hillaryclintonspeechtranscript2015.txt PlainTextDocument
jebbushspeechtranscript2015.txt        PlainTextDocument
randpaulspeechtranscript2015.txt       PlainTextDocument
                                   Mode
berniesandersspeechtranscript20115.txt list
hillaryclintonspeechtranscript2015.txt list
jebbushspeechtranscript2015.txt        list
randpaulspeechtranscript2015.txt       list
> docs <- tm_map(docs, removePunctuation) 
> docs <- tm_map(docs, removeNumbers)
> docs <- tm_map(docs, removeWords, stopwords("english"))
> library(SnowballC) 
Warning message:
package ‘SnowballC’ was built under R version 3.1.3 
> docs <- tm_map(docs, stemDocument)
> docs <- tm_map(docs, stripWhitespace) 
> docs <- tm_map(docs, PlainTextDocument)
> dtm <- DocumentTermMatrix(docs)
> dtm
<<DocumentTermMatrix (documents: 4, terms: 1887)>>
Non-/sparse entries: 2862/4686
Sparsity           : 62%
Maximal term length: 20
Weighting          : term frequency (tf)
> tdm <- TermDocumentMatrix(docs) 
> tdm
<<TermDocumentMatrix (terms: 1887, documents: 4)>>
Non-/sparse entries: 2862/4686
Sparsity           : 62%
Maximal term length: 20
Weighting          : term frequency (tf)

> library(wordcloud)
> Berniedoc <- wordcloud(names(freq), freq, min.freq=25)   
Warning message:
In wordcloud(names(freq), freq, min.freq = 25) :
american could not be fit on page. It will not be plotted.

Originally I was able to plot Berniedoc, but I lost the graphic and now it will not plot:

 Berniedoc <- wordcloud(names(freq), freq, min.freq=25)   
Warning messages:
1: In wordcloud(names(freq), freq, min.freq = 25) :
american could not be fit on page. It will not be plotted.
2: In wordcloud(names(freq), freq, min.freq = 25) :
 work could not be fit on page. It will not be plotted.
3: In wordcloud(names(freq), freq, min.freq = 25) :
countri could not be fit on page. It will not be plotted.
4: In wordcloud(names(freq), freq, min.freq = 25) :
year could not be fit on page. It will not be plotted.
5: In wordcloud(names(freq), freq, min.freq = 25) :
new could not be fit on page. It will not be plotted.
6: In wordcloud(names(freq), freq, min.freq = 25) :
see could not be fit on page. It will not be plotted.
7: In wordcloud(names(freq), freq, min.freq = 25) :
and could not be fit on page. It will not be plotted.
8: In wordcloud(names(freq), freq, min.freq = 25) :
can could not be fit on page. It will not be plotted.
9: In wordcloud(names(freq), freq, min.freq = 25) :
time could not be fit on page. It will not be plotted.

Can you tell me what I am doing wrong? Could it be the scaling? Or should I change 'Berniedoc' to something else?

r text-mining word-cloud term-document-matrix quanteda
3 Answers
0 votes

You should add a "max.words" limit on the number of words:

Berniedoc <- wordcloud(names(freq), freq, min.freq=25, max.words = 50)
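Note that the question never shows how `freq` was built. A minimal, self-contained sketch of the whole pipeline, using a hypothetical two-document toy corpus in place of the four speech files, would be:

```r
library(tm)
library(wordcloud)

# Toy corpus standing in for the speech files (hypothetical text)
docs <- Corpus(VectorSource(c(
  "the people the people america america america work",
  "work work future future america"
)))
docs <- tm_map(docs, removeWords, stopwords("english"))

# Term frequencies for one document come from a column of the
# TermDocumentMatrix (terms are rows, documents are columns)
tdm  <- TermDocumentMatrix(docs)
m    <- as.matrix(tdm)
freq <- sort(m[, 1], decreasing = TRUE)  # frequencies for document 1

wordcloud(names(freq), freq, min.freq = 1, max.words = 50)
```

With the question's own data, `m[, 1]` would give the frequencies for the Bernie Sanders transcript, and `min.freq = 25` would again filter out the rarer stems.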

0 votes

This would be simpler with a reproducible example. I don't know what is in "C:\\Users\\BonitaW\\Documents\\DATA630\\texts", but I can tell you that I came here having just solved a very similar problem.

All you need to do is adjust the `scale` parameter of `wordcloud()`. In particular, the first number controls the range (not the size).
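For instance, shrinking the first element of `scale` keeps long, frequent words like "american" on the page. A sketch with hypothetical frequencies standing in for the question's `freq` vector:

```r
library(wordcloud)

# Hypothetical frequencies; the stems mirror the question's warnings
freq <- c(american = 46, work = 35, countri = 30, year = 28,
          new = 27, see = 26, time = 25, peopl = 24)

# scale = c(max, min) sets the size range of the plotted words;
# lowering the maximum from the default 4 to 3 makes it far less
# likely that a word "could not be fit on page"
wordcloud(names(freq), freq, min.freq = 25, scale = c(3, 0.5))
```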


0 votes

How about an alternative using the quanteda package?

You will need to change the directory reference for your own example. Setting the size of the pdf window should make your warnings disappear.

require(quanteda)

# load the files into a quanteda corpus
myCorpus <- corpus(textfile("~/Dropbox/QUANTESS/corpora/inaugural/*.txt"))  # textfile() is from older quanteda; newer versions use readtext::readtext()
ndoc(myCorpus)
## [1] 57

# create a document-feature matrix, removing stopwords
myDfm <- dfm(myCorpus, remove = stopwords("english"))
## Creating a dfm from a corpus ...
## ... lowercasing
## ... tokenizing
## ... indexing 57 documents
## ... shaping tokens into data.table, found 134,024 total tokens
## ... ignoring 174 feature types, discarding 69,098 total features (51.6%)
## ... summing tokens by document
## ... indexing 8,958 feature types
## ... building sparse matrix
## ... created a 57 x 8958 sparse dfm
## ... complete. Elapsed time: 0.256 seconds.

# just do first four
for (i in 1:4) {
    pdf(file = paste0("~/tmp/", docnames(myCorpus)[i], ".pdf"), height=12, width=12)
    textplot_wordcloud(myDfm[i, ])  # pass through any arguments you wish to wordcloud()
    dev.off()
}
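The same pdf-device idea carries over to the tm pipeline from the question. A self-contained sketch, again with a hypothetical toy corpus in place of the four speech files:

```r
library(tm)
library(wordcloud)

# Toy two-document corpus standing in for the speech files
docs <- Corpus(VectorSource(c(
  "america america work future people",
  "people people change change change"
)))
tdm <- TermDocumentMatrix(docs)
m   <- as.matrix(tdm)  # terms x documents

# One pdf per document, as in the quanteda loop above; a 12 x 12
# inch device leaves enough room that words are unlikely to be
# dropped with a "could not be fit on page" warning
for (i in seq_len(ncol(m))) {
  freq <- sort(m[, i], decreasing = TRUE)
  pdf(file = paste0("speech", i, ".pdf"), height = 12, width = 12)
  wordcloud(names(freq), freq, min.freq = 1)
  dev.off()
}
```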