I'm building a quick sentiment-analysis console app with Python, TextBlob and NLTK.
At the moment I'm using a link to a Spanish-language Wikipedia article, so I don't need to translate it and can use NLTK's Spanish stopwords list. But what if I want this code to work for links in other languages?
If I add the line textFinal = textFinal.translate(to="es")
just below the textFinal = TextBlob(texto)
line in the code below, I get an error because it can't translate Spanish into Spanish.
Can I guard against that with try/except? And is there a way to have the code translate to a different language (and use a different stopwords list) depending on the language of the link given to the app?
import nltk
nltk.download('stopwords')
from nltk import word_tokenize
from nltk.corpus import stopwords
import string
from textblob import TextBlob, Word
import urllib.request
from bs4 import BeautifulSoup
response = urllib.request.urlopen('https://es.wikipedia.org/wiki/Valencia')
html = response.read()
soup = BeautifulSoup(html,'html5lib')
text = soup.get_text(strip = True)
tokens = word_tokenize(text)
tokens = [w.lower() for w in tokens]
table = str.maketrans('', '', string.punctuation)
stripped = [w.translate(table) for w in tokens]
words = [word for word in stripped if word.isalpha()]
stop_words = set(stopwords.words('spanish'))
words = [w for w in words if not w in stop_words]
with open('palabras.txt', 'w') as f:
    for word in words:
        f.write(" " + word)
with open('palabras.txt', 'r') as myfile:
    texto = myfile.read().replace('\n', '')
textFinal=TextBlob(texto)
print (textFinal.sentiment)
freq = nltk.FreqDist(words)
freq.plot(20, cumulative=False)
Take a look at the langdetect package. You can detect the language of the page you are feeding in and skip the translation when it already matches the target language. Something like the following:
import string
import urllib.request
import nltk
from bs4 import BeautifulSoup
from langdetect import detect
from nltk import word_tokenize
from nltk.corpus import stopwords
from textblob import TextBlob, Word
nltk.download("stopwords")
nltk.download("punkt")  # required by word_tokenize
response = urllib.request.urlopen("https://es.wikipedia.org/wiki/Valencia")
html = response.read()
soup = BeautifulSoup(html, "html5lib")
text = soup.get_text(strip=True)
lang = detect(text)
tokens = word_tokenize(text)
tokens = [w.lower() for w in tokens]
table = str.maketrans("", "", string.punctuation)
stripped = [w.translate(table) for w in tokens]
words = [word for word in stripped if word.isalpha()]
stop_words = set(stopwords.words("spanish"))
words = [w for w in words if w not in stop_words]
with open("palabras.txt", "w", encoding="utf-8") as f:
    for word in words:
        f.write(" " + word)
with open("palabras.txt", "r", encoding="utf-8") as myfile:
    texto = myfile.read().replace("\n", "")
textFinal = TextBlob(texto)
translate_to = "es"
# only translate when the detected language differs from the target
if lang != translate_to:
    textFinal = textFinal.translate(to=translate_to)
print(textFinal.sentiment)
freq = nltk.FreqDist(words)
freq.plot(20, cumulative=False)
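Note that the code above still hardcodes the Spanish stopwords list. To also pick the stopwords list from the detected language, one option is a small lookup table mapping langdetect's ISO 639-1 codes to NLTK's corpus names. A minimal sketch — the table below is a hand-picked subset (an assumption; `stopwords.fileids()` lists every language NLTK actually ships):

```python
# Map ISO 639-1 codes (as returned by langdetect.detect) to the names
# NLTK's stopwords corpus uses. Hand-picked subset; extend as needed.
LANG_TO_STOPWORDS = {
    "es": "spanish",
    "en": "english",
    "fr": "french",
    "de": "german",
    "it": "italian",
    "pt": "portuguese",
}

def stopwords_name(lang_code, default="spanish"):
    """Return the NLTK stopwords list name for a detected language code,
    falling back to `default` for languages not in the table."""
    return LANG_TO_STOPWORDS.get(lang_code, default)
```

You could then replace the hardcoded line with `stop_words = set(stopwords.words(stopwords_name(lang)))`. As for the try/except idea from the question: TextBlob raises `textblob.exceptions.NotTranslated` when source and target languages match, so catching that would also work, but checking the detected language up front avoids the failed round trip entirely.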