What is the right way to compare two short-text corpora for semantic similarity in an unsupervised fashion? Comparing the LDA topic distributions of the two does not look like a solution, because for short documents the inferred topics do not capture the semantics well. Chunking does not help either, since consecutive tweets need not be about the same topic. Would, for example, building a cosine-similarity matrix between the document TF-IDF vectors of the two corpora be a good approach?
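For concreteness, the TF-IDF baseline asked about could look like the following minimal sketch, assuming scikit-learn is available; the two toy corpora and the mean-of-pairwise-cosines aggregation are illustrative assumptions, not a prescribed method:

#Minimal sketch of the TF-IDF baseline (assumes scikit-learn is installed);
#the toy corpora and the mean-of-cosines aggregation are illustrative only
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus_a = ["we rented a car", "the trip to new york"]          #toy data
corpus_b = ["the vehicle broke down", "our journey was long"]   #toy data

#Fit one vocabulary over both corpora so the vectors share a feature space
vectorizer = TfidfVectorizer().fit(corpus_a + corpus_b)
vecs_a = vectorizer.transform(corpus_a)
vecs_b = vectorizer.transform(corpus_b)

#Pairwise cosine similarities: rows = docs of corpus_a, cols = docs of corpus_b
sim_matrix = cosine_similarity(vecs_a, vecs_b)
print(sim_matrix)
print(sim_matrix.mean())  #one crude corpus-level similarity score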
Here is one approach, found here. The higher the similarity score, the closer the two sentences are semantically.
#Invoke libraries (requires the NLTK data packages 'punkt',
#'averaged_perceptron_tagger' and 'wordnet' to be downloaded)
from nltk import pos_tag, word_tokenize
from nltk.corpus import wordnet as wn

#Build functions to compute similarity
def ptb_to_wn(tag):
    """Map a Penn Treebank POS tag to a WordNet POS tag."""
    if tag.startswith('N'):
        return 'n'
    if tag.startswith('V'):
        return 'v'
    if tag.startswith('J'):
        return 'a'
    if tag.startswith('R'):
        return 'r'
    return None

def tagged_to_synset(word, tag):
    """Return the first WordNet synset for a tagged word, or None."""
    wn_tag = ptb_to_wn(tag)
    if wn_tag is None:
        return None
    synsets = wn.synsets(word, wn_tag)
    return synsets[0] if synsets else None

def sentence_similarity(s1, s2):
    """Directed similarity: for each synset in s1, take its best
    path similarity against s2, then average."""
    s1 = pos_tag(word_tokenize(s1))
    s2 = pos_tag(word_tokenize(s2))
    synsets1 = [tagged_to_synset(*tagged_word) for tagged_word in s1]
    synsets2 = [tagged_to_synset(*tagged_word) for tagged_word in s2]
    #Drop words that could not be mapped to a synset
    synsets1 = [ss for ss in synsets1 if ss]
    synsets2 = [ss for ss in synsets2 if ss]
    score, count = 0.0, 0
    for synset in synsets1:
        #path_similarity returns None when no path connects two synsets,
        #so filter those out before taking the maximum
        sims = [synset.path_similarity(ss) for ss in synsets2]
        sims = [sim for sim in sims if sim is not None]
        if sims:
            score += max(sims)
            count += 1
    #Average the values; guard against sentences with no usable synsets
    return score / count if count else 0.0

#compute the symmetric sentence similarity
def symSentSim(s1, s2):
    return (sentence_similarity(s1, s2) + sentence_similarity(s2, s1)) / 2

s1 = 'We rented a vehicle to drive to New York'
s2 = 'The car broke down on our jouney'
s1tos2 = symSentSim(s1, s2)
print(s1tos2)
#0.142509920635
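To compare two corpora rather than two sentences with this measure, one simple aggregation (an illustrative sketch, not part of the original answer) is to average symSentSim over all cross-corpus sentence pairs, reusing the functions above:

#Hypothetical corpus-level aggregation: average the symmetric sentence
#similarity over all cross-corpus pairs (O(|A|*|B|) calls, so only
#practical for small corpora)
def corpus_similarity(corpus_a, corpus_b):
    pairs = [(a, b) for a in corpus_a for b in corpus_b]
    return sum(symSentSim(a, b) for a, b in pairs) / len(pairs)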