I am working with Spark ML, trying to build fuzzy matching using Spark's out-of-the-box functionality. Along the way I am building NGrams with n = 2. However, some rows in my dataset contain single words, on which my Spark pipeline fails. Regardless of Spark, I'd like to know what a generic approach to this would be, i.e. what to do when a row has fewer tokens than n?
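To make this concrete, here is a minimal sketch of the kind of pipeline I mean (Tokenizer followed by NGram with n = 2; the column names are just for illustration, not my actual code):

import org.apache.spark.ml.feature.{NGram, Tokenizer}

val df = spark.createDataFrame(Seq(
  (0, "hello my friend"),
  (1, "singleword")
)).toDF("id", "text")

val tokenizer = new Tokenizer().setInputCol("text").setOutputCol("tokens")
val bigrams = new NGram().setN(2).setInputCol("tokens").setOutputCol("bigrams")

// Row 1 has a single token, so its "bigrams" column comes out as an empty
// array, which is what downstream pipeline stages then choke on.
bigrams.transform(tokenizer.transform(df)).select("bigrams").show(false)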
A Scala approach. In general it should handle a single word as well and not fail or crash: using plain Scala sliding rather than MLlib, a single word simply comes back as a window of one. Whether to parse into sentences first is of course debatable. Like this:
val rdd = sc.parallelize(Array("Hello my Friend. How are",
                               "you today? bye my friend.",
                               "singleword"))
rdd.map {
  // Split each line into substrings by periods
  _.split('.').map { substrings =>
    // Trim substrings and then tokenize on spaces
    substrings.trim.split(' ').map { _.replaceAll("""\W""", "").toLowerCase() }.
      // Find bigrams, etc.
      sliding(2)
  }.
    // Flatten, and map the ngrams to concatenated strings
    flatMap { identity }.map { _.mkString(" ") }.
    // Group the bigrams and count their frequency
    groupBy { identity }.mapValues { _.size }
}.
  // Reduce to get a global count, then collect
  flatMap { identity }.reduceByKey(_ + _).collect.
  // Print
  foreach { x => println(x._1 + ", " + x._2) }
This does not fail on "singleword", but simply gives you the single word back:
you today, 1
hello my, 1
singleword, 1
my friend, 2
how are, 1
bye my, 1
today bye, 1
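This works because that is simply how Scala's own sliding behaves: when a collection has fewer elements than the window size, it yields one shorter window instead of nothing, and instead of failing. For example, in the REPL:

// sliding(2) on a single-element collection yields one window of size 1
Array("singleword").sliding(2).toList
// res: List[Array[String]] = List(Array(singleword))

Array("bye", "my", "friend").sliding(2).toList
// res: List[Array[String]] = List(Array(bye, my), Array(my, friend))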
Using MLlib and traversing over the lines, with this input:
the quick brown fox.
singleword.
two words.
Using:
import org.apache.spark.mllib.rdd.RDDFunctions._

val wordsRdd = sc.textFile("/FileStore/tables/sliding.txt", 1)
val wordsRDDTextSplit = wordsRdd
  // Tokenize each line on spaces and flatten into a single RDD of words
  .map(line => line.trim.split(" ")).flatMap(x => x)
  .map(x => x.toLowerCase())
  // Normalize punctuation: drop commas; collapse !, ? and runs of periods into a single period
  .map(x => x.replaceAll(",{1,}", ""))
  .map(x => x.replaceAll("!{1,}", "."))
  .map(x => x.replaceAll("\\?{1,}", "."))
  .map(x => x.replaceAll("\\.{1,}", "."))
  .map(x => x.replaceAll("\\W+", "."))
  // Drop tokens that were pure punctuation or empty, then strip the trailing period
  .filter(_ != ".").filter(_ != "")
  .map(x => x.replace(".", ""))
  // Sliding window of 2 over the whole RDD of words, i.e. across line boundaries
  .sliding(2)
  .collect
You get:
wordsRDDTextSplit: Array[Array[String]] = Array(Array(the, quick), Array(quick, brown), Array(brown, fox), Array(fox, singleword), Array(singleword, two), Array(two, words))
Note that I parse the lines differently here.

When the above is run against a file containing just one line with a single word, the output is empty:
wordsRDDTextSplit: Array[Array[String]] = Array()
So you see that you can choose whether to process across line boundaries or not, and the single-word case is handled either way.
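If a downstream stage really does require non-empty n-grams, a generic guard is to fall back to the raw tokens whenever there are fewer of them than n. A sketch of mine (the helper name is made up):

// Emit normal n-grams when possible; otherwise emit the tokens we do have
// as a single degenerate gram, so short rows never produce an empty result.
def ngramsOrTokens(tokens: Array[String], n: Int): Seq[String] =
  if (tokens.length < n) Seq(tokens.mkString(" "))
  else tokens.sliding(n).map(_.mkString(" ")).toSeq

// ngramsOrTokens(Array("singleword"), 2)            -> Seq("singleword")
// ngramsOrTokens(Array("the", "quick", "brown"), 2) -> Seq("the quick", "quick brown")

This is essentially what the plain-Scala sliding version above already does implicitly; with the ML NGram transformer the same idea could be applied in a UDF over the token column.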