I want to apply lemmatization to a text file:
surprise heard thump opened door small seedy man clasping package wrapped.
upgrading system found review spring 2008 issue moody audio backed.
omg left gotta wrap review order asap . understand hand delivered dali lama
speak hands wear earplugs lives . listen maintain link long .
cables cables finally able hear gem long rumored music .
...
and the expected output is:
surprise heard thump open door small seed man clasp package wrap.
upgrade system found review spring 2008 issue mood audio back.
omg left gotta wrap review order asap . understand hand deliver dali lama
speak hand wear earplug live . listen maintain link long .
cable cable final able hear gem long rumor music .
...
Can anyone help me? What is the simplest way to implement lemmatization in Scala and Spark?
There is a function for this in the book "Advanced Analytics with Spark", in the chapter on lemmatization:
import java.util.Properties
import scala.collection.JavaConversions._
import scala.collection.mutable.ArrayBuffer

import edu.stanford.nlp.pipeline._
import edu.stanford.nlp.ling.CoreAnnotations._

val plainText = sc.parallelize(List("Sentence to be processed."))
val stopWords = Set("stopWord")

def plainTextToLemmas(text: String, stopWords: Set[String]): Seq[String] = {
  // Build a CoreNLP pipeline: tokenize, split sentences, POS-tag, lemmatize
  val props = new Properties()
  props.put("annotators", "tokenize, ssplit, pos, lemma")
  val pipeline = new StanfordCoreNLP(props)
  val doc = new Annotation(text)
  pipeline.annotate(doc)
  val lemmas = new ArrayBuffer[String]()
  val sentences = doc.get(classOf[SentencesAnnotation])
  for (sentence <- sentences; token <- sentence.get(classOf[TokensAnnotation])) {
    val lemma = token.get(classOf[LemmaAnnotation])
    // Keep lemmas longer than two characters that are not stop words
    if (lemma.length > 2 && !stopWords.contains(lemma)) {
      lemmas += lemma.toLowerCase
    }
  }
  lemmas
}

val lemmatized = plainText.map(plainTextToLemmas(_, stopWords))
lemmatized.foreach(println)
Now just apply it to every line with a mapper:
val lemmatized = plainText.map(plainTextToLemmas(_, stopWords))
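For a text file like the one at the top of the question, a minimal end-to-end sketch could look like this (the input and output paths are hypothetical; joining the lemmas back with spaces reproduces the expected output format):

val lines = sc.textFile("/path/to/reviews.txt")   // one review fragment per line (hypothetical path)
val lemmatizedLines = lines.map(line => plainTextToLemmas(line, stopWords).mkString(" "))
lemmatizedLines.saveAsTextFile("/path/to/lemmatized")   // or collect() while experimenting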
Edit:
I added the following line to the code:
import scala.collection.JavaConversions._
It is needed because otherwise sentences is a Java List rather than a Scala collection. This should now compile without problems.
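If you prefer an explicit conversion over the implicit one, the same loop can be written with JavaConverters instead (just a sketch of the alternative, not what the book uses):

import scala.collection.JavaConverters._

for (sentence <- sentences.asScala;
     token <- sentence.get(classOf[TokensAnnotation]).asScala) {
  // same filtering and collecting as in the loop above
}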
I used Scala 2.10.4 and the following stanford.nlp dependencies:
<dependency>
  <groupId>edu.stanford.nlp</groupId>
  <artifactId>stanford-corenlp</artifactId>
  <version>3.5.2</version>
</dependency>
<dependency>
  <groupId>edu.stanford.nlp</groupId>
  <artifactId>stanford-corenlp</artifactId>
  <version>3.5.2</version>
  <classifier>models</classifier>
</dependency>
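If you build with sbt instead of Maven, an equivalent (a sketch, same coordinates) would be:

// build.sbt
libraryDependencies ++= Seq(
  "edu.stanford.nlp" % "stanford-corenlp" % "3.5.2",
  "edu.stanford.nlp" % "stanford-corenlp" % "3.5.2" classifier "models"
)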
You can also look at the stanford.nlp page, which has many examples (in Java): http://nlp.stanford.edu/software/corenlp.shtml.
Edit:
MapPartitions version, which creates the CoreNLP pipeline once per partition instead of once per record, although I don't know whether it will speed the job up significantly:
def plainTextToLemmas(text: String, stopWords: Set[String], pipeline: StanfordCoreNLP): Seq[String] = {
  // Same as above, but the pipeline is passed in so it can be reused across records
  val doc = new Annotation(text)
  pipeline.annotate(doc)
  val lemmas = new ArrayBuffer[String]()
  val sentences = doc.get(classOf[SentencesAnnotation])
  for (sentence <- sentences; token <- sentence.get(classOf[TokensAnnotation])) {
    val lemma = token.get(classOf[LemmaAnnotation])
    if (lemma.length > 2 && !stopWords.contains(lemma)) {
      lemmas += lemma.toLowerCase
    }
  }
  lemmas
}

val lemmatized = plainText.mapPartitions(p => {
  // One pipeline per partition, shared by all records in that partition
  val props = new Properties()
  props.put("annotators", "tokenize, ssplit, pos, lemma")
  val pipeline = new StanfordCoreNLP(props)
  p.map(q => plainTextToLemmas(q, stopWords, pipeline))
})
lemmatized.foreach(println)
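As a further optional tweak, the stop-word set could be shared as a broadcast variable instead of being captured in the closure; a sketch under that assumption, keeping the same plainTextToLemmas signature:

val stopWordsBC = sc.broadcast(stopWords)

val lemmatized = plainText.mapPartitions { it =>
  val props = new Properties()
  props.put("annotators", "tokenize, ssplit, pos, lemma")
  val pipeline = new StanfordCoreNLP(props)   // still one pipeline per partition
  it.map(line => plainTextToLemmas(line, stopWordsBC.value, pipeline))
}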