I have been trying to edit my connection string so that I can upload my website to the server. I'm not very familiar with this, and I'm getting the exception: Keyword not supported: 'server'.
Here is my connection string:
<add name="AlBayanEntities" connectionString="Server=xx.xx.xxx.xxx,xxxx;Database=AlBayan;Uid=bayan;Password=xxxxx;" providerName="System.Data.EntityClient" />
I have also tried embedding this string into my old connection string, which works perfectly fine locally, but no luck.
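The exception usually means that the EntityClient provider is being handed a plain SQL Server connection string. With providerName="System.Data.EntityClient", Entity Framework expects an EntityClient-style string that wraps the SQL Server settings, roughly like the sketch below; the metadata resource names (AlBayanModel.csdl/ssdl/msl) are placeholders that would have to match the actual .edmx model, and a plain Server=/Database= string is only understood when providerName is "System.Data.SqlClient".

<add name="AlBayanEntities"
     connectionString="metadata=res://*/AlBayanModel.csdl|res://*/AlBayanModel.ssdl|res://*/AlBayanModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=xx.xx.xxx.xxx,xxxx;Initial Catalog=AlBayan;User ID=bayan;Password=xxxxx&quot;"
     providerName="System.Data.EntityClient" />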
Tags: asp.net, ado.net, entity-framework, connection-string, sql-server-2008
I am trying to use StanfordCoreNLP to distinguish singular and plural nouns. I started from the code at http://nlp.stanford.edu/software/corenlp.shtml. In NetBeans 8.0 I opened a new Java project, downloaded stanford-corenlp-full-2014-06-16, and added its jar files (including the models jar) to my project:

The code of the SingularORPlural class:
import java.util.LinkedList;
import java.util.List;
import java.util.Properties;
import edu.stanford.nlp.ling.CoreAnnotations.LemmaAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.SentencesAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.TokensAnnotation;
import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.util.CoreMap;
/**
*
* @author ha
*/
public class SingularORPlural {
    protected StanfordCoreNLP pipeline;

    public SingularORPlural() {
        // Create StanfordCoreNLP object properties, with POS tagging
        // (required for lemmatization), and lemmatization
        Properties props;
        props = new Properties();
        props.put("annotators", "tokenize, ssplit, pos, lemma");
        /*
         * This is a pipeline that takes in a string and …
         */
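For the singular/plural question, here is a minimal, self-contained sketch of one common approach (the class name PluralCheck and the sample sentence are illustrative, not from the original project): run the tokenize/ssplit/pos/lemma pipeline and inspect the Penn Treebank POS tag of each noun, where NN/NNP are singular and NNS/NNPS are plural; the lemma is also available if the surface form needs to be compared with its base form.

import java.util.Properties;
import edu.stanford.nlp.ling.CoreAnnotations.LemmaAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.PartOfSpeechAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.SentencesAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.TokensAnnotation;
import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.util.CoreMap;

public class PluralCheck { // illustrative class name

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("annotators", "tokenize, ssplit, pos, lemma");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

        Annotation document = new Annotation("The cats chased a mouse.");
        pipeline.annotate(document);

        for (CoreMap sentence : document.get(SentencesAnnotation.class)) {
            for (CoreLabel token : sentence.get(TokensAnnotation.class)) {
                String pos = token.get(PartOfSpeechAnnotation.class);
                String lemma = token.get(LemmaAnnotation.class);
                // Penn Treebank tags: NN/NNP = singular noun, NNS/NNPS = plural noun
                if (pos.startsWith("NN")) {
                    boolean plural = pos.equals("NNS") || pos.equals("NNPS");
                    System.out.println(token.word() + " (lemma: " + lemma + ") -> "
                            + (plural ? "plural" : "singular"));
                }
            }
        }
    }
}

The POS tag is usually a more reliable signal than comparing token.word() with its lemma, since the lemmatizer's output can differ from the surface form for reasons other than grammatical number.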
In my code, I get Person recognition from the first classifier. For the second classifier, which I created myself, I added some words to be recognized or annotated as Organization, but then it does not annotate Person. I need the benefit of both of them. How can I do that?
I'm using NetBeans, and here is the code:
String serializedClassifier = "classifiers/english.all.3class.distsim.crf.ser.gz";
String serializedClassifier2 = "/Users/ha/stanford-ner-2014-10-26/classifiers/dept-model.ser.gz";
if (args.length > 0) {
    serializedClassifier = args[0];
}
// Load the stock 3-class English model and the custom model
AbstractSequenceClassifier<CoreLabel> classifier = CRFClassifier.getClassifier(serializedClassifier);
AbstractSequenceClassifier<CoreLabel> classifier2 = CRFClassifier.getClassifier(serializedClassifier2);
// Run both classifiers over the same input file
String fileContents = IOUtils.slurpFile("/Users/ha/NetBeansProjects/NERtry/src/nertry/input.txt");
List<List<CoreLabel>> out = classifier.classify(fileContents);
List<List<CoreLabel>> out2 = classifier2.classify(fileContents);
// Print each token with the label assigned by the 3-class model
for (List<CoreLabel> sentence : out) {
    System.out.print("\nenglish.all.3class.distsim.crf.ser.gz: ");
    for (CoreLabel word : sentence) {
        System.out.print(word.word() + '/' + word.get(CoreAnnotations.AnswerAnnotation.class) + ' ');
    }
    // Then print the labels assigned by the custom model
    for (List<CoreLabel> sentence2 : out2) {
        System.out.print("\ndept-model.ser.gz");
        for (CoreLabel word2 : …
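One straightforward way to benefit from both models is to keep running the two classifiers, as above, and merge their per-token answers, preferring the custom model's label whenever it is not the background symbol "O". The sketch below assumes both models segment the input into the same sentences and tokens (if they do not, merging on the character offsets returned by classifyToCharacterOffsets is safer); the class name CombineNerOutputs is illustrative. It may also be worth looking at NERClassifierCombiner, which ships with Stanford NER for stacking several serialized CRFs, but its exact constructor should be checked against your version.

import java.util.List;
import edu.stanford.nlp.ie.AbstractSequenceClassifier;
import edu.stanford.nlp.ie.crf.CRFClassifier;
import edu.stanford.nlp.io.IOUtils;
import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.ling.CoreLabel;

public class CombineNerOutputs { // illustrative class name

    public static void main(String[] args) throws Exception {
        // The same two models as in the snippet above
        AbstractSequenceClassifier<CoreLabel> general =
                CRFClassifier.getClassifier("classifiers/english.all.3class.distsim.crf.ser.gz");
        AbstractSequenceClassifier<CoreLabel> custom =
                CRFClassifier.getClassifier("/Users/ha/stanford-ner-2014-10-26/classifiers/dept-model.ser.gz");

        String fileContents = IOUtils.slurpFile("/Users/ha/NetBeansProjects/NERtry/src/nertry/input.txt");
        List<List<CoreLabel>> generalOut = general.classify(fileContents);
        List<List<CoreLabel>> customOut = custom.classify(fileContents);

        // Walk both outputs in parallel; assumes identical sentence and token segmentation
        for (int s = 0; s < generalOut.size(); s++) {
            List<CoreLabel> sent = generalOut.get(s);
            List<CoreLabel> sentCustom = customOut.get(s);
            for (int t = 0; t < sent.size(); t++) {
                String fromGeneral = sent.get(t).get(CoreAnnotations.AnswerAnnotation.class);
                String fromCustom = sentCustom.get(t).get(CoreAnnotations.AnswerAnnotation.class);
                // Keep the custom label (e.g. ORGANIZATION) when the custom model found something,
                // otherwise fall back to the 3-class model (PERSON, LOCATION, ORGANIZATION)
                String merged = !"O".equals(fromCustom) ? fromCustom : fromGeneral;
                System.out.print(sent.get(t).word() + '/' + merged + ' ');
            }
            System.out.println();
        }
    }
}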