When we run our coreference resolution program it throws an error. How can I solve it?
I am new to coreference resolution. When we run the program it throws an error that I am having a hard time solving. Please help.
import java.util.Properties;

import edu.stanford.nlp.coref.CorefCoreAnnotations;
import edu.stanford.nlp.coref.data.CorefChain;
import edu.stanford.nlp.coref.data.Mention;
import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.util.CoreMap;

Annotation document = new Annotation("Barack Obama was born in Hawaii. He is the president. Obama was elected in 2008.");
Properties props = new Properties();
props.setProperty("annotators", "tokenize,ssplit,pos,lemma,ner,parse,coref");
StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
pipeline.annotate(document);
System.out.println("---");
System.out.println("coref chains");
// Print every coreference chain found in the document.
for (CorefChain cc : document.get(CorefCoreAnnotations.CorefChainAnnotation.class).values()) {
    System.out.println("\t" + cc);
}
// Print the coref mentions detected in each sentence.
for (CoreMap sentence : document.get(CoreAnnotations.SentencesAnnotation.class)) {
    System.out.println("---");
    System.out.println("mentions");
    for (Mention m : sentence.get(CorefCoreAnnotations.CorefMentionsAnnotation.class)) {
        System.out.println("\t" + m);
    }
}
The error that occurs is as follows:

Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.util.Arrays.copyOfRange(Unknown Source)
at java.lang.String.<init>(Unknown Source)
at edu.stanford.nlp.util.StringUtils.splitOnChar(StringUtils.java:537)
at edu.stanford.nlp.coref.data.Dictionaries.loadGenderNumber(Dictionaries.java:405)
at edu.stanford.nlp.coref.data.Dictionaries.<init>(Dictionaries.java:676)
at edu.stanford.nlp.coref.data.Dictionaries.<init>(Dictionaries.java:576)
at edu.stanford.nlp.coref.CorefSystem.<init>(CorefSystem.java:32)
at edu.stanford.nlp.pipeline.CorefAnnotator.<init>(CorefAnnotator.java:66)
at edu.stanford.nlp.pipeline.AnnotatorImplementations.coref(AnnotatorImplementations.java:196)
at edu.stanford.nlp.pipeline.StanfordCoreNLP.lambda$getNamedAnnotators(StanfordCoreNLP.java:555)
at edu.stanford.nlp.pipeline.StanfordCoreNLP$$Lambda/544724190.apply(Unknown Source)
at edu.stanford.nlp.pipeline.StanfordCoreNLP.lambda$null(StanfordCoreNLP.java:625)
at edu.stanford.nlp.pipeline.StanfordCoreNLP$$Lambda/1673605040.get(Unknown Source)
at edu.stanford.nlp.util.Lazy.compute(Lazy.java:126)
at edu.stanford.nlp.util.Lazy.get(Lazy.java:31)
at edu.stanford.nlp.pipeline.AnnotatorPool.get(AnnotatorPool.java:149)
at edu.stanford.nlp.pipeline.StanfordCoreNLP.construct(StanfordCoreNLP.java:495)
at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:201)
at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:194)
at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:181)
at test.test.main(test.java:17)
You need to run your Java process with more memory. On the command line, you typically do this with java -Xmx5g .... I am not sure exactly how much memory this code needs, but I think around 4g should be fine. Less may also work.
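As a concrete illustration (the jar names and classpath below are assumptions; adjust them to your CoreNLP version and directory layout, and use ; instead of : as the separator on Windows), the flag goes on the java command line before the main class, which the stack trace shows is test.test:

java -Xmx5g -cp "stanford-corenlp-3.9.2.jar:stanford-corenlp-3.9.2-models.jar:." test.test

If you launch from an IDE instead, put -Xmx5g in the run configuration's VM options. You can check that the setting took effect by printing the JVM's maximum heap at the start of main:

// Prints the largest heap the JVM will grow to, in megabytes.
System.out.println("Max heap: " + Runtime.getRuntime().maxMemory() / (1024 * 1024) + " MB");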