Converting Stanford dependency relations to dot format
I am new to this area. I have dependency relations in this form:
amod(clarity-2, sound-1)
nsubj(good-6, clarity-2)
cop(good-6, is-3)
advmod(good-6, also-4)
neg(good-6, not-5)
root(ROOT-0, good-6)
nsubj(ok-10, camera-8)
cop(ok-10, is-9)
ccomp(good-6, ok-10)
As described in the linked post, we have to convert these dependency relations to dot format and then draw the 'dependency tree' with Graphviz. What I cannot figure out is how to pass these dependencies to the toDotFormat() function of edu.stanford.nlp.semgraph.SemanticGraph. When I give the string 'amod(clarity-2, sound-1)' as input to toDotFormat(), I get output of the form digraph amod(clarity-2, sound-1) { }.
I am trying the solution given here: how to get a dependency tree with Stanford NLP parser
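Roughly, my current attempt boils down to the sketch below (the empty SemanticGraph is just my way of reproducing the output I am seeing; I suspect this is not how the graph is supposed to be built):

import edu.stanford.nlp.semgraph.SemanticGraph;

public class DotAttempt {
    public static void main(String[] args) {
        // The string argument is only used as the *name* of the digraph, and the
        // graph itself has no nodes or edges, so the body of the dot output is empty.
        SemanticGraph graph = new SemanticGraph();
        System.out.println(graph.toDotFormat("amod(clarity-2, sound-1)"));
        // prints something like: digraph amod(clarity-2, sound-1) { }
    }
}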
You need to call toDotFormat on the whole dependency tree. How did you generate these dependency trees in the first place?
If you are using the StanfordCoreNLP pipeline, adding a toDotFormat call is easy:
import java.util.List;
import java.util.Properties;

import edu.stanford.nlp.ling.CoreAnnotations.SentencesAnnotation;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.semgraph.SemanticGraph;
import edu.stanford.nlp.semgraph.SemanticGraphCoreAnnotations.CollapsedCCProcessedDependenciesAnnotation;
import edu.stanford.nlp.util.CoreMap;

Properties props = new Properties();
props.put("annotators", "tokenize, ssplit, pos, depparse");
StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

String text = "This is a sentence I want to parse.";
Annotation document = new Annotation(text);
pipeline.annotate(document);

// these are all the sentences in this document
// a CoreMap is essentially a Map that uses class objects as keys and has values with custom types
List<CoreMap> sentences = document.get(SentencesAnnotation.class);
for (CoreMap sentence : sentences) {
    // this is the Stanford dependency graph of the current sentence
    SemanticGraph dependencies = sentence.get(CollapsedCCProcessedDependenciesAnnotation.class);
    System.out.println(dependencies.toDotFormat());
}
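To actually draw the tree, you can write that dot output to a file and run Graphviz on it. A minimal sketch (the helper name and the file name sentence.dot are mine, not part of the original answer):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

// Write the dot representation of one sentence's dependency graph to disk,
// so it can be rendered with Graphviz, e.g. dot -Tpng sentence.dot -o sentence.png
static void writeDotFile(SemanticGraph dependencies, String path) throws IOException {
    Files.write(Paths.get(path),
                dependencies.toDotFormat().getBytes(StandardCharsets.UTF_8));
}

Inside the loop above, you would then call writeDotFile(dependencies, "sentence.dot") instead of (or in addition to) printing the dot string.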