How to extract key phrases from a given text with OpenNLP?

I'm working with Apache OpenNLP and I want to extract the key phrases of a given text. I'm already gathering entities - but I would like to have key phrases.

The problem I have is that I can't use TF-IDF, because I don't have models for that and I only have a single text (not multiple documents).
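The IDF part is exactly what breaks down with a single text: it needs document frequencies over a corpus. A minimal sketch of just that term (illustrative names, not from my code):

    // IDF needs a corpus: with a single document, corpusSize == docsContainingTerm == 1
    // for every term that occurs at all, so idf is always log(1) = 0 and all
    // terms end up with the same (zero) weight.
    static double idf(int corpusSize, int docsContainingTerm) {
        return Math.log((double) corpusSize / docsContainingTerm);
    }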

Here is some code (prototype - not very clean):

    public List<KeywordsModel> extractKeywords(String text, NLPProvider pipeline) {

        // load the OpenNLP models: sentence detector, tokenizer, POS tagger, chunker
        SentenceDetectorME sentenceDetector = new SentenceDetectorME(pipeline.getSentencedetecto("en"));
        TokenizerME tokenizer = new TokenizerME(pipeline.getTokenizer("en"));
        POSTaggerME posTagger = new POSTaggerME(pipeline.getPosmodel("en"));
        ChunkerME chunker = new ChunkerME(pipeline.getChunker("en"));

        ArrayList<String> stopwords = pipeline.getStopwords("en");

        Span[] sentSpans = sentenceDetector.sentPosDetect(text);
        Map<String, Float> results = new LinkedHashMap<>();
        SortedMap<String, Float> sortedData = new TreeMap<>(new MapSort.FloatValueComparer(results));

        // prominence decays linearly over the text: 1.0 for the first
        // sentence, 1/n for the last one
        float sentenceCounter = sentSpans.length;
        float prominenceVal = 0;
        int sentences = sentSpans.length;
        for (Span sentSpan : sentSpans) {
            prominenceVal = sentenceCounter / sentences;
            sentenceCounter--;
            String sentence = sentSpan.getCoveredText(text).toString();
            int start = sentSpan.getStart();
            Span[] tokSpans = tokenizer.tokenizePos(sentence);
            String[] tokens = new String[tokSpans.length];
            for (int i = 0; i < tokens.length; i++) {
                tokens[i] = tokSpans[i].getCoveredText(sentence).toString();
            }
            String[] tags = posTagger.tag(tokens);
            Span[] chunks = chunker.chunkAsSpans(tokens, tags);
            for (Span chunk : chunks) {
                // only noun phrases are candidate key phrases
                if ("NP".equals(chunk.getType())) {
                    int npstart = start + tokSpans[chunk.getStart()].getStart();
                    int npend = start + tokSpans[chunk.getEnd() - 1].getEnd();
                    String potentialKey = text.substring(npstart, npend);
                    if (!results.containsKey(potentialKey)) {
                        // discard phrases of three or more words, or phrases
                        // containing a stopword (stopword entries are treated
                        // as regexes here)
                        boolean hasStopWord = false;
                        String[] pKeys = potentialKey.split("\\s+");
                        if (pKeys.length < 3) {
                            for (String pKey : pKeys) {
                                for (String stopword : stopwords) {
                                    if (pKey.toLowerCase().matches(stopword)) {
                                        hasStopWord = true;
                                        break;
                                    }
                                }
                                if (hasStopWord) {
                                    break;
                                }
                            }
                        } else {
                            hasStopWord = true;
                        }
                        if (!hasStopWord) {
                            // score = frequency term + position term
                            int count = StringUtils.countMatches(text, potentialKey);
                            results.put(potentialKey, (float) (Math.log(count) / 100) + (float) (prominenceVal / 5));
                        }
                    }
                }
            }
        }
        sortedData.putAll(results);
        System.out.println(sortedData);
        return null; // prototype - the sorted map is only printed so far
    }

What it basically does is give me back the noun phrases and rank them by a prominence value (where in the text does the phrase occur?) and an occurrence count.
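To make the ranking explicit, the score assigned above is just these two terms added together (a minimal sketch, pulled straight from the snippet):

    // score used in the prototype above:
    // count      = raw number of occurrences of the phrase in the text
    // prominence = sentencesRemaining / totalSentences, so 1.0 for a phrase
    //              first seen in the first sentence, close to 0 for the last
    static float score(int count, float prominence) {
        return (float) (Math.log(count) / 100) + prominence / 5;
    }

Note that Math.log(1) is 0, so a phrase that occurs only once is ranked purely by its position in the text.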

But honestly - it doesn't work that well.

I also tried it with the Lucene analyzer, but the results weren't great either.

So - how can I achieve what I want to do? I already know about KEA/Maui-indexer etc. (but I'm afraid I can't use them because of the GPL :( )


Also interesting: which other algorithms could I use instead of TF-IDF?

Example:

This article: http://techcrunch.com/2015/09/04/etsys-pulling-the-plug-on-grand-st-at-the-end-of-this-month/

Output I would consider good: Etsy, Grand St., solar chargers, maker marketplace, tech hardware

Finally, I found something:

https://github.com/srijiths/jtopia

It uses POS tagging from opennlp/stanfordnlp. It has an ALS2 license. I haven't measured precision and recall yet, but in my opinion it delivers very good results.

Here is my code:

    Configuration.setTaggerType("openNLP");
    Configuration.setSingleStrength(6);
    Configuration.setNoLimitStrength(5);
    // if tagger type is "openNLP" then give the openNLP POS tagger path
    //Configuration.setModelFileLocation("model/openNLP/en-pos-maxent.bin");
    // if tagger type is "default" then give the default POS lexicon file
    //Configuration.setModelFileLocation("model/default/english-lexicon.txt");
    // if tagger type is "stanford" then give the Stanford POS tagger path
    Configuration.setModelFileLocation("Dont need that here");
    Configuration.setPipeline(pipeline);

    TermsExtractor termExtractor = new TermsExtractor();
    TermDocument topiaDoc = termExtractor.extractTerms(text);
    //logger.info("Extracted terms : " + topiaDoc.getExtractedTerms());

    // wrap the filtered terms into my keyword model
    Map<String, ArrayList<Integer>> finalFilteredTerms = topiaDoc.getFinalFilteredTerms();
    List<KeywordsModel> keywords = new ArrayList<>();
    for (Map.Entry<String, ArrayList<Integer>> e : finalFilteredTerms.entrySet()) {
        KeywordsModel keyword = new KeywordsModel();
        keyword.setLabel(e.getKey());
        keywords.add(keyword);
    }

I modified the Configuration file a little bit so that the POSModel is loaded from the pipeline instance.
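For illustration, the change could look roughly like this (a minimal sketch - the real jtopia Configuration class holds more state, and NLPProvider/getPosmodel are the names from my first snippet, not part of jtopia):

    import opennlp.tools.postag.POSModel;

    // hypothetical sketch of the modified jtopia Configuration: instead of
    // reading en-pos-maxent.bin from modelFileLocation, reuse the POSModel
    // that the NLPProvider pipeline already holds in memory
    public class Configuration {

        private static NLPProvider pipeline; // assumption: provider from the first snippet

        public static void setPipeline(NLPProvider p) {
            pipeline = p;
        }

        public static POSModel getPosModel() {
            // was: new POSModel(new FileInputStream(getModelFileLocation()))
            return pipeline.getPosmodel("en");
        }
    }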