OpenNLP model builder addon doesn't continue

I am using the model builder addon for OpenNLP to create a better NER model. Based on this post, I have used the code posted by markg:

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.util.ArrayList;
import java.util.List;

import opennlp.tools.namefind.NameFinderME;
import opennlp.tools.namefind.TokenNameFinderModel;
import opennlp.tools.util.Span;
// DefaultModelBuilderUtil comes from the OpenNLP modelbuilder addon

public class ModelBuilderAddonUse {

  private static List<String> getSentencesFromSomewhere() throws Exception 
  {
      List<String> list = new ArrayList<String>();
      // note: FileReader reads plain text; a binary .docx will not yield usable lines
      BufferedReader reader = new BufferedReader(new FileReader("D:\\Work\\workspaces\\default\\UpdateModel\\documentrequirements.docx"));
      String line;
      while ((line = reader.readLine()) != null) 
      {
          list.add(line);
      }
      reader.close();
      return list;

    }

  public static void main(String[] args) throws Exception {
    /**
     * establish a file to put sentences in
     */
    File sentences = new File("D:\\Work\\workspaces\\default\\UpdateModel\\sentences.text");

    /**
     * establish a file to put your NER hits in (the ones you want to keep based
     * on prob)
     */
    File knownEntities = new File("D:\\Work\\workspaces\\default\\UpdateModel\\knownentities.txt");

    /**
     * establish a BLACKLIST file to put your bad NER hits in (also can be based
     * on prob)
     */
    File blacklistedentities = new File("D:\\Work\\workspaces\\default\\UpdateModel\\blentities.txt");

    /**
     * establish a file to write your annotated sentences to
     */
    File annotatedSentences = new File("D:\\Work\\workspaces\\default\\UpdateModel\\annotatedSentences.txt");

    /**
     * establish a file to write your model to
     */
    File theModel = new File("D:\\Work\\workspaces\\default\\UpdateModel\\nl-ner-person.bin");


//------------create a bunch of file writers to write your results and sentences to a file

    FileWriter sentenceWriter = new FileWriter(sentences, true);
    FileWriter blacklistWriter = new FileWriter(blacklistedentities, true);
    FileWriter knownEntityWriter = new FileWriter(knownEntities, true);

//set some thresholds to decide where to write hits, you don't have to use these at all...
    double keeperThresh = .95;
    double blacklistThresh = .7;


    /**
     * Load your model as normal
     */
    TokenNameFinderModel personModel = new TokenNameFinderModel(new File("D:\\Work\\workspaces\\default\\UpdateModel\\nl-ner-person.bin"));
    NameFinderME personFinder = new NameFinderME(personModel);
    /**
     * do your normal NER on the sentences you have
     */
    for (String s : getSentencesFromSomewhere()) {
      sentenceWriter.write(s.trim() + "\n");
      sentenceWriter.flush();

      String[] tokens = s.split(" ");//better to use a tokenizer really
      Span[] find = personFinder.find(tokens);
      double[] probs = personFinder.probs();
      String[] names = Span.spansToStrings(find, tokens);
      for (int i = 0; i < names.length; i++) {
        //YOU PROBABLY HAVE BETTER HEURISTICS THAN THIS TO MAKE SURE YOU GET GOOD HITS OUT OF THE DEFAULT MODEL
        if (probs[i] > keeperThresh) {
          knownEntityWriter.write(names[i].trim() + "\n");
        }
        if (probs[i] < blacklistThresh) {
          blacklistWriter.write(names[i].trim() + "\n");
        }
      }
      personFinder.clearAdaptiveData();
      blacklistWriter.flush();
      knownEntityWriter.flush();
    }
    //flush and close all the writers
    knownEntityWriter.flush();
    knownEntityWriter.close();
    sentenceWriter.flush();
    sentenceWriter.close();
    blacklistWriter.flush();
    blacklistWriter.close();

    /**
     * THIS IS WHERE THE ADDON IS GOING TO USE THE FILES (AS IS) TO CREATE A NEW MODEL. YOU SHOULD NOT HAVE TO RUN THE FIRST PART AGAIN AFTER THIS RUNS, JUST NOW PLAY WITH THE
     * KNOWN ENTITIES AND BLACKLIST FILES AND RUN THE METHOD BELOW AGAIN UNTIL YOU GET SOME DECENT RESULTS (A DECENT MODEL OUT OF IT).
     */
    DefaultModelBuilderUtil.generateModel(sentences, knownEntities, blacklistedentities, theModel, annotatedSentences, "person", 3);


  }
}
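The in-line comment about `s.split(" ")` is worth acting on: with consecutive spaces, `split(" ")` produces empty tokens that end up in the spans. A minimal sketch of a sturdier split on whitespace runs (OpenNLP's own `WhitespaceTokenizer` from opennlp-tools is the fuller option):

```java
public class SplitDemo {
    public static void main(String[] args) {
        String s = "  John  Smith visited   Berlin ";

        // split(" ") leaves an empty string wherever spaces repeat
        String[] naive = s.split(" ");

        // trimming first and splitting on runs of whitespace avoids that
        String[] better = s.trim().split("\\s+");

        System.out.println(naive.length);   // 9, includes empty tokens
        System.out.println(better.length);  // 4
    }
}
```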

It does run, but my output stops at:

    annotated sentences: 1862
    knowns: 58
    Building Model using 1862 annotations
    reading training data...

But in the example from the post, it should go further:

    Indexing events using cutoff of 5

    Computing event counts...  done. 561755 events
    Indexing...  done.
    Sorting and merging events... done. Reduced 561755 events to 127362.
    Done indexing.
    Incorporating indexed data for training...  done.
    Number of Event Tokens: 127362
        Number of Outcomes: 3
      Number of Predicates: 106490
    ...done.

Can anyone help me solve this, so I can generate the model? I have searched a lot but couldn't find any good documentation about it. Help is much appreciated, thanks.

Correct the path to the training data file, like this:

File sentences = new File("D:/Work/workspaces/default/UpdateModel/sentences.text");

而不是

File sentences = new File("D:\\Work\\workspaces\\default\\UpdateModel\\sentences.text");
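A small sketch of why the separator style matters in Java source: a backslash starts an escape sequence inside a string literal, so it has to be doubled, while `java.io.File` accepts forward slashes on Windows as well:

```java
import java.io.File;

public class PathDemo {
    public static void main(String[] args) {
        // forward slashes: no escaping needed, portable across platforms
        File forward = new File("src/training/resources/CreateModel/sentences.txt");

        // backslashes must be doubled inside a Java string literal;
        // a single "\W" or "\U" is a compile-time error
        File back = new File("src\\training\\resources\\CreateModel\\sentences.txt");

        System.out.println(forward.getName()); // sentences.txt
        System.out.println(back.getPath());
    }
}
```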

Update

Here is how to use it: add the files into the project folder, and try it like this -

File sentences = new File("src/training/resources/CreateModel/sentences.txt");

Check my repository on Github for reference.

This should help.