How to send crawler4j data to CrawlerManager?

I am working on a project in which users can search certain websites and look for pictures that carry a unique identifier.

public class ImageCrawler extends WebCrawler {

private static final Pattern filters = Pattern.compile(
        ".*(\\.(css|js|mid|mp2|mp3|mp4|wav|avi|mov|mpeg|ram|m4v|pdf" +
                "|rm|smil|wmv|swf|wma|zip|rar|gz))$");

private static final Pattern imgPatterns = Pattern.compile(".*(\\.(bmp|gif|jpe?g|png|tiff?))$");

// Used in visit(...) below; how to provide these to each crawler instance is what this question is about.
private DecodePictureService decodePictureService;
private PictureRepository pictureRepository;
private URLScanRepository urlScanRepository;
private URLScan urlScan;

public ImageCrawler() {
}

@Override
public boolean shouldVisit(Page referringPage, WebURL url) {
    String href = url.getURL().toLowerCase();
    if (filters.matcher(href).matches()) {
        return false;
    }

    if (imgPatterns.matcher(href).matches()) {
        return true;
    }

    return false;
}

@Override
public void visit(Page page) {
    String url = page.getWebURL().getURL();

    byte[] imageBytes = page.getContentData();
    String imageBase64 = Base64.getEncoder().encodeToString(imageBytes);
    try {
        SecurityContextHolder.getContext().setAuthentication(new UsernamePasswordAuthenticationToken(urlScan.getOwner(), null));
        DecodePictureResponse decodePictureResponse = decodePictureService.decodePicture(imageBase64);
        URLScanResult urlScanResult = new URLScanResult();
        urlScanResult.setPicture(pictureRepository.findByUuid(decodePictureResponse.getPictureDTO().getUuid()).get());
        urlScanResult.setIntegrity(decodePictureResponse.isIntegrity());
        urlScanResult.setPictureUrl(url);
        urlScanResult.setUrlScan(urlScan);
        urlScan.getResults().add(urlScanResult);
        urlScanRepository.save(urlScan);
    } catch (ResourceNotFoundException ex) {
        // Picture is not in our database
    }
}
}

The crawlers are meant to run independently. The ImageCrawlerManager class, which is a singleton, runs the crawlers.

public class ImageCrawlerManager {

private static ImageCrawlerManager instance = null;


private ImageCrawlerManager(){
}

public synchronized static ImageCrawlerManager getInstance()
{
    if (instance == null)
    {
        instance = new ImageCrawlerManager();
    }
    return instance;
}

@Transactional(propagation=Propagation.REQUIRED)
@PersistenceContext(type = PersistenceContextType.EXTENDED)
public void startCrawler(URLScan urlScan, DecodePictureService decodePictureService, URLScanRepository urlScanRepository, PictureRepository pictureRepository){

    try {
        CrawlConfig config = new CrawlConfig();
        config.setCrawlStorageFolder("/tmp");
        config.setIncludeBinaryContentInCrawling(true);

        PageFetcher pageFetcher = new PageFetcher(config);
        RobotstxtConfig robotstxtConfig = new RobotstxtConfig();
        RobotstxtServer robotstxtServer = new RobotstxtServer(robotstxtConfig, pageFetcher);

        CrawlController controller = new CrawlController(config, pageFetcher, robotstxtServer);
        controller.addSeed(urlScan.getUrl());

        controller.start(ImageCrawler.class, 1);
        urlScan.setStatus(URLScanStatus.FINISHED);
        urlScanRepository.save(urlScan);
    } catch (Exception e) {
        e.printStackTrace();
        urlScan.setStatus(URLScanStatus.FAILED);
        urlScan.setFailedReason(e.getMessage());
        urlScanRepository.save(urlScan);
    }
}
}

How can I send the data of every image to the manager, which decodes the picture, determines the initiator of the search and saves the result to the database? With the code above I can run multiple crawlers and save their results to the database. Unfortunately, when I run two crawlers at the same time, I can store two search results, but both of them are connected to the crawler that was started first.

Instead of using a singleton to manage the results of your web crawl, you should inject your database services into your WebCrawler instances.
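For example, the crawler could receive everything it needs through its constructor. This is only a sketch, reusing the DecodePictureService, PictureRepository, URLScanRepository and URLScan types from your question:

public class ImageCrawler extends WebCrawler {

    private final DecodePictureService decodePictureService;
    private final PictureRepository pictureRepository;
    private final URLScanRepository urlScanRepository;
    private final URLScan urlScan;

    public ImageCrawler(DecodePictureService decodePictureService,
                        PictureRepository pictureRepository,
                        URLScanRepository urlScanRepository,
                        URLScan urlScan) {
        this.decodePictureService = decodePictureService;
        this.pictureRepository = pictureRepository;
        this.urlScanRepository = urlScanRepository;
        this.urlScan = urlScan;
    }

    // shouldVisit(...) and visit(...) stay as in the question, but they read
    // the injected fields above instead of state owned by a singleton.
}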

crawler4j supports a custom CrawlController.WebCrawlerFactory (see here), which can be used together with Spring to inject your database services into the ImageCrawler instances.
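The factory is a small class that crawler4j calls whenever it needs a fresh crawler instance. A rough sketch, assuming the constructor shown above (the name ImageCrawlerFactory is just for illustration):

public class ImageCrawlerFactory implements CrawlController.WebCrawlerFactory<ImageCrawler> {

    private final DecodePictureService decodePictureService;
    private final PictureRepository pictureRepository;
    private final URLScanRepository urlScanRepository;
    private final URLScan urlScan;

    public ImageCrawlerFactory(DecodePictureService decodePictureService,
                               PictureRepository pictureRepository,
                               URLScanRepository urlScanRepository,
                               URLScan urlScan) {
        this.decodePictureService = decodePictureService;
        this.pictureRepository = pictureRepository;
        this.urlScanRepository = urlScanRepository;
        this.urlScan = urlScan;
    }

    @Override
    public ImageCrawler newInstance() {
        // Called by crawler4j for every crawler thread it spawns.
        return new ImageCrawler(decodePictureService, pictureRepository, urlScanRepository, urlScan);
    }
}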

Every crawler thread should be responsible for the whole process you described (for example, by using some dedicated services for it):

decode this image, get the initiator of search and save results to database

Set up this way, your database will be the single source of truth, and you will not have to deal with synchronizing crawler state between different instances or user sessions.
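For instance, startCrawler could build one factory per URLScan and hand it to the controller, so two crawls running at the same time can no longer attach their results to the wrong scan (again only a sketch; the rest of the method stays as in your question):

CrawlController controller = new CrawlController(config, pageFetcher, robotstxtServer);
controller.addSeed(urlScan.getUrl());

// Every call to startCrawler gets its own factory, bound to its own URLScan and services.
ImageCrawlerFactory factory = new ImageCrawlerFactory(
        decodePictureService, pictureRepository, urlScanRepository, urlScan);
controller.start(factory, 1);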