How to crawl multiple keywords with python icrawler
I have an array containing many keywords:
array = ['table', 'chair', 'pen']
I want to use python icrawler to crawl 5 images from Google for each item in my array.
Here is the initialization:
from icrawler.builtin import GoogleImageCrawler

google_crawler = GoogleImageCrawler(
    parser_threads=2,
    downloader_threads=4,
    storage={'root_dir': 'images'}
)
I use a loop to crawl images for each item:
for item in array:
    google_crawler.crawl(
        keyword=item,
        offset=0,
        max_num=5,
        min_size=(500, 500)
    )
However, I get this error log:
  File "crawler.py", line 20, in <module>
    min_size=(500, 500)
  File "/home/user/opt/miniconda3/envs/pak/lib/python3.6/site-packages/icrawler/builtin/google.py", line 83, in crawl
    feeder_kwargs=feeder_kwargs, downloader_kwargs=downloader_kwargs)
  File "/home/user/opt/miniconda3/envs/pak/lib/python3.6/site-packages/icrawler/crawler.py", line 166, in crawl
    self.feeder.start(**feeder_kwargs)
  File "/home/user/opt/miniconda3/envs/pak/lib/python3.6/site-packages/icrawler/utils/thread_pool.py", line 66, in start
    worker.start()
  File "/home/user/opt/miniconda3/envs/pak/lib/python3.6/threading.py", line 842, in start
    raise RuntimeError("threads can only be started once")
RuntimeError: threads can only be started once
It seems that I cannot call google_crawler.crawl more than once. How can I fix this?
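For context, the RuntimeError comes from Python's threading module itself, not from icrawler: a Thread object is single-use, and the traceback shows icrawler's thread pool calling worker.start() on threads that were already started by the first crawl(). A minimal stdlib reproduction of the same error:

```python
import threading

# A Thread object can only be started once; restarting it after it has
# run and finished raises the same RuntimeError seen in the traceback.
t = threading.Thread(target=lambda: None)
t.start()
t.join()

try:
    t.start()  # second start() on the same Thread object
    restarted = True
except RuntimeError as e:
    restarted = False
    message = str(e)

print(message)  # threads can only be started once
```
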
In the latest version, you can use it like this:
from icrawler.builtin import GoogleImageCrawler

google_crawler = GoogleImageCrawler(
    parser_threads=2,
    downloader_threads=4,
    storage={'root_dir': 'images'}
)
for keyword in ['cat', 'dog']:
    google_crawler.crawl(
        keyword=keyword, max_num=5, min_size=(500, 500), file_idx_offset='auto')
# Setting `file_idx_offset` to 'auto' prevents the 5 dog images from being
# named 000001.jpg to 000005.jpg; they are named starting from 000006.jpg instead.
Or, if you want to download the images into different folders, you can simply create a separate GoogleImageCrawler instance for each keyword:
from icrawler.builtin import GoogleImageCrawler

for keyword in ['cat', 'dog']:
    google_crawler = GoogleImageCrawler(
        parser_threads=2,
        downloader_threads=4,
        storage={'root_dir': 'images/{}'.format(keyword)}
    )
    google_crawler.crawl(
        keyword=keyword, max_num=5, min_size=(500, 500))