Script doesn't work when I go for multiple search keywords in the list

I've created a script to grab different newspaper names from a search engine when I initiate a search using different keywords, such as CMG제약, DB하이텍, etc., in the search box at the top right of that page.

I'm also using some custom dates in the parameters to get results for those dates. The script runs flawlessly as long as I use a single keyword in the search list.

However, when there are multiple keywords in the search list, the script only keeps up with the last keyword. This is the list of keywords I'd like to use:

keywords = ['CMG제약','DB하이텍','ES큐브','EV첨단소재']

The script itself is short, but it looks big because of the height of the params dict.

This is what I've tried so far (it works as intended when there is a single search keyword in the list):

import requests
import concurrent.futures
from bs4 import BeautifulSoup
from urllib.parse import urljoin

year_list_start = ['2013.01.01','2014.01.02']
year_list_upto = ['2014.01.01','2015.01.01']

base = 'https://search.naver.com/search.naver'
link = 'https://search.naver.com/search.naver'
params = {
    'where': 'news',
    'sm': 'tab_pge',
    'query': '',
    'sort': '1',
    'photo': '0',
    'field': '0',
    'pd': '',
    'ds': '',
    'de': '',
    'cluster_rank': '',
    'mynews': '0',
    'office_type': '0',
    'office_section_code': '0',
    'news_office_checked': '',
    'nso': '',
    'start': '',
}

def fetch_content(s,keyword,link,params):
    for start_date,date_upto in zip(year_list_start,year_list_upto):
        ds = start_date.replace(".","")
        de = date_upto.replace(".","")
        params['query'] = keyword
        params['ds'] = ds
        params['de'] = de
        params['nso'] = f'so:r,p:from{ds}to{de},a:all'
        params['start'] = 1

        while True:
            res = s.get(link,params=params)
            print(res.status_code)
            print(res.url)
            soup = BeautifulSoup(res.text,"lxml")
            if not soup.select_one("ul.list_news .news_area .info_group > a.press"): break
            for item in soup.select("ul.list_news .news_area"):
                newspaper_name = item.select_one(".info_group > a.press").get_text(strip=True).lstrip("=")
                print(newspaper_name)

            if soup.select_one("a.btn_next[aria-disabled='true']"): break
            next_page = soup.select_one("a.btn_next").get("href")
            link = urljoin(base,next_page)
            params = None


if __name__ == '__main__':
    with requests.Session() as s:
        s.headers['User-Agent'] = 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.150 Safari/537.36'
        
        keywords = ['CMG제약']

        with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
            future_to_url = {executor.submit(fetch_content, s, keyword, link, params): keyword for keyword in keywords}
            concurrent.futures.as_completed(future_to_url)

How can I make the script work when there are more than one keyword in the search list?

I believe the problem is that the variable params is being overwritten too early with the data for subsequent requests, while previous requests are still being processed. params needs to be moved into fetch_content and not passed as an argument.
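
To see that failure mode in isolation, here is a minimal sketch (not part of the original script) in which four threads share one mutable dict; by the time each worker reads it, the last write has usually already happened:

import concurrent.futures
import time

params = {'query': ''}

def worker(shared):
    time.sleep(0.1)            # simulate network latency before the dict is read
    return shared['query']     # every thread reads the same object

with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
    futures = []
    for kw in ['CMG제약', 'DB하이텍', 'ES큐브', 'EV첨단소재']:
        params['query'] = kw   # overwrites the value the previous submit relied on
        futures.append(executor.submit(worker, params))
    print([f.result() for f in futures])
    # likely prints ['EV첨단소재', 'EV첨단소재', 'EV첨단소재', 'EV첨단소재']

With that in mind, here is the fixed script: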

import requests
import concurrent.futures
from bs4 import BeautifulSoup
from urllib.parse import urljoin
from threading import Lock

year_list_start = ['2013.01.01','2014.01.02']
year_list_upto = ['2014.01.01','2015.01.01']

base = 'https://search.naver.com/search.naver'
link = 'https://search.naver.com/search.naver'

print_lock = Lock()

def fetch_content(f, s,keyword,link):
    params = {
        'where': 'news',
        'sm': 'tab_pge',
        'query': '',
        'sort': '1',
        'photo': '0',
        'field': '0',
        'pd': '',
        'ds': '',
        'de': '',
        'cluster_rank': '',
        'mynews': '0',
        'office_type': '0',
        'office_section_code': '0',
        'news_office_checked': '',
        'nso': '',
        'start': '',
    }


    for start_date,date_upto in zip(year_list_start,year_list_upto):
        my_params = params  # an alias, not a copy; safe here because params is local to this call
        ds = start_date.replace(".","")
        de = date_upto.replace(".","")
        my_params['query'] = keyword
        my_params['ds'] = ds
        my_params['de'] = de
        my_params['nso'] = f'so:r,p:from{ds}to{de},a:all'
        my_params['start'] = 1

        while True:
            res = s.get(link,params=my_params)
            with print_lock:
                print(keyword, res.status_code, file=f)
                print(keyword, res.url, file=f, flush=True)
            soup = BeautifulSoup(res.text,"lxml")
            if not soup.select_one("ul.list_news .news_area .info_group > a.press"): break
            for item in soup.select("ul.list_news .news_area"):
                newspaper_name = item.select_one(".info_group > a.press").get_text(strip=True).lstrip("=")
                with print_lock:
                    print(keyword, newspaper_name, file=f, flush=True)

            if soup.select_one("a.btn_next[aria-disabled='true']"): break
            next_page = soup.select_one("a.btn_next").get("href")
            link = urljoin(base,next_page)
            my_params = None


if __name__ == '__main__':
    with requests.Session() as s:
        with open('output.txt', 'w', encoding='utf8') as f:
            s.headers['User-Agent'] = 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.150 Safari/537.36'

            keywords = ['CMG제약','DB하이텍','ES큐브','EV첨단소재']

            with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
                future_to_url = {executor.submit(fetch_content, f, s, keyword, link): keyword for keyword in keywords}
                concurrent.futures.as_completed(future_to_url)
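
As an aside, a per-call copy would give you the same isolation without moving the whole dict inside the function. A sketch of that alternative (PARAM_TEMPLATE is a hypothetical name, not from the script above):

PARAM_TEMPLATE = {
    'where': 'news',
    'sm': 'tab_pge',
    # ... the remaining keys exactly as in the params dict above
}

def fetch_content(f, s, keyword, link):
    my_params = dict(PARAM_TEMPLATE)  # fresh copy per call, so threads share no state
    # ... the rest of the function is unchanged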

Note

You have ...

future_to_url = {executor.submit(fetch_content, s, keyword, link): keyword for keyword in keywords}
concurrent.futures.as_completed(future_to_url)

... where concurrent.futures.as_completed(future_to_url) returns an iterator that you never iterate over. You might as well replace the two lines above with:

for keyword in keywords:
    executor.submit(fetch_content, s, keyword, link)
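
If you do want the return values back, you have to iterate the iterator that as_completed returns; Update 2 below shows exactly that.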

Or you could make keyword the last parameter of fetch_content ...

def fetch_content(s, link, keyword):

... and then:

from functools import partial
executor.map(partial(fetch_content, s, link), keywords)
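
For clarity, partial(fetch_content, s, link) just binds the first two arguments so that executor.map only has to supply keyword; it behaves roughly like this hand-written equivalent (bound is a hypothetical name):

def bound(keyword):
    # what partial(fetch_content, s, link) produces: s and link are fixed,
    # only keyword varies per call
    return fetch_content(s, link, keyword)

executor.map(bound, keywords)  # results come back in input order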

Update 2

Here is a modified version in which fetch_content returns the list of newspaper names it finds instead of printing them (the main thread can then print the lists), and the other print statements are commented out to cut down on the extra "noise" so that the results could be included here. I've also changed the order of the arguments in case you want to use map instead:

import requests
import concurrent.futures
from bs4 import BeautifulSoup
from urllib.parse import urljoin
from functools import partial

year_list_start = ['2013.01.01','2014.01.02']
year_list_upto = ['2014.01.01','2015.01.01']

base = 'https://search.naver.com/search.naver'
link = 'https://search.naver.com/search.naver'

def fetch_content(s, link, keyword):
    params = {
        'where': 'news',
        'sm': 'tab_pge',
        'query': '',
        'sort': '1',
        'photo': '0',
        'field': '0',
        'pd': '',
        'ds': '',
        'de': '',
        'cluster_rank': '',
        'mynews': '0',
        'office_type': '0',
        'office_section_code': '0',
        'news_office_checked': '',
        'nso': '',
        'start': '',
    }

    newspaper_names = []
    for start_date,date_upto in zip(year_list_start,year_list_upto):
        my_params = params  # an alias, not a copy; safe here because params is local to this call
        ds = start_date.replace(".","")
        de = date_upto.replace(".","")
        my_params['query'] = keyword
        my_params['ds'] = ds
        my_params['de'] = de
        my_params['nso'] = f'so:r,p:from{ds}to{de},a:all'
        my_params['start'] = 1

        while True:
            res = s.get(link,params=my_params)
            #print(res.status_code, flush=True)
            #print(res.url, flush=True)
            soup = BeautifulSoup(res.text,"lxml")
            if not soup.select_one("ul.list_news .news_area .info_group > a.press"): break
            for item in soup.select("ul.list_news .news_area"):
                newspaper_name = item.select_one(".info_group > a.press").get_text(strip=True).lstrip("=")
                newspaper_names.append(newspaper_name)

            if soup.select_one("a.btn_next[aria-disabled='true']"): break
            next_page = soup.select_one("a.btn_next").get("href")
            link = urljoin(base,next_page)
            my_params = None

    return newspaper_names


if __name__ == '__main__':
    with requests.Session() as s:
        s.headers['User-Agent'] = 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.150 Safari/537.36'

        keywords = ['CMG제약','DB하이텍','ES큐브','EV첨단소재']

        with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
            future_to_url = {executor.submit(fetch_content, s, link, keyword): keyword for keyword in keywords}
            for future in concurrent.futures.as_completed(future_to_url):
                print('keyword = ', future_to_url[future], 'newspaper names =', future.result())
            """
            from functools import partial
            results = executor.map(partial(fetch_content, s, link), keywords)
            for idx, result in enumerate(results):
                print('keyword = ', keywords[idx], 'newspaper names =', result)
            """

Prints:

keyword =  DB하이텍 newspaper names = []
keyword =  ES큐브 newspaper names = ['국제신문', '스포츠월드', '스포츠조선', '이뉴스투데이', '중앙SUNDAY', '중앙SUNDAY', '매일경제', '디지털데일리', '전자신문', '머니투데이', '한경비즈니스', '한경비즈니스', '동아일보', '뉴시스', '데일리안', '매일경제', '한경비즈니스', '한경비즈니스', '동아일보', '뉴시스', '데일리안', '매일경제']
keyword =  EV첨단소재 newspaper names = ['머니S', '아주경제', 'EBN', '오토타임즈', '머니S', '서울경제', '뉴시스', '파이낸셜뉴스', '연합뉴스', '연합뉴스', 'EBN', '뉴스핌', '포브스코리아', 'EBN', '시민일보', '매일경제', '세계일보', 'TV리포트', '전기신문', '뉴시스', '기호일보', '스포츠월드', 'OSEN', '뉴시스', '경북매일신문', '파이낸셜뉴스', '이투데이', '뉴시스', '헤럴드경제', '헤럴드POP', '조선비즈', 'EBN', '아주경제', '뉴스1', '아시아경제', '헤럴드경제', '전자신문', '뉴시스', '뉴시스', '전기신문', '전자신문', '오토타임즈', '연합뉴스', '에너지경제', '서울경제', 'EBN', '서울경제', '파이낸셜뉴스', '전자신문', '오토타임즈', '연합뉴스', '에너지경제', '서울경제', 'EBN', '서울경제', '파이낸셜뉴스']
keyword =  CMG제약 newspaper names = ['국민일보', '국민일보', '메디컬투데이', '한국경제', '서울경제', '매일경제', '시민일보', '아시아경제', '데일리안', '조선비즈', '메디파나뉴스', '매일경제', 'TBS', '매일경제', 'MBN', '아시아경제', 'KBS', '뉴스토마토', '연합뉴스', '뉴스1', '국민일보', '뉴시스', '국민일보', '뉴스토마토', '아시아투데이', '청년의사', '메디파나뉴스', '이데일리', '메디컬투데이', '한국경제', '아시아경제', '이투데이', '머니투데이', '뉴스토마토', '연합뉴스', '약업신문', '뉴스토마토', '뉴스토마토', '뉴스토마토', '뉴스토마토', '이데일리', '뉴스토마토', '머니투데이', '뉴스토마토', '뉴스토마토', '뉴스토마토', '뉴스토마토', '머니투데이', '머니투데이', '뉴스토마토', '뉴스토마토', '뉴스토마토', '뉴스토마토', '이투데이', '한국경제', '뉴스토마토', '뉴스토마토', '뉴스토마토', '뉴스토마토', '헤럴드POP', '뉴스토마토', '한국경제', '서울경제', '매일경제', '뉴스토마토', '서울파이낸스', '뉴스토마토', '이데일리', '헤럴드POP', '뉴스토마토', '뉴스토마토', '뉴스토마토', '머니투데이', '뉴스토마토', '한국경제', '이투데이', '파이낸셜뉴스', '매일경제', '뉴시스', '뉴스토마토', '뉴스토마토', '이투데이', 'EBN', 'NSP통신', '이투데이', '아주경제', '한국경제', '뉴스핌', '뉴스토마토', '이데일리', '헤럴드POP', '머니투데이', '머니투데이', '아시아경제', 'NSP통신', '서울파이낸스', '아시아경제', '뉴스토마토', '이데일리', '이투데이', '뉴스토마토', '이데일리', '뉴스핌', '머니투데이', '헤럴드POP', '이데일리', '이투데이', '세계일보', '뉴스토마토', '서울파이낸스', '머니투데이', '이데일리', '이투데이', '컨슈머타임스', '뉴스토마토', '뉴스토마토', '뉴스토마토', '뉴스토마토', '뉴스토마토', '뉴스토마토', '뉴스토마토', '뉴스토마토', '뉴스토마토', '뉴스토마토', '약업신문', '뉴스토마토', '뉴스토마토', '뉴스토마토', '뉴스토마토', '뉴스토마토', '뉴스토마토', '약업신문', '약업신문', '뉴시스', '연합뉴스', '뉴스토마토', '약업신문', '뉴스토마토', '뉴스토마토', '뉴스토마토', '뉴스토마토', '뉴스토마토', '뉴스토마토', '뉴스토마토', '약업신문', '약업신문', '한국경제', '서울경제', '이데일리', '이투데이', '한국경제', '매일경제', '이데일리', '서울경제', '매일경제', '이데일리', '서울경제', '이투데이', '파이낸셜뉴스', '조선비즈', '뉴스핌', '한국경제', '머니투데이', '파이낸셜뉴스', '매일경제', '파이낸셜뉴스', '파이낸셜뉴스', '연합뉴스', '데일리팜', '데일리팜', '조선비즈', '이투데이', '한국경제', 'MTN', '서울경제', '뉴스토마토', '메디파나뉴스', '조선비즈', '파이낸셜뉴스', '한국경제', '아시아경제', '이투데이', '연합뉴스', '한국경제', '뉴스핌', '이데일리', '머니투데이', '매일경제', '약업신문', '뉴스토마토', '메디파나뉴스', '파이낸셜뉴스', '파이낸셜뉴스', '한국경제', '이투데이', '머니투데이', '연합뉴스', '이투데이', '매일경제', '매일경제', '뉴스토마토', '서울경제', '이투데이', '아주경제', '이데일리', '한국경제', '헤럴드POP', '매일경제', '뉴스핌', '머니투데이', '머니투데이', '서울파이낸스', '뉴스토마토', '헤럴드POP', '뉴스토마토', '한국경제', '한국경제', '서울경제', '한국경제', '이데일리', '헤럴드POP', '조선비즈', '아주경제', '서울경제', '매일경제', '뉴시스', '뉴스토마토', '뉴스핌', '연합뉴스', '파이낸셜뉴스', '매일경제', '이투데이', '아시아경제', '매일경제', '이투데이', '아시아경제']

Note

If you uncomment the other print statements and print the results as they complete (i.e. by using as_completed), the printed newspaper lists will be interspersed with the other printed lines and can be hard to pick out. In that case you may prefer the map approach, which I've included but commented out, so that the newspaper lists are only printed after all the results have been returned and all the debug print statements have been issued.
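
To make the ordering difference concrete, here is a small self-contained sketch (the sleeping task is hypothetical, not the scraper itself): as_completed yields futures in completion order, while executor.map returns results in input order:

import concurrent.futures
import time

def task(n):
    time.sleep(n)  # finishes after n seconds
    return n

with concurrent.futures.ThreadPoolExecutor(max_workers=3) as executor:
    futures = [executor.submit(task, n) for n in (3, 1, 2)]
    print([f.result() for f in concurrent.futures.as_completed(futures)])
    # completion order: [1, 2, 3]
    print(list(executor.map(task, (3, 1, 2))))
    # input order: [3, 1, 2]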