Pandas DataFrame doesn't update when running multiprocessing code
I'm trying to update a DataFrame (self.df) with a column from a temporary df (self.df_temp['linkedin_profile']) using the class below, but it doesn't seem to update anything.
Code:
class NameToSocialURLScraper:
    def __init__(self, csv_file_name, person_name_column, organization_name_column):
        self.proxy_list = PROXY_LIST
        pool = Pool()
        self.csv_file_name = csv_file_name
        self.person_name_column = person_name_column
        self.organization_name_column = organization_name_column
        self.df = pd.read_csv(csv_file_name)
        self.df_temp = pd.DataFrame()

    def internal_linkedin_job(self):
        self.df['linkedin_profile'] = np.nan
        self.df_temp['linkedin_profile'] = np.nan
        self.df_temp['linkedin_profile'] = self.df.apply(
            lambda row: term_scraper(
                str(row[self.person_name_column]) + " " + str(row[self.organization_name_column]),
                self.proxy_list, 'link', output_generic=False),
            axis=1)
        self.df['linkedin_profile'] = self.df_temp['linkedin_profile']
        print(self.df.values)

    ...

    def multiprocess_job(self):
        multiprocessing.log_to_stderr(logging.DEBUG)
        linkedin_profile_proc = Process(target=self.internal_linkedin_job, args=())
        jobs = [linkedin_profile_proc]

        # Start the processes (i.e. calculate the random number lists)
        for j in jobs:
            j.start()

        # Ensure all of the processes have finished
        for j in jobs:
            j.join()
When I print inside internal_linkedin_job, it shows the df with the new 'linkedin_profile' column, but when I print after j.join(), the column doesn't exist.
When you use multiprocessing, each process runs in its own memory space, so changes the child makes to self.df never reach the parent. You need to refactor the code so that internal_linkedin_job returns the dataframe.
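For example, here is a minimal sketch of that refactor, keeping the names from the question and assuming term_scraper and PROXY_LIST are defined at module level: internal_linkedin_job builds and returns a new dataframe, the job runs in a worker through multiprocessing.Pool, and the parent process assigns the result back to self.df.

import pandas as pd
from multiprocessing import Pool

class NameToSocialURLScraper:
    def __init__(self, csv_file_name, person_name_column, organization_name_column):
        self.proxy_list = PROXY_LIST  # assumed module-level constant, as in the question
        self.csv_file_name = csv_file_name
        self.person_name_column = person_name_column
        self.organization_name_column = organization_name_column
        self.df = pd.read_csv(csv_file_name)

    def internal_linkedin_job(self):
        # Work on a copy and return it: mutating self.df here would only
        # change the child process's copy of the object, never the parent's.
        df = self.df.copy()
        df['linkedin_profile'] = df.apply(
            lambda row: term_scraper(
                str(row[self.person_name_column]) + " " + str(row[self.organization_name_column]),
                self.proxy_list, 'link', output_generic=False),
            axis=1)
        return df

    def multiprocess_job(self):
        # Run the job in a worker process and collect the returned dataframe.
        with Pool(processes=1) as pool:
            result = pool.apply_async(self.internal_linkedin_job)
            self.df = result.get()  # the returned df is pickled back to the parent
        print(self.df.values)

With the Process/join pattern from the question you would otherwise need an explicit channel (a multiprocessing.Queue, a Manager, or shared memory) to get data back, because the new column only ever exists in the child's address space. On platforms that use the spawn start method, call multiprocess_job() under an if __name__ == '__main__': guard.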