How to fix the flake8 error "E712 comparison to False should be 'if cond is False:' or 'if not cond:'" in a pandas DataFrame

我在 "added_parts = new_part_set[(new_part_set["duplicate"] == False) & (new_part_set["version"] == "target" 行收到 E712 的 flake 8 错误)]"**

Here is the snippet of code we use for the spreadsheet comparison:

source_df = pd.read_excel(self.source, sheet).fillna('NA')
target_df = pd.read_excel(self.target, sheet).fillna('NA')
file_path = os.path.dirname(self.source)

column_list = source_df.columns.tolist()

source_df['version'] = "source"
target_df['version'] = "target"

source_df = source_df.sort_values(by=unique_col)
source_df = source_df.reindex()
target_df = target_df.sort_values(by=unique_col)
target_df = target_df.reindex()

# full_set = pd.concat([source_df, target_df], ignore_index=True)
diff_panel = pd.concat([source_df, target_df],
                       axis='columns', keys=['df1', 'df2'], join='outer', sort=False)
diff_output = diff_panel.apply(self.__report_diff, axis=0)
diff_output['has_change'] = diff_output.apply(self.__has_change)

full_set = pd.concat([source_df, target_df], ignore_index=True)
changes = full_set.drop_duplicates(subset=column_list, keep='last')
dupe_records = changes.set_index(unique_col).index.unique()

changes['duplicate'] = changes[unique_col].isin(dupe_records)
removed_parts = changes[(changes["duplicate"] == False) & (changes["version"] == "source")]
new_part_set = full_set.drop_duplicates(subset=column_list, keep='last')
new_part_set['duplicate'] = new_part_set[unique_col].isin(dupe_records)
added_parts = new_part_set[(new_part_set["duplicate"] == False) & (new_part_set["version"] == "target")]

diff_file = os.path.join(file_path, "file_diff.xlsx")
if os.path.exists(diff_file):
    os.remove(diff_file)
writer = pd.ExcelWriter(diff_file)
diff_output.to_excel(writer, "changed")
removed_parts.to_excel(writer, "removed", index=False, columns=column_list)
added_parts.to_excel(writer, "added", index=False, columns=column_list)
writer.save()

Is there any other way to avoid this? I'm not sure how best to proceed.

In your DataFrame masks you have (changes["duplicate"] == False) and (new_part_set["duplicate"] == False), and flake8 is suggesting you change these. The reason it complains is that in Python, comparing against a boolean with the == operator is considered bad practice; you should write if my_bool: ... or if not my_bool: ... instead. In pandas, if you have a boolean Series, you can negate it with the ~ operator, so your new masks would be written as:

~changes["duplicate"] # & ... blah blah
~new_part_set["duplicate"] # & ... blah blah