How can I fix "TypeError: cannot serialize '_io.BufferedReader' object" error when trying to multiprocess

I am trying to switch the threads in my code over to multiprocessing to measure its performance, and hopefully gain better brute-forcing potential, since my program is meant to brute-force a password-protected .zip file. But whenever I try to run the program I get this:

BruteZIP2.py -z "Generic ZIP.zip" -f  Worm.txt
Traceback (most recent call last):
  File "C:\Users\User\Documents\Jetbrains\PyCharm\BruteZIP\BruteZIP2.py", line 40, in <module>
    main(args.zip, args.file)
  File "C:\Users\User\Documents\Jetbrains\PyCharm\BruteZIP\BruteZIP2.py", line 34, in main
    p.start()
  File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\multiprocessing\process.py", line 112, in start
self._popen = self._Popen(self)
  File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
  File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
reduction.dump(process_obj, to_child)
  File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
TypeError: cannot serialize '_io.BufferedReader' object

I did find threads that had the same issue as mine, but they were all unanswered/unsolved. I also tried inserting a Pool above p.start(), since I assumed this was happening because I am on a Windows-based machine, but that did not help. My code is as follows:

  import argparse
  from multiprocessing import Process
  import zipfile

  parser = argparse.ArgumentParser(description="Unzips a password protected .zip by performing a brute-force attack using either a word list, password list or a dictionary.", usage="BruteZIP.py -z zip.zip -f file.txt")
  # Creates -z arg
  parser.add_argument("-z", "--zip", metavar="", required=True, help="Location and the name of the .zip file.")
  # Creates -f arg
  parser.add_argument("-f", "--file", metavar="", required=True, help="Location and the name of the word list/password list/dictionary.")
  args = parser.parse_args()


  def extract_zip(zip_file, password):
      try:
          zip_file.extractall(pwd=password)
          print(f"[+] Password for the .zip: {password.decode('utf-8')} \n")
      except:
          # If a password fails, it moves to the next password without notifying the user. If all passwords fail, it will print nothing in the command prompt.
          print(f"Incorrect password: {password.decode('utf-8')}")
          # pass


  def main(zip, file):
      if (zip == None) | (file == None):
          # If the args are not used, it displays how to use them to the user.
          print(parser.usage)
          exit(0)
      zip_file = zipfile.ZipFile(zip)
      # Opens the word list/password list/dictionary in "read binary" mode.
      txt_file = open(file, "rb")
      for line in txt_file:
          password = line.strip()
          p = Process(target=extract_zip, args=(zip_file, password))
          p.start()
          p.join()


  if __name__ == '__main__':
      # BruteZIP.py -z zip.zip -f file.txt.
      main(args.zip, args.file)

As I said before, I believe this happens mainly because I am on a Windows-based machine right now. I shared my code with a few others who were on Linux-based machines, and they ran the code above without any issues.

My main goal is to start 8 processes/pools to maximize the number of attempts made compared to threading, but since I cannot get rid of the TypeError: cannot serialize '_io.BufferedReader' object message, I am not sure what to do here or how to proceed to fix it. Any help would be appreciated.

File handles don't serialize very well... but you can send the name of the zip file instead of the zip file handle (a string serializes fine between processes). Also avoid zip as a variable name, since it shadows a builtin. I chose zip_filename:

p = Process(target=extract_zip, args=(zip_filename, password))

Then:

def extract_zip(zip_filename, password):
    try:
        zip_file = zipfile.ZipFile(zip_filename)
        zip_file.extractall(pwd=password)
        print(f"[+] Password for the .zip: {password.decode('utf-8')} \n")
    except:
        pass
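For reference, the underlying limitation is easy to reproduce with pickle alone, which is what multiprocessing uses under the hood to send arguments to a child process on Windows (the spawn start method). The temp-file setup below is just scaffolding for the demonstration:

```python
import pickle
import tempfile

# A plain string (such as a filename) pickles without trouble:
pickle.dumps("Generic ZIP.zip")

# An open binary file handle does not -- this is exactly what
# multiprocessing tries (and fails) to do with the ZipFile's internal reader.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"wordlist")
    path = tmp.name

fh = open(path, "rb")  # an _io.BufferedReader
try:
    pickle.dumps(fh)
except TypeError as e:
    print(e)  # e.g. "cannot pickle '_io.BufferedReader' object" (wording varies by Python version)
finally:
    fh.close()
```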

Another issue is that your code won't run in parallel, because of:

      p.start()
      p.join()

p.join waits for the process to finish... so this is almost useless. You have to store the process handles and join them all at the end.

That may lead to another problem: creating too many processes in parallel can be an issue for your machine, and beyond some point it won't help much. Consider using multiprocessing.Pool to limit the number of workers.

A simple example is:

with multiprocessing.Pool(5) as p:
    print(p.map(f, [1, 2, 3, 4, 5, 6, 7]))

Adapted to your example:

with multiprocessing.Pool(5) as p:
    p.starmap(extract_zip, [(zip_filename, line.strip()) for line in txt_file])

(starmap expands each tuple into two separate arguments to fit your extract_zip method, as explained in Python multiprocessing pool.map for multiple arguments.)
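Putting the pieces together, the whole fix might look like the sketch below. The filenames in the usage comment are placeholders, and the pool of 8 workers matches the goal stated in the question:

```python
import zipfile
from multiprocessing import Pool


def extract_zip(zip_filename, password):
    # Each worker opens the .zip itself, so only picklable
    # arguments (a string and bytes) cross the process boundary.
    try:
        with zipfile.ZipFile(zip_filename) as zip_file:
            zip_file.extractall(pwd=password)
        print(f"[+] Password for the .zip: {password.decode('utf-8')}")
    except Exception:
        pass  # wrong password, move on to the next candidate


def main(zip_filename, wordlist_filename):
    # Read all candidate passwords up front, then fan them out
    # to a bounded pool of 8 worker processes.
    with open(wordlist_filename, "rb") as txt_file:
        candidates = [(zip_filename, line.strip()) for line in txt_file]
    with Pool(8) as pool:
        pool.starmap(extract_zip, candidates)


# Usage (hypothetical filenames):
# main("Generic ZIP.zip", "Worm.txt")
```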