Should I switch from "urllib.request.urlretrieve(..)" to "urllib.request.urlopen(..)"?

1. The deprecation issue

In Python 3.7, I use the urllib.request.urlretrieve(..) function to download a large file from a URL. In the documentation (https://docs.python.org/3/library/urllib.request.html), just above the urllib.request.urlretrieve(..) entry, I read the following:

Legacy interface
The following functions and classes are ported from the Python 2 module urllib (as opposed to urllib2). They might become deprecated at some point in the future.


2. Looking for an alternative

To keep my code future-proof, I am looking for an alternative. The official Python documentation doesn't name a specific one, but urllib.request.urlopen(..) looks like the most straightforward candidate - it sits right at the top of the documentation page.
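
For reference, the switch itself would boil down to something like this minimal sketch (the URL and file name are just placeholders):

import shutil
import urllib.request

# Placeholder URL - replace with the file to download.
url = "https://www.example.com/largefile.zip"

# urlopen(..) returns a file-like response object that can be streamed to disk.
with urllib.request.urlopen(url) as response:
    with open("largefile.zip", "wb") as out_file:
        shutil.copyfileobj(response, out_file)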

Unfortunately, the alternatives - like urlopen(..) - don't offer a reporthook parameter. That parameter is a callable you pass to the urlretrieve(..) function, which in turn calls it regularly during the download, passing it the current block number, the block size, and the total file size.

I use it to update a progress bar, which is why I would miss the reporthook parameter in the alternatives.
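
To show what I mean, here is a minimal sketch of the kind of reporthook I pass to urlretrieve(..); the URL and the print-based progress display are just placeholders for my real progress bar:

import urllib.request

def show_progress(block_num, block_size, total_size):
    # Called by urlretrieve(..) once before the download starts and
    # again after every block that has been read.
    if total_size > 0:
        percent = min(block_num * block_size * 100 / total_size, 100)
        print(f"\rDownloaded {percent:.1f} %", end="")
    else:
        print(f"\rDownloaded {block_num * block_size} bytes", end="")

# Placeholder URL - any large file will do.
url = "https://www.example.com/largefile.zip"
filename, headers = urllib.request.urlretrieve(url, "largefile.zip", reporthook=show_progress)
print(f"\nSaved to {filename}")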


3. urlretrieve(..) vs. urlopen(..)

I found out that urlretrieve(..) simply uses urlopen(..) under the hood. See the request.py file in the Python 3.7 installation (Python37/Lib/urllib/request.py):

_url_tempfiles = []
def urlretrieve(url, filename=None, reporthook=None, data=None):
    """
    Retrieve a URL into a temporary location on disk.

    Requires a URL argument. If a filename is passed, it is used as
    the temporary file location. The reporthook argument should be
    a callable that accepts a block number, a read size, and the
    total file size of the URL target. The data argument should be
    valid URL encoded data.

    If a filename is passed and the URL points to a local resource,
    the result is a copy from local file to new file.

    Returns a tuple containing the path to the newly created
    data file as well as the resulting HTTPMessage object.
    """
    url_type, path = splittype(url)

    with contextlib.closing(urlopen(url, data)) as fp:
        headers = fp.info()

        # Just return the local path and the "headers" for file://
        # URLs. No sense in performing a copy unless requested.
        if url_type == "file" and not filename:
            return os.path.normpath(path), headers

        # Handle temporary file setup.
        if filename:
            tfp = open(filename, 'wb')
        else:
            tfp = tempfile.NamedTemporaryFile(delete=False)
            filename = tfp.name
            _url_tempfiles.append(filename)

        with tfp:
            result = filename, headers
            bs = 1024*8
            size = -1
            read = 0
            blocknum = 0
            if "content-length" in headers:
                size = int(headers["Content-Length"])

            if reporthook:
                reporthook(blocknum, bs, size)

            while True:
                block = fp.read(bs)
                if not block:
                    break
                read += len(block)
                tfp.write(block)
                blocknum += 1
                if reporthook:
                    reporthook(blocknum, bs, size)

    if size >= 0 and read < size:
        raise ContentTooShortError(
            "retrieval incomplete: got only %i out of %i bytes"
            % (read, size), result)

    return result

4. Conclusion

From all this, I see three possible decisions:

  1. I keep my code unchanged, and hope that the urlretrieve(..) function won't get deprecated anytime soon.

  2. I write myself a replacement function that behaves like urlretrieve(..) on the outside and uses urlopen(..) on the inside. In practice, such a function would be a copy-paste of the code above. Doing that feels unclean - compared to just using the official urlretrieve(..). (A rough sketch of what I mean follows this list.)

  3. I write myself a replacement function that behaves like urlretrieve(..) on the outside, but uses something completely different on the inside. But hey, why would I do that? urlopen(..) isn't deprecated, so why not just use it?
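
To make option 2 concrete, here is a rough sketch of what such a replacement could look like. The name urlretrieve_replacement(..) and the details are mine; it relies only on urlopen(..) and keeps the reporthook behaviour:

import urllib.request

def urlretrieve_replacement(url, filename, reporthook=None, data=None):
    # Hypothetical stand-in for urlretrieve(..): downloads url to filename
    # using only urlopen(..) and calls reporthook(blocknum, blocksize, totalsize)
    # after every block, just like the original does.
    blocksize = 1024 * 8
    blocknum = 0
    with urllib.request.urlopen(url, data) as response:
        headers = response.info()
        total_size = int(headers.get("Content-Length", -1))
        if reporthook:
            reporthook(blocknum, blocksize, total_size)
        with open(filename, "wb") as out_file:
            while True:
                block = response.read(blocksize)
                if not block:
                    break
                out_file.write(block)
                blocknum += 1
                if reporthook:
                    reporthook(blocknum, blocksize, total_size)
    return filename, headers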

What decision would you make?

The following example uses urllib.request.urlopen to download a zip file containing crop production data for Oceania from the FAO statistical database. In this example, it is necessary to define a minimal header, otherwise FAOSTAT throws an Error 403: Forbidden.

import shutil
import urllib.request
import tempfile

# Create a request object with URL and headers    
url = "http://fenixservices.fao.org/faostat/static/bulkdownloads/Production_Crops_Livestock_E_Oceania.zip"
header = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) '}
req = urllib.request.Request(url=url, headers=header)

# Define the destination file
dest_file = tempfile.gettempdir() + '/' + 'crop.zip'
print(f"File located at: {dest_file}")

# Create an http response object
with urllib.request.urlopen(req) as response:
    # Create a file object
    with open(dest_file, "wb") as f:
        # Copy the binary content of the response to the file
        shutil.copyfileobj(response, f)

Based on https://stackoverflow.com/a/48691447/2641825 for the request part and https://stackoverflow.com/a/66591873/2641825 for the header part; see also urllib's documentation at https://docs.python.org/3/howto/urllib2.html