Efficient parallel downloading and decompressing with matching pattern for list of files on server

Every 6 hours, every day, I have to download .bz2 files from a web server, decompress them and merge them into a single file. This needs to be as efficient and fast as possible, because I have to wait for the download and decompression phase to finish before I can proceed with the merging.

I wrote some bash functions that take a few strings as input and use them to build the URL of the files to be downloaded as a matching pattern. This way I can pass the matching pattern directly to wget, without having to build the list of the server's contents locally and then pass it to wget as a list with -i. My function looks something like this:

parallelized_extraction(){
    # wait (up to 30 s) for the first .bz2 file to appear on disk
    i=0
    until [ `ls -1 *.bz2 2>/dev/null | wc -l` -gt 0 -o $i -ge 30 ]; do
        ((i++))
        sleep 1
    done
    # keep extracting in parallel as long as .bz2 files are present
    while [ `ls -1 *.bz2 2>/dev/null | wc -l` -gt 0 ]; do
        ls *.bz2 | parallel -j+0 bzip2 -d '{}'
        sleep 1
    done
}
download_merge_2d_variable()
{
    filename="file_${year}${month}${day}${run}_*_.grib2"
    wget -b -r -nH -np -nv -nd -A "${filename}.bz2" "url/${run}/${1,,}/"
    parallelized_extraction ${filename}
    # do the merging 
    rm ${filename}
} 

I call it as download_merge_2d_variable name_of_variable. I was able to speed the code up by writing the function parallelized_extraction, which takes care of decompressing the downloaded files while wget is running in the background. To do that, I first wait for the first .bz2 file to appear, then run the parallel extraction until the last .bz2 from the server has been processed (this is what the until and while loops are doing).
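
For context, here is a minimal sketch of how the function might be called (the date values and the variable name T_2M below are only placeholders; year, month, day and run are assumed to be set by the calling script):

# placeholder values, only to illustrate the call; the real values come from elsewhere
year=2021; month=01; day=15; run=00
# ${1,,} inside the function lowercases the variable name for the URL path
download_merge_2d_variable T_2M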

I am quite happy with this approach, but I think it can be improved. Here is my problem:

parallelized_extraction(){
    # ...................
    # same as before ....
    # ...................
    # intended check: .bz2 files present AND wget (PID passed as $2) still running
    while [ `ls -1 *.bz2 2>/dev/null | wc -l` -gt 0 -a kill -0 "$2" >/dev/null 2>&1 ]; do
        ls *.bz2 | parallel -j+0 bzip2 -d '{}'
        sleep 1
    done
}
download_merge_2d_variable()
{
    filename="ifile_${year}${month}${day}${run}_*_.grib2"
    wget -r -nH -np -nv -nd -A "${filename}.bz2" "url/${run}/${1,,}/" &
    # get ID of process running in background
    PROC_ID=$!
    parallelized_extraction ${filename} ${PROC_ID}
    # do the merging
    rm ${filename}
}

Any clue as to why this does not work? Any suggestion on how to improve my code? Thanks
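
What I was aiming for is a condition equivalent to the sketch below, where the file-count test and the liveness check on the wget PID (passed as $2) are chained with &&; I am not sure the embedded kill -0 inside [ ] can work at all:

# sketch of the intended condition: run the two checks separately and chain them
while [ `ls -1 *.bz2 2>/dev/null | wc -l` -gt 0 ] && kill -0 "$2" >/dev/null 2>&1; do
    ls *.bz2 | parallel -j+0 bzip2 -d '{}'
    sleep 1
done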

UPDATE: I am posting here my working solution, based on the accepted answer, in case someone is interested.

# Extract a plain list of URLs by using --spider option and filtering
# only URLs from the output 
listurls() {
    filename=""
    url=""
    wget --spider -r -nH -np -nv -nd --reject "index.html" --cut-dirs=3 \
        -A $filename.bz2 $url 2>&1\
        | grep -Eo '(http|https)://(.*).bz2'
}
# Extract each file by redirecting the stdout of wget to bzip2
# note that I get the filename from the URL directly with
# basename and by removing the bz2 extension at the end 
get_and_extract_one() {
  url=""
  file=`basename $url | sed 's/\.bz2//g'`
  wget -q -O - "$url" | bzip2 -dc > "$file"
}
export -f get_and_extract_one
# Here the main calling function 
download_merge_2d_variable()
{
    filename="filename.grib2"
    url="url/where/the/file/is/"
    listurls $filename $url | parallel get_and_extract_one {}
    # merging and processing
}
export -f download_merge_2d_variable
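
To give an idea of how this could be driven, a minimal sketch (the variable names t_2m and pmsl are made up, and I am assuming the real function still takes the variable name as its argument, as in download_merge_2d_variable name_of_variable above; exporting the function is what lets parallel run it):

# made-up driver: process two variables, two download_merge_2d_variable jobs at a time
parallel -j 2 download_merge_2d_variable ::: t_2m pmsl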

Can you list the URLs to download?

listurls() {
  # do something that lists the urls without downloading them
  # Possibly something like:
  # lynx -listonly -image_links -dump "$starturl"
  # or
  # wget --spider -r -nH -np -nv -nd -A "${filename}.bz2" "url/${run}/${1,,}/"
  # or
  # seq 100 | parallel echo ${url}${year}${month}${day}${run}_{}_${id}.grib2
}

get_and_extract_one() {
  url=""
  file=""
  wget -O - "$url" | bzip2 -dc > "$file"
}
export -f get_and_extract_one

# {=s:/:_:g; =} will generate a file name from the URL with / replaced by _
# You probably want something nicer.
# Possibly just {/.}
listurls | parallel get_and_extract_one {} '{=s:/:_:g; =}'

This way you will be decompressing while downloading, and doing everything in parallel.
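
To see which file names the replacement strings would produce without downloading anything, --dry-run can be used (the URL below is made up, only for illustration):

# made-up URL, only to show the generated second argument
echo http://host/data/file_2021011500_001_.grib2.bz2 \
  | parallel --dry-run get_and_extract_one {} '{=s:/:_:g; =}'
# prints something like:
#   get_and_extract_one http://host/data/file_2021011500_001_.grib2.bz2 http:__host_data_file_2021011500_001_.grib2.bz2
# with {/.} the second argument would instead be file_2021011500_001_.grib2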