Zero division error during training a noisy speech synthesizer with multiprocessing in Python

I am trying to train on a dataset of clean and noisy audio files, but I get the error below. Please take a look and help me. All details are available at https://github.com/breizhn/DTLN.git. I am trying to run the noisyspeech synthesizer multiprocessing file.

Code:

global clean_counter, noise_counter

if is_clean:
    source_files = params['cleanfilenames']
    idx_counter = clean_counter
else:
    source_files = params['noisefilenames']
    idx_counter = noise_counter


# initialize silence
silence = np.zeros(int(fs_output*silence_length))

# iterate through multiple clips until we have a long enough signal
tries_left = MAXTRIES
while remaining_length > 0 and tries_left > 0:

    # read next audio file and resample if necessary
    with idx_counter.get_lock():
        idx_counter.value += 1
        # np.size(source_files) is 0 when the file list is empty,
        # so this modulo raises ZeroDivisionError
        idx = idx_counter.value % np.size(source_files)
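The failing modulo can be reproduced in isolation. A minimal sketch of what this part of `build_audio` effectively does (the helper name `pick_next_index` is mine, not from the DTLN repo):

```python
import numpy as np

def pick_next_index(counter_value, source_files):
    """Return the next round-robin file index, guarding the empty case.

    Hypothetical helper for illustration only: if source_files is
    empty, np.size(...) is 0 and `value % 0` raises
    ZeroDivisionError, exactly as in the traceback below.
    """
    n_files = np.size(source_files)
    if n_files == 0:
        raise ValueError(
            "source_files is empty -- check that the dataset paths "
            "in the config point at real, readable audio files")
    return counter_value % n_files

# With a non-empty list the modulo behaves as expected:
print(pick_next_index(7, ["a.wav", "b.wav", "c.wav"]))  # -> 1
```

So the exception is not really about the counter: it fires whenever `params['cleanfilenames']` or `params['noisefilenames']` ends up empty.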

Error:

Traceback (most recent call last):
  File "/home/drstrange/anaconda3/envs/train_env/lib/python3.7/multiprocessing/pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "/home/drstrange/anaconda3/envs/train_env/lib/python3.7/multiprocessing/pool.py", line 47, in starmapstar
    return list(itertools.starmap(args[0], args[1]))
  File "noisyspeech_synthesizer_multiprocessing.py", line 156, in main_gen
    gen_audio(True, params, filenum)
  File "noisyspeech_synthesizer_multiprocessing.py", line 124, in gen_audio
    build_audio(is_clean, params, filenum, audio_samples_length)
  File "noisyspeech_synthesizer_multiprocessing.py", line 73, in build_audio
    idx = idx_counter.value % np.size(source_files)
ZeroDivisionError: integer division or modulo by zero

The problem was the dataset: most of the files were corrupted, which is why the error kept appearing. After downloading the dataset again (correctly this time), the code works fine.
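Since the root cause was a broken download that left the file lists empty, a quick pre-flight check before launching the synthesizer can catch this early. A sketch, assuming the clean and noise WAV files live under directories like those named in the DTLN config (the function and paths here are placeholders, not part of the repo's scripts):

```python
import glob
import os

def count_audio_files(directory, pattern="*.wav"):
    """Count files matching `pattern` directly under `directory`.

    Hypothetical sanity check: run it on the clean and noise speech
    folders before training so an empty or corrupted dataset fails
    fast, instead of raising ZeroDivisionError deep inside the
    multiprocessing pool workers.
    """
    files = glob.glob(os.path.join(directory, pattern))
    return len(files)

# Usage (replace the paths with your own dataset locations):
#   for name in ("clean", "noise"):
#       if count_audio_files(os.path.join("datasets", name)) == 0:
#           raise SystemExit(f"No .wav files found in datasets/{name}")
```

An error raised at startup with a clear message is much easier to act on than a modulo-by-zero buried in a worker traceback.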