My speaker recognition neural network doesn’t work well

For the final project of my first degree I want to build a neural network that takes the first 13 MFCC coefficients of a wav file and returns the speaker talking in the audio file.

Please note that:

  1. My audio files are text independent, so they differ in length and in the words spoken
  2. I have trained the machine with about 35 audio files of 10 speakers (about 15 for the first speaker, 10 for the second, and about 5 each for the third and fourth)

I defined:

X=mfcc(sound_voice)

Y=zero_array + 1 in the i_th position (where the i_th position is 0 for the first speaker, 1 for the second, 2 for the third...)
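In other words, a minimal sketch of the label scheme (np.eye produces exactly these one-hot rows; the names are hypothetical):

import numpy as np

names = ["speaker_a", "speaker_b", "speaker_c"]  # hypothetical speaker names
one_hot = np.eye(len(names))   # row i is all zeros with a 1 at position i
Y_second_speaker = one_hot[1]  # array([0., 1., 0.])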

Then I trained the machine, and afterwards checked its output for some files...

That is what I did... but unfortunately the results look completely random...

Can you help me understand why?

Here is my Python code:

from sklearn.neural_network import MLPClassifier
import python_speech_features
import scipy.io.wavfile as wav
import numpy as np
from os import listdir
from os.path import isfile, join
from random import shuffle
import matplotlib.pyplot as plt
from tqdm import tqdm

winner = []  # this array counts how many correct predictions ("bingos") we get when testing the NN
for TestNum in tqdm(range(5)):  # in every round we build a NN from X, Y and hold out 50 samples to test after training
    X = []
    Y = []
    onlyfiles = [f for f in listdir("FinalAudios/") if isfile(join("FinalAudios/", f))]   # Files in dir
    names = []  # names of the speakers
    for file in onlyfiles:  # for each wav sound
        # NOT ESSENTIAL FOR UNDERSTANDING THE CODE
        if " " not in file.split("_")[0]:
            names.append(file.split("_")[0])
        else:
            names.append(file.split("_")[0].split(" ")[0])
    names = list(dict.fromkeys(names))  # names of speakers
    vector_names = []  # vector for each name
    i = 0
    vector_for_each_name = [0] * len(names)
    for name in names:
        vector_for_each_name[i] += 1
        vector_names.append(np.array(vector_for_each_name))
        vector_for_each_name[i] -= 1
        i += 1
    for f in onlyfiles:
        if " " not in f.split("_")[0]:
            f_speaker = f.split("_")[0]
        else:
            f_speaker = f.split("_")[0].split(" ")[0]
        (rate, sig) = wav.read("FinalAudios/" + f)  # read the file
        try:
            mfcc_feat = python_speech_features.mfcc(sig, rate, winlen=0.2, nfft=512)  # mfcc coeffs
            for index in range(len(mfcc_feat)):  # adding each mfcc vector to X, meaning if there are 50000 vectors then
                # X will be [first vector, second .... 50000'th vector] and Y will be [f_speaker_vector] * 50000
                X.append(np.array(mfcc_feat[index]))
                Y.append(np.array(vector_names[names.index(f_speaker)]))
        except IndexError:
            pass
    Z = list(zip(X, Y))

    shuffle(Z)  # WE SHUFFLE X,Y TO PERFORM RANDOM ON THE TEST LEVEL

    X, Y = zip(*Z)
    X = list(X)
    Y = list(Y)
    X = np.asarray(X)
    Y = np.asarray(Y)

    Y_test = Y[:50]  # CHOOSE 50 FOR TEST, OTHERS FOR TRAIN
    X_test = X[:50]
    X = X[50:]
    Y = Y[50:]

    clf = MLPClassifier(solver='lbfgs', alpha=1e-2, hidden_layer_sizes=(5, 3), random_state=2)  # create the NN
    clf.fit(X, Y)  # Train it

    for sample in range(len(X_test)):  # append 1 to winner if the prediction is correct, 0 if not; at the end we plot the running accuracy
        if list(clf.predict([X_test[sample]])[0]) == list(Y_test[sample]):
            winner.append(1)
        else:
            winner.append(0)

# plot winner
plot_x = []
plot_y = []
for i in range(1, len(winner) + 1):  # running accuracy after the first i test samples
    plot_y.append(sum(winner[0:i])*1.0/len(winner[0:i]))
    plot_x.append(i)
plt.plot(plot_x, plot_y)
plt.xlabel('number of test samples')
# naming the y axis
plt.ylabel('running accuracy')

# giving a title to my graph
plt.title('Running accuracy of the speaker classifier')

# function to show the plot
plt.show()

Here is a zip file with my code and the audio files: https://ufile.io/eggjm1gw

There are a lot of issues in your code, and getting it all right in one go is close to impossible, but let's give it a try. There are two main problems:

  • Right now you are trying to teach your neural network with very few training examples, as few as a single one per speaker (!). It is impossible for any machine learning algorithm to learn anything from that.
  • To make things worse, you only feed the ANN the MFCCs for the first 25 ms of each recording (the 25 comes from the winlen parameter of python_speech_features). In each of these recordings the first 25 ms will be close to identical. Even 10k recordings per speaker would get you nowhere with this approach (see the quick shape check below).
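To see exactly what the network receives, it helps to print the shape of what python_speech_features.mfcc returns; a quick check, assuming a hypothetical file from the dataset:

import python_speech_features
import scipy.io.wavfile as wav

rate, sig = wav.read("FinalAudios/omersk_1.wav")  # hypothetical filename
feats = python_speech_features.mfcc(sig, rate)    # defaults: winlen=0.025, winstep=0.01
print(feats.shape)  # (num_frames, 13): one 13-dim vector per 25 ms window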

I will give you specific advice, but won't do all the coding; this is your homework, after all.

  • Use all the MFCCs, not just the first 25 ms. Many of them should be skipped, simply because there is no voice activity in them. Normally there would be a VAD (voice activity detector) telling you which frames to take, but for this exercise I would skip it for starters (you need to learn the basics first).
  • Don't use a dictionary. Not only will it never hold more than one MFCC vector per speaker, it is also a very inefficient data structure for your task. Use numpy arrays; they are faster and more memory efficient. There are plenty of tutorials, including ones for scikit-learn, that demonstrate how to use numpy in this context. In essence, you create two arrays: one with the training data, the second with the labels. Example: if the speaker omersk "produces" 50000 MFCC vectors, you get a (50000, 13) training array. The corresponding label array would be 50000 entries long, with a single constant value (an id) corresponding to the speaker (say, 0 for omersk, 1 for lucas, and so on); a minimal sketch follows this list. I would also consider taking longer windows (perhaps 200 ms; experiment!) to reduce the variance.
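A minimal sketch of that array layout, with random stand-ins for the real MFCC matrices and hypothetical speaker ids (0 for omersk, 1 for lucas):

import numpy as np

mfcc_omersk = np.random.randn(50000, 13)  # stand-in for omersk's MFCC vectors
mfcc_lucas = np.random.randn(30000, 13)   # stand-in for lucas' MFCC vectors

X = np.concatenate([mfcc_omersk, mfcc_lucas], axis=0)  # shape (80000, 13)
y = np.concatenate([np.full(len(mfcc_omersk), 0),      # id 0 -> omersk
                    np.full(len(mfcc_lucas), 1)])      # id 1 -> lucas
assert X.shape[0] == y.shape[0]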

Don't forget to split your data into training, validation and test sets. You will have more than enough data. Also, for this exercise I would watch out for feeding too much data from any single speaker, and take steps to make sure the algorithm is not biased; a split sketch follows below.
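For the split itself, scikit-learn's train_test_split applied twice is enough; a sketch with hypothetical 60/20/20 proportions, reusing the X and y arrays from the sketch above and stratifying by speaker id to counter the imbalance (note the caveat further down about keeping whole recordings on one side of the split):

from sklearn.model_selection import train_test_split

# Hold out 20% for test, then 25% of the remaining 80% for validation,
# giving a 60/20/20 train/validation/test split overall
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=0.25, stratify=y_rest, random_state=0)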

Later, when you run predictions, you will again calculate the MFCCs for the speaker. With a 10 second recording, a 200 ms window and a 100 ms overlap, you will get 99 MFCC vectors of shape (99, 13). Run the model on each of the 99 vectors, producing a probability for each. When you sum them up (and normalise, to make it nicer) and take the top value, you get the most likely speaker.
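That aggregation step might look like the sketch below; here clf stands for a fitted classifier that exposes predict_proba (MLPClassifier does), le for the label encoder used for the speaker ids (both appear in the code further down), and mfcc_vectors for the (99, 13) matrix of the new recording:

import numpy as np

proba = clf.predict_proba(mfcc_vectors)  # shape (99, n_speakers), one row per window
scores = proba.sum(axis=0)               # sum the per-window probabilities
scores = scores / scores.sum()           # normalise so the scores add up to 1
speaker_id = int(np.argmax(scores))      # index of the most likely speaker
print(le.inverse_transform([speaker_id])[0], scores[speaker_id])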

Normally there would be many other things to consider, but in this case (a homework assignment) I would focus on getting the basics right.

Edit: I decided to take a stab at creating a model with your idea at its core, but with the basics fixed. It is not exactly clean Python, as it was adapted from a Jupyter Notebook I was running.

import python_speech_features
import scipy.io.wavfile as wav
import numpy as np
import glob
import os

from collections import defaultdict
from sklearn.neural_network import MLPClassifier
from sklearn import preprocessing
from sklearn.model_selection import cross_validate
from sklearn.ensemble import RandomForestClassifier


audio_files_path = glob.glob('audio/*.wav')
win_len = 0.04 # in seconds
step = win_len / 2
nfft = 2048

mfccs_all_speakers = []
names = []
data = []

for path in audio_files_path:
    fs, audio = wav.read(path)
    if audio.size > 0:
        mfcc = python_speech_features.mfcc(audio, samplerate=fs, winlen=win_len,
                                            winstep=step, nfft=nfft, appendEnergy=False)
        filename = os.path.splitext(os.path.basename(path))[0]
        speaker = filename[:filename.find('_')]
        data.append({'filename': filename,
                     'speaker': speaker,
                     'samples': mfcc.shape[0],
                     'mfcc': mfcc})
    else:
        print(f'Skipping {path} due to 0 file size')

speaker_sample_size = defaultdict(int)
for entry in data:
    speaker_sample_size[entry['speaker']] += entry['samples']

# Cap every speaker's contribution at 80% of the smallest speaker's total,
# so no single speaker dominates the training set
person_with_fewest_samples = min(speaker_sample_size, key=speaker_sample_size.get)
print(person_with_fewest_samples)

max_accepted_samples = int(speaker_sample_size[person_with_fewest_samples] * 0.8)
print(max_accepted_samples)

training_idx = []
test_idx = []
accumulated_size = defaultdict(int)

# Greedily assign whole recordings to the training set until a speaker
# reaches the cap; everything else becomes test data
for entry in data:
    if entry['speaker'] not in accumulated_size:
        training_idx.append(entry['filename'])
        accumulated_size[entry['speaker']] += entry['samples']
    elif accumulated_size[entry['speaker']] < max_accepted_samples:
        accumulated_size[entry['speaker']] += entry['samples']
        training_idx.append(entry['filename'])

X_train = []
label_train = []

X_test = []
label_test = []

for entry in data:
    if entry['filename'] in training_idx:
        X_train.append(entry['mfcc'])
        label_train.extend([entry['speaker']] * entry['mfcc'].shape[0])
    else:
        X_test.append(entry['mfcc'])
        label_test.extend([entry['speaker']] * entry['mfcc'].shape[0])

X_train = np.concatenate(X_train, axis=0)
X_test = np.concatenate(X_test, axis=0)

assert (X_train.shape[0] == len(label_train))
assert (X_test.shape[0] == len(label_test))

print(f'Training: {X_train.shape}')
print(f'Testing: {X_test.shape}')

le = preprocessing.LabelEncoder()
y_train = le.fit_transform(label_train)
y_test = le.transform(label_test)

clf = MLPClassifier(solver='lbfgs', alpha=1e-2, hidden_layer_sizes=(5, 3), random_state=42, max_iter=1000)

cv_results = cross_validate(clf, X_train, y_train, cv=4)
print(cv_results)

{'fit_time': array([3.33842635, 4.25872731, 4.73704267, 5.9454329 ]),
 'score_time': array([0.00125694, 0.00073504, 0.00074005, 0.00078583]),
 'test_score': array([0.40380048, 0.52969121, 0.48448687, 0.46043165])}

The test_score is not stellar. There is a lot to improve (for starters, the choice of algorithm), but the basics are there. Notice, for starters, how I obtain the training samples: the split is not random, and I only consider recordings as a whole. You cannot put samples from a given recording into both training and test, because the test set should be novel.
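A standard way to enforce that recording-level separation is scikit-learn's GroupShuffleSplit with the filename as the group key; a sketch assuming per-vector arrays X and y plus a matching groups array holding the source filename of each vector:

import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Every vector from the same recording shares a group, so a whole
# recording ends up either in training or in test, never in both
gss = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, test_idx = next(gss.split(X, y, groups=groups))
X_train, X_test = X[train_idx], X[test_idx]
y_train, y_test = y[train_idx], y[test_idx]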

What is wrong with your code? Quite a lot, I would say. You take 200 ms samples, yet your fft is very short. python_speech_features may well have complained to you that the fft should be longer than the frame you are processing; the arithmetic behind that is sketched below.
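The frame length in samples is winlen * fs, and nfft must be at least that long. A quick way to pick a safe power of two, assuming a hypothetical 16 kHz sample rate:

import numpy as np

fs = 16000                                    # hypothetical sample rate
winlen = 0.2                                  # the 200 ms window from the question
frame_len = int(winlen * fs)                  # 3200 samples per frame
nfft = 2 ** int(np.ceil(np.log2(frame_len)))  # 4096, the next power of two
print(frame_len, nfft)                        # nfft=512 would truncate each frame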

I leave testing the model to you. It will not be great, but it is a starter.