ANN: Learning Vector Quantization not working

I hope someone here can help me: I am trying to implement a neural network that finds clusters in a data set of 2D points. I tried to follow the standard algorithm described on Wikipedia: for each data point I look for the neuron with the smallest distance and update that neuron's weights towards the data point. I stop when the total distance becomes small enough.

My result finds most of the clusters, but on inspection some of them are wrong, and although the total distance it computes settles at a constant value, it does not converge any further. Where is my mistake?

typedef struct{
    double x;
    double y;
}Data;

typedef struct{
    double x;
    double y;
}Neuron;

typedef struct{
    size_t numNeurons;
    Neuron* neurons;
}Network;

int main(void){
    srand(time(NULL));

    Data trainingData[1000];
    size_t sizeTrainingData = 0;
    size_t sizeClasses = 0;
    Network network;

    getData(trainingData, &sizeTrainingData, &sizeClasses);

    initializeNetwork(&network, sizeClasses);
    normalizeData(trainingData, sizeTrainingData);
    train(&network, trainingData, sizeTrainingData);

    return 0;
}

void train(Network* network, Data trainingData[], size_t sizeTrainingData){
    for(int epoch=0; epoch<TRAINING_EPOCHS; ++epoch){
        double learningRate = getLearningRate(epoch);
        double totalDistance = 0;
        for(int i=0; i<sizeTrainingData; ++i){
            Data currentData = trainingData[i];
            int winningNeuron = 0;
            totalDistance += findWinningNeuron(network, currentData, &winningNeuron);
            //update weight
            network->neurons[i].x += learningRate * (currentData.x - network->neurons[i].x);
            network->neurons[i].y += learningRate * (currentData.y - network->neurons[i].y);
        }
        if(totalDistance<MIN_TOTAL_DISTANCE) break;
    }
}

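// Learning rate decays exponentially from LEARNING_RATE at epoch 0
// down to LEARNING_RATE_MIN_VALUE at the final epoch.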
double getLearningRate(int epoch){
    return LEARNING_RATE * exp(-log(LEARNING_RATE/LEARNING_RATE_MIN_VALUE)*((double)epoch/TRAINING_EPOCHS));
}

double findWinningNeuron(Network* network, Data data, int* winningNeuron){
    double smallestDistance = 9999;
    for(unsigned int currentNeuronIndex=0; currentNeuronIndex<network->numNeurons; ++currentNeuronIndex){
        Neuron neuron = network->neurons[currentNeuronIndex];
        double distance = sqrt(pow(data.x-neuron.x,2)+pow(data.y-neuron.y,2));
        if(distance<smallestDistance){
            smallestDistance = distance;
            *winningNeuron = currentNeuronIndex;
        }
    }
    return smallestDistance;
}

initializeNetwork(...) initializes all neurons with random weights in the range -1 to 1. normalizeData(...) normalizes the data in such a way that the maximum value is 1.
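Neither helper is shown in the question, so here is a minimal sketch of what they might look like, assuming malloc-based allocation and scaling by the largest absolute coordinate (only the names come from the code above; everything else is an assumption):

// Hypothetical sketch: one neuron per expected cluster, each coordinate
// initialized to a random weight in [-1, 1].
// (Requires <stdlib.h> for malloc/rand and <math.h> for fabs.)
void initializeNetwork(Network* network, size_t numNeurons){
    network->numNeurons = numNeurons;
    network->neurons = malloc(numNeurons * sizeof(Neuron));
    for(size_t n=0; n<numNeurons; ++n){
        network->neurons[n].x = 2.0*rand()/RAND_MAX - 1.0;
        network->neurons[n].y = 2.0*rand()/RAND_MAX - 1.0;
    }
}

// Hypothetical sketch: divide every coordinate by the largest absolute
// value so the data ends up in [-1, 1] with a maximum of 1.
void normalizeData(Data data[], size_t size){
    double maxValue = 0;
    for(size_t i=0; i<size; ++i){
        if(fabs(data[i].x) > maxValue) maxValue = fabs(data[i].x);
        if(fabs(data[i].y) > maxValue) maxValue = fabs(data[i].y);
    }
    if(maxValue > 0){
        for(size_t i=0; i<size; ++i){
            data[i].x /= maxValue;
            data[i].y /= maxValue;
        }
    }
}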

An example: if I feed the network about 50 (normalized) data points that fall into 3 clusters, the remaining totalDistance stays at about 7.3. When I check the positions of the neurons, which should represent the centers of the clusters, two are perfect and one sits at the edge of its cluster. Shouldn't the algorithm move it to the center? I repeated the run several times and the output is always similar (exactly the same wrong point).

Your code does not look like LVQ; in particular, you never use the winning neuron, while it is the only one you should move:

void train(Network* network, Data trainingData[], size_t sizeTrainingData){
    for(int epoch=0; epoch<TRAINING_EPOCHS; ++epoch){
        double learningRate = getLearningRate(epoch);
        double totalDistance = 0;
        for(int i=0; i<sizeTrainingData; ++i){
            Data currentData = trainingData[i];
            int winningNeuron = 0;
            totalDistance += findWinningNeuron(network, currentData, &winningNeuron);
            //update weight
            network->neurons[i].x += learningRate * (currentData.x - network->neurons[i].x);
            network->neurons[i].y += learningRate * (currentData.y - network->neurons[i].y);
        }
        if(totalDistance<MIN_TOTAL_DISTANCE) break;
    }
}

The neuron you want to move is the one stored in winningNeuron, but you update the i-th one, where i actually iterates over the training samples. I am surprised you do not get memory corruption (network->neurons is supposed to be smaller than sizeTrainingData). I guess you meant:

void train(Network* network, Data trainingData[], size_t sizeTrainingData){
    for(int epoch=0; epoch<TRAINING_EPOCHS; ++epoch){
        double learningRate = getLearningRate(epoch);
        double totalDistance = 0;
        for(int i=0; i<sizeTrainingData; ++i){
            Data currentData = trainingData[i];
            int winningNeuron = 0;
            totalDistance += findWinningNeuron(network, currentData, &winningNeuron);
            //update weight
            network->neurons[winningNeuron].x += learningRate * (currentData.x - network->neurons[winningNeuron].x);
            network->neurons[winningNeuron].y += learningRate * (currentData.y - network->neurons[winningNeuron].y);
        }
        if(totalDistance<MIN_TOTAL_DISTANCE) break;
    }
}
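
For completeness: neither code block shows the headers or the constants it relies on. One set of definitions that makes the corrected train compile would look like the following (the numeric values are illustrative guesses, not taken from the question):

#include <math.h>    // exp, log, sqrt, pow
#include <stdlib.h>  // rand, srand
#include <time.h>    // time

#define TRAINING_EPOCHS         1000  // upper bound on passes over the data
#define LEARNING_RATE           0.5   // initial learning rate
#define LEARNING_RATE_MIN_VALUE 0.01  // learning rate reached at the last epoch
#define MIN_TOTAL_DISTANCE      0.1   // early-stopping threshold on the summed distances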