Want a genuine suggestion for building a Support Vector Machine in Python without using Scikit-Learn

I know how to build a Support Vector Machine using Scikit-Learn, but now I want to write one from scratch in Python without Scikit-Learn. I'm confused and don't really understand the internal workings, so I'd be very glad to get some help sorting this out.

You can implement a simple linear SVM with numpy as shown below. By the way, please google before asking next time; there are plenty of resources and tutorials on this online.

    import numpy as np
    def my_svm(dataset, label):
        rate = 1          # learning rate for the gradient updates
        epochs = 10000    # number of passes over the training data
        weights = np.zeros(dataset.shape[1])  # one weight per feature (incl. the bias column)

        # Minimize the regularized hinge loss by stochastic gradient descent.
        # Starting at epoch 1 avoids division by zero in the 1/epoch term below.
        for epoch in range(1, epochs):
            for n, data in enumerate(dataset):
                if (label[n] * np.dot(dataset[n], weights)) < 1:
                    # Misclassified or inside the margin: hinge-loss gradient step + regularization
                    weights = weights + rate * ((dataset[n] * label[n]) + (-2 * (1 / epoch) * weights))
                else:
                    # Correctly classified with margin: only apply the regularization shrinkage
                    weights = weights + rate * (-2 * (1 / epoch) * weights)

        return weights

    def predict(test_data, weights):
        results = []
        for data in test_data:
            # The sign of the dot product with the learned weights gives the predicted class
            result = np.dot(data, weights)
            results.append(-1 if result < 0 else 1)
        return results
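
For reference, the quantity being minimized above is the regularized hinge loss. If you want to watch it go down during training, something like the sketch below works (the helper name `hinge_loss` and the fixed regularization strength `reg` are just illustrative; the loop above effectively uses a decaying `1/epoch` term instead of a constant):

    def hinge_loss(dataset, label, weights, reg=0.01):
        # Regularized hinge loss: mean(max(0, 1 - y * (x . w))) + reg * ||w||^2
        # 'reg' is an assumed constant, not taken from the training loop above.
        margins = 1 - label * np.dot(dataset, weights)
        margins[margins < 0] = 0
        return np.mean(margins) + reg * np.dot(weights, weights)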

Generate the data for training and testing:

    dataset = np.array([
        [-2, 4, -1],  # columns: x_coord, y_coord, bias input (fixed at -1)
        [4, 1, -1],
        [0, 2, -1],
        [1, 6, -1],
        [2, 5, -1],
        [6, 2, -1],
    ])
    label = np.array([-1,-1,-1,1,1,1])

    weights = my_svm(dataset,label)
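
As a quick sanity check (purely illustrative, not part of the snippet above), you can run `predict` on the training set itself; if training has converged, the output should reproduce `label`:

    train_preds = predict(dataset, weights)
    print(train_preds)  # expected to match label: [-1, -1, -1, 1, 1, 1]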

Test it:

    test_data = np.array([
        [0, 3, -1],  # should belong to -1
        [4, 5, -1],  # should belong to 1
    ])
    predict(test_data, weights)
    >Out[10]: [-1, 1]
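
If you want to see what was learned, a short matplotlib sketch (my own addition, assuming matplotlib is available) can plot the training points and the decision boundary. Since the bias column is fixed at -1, the boundary is w0*x + w1*y - w2 = 0:

    import matplotlib.pyplot as plt

    # Plot the training points, one marker/color per class
    for point, y in zip(dataset, label):
        plt.scatter(point[0], point[1],
                    marker='o' if y == 1 else 'x',
                    color='blue' if y == 1 else 'red')

    # Decision boundary: w0*x + w1*y + w2*(-1) = 0, i.e. y = (w2 - w0*x) / w1
    # (assumes weights[1] is non-zero)
    xs = np.linspace(-3, 7, 100)
    plt.plot(xs, (weights[2] - weights[0] * xs) / weights[1], 'k--')
    plt.show()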