Why does my accuracy go over 100% on my logistic regression model?
I am working with a dataset that is a collection of several medical predictor variables and one target variable, used to classify whether or not a patient has diabetes. I am building the model without using the scikit-learn / sklearn library. I have attached a link to the dataset below.
https://www.kaggle.com/uciml/pima-indians-diabetes-database
I have trained and tested the model, but my accuracy keeps coming out above 100%. I'm new to this field, so I apologise if I'm making a silly mistake. My code is below (I am only using Glucose and DiabetesPedigreeFunction for the classification).
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline

df = pd.read_csv('diabetes.csv')
df.head()

# keep only Glucose, DiabetesPedigreeFunction and Outcome
df.drop(['BloodPressure', 'SkinThickness', 'Insulin', 'BMI',
         'Pregnancies', 'Age'], axis=1, inplace=True)
df

positive = df[df['Outcome'].isin([1])]
negative = df[df['Outcome'].isin([0])]

fig, ax = plt.subplots(figsize=(12, 8))
ax.scatter(positive['DiabetesPedigreeFunction'], positive['Glucose'],
           s=50, c='b', marker='o', label='Diabetes')
ax.scatter(negative['DiabetesPedigreeFunction'], negative['Glucose'],
           s=50, c='r', marker='x', label='Not Diabetes')
ax.legend()

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

nums = np.arange(-10, 10, step=1)
fig, ax = plt.subplots(figsize=(12, 8))
ax.plot(nums, sigmoid(nums), 'r')

def cost(theta, X, y):
    theta = np.matrix(theta)
    X = np.matrix(X)
    y = np.matrix(y)
    first = np.multiply(-y, np.log(sigmoid(X * theta.T)))
    second = np.multiply((1 - y), np.log(1 - sigmoid(X * theta.T)))
    return np.sum(first - second) / len(X)

# (X, y and theta are used below, but their construction is not shown
# in the snippet as posted)
X.shape, theta.shape, y.shape

cost(theta, X, y)

def gradient(theta, X, y):
    theta = np.matrix(theta)
    X = np.matrix(X)
    y = np.matrix(y)
    parameters = int(theta.ravel().shape[1])
    grad = np.zeros(parameters)
    error = sigmoid(X * theta.T) - y
    for i in range(parameters):
        term = np.multiply(error, X[:, i])
        grad[i] = np.sum(term) / len(X)
    return grad

gradient(theta, X, y)

import scipy.optimize as opt
result = opt.fmin_tnc(func=cost, x0=theta, fprime=gradient, args=(X, y))
cost(result[0], X, y)

def predict(theta, X):
    probability = sigmoid(X * theta.T)
    return [1 if x >= 0.5 else 0 for x in probability]

theta_min = np.matrix(result[0])
predictions = predict(theta_min, X)

correct = [1 if ((a == 1 and b == 1) or (a == 0 and b == 0)) else 0
           for (a, b) in zip(predictions, y)]
accuracy = (sum(map(int, correct)) % len(correct))
print('accuracy = {}%'.format(accuracy))
My accuracy comes out to 574%. I could use some feedback. Thanks in advance.
You used the modulo operator instead of division. Accuracy should be computed like this:
accuracy = sum(correct) / len(correct)
That gives a fraction between 0 and 1; multiply it by 100 if you want to print it as a percentage.
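A minimal sketch of the difference, assuming the hypothetical counts implied by the question's output (574 correct predictions, and 768 rows as in the Pima dataset): `%` leaves the count of correct predictions unchanged whenever it is smaller than the total, while division gives the true fraction.

```python
# Hypothetical counts: 574 correct predictions out of 768 samples.
n_correct, n_total = 574, 768

buggy = n_correct % n_total           # modulo: 574 % 768 == 574, printed as "574%"
fixed = n_correct / n_total * 100     # division: the actual accuracy percentage

print(buggy)               # 574
print(round(fixed, 1))     # 74.7
```

So the model was in fact about 74.7% accurate; only the final line of arithmetic was wrong.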