How can I convert SVM class probabilities to logits?
I want to convert the class probabilities output by an SVM into logits. To get the probability of each class:
from sklearn import svm

model = svm.SVC(probability=True)
model.fit(X, Y)
results = model.predict_proba(test_data)[0]
# gets a dictionary of {'class_name': probability}
prob_per_class_dictionary = dict(zip(model.classes_, results))
# gets a list of ['most_probable_class', 'second_most_probable_class', ..., 'least_class']
# (a list comprehension is used because map() returns an iterator in Python 3)
results_ordered_by_probability = [c for c, _ in sorted(zip(model.classes_, results), key=lambda x: x[1], reverse=True)]
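For reference, here is a self-contained version of the snippet above on toy data (the three cluster centers and point counts are made up purely for illustration):

```python
import numpy as np
from sklearn import svm

rng = np.random.RandomState(0)
# Toy 3-class data: 20 points scattered around each of three centers
centers = np.array([[0.0, 0.0], [3.0, 3.0], [6.0, 0.0]])
X = np.vstack([c + 0.3 * rng.randn(20, 2) for c in centers])
Y = np.repeat([0, 1, 2], 20)

model = svm.SVC(probability=True)
model.fit(X, Y)

# Probabilities for a single test point
results = model.predict_proba([[3.0, 3.0]])[0]
prob_per_class_dictionary = dict(zip(model.classes_, results))

# Classes sorted from most to least probable
results_ordered_by_probability = [
    c for c, _ in sorted(zip(model.classes_, results), key=lambda x: x[1], reverse=True)
]
```

Note that `predict_proba` requires `probability=True` at construction time; the probabilities come from Platt scaling fitted with an internal cross-validation, so they can be poorly calibrated on very small datasets.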
What do I want to do with these probabilities?
Convert the probabilities to logits.
Why?
I want to merge the SVM's results with those of a neural network whose loss is computed on its output logits. So I am looking for a way to convert the SVM's output probabilities into logits, and then merge the neural network logits with the SVM logits using equal weights:
SVM logits + neural network logits = overall_logits
overall_probabilities = softmax(overall_logits)
Edit:
Is summing the logits and then taking the softmax equivalent to simply averaging the probabilities directly (summing them and dividing by 2)?
proba_nn_class_1 = [0.8, 0.002, 0.1, ..., 0.00002]
proba_SVM_class_1 = [0.6, 0.1, 0.21, ..., 0.000003]
overall_proba = [(0.8+0.6)/2, (0.002+0.1)/2, (0.1+0.21)/2, ..., (0.00002+0.000003)/2]
Is this process numerically equivalent to summing the SVM and NN logits and then obtaining the probabilities via softmax?
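A quick numeric check answers this: if log-probabilities are used as logits, then softmax(log p + log q) is proportional to the elementwise product p·q (a normalized product of the two distributions), not to the arithmetic mean (p + q)/2, so the two schemes generally differ. A sketch (the 4-class probability vectors are made up, loosely based on the ones in the question, and each sums to 1):

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical 4-class probability vectors (each sums to 1)
proba_nn = np.array([0.8, 0.002, 0.1, 0.098])
proba_svm = np.array([0.6, 0.1, 0.21, 0.09])

# Option A: average the probabilities directly
avg_proba = (proba_nn + proba_svm) / 2

# Option B: treat log-probabilities as logits, sum, then softmax
sum_logit_proba = softmax(np.log(proba_nn) + np.log(proba_svm))
```

Here `avg_proba[0]` is 0.7, while `sum_logit_proba[0]` is roughly 0.94 (the normalized product sharpens the distribution where the two models agree), so the two procedures are not numerically equivalent.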
Thanks
import torch

def probs_to_logits(probs, is_binary=False):
    r"""
    Converts a tensor of probabilities into logits. For the binary case,
    this denotes the probability of occurrence of the event indexed by `1`.
    For the multi-dimensional case, the values along the last dimension
    denote the probabilities of occurrence of each of the events.
    """
    ps_clamped = clamp_probs(probs)
    if is_binary:
        return torch.log(ps_clamped) - torch.log1p(-ps_clamped)
    return torch.log(ps_clamped)

def clamp_probs(probs):
    # clamp away from exactly 0 and 1 so the log is finite
    eps = torch.finfo(probs.dtype).eps
    return probs.clamp(min=eps, max=1 - eps)
From https://github.com/pytorch/pytorch/blob/master/torch/distributions/utils.py#L107 (the original uses an internal `_finfo` helper; the public `torch.finfo` is used here so the snippet runs as-is).
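A small usage sketch tying this to the question, repeating the helpers so it runs standalone (the SVM probabilities and NN logits are made-up values for illustration):

```python
import torch

def clamp_probs(probs):
    # clamp away from exactly 0 and 1 so the log is finite
    eps = torch.finfo(probs.dtype).eps
    return probs.clamp(min=eps, max=1 - eps)

def probs_to_logits(probs, is_binary=False):
    ps_clamped = clamp_probs(probs)
    if is_binary:
        return torch.log(ps_clamped) - torch.log1p(-ps_clamped)
    return torch.log(ps_clamped)

# Hypothetical outputs for 4 classes
svm_probs = torch.tensor([0.6, 0.1, 0.21, 0.09])  # from predict_proba
nn_logits = torch.tensor([2.0, -1.0, 0.5, -2.0])  # from the network

# Merge with equal weights, as described in the question
overall_logits = probs_to_logits(svm_probs) + nn_logits
overall_probabilities = torch.softmax(overall_logits, dim=0)
```

Since `svm_probs` already sums to 1, `softmax(probs_to_logits(svm_probs))` recovers `svm_probs` (up to the clamping epsilon), which is what makes these log-probabilities usable as logits.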