Why does sklearn NearestNeighbors return a weird/wrong result?
- Code
from sklearn.neighbors import NearestNeighbors
import numpy as np
import matplotlib.pyplot as plt
X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
plt.scatter(X[:,0], X[:,1])
nbrs = NearestNeighbors(n_neighbors=4, algorithm='kd_tree').fit(X)
distances, indices = nbrs.kneighbors(X)
print(indices)
print(distances)
print(nbrs.kneighbors_graph(X).toarray())
- Result
[[0 1 2 3]
[1 0 2 3]
[2 1 0 3]
[3 4 5 0]
[4 3 5 0]
[5 4 3 0]]
[[0. 1. 2.23606798 2.82842712]
[0. 1. 1.41421356 3.60555128]
[0. 1.41421356 2.23606798 5. ]
[0. 1. 2.23606798 2.82842712]
[0. 1. 1.41421356 3.60555128]
[0. 1.41421356 2.23606798 5. ]]
[[1. 1. 1. 1. 0. 0.]
[1. 1. 1. 1. 0. 0.]
[1. 1. 1. 1. 0. 0.]
[1. 0. 0. 1. 1. 1.]
[1. 0. 0. 1. 1. 1.]
[1. 0. 0. 1. 1. 1.]]
For the last three points, I think the 4th data point is the closest.
Why does the code return the first data point as the closest?
I'm not sure what you mean by the last three points being closest to the 4th data point. But if you compare `distances` with `indices`, the output looks correct to me. Note that since you query the same points you fitted on, the first neighbor of each point is the point itself (distance 0), which is why each row of `indices` starts with the point's own index. For the last three points (rows 3-5), the remaining neighbors are the other points of that cluster, and point 0 appears last as the nearest member of the opposite cluster.
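You can double-check the kd-tree output against a brute-force computation (a minimal sketch using plain NumPy; the column count of 4 mirrors `n_neighbors=4` from the question, and this data has no distance ties within the top 4, so the sorted order is unambiguous):

```python
from sklearn.neighbors import NearestNeighbors
import numpy as np

X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])

nbrs = NearestNeighbors(n_neighbors=4, algorithm='kd_tree').fit(X)
distances, indices = nbrs.kneighbors(X)

# Brute force: full pairwise Euclidean distance matrix via broadcasting
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

# Sort each row and keep the 4 smallest distances and their indices
brute_indices = np.argsort(D, axis=1)[:, :4]
brute_distances = np.sort(D, axis=1)[:, :4]

print(np.array_equal(indices, brute_indices))      # same neighbor indices
print(np.allclose(distances, brute_distances))     # same distances
```

Both checks print `True`: each point's nearest neighbor is itself at distance 0, exactly as in the question's output.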