Obtaining k-means centroids and outliers in python / pyspark
Does anyone know of a simple algorithm in Python / PySpark to detect outliers in k-means clustering and to create a list or DataFrame of those outliers? I am not sure how to obtain the centroids. I am using the following code:
from pyspark.ml.clustering import KMeans

n_clusters = 10
kmeans = KMeans(k=n_clusters, seed=0)
model = kmeans.fit(Data.select("features"))
model.clusterCenters() will give you the centroids.
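Since the question also asks for a DataFrame, the centroids can be wrapped in one as well. A minimal sketch, assuming an active SparkSession named spark (the column names cluster_id and center are made up for illustration):

# Convert each centroid (a numpy array) into a plain list of floats so Spark can infer the schema
centers = [(i, [float(x) for x in c]) for i, c in enumerate(model.clusterCenters())]
centers_df = spark.createDataFrame(centers, ["cluster_id", "center"])
centers_df.show()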
To get the outliers, a straightforward way is to look for clusters of size 1.
Example:
data.show()
+-------------+
| features|
+-------------+
| [0.0,0.0]|
| [1.0,1.0]|
| [9.0,8.0]|
| [8.0,9.0]|
|[100.0,100.0]|
+-------------+
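For completeness, a toy DataFrame like the one above can be built as follows (a minimal sketch, assuming an active SparkSession named spark):

from pyspark.ml.linalg import Vectors

data = spark.createDataFrame(
    [(Vectors.dense([0.0, 0.0]),),
     (Vectors.dense([1.0, 1.0]),),
     (Vectors.dense([9.0, 8.0]),),
     (Vectors.dense([8.0, 9.0]),),
     (Vectors.dense([100.0, 100.0]),)],
    ["features"],
)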
from pyspark.ml.clustering import KMeans
kmeans = KMeans()  # k defaults to 2, which suits this toy example
model = kmeans.fit(data)
model.summary.predictions.show()
+-------------+----------+
| features|prediction|
+-------------+----------+
| [0.0,0.0]| 0|
| [1.0,1.0]| 0|
| [9.0,8.0]| 0|
| [8.0,9.0]| 0|
|[100.0,100.0]| 1|
+-------------+----------+
print(model.clusterCenters())
[array([4.5, 4.5]), array([100., 100.])]
print(model.summary.clusterSizes)
[4, 1]
# Get outliers with cluster size = 1
import pyspark.sql.functions as F

model.summary.predictions.filter(
    F.col('prediction').isin(
        [cluster_id for (cluster_id, size) in enumerate(model.summary.clusterSizes) if size == 1]
    )
).show()
+-------------+----------+
| features|prediction|
+-------------+----------+
|[100.0,100.0]| 1|
+-------------+----------+
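If you want the outliers as a plain Python list instead of a DataFrame, collect the filtered rows (a sketch reusing the filter above; outlier_df is a made-up name):

outlier_df = model.summary.predictions.filter(
    F.col('prediction').isin(
        [cluster_id for (cluster_id, size) in enumerate(model.summary.clusterSizes) if size == 1]
    )
)
outliers = [row['features'] for row in outlier_df.collect()]

Note that singleton clusters only catch points extreme enough to claim a centroid of their own. A complementary criterion, not part of the approach above and sketched here only as an assumption, is to flag points that lie unusually far from their assigned centroid, e.g. beyond the 95th percentile of distances:

import numpy as np
from pyspark.sql.types import DoubleType

centers = model.clusterCenters()

@F.udf(DoubleType())
def dist_to_center(features, prediction):
    # Euclidean distance from a point to its assigned cluster centroid
    return float(np.linalg.norm(features.toArray() - centers[prediction]))

preds = model.summary.predictions.withColumn(
    'distance', dist_to_center('features', 'prediction')
)
threshold = preds.approxQuantile('distance', [0.95], 0.0)[0]  # the 0.95 cutoff is arbitrary
preds.filter(F.col('distance') > threshold).show()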