How does 2D kernel density estimation in Python (sklearn) work?
Apologies for what may be a silly question, but I have been trying for hours now to estimate a density from a set of 2D data. Say my data is given by the array sample = np.random.uniform(0,1,size=(50,2)). I simply want to use the scikit-learn package to estimate the density of the sample array (which here is of course a 2D uniform density), and I am trying the following:
import numpy as np
from sklearn.neighbors import KernelDensity  # sklearn.neighbors.kde was removed in recent versions
from matplotlib import pyplot as plt

samples = np.random.uniform(0, 1, size=(50, 2))  # random samples
x = y = np.linspace(0, 1, 100)
X, Y = np.meshgrid(x, y)  # grid on which to evaluate the estimated density
kde = KernelDensity(kernel='gaussian', bandwidth=0.2).fit(samples)  # fit the density to the samples
kde.score_samples(X, Y)  # I want to evaluate the estimated density on the X,Y grid
But the last step always raises the error: score_samples() takes 2 positional arguments but 3 were given. So apparently .score_samples cannot take a grid as input, and there are no tutorials/docs for the 2D case, so I don't know how to fix this. It would be great if someone could help.
Looking at the Kernel Density Estimate of Species Distributions example, you have to pack the x,y data together (both the training data and the new sample grid).
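Concretely, score_samples expects a single array of shape (n_points, 2) rather than two separate grid arrays, so the meshgrid has to be flattened and stacked column-wise first. A minimal sketch of that fix, reusing the variables from the question:

```python
import numpy as np
from sklearn.neighbors import KernelDensity

samples = np.random.uniform(0, 1, size=(50, 2))
x = y = np.linspace(0, 1, 100)
X, Y = np.meshgrid(x, y)

kde = KernelDensity(kernel='gaussian', bandwidth=0.2).fit(samples)

# Flatten the grid into one (10000, 2) array of (x, y) points.
grid_points = np.vstack([X.ravel(), Y.ravel()]).T
log_dens = kde.score_samples(grid_points)    # log-density at each grid point
density = np.exp(log_dens).reshape(X.shape)  # back to the 100x100 grid shape
```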
Below is a function that simplifies the sklearn API:
import numpy as np
from sklearn.neighbors import KernelDensity

def kde2D(x, y, bandwidth, xbins=100j, ybins=100j, **kwargs):
    """Build 2D kernel density estimate (KDE)."""

    # create grid of sample locations (default: 100x100)
    xx, yy = np.mgrid[x.min():x.max():xbins,
                      y.min():y.max():ybins]

    xy_sample = np.vstack([yy.ravel(), xx.ravel()]).T
    xy_train = np.vstack([y, x]).T

    kde_skl = KernelDensity(bandwidth=bandwidth, **kwargs)
    kde_skl.fit(xy_train)

    # score_samples() returns the log-likelihood of the samples
    z = np.exp(kde_skl.score_samples(xy_sample))
    return xx, yy, np.reshape(z, xx.shape)
This gives you the xx, yy, zz needed for something like a scatter or pcolormesh plot. I copied the example from the scipy page on the gaussian_kde function.
import numpy as np
import matplotlib.pyplot as plt

m1 = np.random.normal(size=1000)
m2 = np.random.normal(scale=0.5, size=1000)
x, y = m1 + m2, m1 - m2

xx, yy, zz = kde2D(x, y, 1.0)

plt.pcolormesh(xx, yy, zz)
plt.scatter(x, y, s=2, facecolor='white')
plt.show()
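One practical follow-up: the bandwidth dominates the quality of the estimate, and sklearn's KernelDensity does not pick one automatically. A common approach (a sketch, not part of the original answer) is to cross-validate it with GridSearchCV, which scores each candidate by the held-out log-likelihood that KernelDensity.score reports:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
samples = rng.uniform(0, 1, size=(50, 2))

# Try a log-spaced range of bandwidths; GridSearchCV keeps the one
# with the best cross-validated log-likelihood.
params = {'bandwidth': np.logspace(-2, 0, 20)}
grid = GridSearchCV(KernelDensity(kernel='gaussian'), params, cv=5)
grid.fit(samples)

best_bw = grid.best_params_['bandwidth']
```

The chosen bandwidth can then be passed straight to kde2D (e.g. kde2D(x, y, best_bw)).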