Calculating entropy from GLCM of an image

I am using the skimage library for most of my image analysis work.

I have an RGB image and I intend to extract texture features such as entropy, energy, homogeneity and contrast from the image.

These are the steps I am performing:

import numpy as np
from skimage import io, color, feature
from skimage.filters import rank
from skimage.morphology import disk

rgbImg = io.imread(imgFlNm)
grayImg = color.rgb2gray(rgbImg)
print(grayImg.shape)  # (667, 1000), a 2-dimensional grayscale image

glcm = feature.greycomatrix(grayImg, [1], [0, np.pi/4, np.pi/2, 3*np.pi/4])
print(glcm.shape)  # (256, 256, 1, 4)

rank.entropy(glcm, disk(5))  # throws an error since entropy expects a 2-D array in its arguments

rank.entropy(grayImg, disk(5))  # gives an output

My question is: is the entropy calculated from the gray-scale image (directly) the same as the entropy feature extracted from the GLCM (a texture feature)?

If not, what is the right way to extract all the texture features from an image?

P.S. I have already referred to:

Entropy - skimage

GLCM - Texture features

from skimage.feature import greycomatrix, greycoprops

dis = greycoprops(glcm, 'dissimilarity')
plt.hist(dis.ravel(), normed=True, bins=256, range=(0, 30), facecolor='0.5')
plt.show()

Is the entropy calculated from the gray-scale image (directly) the same as the entropy feature extracted from the GLCM (a texture feature)?

No, these two entropies are rather different:

  1. skimage.filters.rank.entropy(grayImg, disk(5)) yields an array of the same size as grayImg containing the local entropy of the image, computed over a circular disk centered at the corresponding pixel with a radius of 5 pixels. Take a look at Entropy (information theory) to find out how entropy is calculated. The values in this array are useful for segmentation (follow this link to see an example of entropy-based object detection). If your goal is to describe the entropy of the image through a single (scalar) value, you can use skimage.measure.shannon_entropy(grayImg). This function basically applies the following formula to the full image:

     $$\text{entropy} = -\sum_{k=0}^{M-1} p_k \log_b(p_k)$$

     where $M$ is the number of gray levels (256 for 8-bit images), $p_k$ is the probability of a pixel having gray level $k$, and $b$ is the base of the logarithm function. When $b$ is set to 2, the returned value is measured in bits.

  2. The gray level co-occurrence matrix (GLCM) is a histogram of co-occurring grayscale values at a given offset over an image. To describe the texture of an image, it is usual to extract features such as entropy, energy, contrast, correlation, etc. from several co-occurrence matrices computed for different offsets. In this case the entropy is defined as follows:

     $$\text{entropy} = -\sum_{i=0}^{M-1} \sum_{j=0}^{M-1} p_{i,j} \log_b(p_{i,j})$$

     where $M$ and $b$ are again the number of gray levels and the base of the logarithm function, respectively, and $p_{i,j}$ stands for the probability of two pixels separated by the specified offset having intensities $i$ and $j$. Unfortunately, entropy is not one of the properties of a GLCM that you can calculate through scikit-image*. If you wish to compute this feature, you need to pass the GLCM to skimage.measure.shannon_entropy (see the sketch below).

*At the time this post was last edited, the latest version of scikit-image was 0.13.1.
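To illustrate the difference between the two entropies, here is a minimal sketch, assuming an 8-bit test image from skimage.data (the image choice and variable names are mine). It computes the local entropy map, the scalar image entropy, and the entropy of a single GLCM obtained by passing it to skimage.measure.shannon_entropy:

from skimage import data, img_as_ubyte
from skimage.feature import greycomatrix
from skimage.filters import rank
from skimage.measure import shannon_entropy
from skimage.morphology import disk

img = img_as_ubyte(data.camera())  # 8-bit grayscale test image

# Local entropy: one value per pixel, computed over a disk of radius 5
localEnt = rank.entropy(img, disk(5))  # same shape as img

# Scalar (global) entropy of the whole image
imgEnt = shannon_entropy(img)

# Entropy of a single GLCM (distance 1, angle 0)
glcm = greycomatrix(img, distances=[1], angles=[0], symmetric=True, normed=True)
glcmEnt = shannon_entropy(glcm[:, :, 0, 0])

print(imgEnt, glcmEnt)  # the two values differ, as explained above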

If not, what is the right way to extract all the texture features from an image?

There are many features that describe the texture of an image, for example local binary patterns, Gabor filters, wavelets, Laws' masks, and many others. Haralick's GLCM is one of the most popular texture descriptors. One possible approach to describing the texture of an image through GLCM features consists in computing the GLCM for different offsets (each offset being defined through a distance and an angle), and extracting different properties from each GLCM.

Let us consider, for example, three distances (1, 2 and 3 pixels), four angles (0, 45, 90 and 135 degrees) and two properties (energy and homogeneity). This results in 3 × 4 = 12 offsets (and hence 12 GLCM's) and a feature vector of dimension 12 × 2 = 24. Here's the code:

import numpy as np
from skimage import io, color, img_as_ubyte
from skimage.feature import greycomatrix, greycoprops
from sklearn.metrics.cluster import entropy

rgbImg = io.imread('https://i.stack.imgur.com/1xDvJ.jpg')
grayImg = img_as_ubyte(color.rgb2gray(rgbImg))

distances = [1, 2, 3]
angles = [0, np.pi/4, np.pi/2, 3*np.pi/4]
properties = ['energy', 'homogeneity']

glcm = greycomatrix(grayImg, 
                    distances=distances, 
                    angles=angles,
                    symmetric=True,
                    normed=True)

feats = np.hstack([greycoprops(glcm, prop).ravel() for prop in properties])

These are the results obtained with this image (the sample loaded from https://i.stack.imgur.com/1xDvJ.jpg in the code above):

In [56]: entropy(grayImg)
Out[56]: 5.3864158185167534

In [57]: np.set_printoptions(precision=4)

In [58]: print(feats)
[ 0.026   0.0207  0.0237  0.0206  0.0201  0.0207  0.018   0.0206  0.0173
  0.016   0.0157  0.016   0.3185  0.2433  0.2977  0.2389  0.2219  0.2433
  0.1926  0.2389  0.1751  0.1598  0.1491  0.1565]
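As noted above, the GLCM entropy is not one of the properties provided by greycoprops, but it can be appended to the feature vector by passing each individual co-occurrence matrix to skimage.measure.shannon_entropy. A possible sketch, reusing the glcm and feats arrays from the code above (the variable name entropies is mine):

from skimage.measure import shannon_entropy

# One entropy value per (distance, angle) pair: 3 x 4 = 12 extra features.
# glcm has shape (256, 256, len(distances), len(angles)).
entropies = np.array([shannon_entropy(glcm[:, :, d, a])
                      for d in range(glcm.shape[2])
                      for a in range(glcm.shape[3])])

feats = np.hstack([feats, entropies])  # feature vector of dimension 24 + 12 = 36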