Replace colors in image by closest color in palette using numpy

I have a list of colors, and I have a function closest_color(pixel, colors) that compares a given pixel's RGB values against my list of colors and returns the closest color from that list.

I need to apply this function to an entire image. When I try to do it pixel by pixel (using two nested for loops) it is slow. Is there a better way to achieve this with numpy?
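
For reference, here is a minimal sketch of the per-pixel approach described above; the exact form of closest_color and the nested loops is not given in the post, so this reconstruction is an assumption:

import numpy as np

def closest_color(pixel, colors):
    # brute force: distance from this pixel to every palette color
    colors = np.asarray(colors)
    distances = np.sqrt(((colors - pixel) ** 2).sum(axis=1))
    return colors[np.argmin(distances)]

def recolor_slow(image, colors):
    # the slow version: two nested Python loops over every pixel
    out = np.empty_like(image)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            out[y, x] = closest_color(image[y, x].astype(int), colors)
    return out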

Option 1: Single-image evaluation (slow)

Pros

 - any palette, any time (flexible)

Cons

 - slow
 - high memory use when the palette has many colors
 - not good for batch processing

Option 2: Batch processing (super fast)

Pros
 - super fast (50 ms per image), independent of palette size
 - low memory, independent of image size or palette size
 - ideal for batch processing if the palette doesn't change
 - simple code

Cons
 - requires creating the color cube (once, up to 3 minutes)
 - a color cube can hold only one palette

Requirements
 - the color cube takes about 1.5 MB of disk space as a compressed numpy array

Option 1:

Take the image, create a palette container with the same dimensions as the image, calculate the distances, and retrieve the new image using the np.argmin indices.

import numpy as np
from PIL import Image
import requests

# get some image
im = Image.open(requests.get("https://upload.wikimedia.org/wikipedia/commons/thumb/7/77/Big_Nature_%28155420955%29.jpeg/800px-Big_Nature_%28155420955%29.jpeg", stream=True).raw)
newsize = (1000, 1000)
im = im.resize(newsize)
# im.show()
im = np.asarray(im)

# Reshape the image to (1000, 1000, 1, 3); the extra axis of length 1 makes it
# easy to broadcast the subtraction against the color container
image = im.reshape(im.shape[0], im.shape[1], 1, 3)



# test colors
colors = [[0,0,0],[255,255,255],[0,0,255]]

# Create the color container
## Same spatial dimensions as the image, plus a palette axis: (1000, 1000, number of colors, 3)
colors_container = np.ones(shape=[image.shape[0], image.shape[1], len(colors), 3])
for i, color in enumerate(colors):
    colors_container[:, :, i, :] = color



def closest(image, color_container):
    shape = image.shape[:2]
    total_shape = shape[0] * shape[1]

    # calculate distances
    ### shape = (x, y, number of colors)
    distances = np.sqrt(np.sum((color_container - image) ** 2, axis=3))

    # get the position of the smallest distance
    ## i.e. for each pixel, which slot of color_container (x, y, ?, 3) is closest
    ### min_index has shape (x, y); flatten it to shape (x*y,)
    min_index = np.argmin(distances, axis=2).reshape(-1)
    # natural index: binds each pixel position to its color position
    natural_index = np.arange(total_shape)

    # reshape for easy fancy indexing
    ## shape is (x*y, number of colors, 3)
    num_colors = color_container.shape[2]
    reshaped_container = color_container.reshape(-1, num_colors, 3)

    # for each pixel position, pick the palette color with the smallest distance
    color_view = reshaped_container[natural_index, min_index].reshape(shape[0], shape[1], 3)
    return color_view

# NOTE: don't pass uint8 arrays here, to avoid overflow during the subtraction
result_image = closest(image,colors_container)

Image.fromarray(result_image.astype(np.uint8)).show()
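
As a side note (not part of the original answer), the same nearest-color image can be computed by broadcasting directly against a palette array, without pre-filling the (1000, 1000, len(colors), 3) container; a minimal sketch reusing image and colors from above:

palette_arr = np.asarray(colors)                                  # (K, 3)
d2 = np.sum((image.astype(np.int64) - palette_arr) ** 2, axis=3)  # (1000, 1000, K)
result_image2 = palette_arr[np.argmin(d2, axis=2)]                # (1000, 1000, 3)
Image.fromarray(result_image2.astype(np.uint8)).show()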

Option 2:

Build a 256x256x256x3 color cube from your palette. In other words, assign the closest palette color to every possible color. Save the color cube (once, the first time). Load the color cube. Take an image and use each color in the image as an index into the color cube.

import numpy as np
from PIL import Image
import requests
import time
# get some image
im = Image.open(requests.get("https://helpx.adobe.com/content/dam/help/en/photoshop/using/convert-color-image-black-white/jcr_content/main-pars/before_and_after/image-before/Landscape-Color.jpg", stream=True).raw)
newsize = (1000, 1000)
im = im.resize(newsize)
im = np.asarray(im)


### Initialization: Do just once
# Step 1: Define palette
palette = np.array([[255,255,255],[125,0,0],[0,0,125],[0,0,0]])

# Step 2: Create/Load precalculated color cube
try:
    # for every possible color (256*256*256), look up the assigned palette color
    precalculated = np.load('view.npz')['color_cube']
except FileNotFoundError:
    # first run: assign the closest palette color to every possible color
    precalculated = np.zeros(shape=[256, 256, 256, 3])
    for i in range(256):
        print('processing', 100 * i / 256)
        for j in range(256):
            for k in range(256):
                index = np.argmin(np.sqrt(np.sum((palette - np.array([i, j, k])) ** 2, axis=1)))
                precalculated[i, j, k] = palette[index]
    np.savez_compressed('view', color_cube=precalculated)
        

# Processing part
#### Step 1: Take the precalculated color cube for the defined palette and run the image through it

def get_view(color_cube,image):
    shape = image.shape[0:2]
    indices = image.reshape(-1,3)
    # pass image colors and retrieve corresponding palette color
    new_image = color_cube[indices[:,0],indices[:,1],indices[:,2]]
   
    return new_image.reshape(shape[0],shape[1],3).astype(np.uint8)

start = time.time()
result = get_view(precalculated,im)
print('Image processing: ',time.time()-start)
Image.fromarray(result).show()
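
If the one-time initialization (up to 3 minutes with the triple Python loop above) is a concern, the cube can also be filled in a vectorized way. This sketch is not part of the original answer; it assumes the palette defined above and enough RAM for a chunked distance matrix (roughly 100 MB for the color grid):

# all 256^3 colors; with indexing='ij', grid[(i*256 + j)*256 + k] == [i, j, k]
grid = np.stack(np.meshgrid(np.arange(256, dtype=np.int16),
                            np.arange(256, dtype=np.int16),
                            np.arange(256, dtype=np.int16),
                            indexing='ij'), axis=-1).reshape(-1, 3)

cube = np.empty((256 ** 3, 3), dtype=np.uint8)
chunk = 1 << 18                       # process in chunks to keep the distance matrix small
for start in range(0, grid.shape[0], chunk):
    block = grid[start:start + chunk]                                    # (chunk, 3)
    d2 = np.sum((block[:, None, :] - palette[None, :, :]) ** 2, axis=2)  # (chunk, len(palette))
    cube[start:start + chunk] = palette[np.argmin(d2, axis=1)]

precalculated_vec = cube.reshape(256, 256, 256, 3)   # same mapping as the loop-built cube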

The task is to turn a picture into its palette version. You define a palette, and then, for every pixel, you need to find the nearest-neighbor match for that pixel's color in the defined palette. That lookup gives you an index, which you can then translate into the palette color for that pixel.

This can be done using FLANN (which ships with OpenCV). It isn't much code either. The lookup takes two seconds on my old computer.

One advantage of this approach is that it can handle "large" palettes without requiring lots of memory. That is not unique to FLANN, however. What may be unique to FLANN is how little (user-side) code it needs.

Downside: this still takes a few seconds.

FLANN uses index structures, can handle arbitrary vectors, and works with the float32 type. Thanks to its index structures, FLANN runs sub-linearly (probably O(log(n)) or so), i.e. better than a "linear scan" (O(n)). However, the cost of FLANN's complexity and generality only amortizes through the better lookup complexity once the palette gets huge. A "linear scan" with code specific to this problem is what I use with numba in my other answer.

Full notebook: https://gist.github.com/crackwitz/bbb1aff9b7c6c744665715a5337192c0

import numpy as np
import cv2 as cv

# `im` is assumed to be a uint8 BGR image loaded earlier (see the full notebook)
height, width = im.shape[:2]

# set up FLANN
# somewhat arbitrary parameters because it is under-documented
norm = cv.NORM_L2
FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)
fm = cv.FlannBasedMatcher(index_params, search_params)

# make up a palette and give it to FLANN
levels = (0, 64, 128, 192, 255)
palette = np.uint8([
    [b,g,r]
    for b in levels
    for g in levels
    for r in levels
])
print("palette size:", len(palette))
fm.add(np.float32([palette])) # extra dimension is "pictures", unused
fm.train()

# find nearest neighbor matches for all pixels
queries = im.reshape((-1, 3)).astype(np.float32)
matches = fm.match(queries)

# get match indices and distances
assert len(palette) <= 256
indices = np.uint8([m.trainIdx for m in matches]).reshape(height, width)
dist = np.float32([m.distance for m in matches]).reshape(height, width)

# indices to palette colors
output = palette[indices]
# imshow(output)

Here are two variants, using numba, a JIT compiler for Python code.

from numba import njit, prange

The first variant uses more numpy primitives (np.argmin) and consequently uses "more" memory. Maybe that bit of memory makes a difference, or maybe numba calls the numpy routines as-is and cannot optimize them.

@njit(parallel=True)
def lookup1(palette, im):
    palette = palette.astype(np.int32)  # avoid uint8 overflow in the subtraction
    (rows, cols) = im.shape[:2]
    result = np.zeros((rows, cols), dtype=np.uint8)

    for i in prange(rows):  # parallelize over rows
        for j in range(cols):
            # squared distance from this pixel to every palette color
            sqdists = ((im[i,j] - palette) ** 2).sum(axis=1)
            index = np.argmin(sqdists)
            result[i,j] = index

    return result

I get ~180-190 ms per run on lena.jpg with a palette of 125 colors.

The second variant uses more hand-written code to replace most of the numpy primitives, which makes it even faster.

@njit(parallel=True)
def lookup2(palette, im):
    (rows,cols) = im.shape[:2]
    result = np.zeros((rows, cols), dtype=np.uint8)
    
    for i in prange(rows): # parallelize over this
        for j in range(cols):
            pb,pg,pr = im[i,j] # take pixel apart
            bestindex = -1
            bestdist = 2**20
            for index in range(len(palette)):
                cb,cg,cr = palette[index] # take palette color apart
                dist = (pb-cb)**2 + (pg-cg)**2 + (pr-cr)**2
                if dist < bestdist:
                    bestdist = dist
                    bestindex = index
            
            result[i,j] = bestindex
    
    return result

30 ms per run!
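
Both variants return a map of palette indices rather than a finished image; a short usage sketch (the exact integration is an assumption, following the palette[indices] step from the FLANN answer above):

indices = lookup2(palette, im)   # or lookup1(palette, im)
output = palette[indices]        # map each index back to its palette color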

I think this is within an order of magnitude of the theoretical maximum. I derive that from the arithmetic operations required:

  • Per palette entry: A = 10 ops

    3 subtractions, 3 squarings, 3 additions, 1 comparison

  • Per pixel: B = 1375 ops

    len(palette) * (A+1), the +1 being an index increment

  • Per row: C = 704512 ops

    ncols * (B+1), the +1 being an index increment

  • Per image: D = 360710656 ops

    nrows * (C+1), the +1 being an index increment

So, at 30 ms on my ancient hyperthreaded quad-core, that works out to 12000 MIPS (I won't say flop/s, because there are no floats). That means close to one instruction per cycle. I'm sure the code is missing some SIMD vectorization... one could look into what LLVM makes of these loops, but I won't bother with that right now.
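
A quick arithmetic check of those numbers, assuming the 512x512 lena.jpg and the 125-color palette mentioned above (the exact image dimensions are an assumption):

A = 10                            # per palette entry: 3 sub, 3 square, 3 add, 1 compare
P, ncols, nrows = 125, 512, 512   # palette size and assumed image dimensions
B = P * (A + 1)                   # per pixel  -> 1375
C = ncols * (B + 1)               # per row    -> 704512
D = nrows * (C + 1)               # per image  -> 360710656
print(D / 0.030 / 1e6)            # ~12000 MIPS at 30 ms per run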

Some code in cython might be able to tackle this, because there you can constrain the types of the variables even further.