How to speed up the search of the closest geopoint?

Currently, I use the following code to find the stops closest to a given geopoint:

from math import cos, asin, sqrt

def get_closest_stops(lat, lng, data=db.STOPS):
    def distance(lat1, lon1, lat2, lon2):
        # haversine distance in km
        p = 0.017453292519943295  # pi / 180
        a = 0.5 - cos((lat2-lat1)*p)/2 + cos(lat1*p)*cos(lat2*p) * (1-cos((lon2-lon1)*p)) / 2
        return 12742 * asin(sqrt(a))  # 12742 km = Earth's diameter

    return sorted(data.values(), key=lambda p: distance(lat, lng, p['lat'], p['lng']))

Here is db.STOPS:

STOPS = {
    '1282': {'lat': 59.773368, 'lng': 30.125112, 'image_osm': '1652229/464dae0763a8b1d5495e', 'name': 'name 1', 'descr': ''},
    '1285': {'lat': 59.941117, 'lng': 30.271756, 'image_osm': '965417/63af068831d93dac9830', 'name': 'name 2', 'descr': ''},
...
}

The dictionary holds about 7000 records and the search is slow. Is there any way to speed it up? I only need 5 results. I can reorder the dictionary, and I can even create a copy of it if needed.

Use multithreading, and split the records across the available threads.

Hope this helps: Tutorials Point page about multithreading
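The chunking idea above can be sketched with `concurrent.futures` (the function names and chunking scheme here are my own). Note that for CPU-bound pure-Python math the GIL usually prevents threads from running in parallel, so in practice a `ProcessPoolExecutor` may be needed to see a real speedup:

```python
from concurrent.futures import ThreadPoolExecutor
from math import cos, asin, sqrt

def distance(lat1, lon1, lat2, lon2):
    # haversine distance in km
    p = 0.017453292519943295  # pi / 180
    a = 0.5 - cos((lat2-lat1)*p)/2 + cos(lat1*p)*cos(lat2*p) * (1-cos((lon2-lon1)*p)) / 2
    return 12742 * asin(sqrt(a))

def closest_in_chunk(chunk, lat, lng, n):
    # find the n closest stops within one chunk
    return sorted(chunk, key=lambda p: distance(lat, lng, p['lat'], p['lng']))[:n]

def get_closest_stops_threaded(lat, lng, stops, n=5, workers=4):
    items = list(stops.values())
    size = (len(items) + workers - 1) // workers  # ceil division
    chunks = [items[i:i + size] for i in range(0, len(items), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = list(pool.map(closest_in_chunk, chunks,
                              [lat] * len(chunks), [lng] * len(chunks),
                              [n] * len(chunks)))
    # merge the per-chunk winners and take the overall top n
    merged = [stop for part in parts for stop in part]
    return sorted(merged, key=lambda p: distance(lat, lng, p['lat'], p['lng']))[:n]
```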

TL;DR: numpy gives a 10x speedup, partial sorting of the distances another 3x, about 30-50x in total. Using Manhattan distance as a cheaper approximation to eliminate infeasible candidates has roughly the same effect, but is less intuitive.

Here is a colab with the full experiment code; results below:

# Setup:
import numpy as np
DTYPE = np.float64
STOPS_NP = np.array([(stop['lat'], stop['lng']) for stop in STOPS.values()], dtype=[('lat', DTYPE), ('lng', DTYPE)])

Naive implementation: 8 ms per loop

from math import cos, asin, sqrt

def distance(lat1, lon1, lat2, lon2):
    # haversine distance in km
    p = 0.017453292519943295  # pi / 180
    a = 0.5 - cos((lat2-lat1)*p)/2 + cos(lat1*p)*cos(lat2*p) * (1-cos((lon2-lon1)*p)) / 2
    return 12742 * asin(sqrt(a))

def get_closest_stops(lat, lng, data=STOPS):
    return sorted(data.values(), key=lambda p: distance(lat, lng, p['lat'], p['lng']))

The same algorithm ported to Numpy: 1.3 ms per loop with float64, 500 us with float32 (a 13x improvement!):

def distance_np(lat1, lon1, lat2, lon2):
    # numpy version of distance
    p = 0.017453292519943295
    a = 0.5 - np.cos((lat2-lat1)*p)/2 + np.cos(lat1*p)*np.cos(lat2*p) * (1-np.cos((lon2-lon1)*p)) / 2
    # multiplication by constant does not affect sorting
    return 12742 * np.arcsin(np.sqrt(a))

def get_closest_stops_np(lat, lng, data=STOPS_NP):
    distances = distance_np(lat, lng, data['lat'], data['lng'])
    indexes = distances.argsort()
    return data[indexes]

Using a partial sort of the array to get the top 5 candidates: 500 us float64, 160 us float32 (a 50x improvement!):

def get_closest_stops_np_argpartition(lat, lng, data=STOPS_NP, n=5):
    distances = distance_np(lat, lng, data['lat'], data['lng'])
    # argpartition finds the n smallest in O(len) but leaves them unordered
    indexes = np.argpartition(distances, n)[:n]
    return data[indexes]
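A toy example (my own, not from the answer) of what `np.argpartition` does: it returns the indices of the n smallest values in linear time, but in no particular order among themselves, so sort the small shortlist afterwards if order matters:

```python
import numpy as np

values = np.array([7.0, 1.0, 5.0, 3.0, 9.0, 2.0])
n = 3
# indices of the 3 smallest values, unordered
idx = np.argpartition(values, n)[:n]
# sorting only the 3-element shortlist is cheap
smallest = np.sort(values[idx])
print(smallest)  # [1. 2. 3.]
```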

Use a cheaper distance approximation to avoid computing the more expensive exact distance for most candidates (what I originally suggested in the comments): 600 us

def manhattan_distance_np(lat1, lon1, lat2, lon2):
    # L1 distance in degrees; not shown in the original answer, but only the
    # relative ordering matters here, so no unit conversion is needed
    return np.abs(lat2 - lat1) + np.abs(lon2 - lon1)

def get_closest_stops_np_early_stop(lat, lng, data=STOPS_NP, n=5):
    # shortcut: use a cheaper approximate distance as an upper bound,
    # create a much smaller shortlist, then sort by the real (expensive) distance
    distances = manhattan_distance_np(lat, lng, data['lat'], data['lng'])
    indexes = distances.argsort()
    nth_large_manhattan_distance = distances[indexes[n-1]]
    # manhattan distance is at most ~1.5x (sqrt(2)) higher than the real distance
    # on a plane; perhaps should be adjusted for the globe
    max_manhattan_for_equivalent_distance = nth_large_manhattan_distance * 1.5
    n_feasible_candidates = np.searchsorted(distances, max_manhattan_for_equivalent_distance, sorter=indexes)
    data_shortlist = data[indexes[:n_feasible_candidates]]

    # the rest is the same as get_closest_stops_np
    distances = distance_np(lat, lng, data_shortlist['lat'], data_shortlist['lng'])
    indexes = distances.argsort()
    return data_shortlist[indexes[:n]]

All four algorithms produce the same results on the simulated data. Note that on Earth a degree of longitude generally spans less distance than a degree of latitude, so the horizontal scale on this simulated plot does not match the vertical one:
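The agreement between variants can be spot-checked on simulated stops. A minimal self-contained sketch comparing the full-sort and argpartition versions (the simulated data, query point, and `top5_*` names are my own; the shortlist must be re-sorted because argpartition leaves its winners unordered):

```python
import numpy as np
from_lat, from_lng = 59.75, 30.25  # arbitrary query point

rng = np.random.default_rng(0)
# simulated stops around the same region as the question's data
lats = rng.uniform(59.5, 60.0, 1000)
lngs = rng.uniform(30.0, 30.5, 1000)
STOPS_NP = np.array(list(zip(lats, lngs)),
                    dtype=[('lat', np.float64), ('lng', np.float64)])

def distance_np(lat1, lon1, lat2, lon2):
    p = 0.017453292519943295  # pi / 180
    a = 0.5 - np.cos((lat2-lat1)*p)/2 + np.cos(lat1*p)*np.cos(lat2*p) * (1-np.cos((lon2-lon1)*p)) / 2
    return 12742 * np.arcsin(np.sqrt(a))

def top5_sort(lat, lng, data=STOPS_NP, n=5):
    d = distance_np(lat, lng, data['lat'], data['lng'])
    return data[d.argsort()[:n]]

def top5_argpartition(lat, lng, data=STOPS_NP, n=5):
    d = distance_np(lat, lng, data['lat'], data['lng'])
    idx = np.argpartition(d, n)[:n]
    # argpartition does not order the n winners, so sort the shortlist
    return data[idx[np.argsort(d[idx])]]

assert np.array_equal(top5_sort(from_lat, from_lng),
                      top5_argpartition(from_lat, from_lng))
```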