Elasticsearch Python client: 'illegal_argument_exception' when creating a new index

So, I need to create a new index and store my geohash as a geo_point type, because I need to use Kibana's geo heat map. I receive the data over UDP. Every time data comes in, the script crashes with the following exception:

Traceback (most recent call last):
  File "fieldsrv", line 103, in <module>
    processmsg(addr, data)
  File "fieldsrv", line 68, in processmsg
    es.indices.create(index=index, body=mappings)
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/client/utils.py", line 76, in _wrapped
    return func(*args, params=params, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/client/indices.py", line 91, in create
    params=params, body=body)
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/transport.py", line 318, in perform_request
    status, headers_response, data = connection.perform_request(method, url, params, body, headers=headers, ignore=ignore, timeout=timeout)
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/connection/http_urllib3.py", line 185, in perform_request
    self._raise_error(response.status, raw_data)
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/connection/base.py", line 125, in _raise_error
    raise HTTP_EXCEPTIONS.get(status_code, TransportError)(status_code, error_message, additional_info)
elasticsearch.exceptions.RequestError: TransportError(400, u'illegal_argument_exception', u'unknown setting [index.mapping.measurements.properties.ECL.type] please check that any required plugins are installed, or check the breaking changes documentation for removed settings')

I don't really understand what I am doing wrong in the 'mappings' variable. Below you will find my code (the IP is obviously fake):

import socket
import datetime
import os
import re
import Geohash
import json
from elasticsearch import Elasticsearch
from elasticsearch_dsl import GeoPoint

UDP_IP_ADDRESS = '133.713.371.337'
UDP_PORT_NO = 16666
host = "localhost"
index = 'test'
es = Elasticsearch(hosts=[{'host': 'localhost', 'port': 9200}])

dev_ids = ["AD01", "AD02", "MM01", "AU01", "OS01"]

srvSock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srvSock.bind((UDP_IP_ADDRESS, UDP_PORT_NO))


def processmsg(ip, a):
    try:
        ab = a.split(";")
        dev_id = ab[0]
        mode = ab[1]
        cell_id = ab[2]
        ecl = ab[3]
        snr = ab[4]
        lat = round(float(re.sub('[\n]', '', ab[5])), 6)
        long = round(float(re.sub('[\n]', '', ab[6])), 6)
        msg_date = datetime.datetime.now().strftime('%d.%m.%Y %H:%M:%S')
        datapoint = []
        ghash = Geohash.encode(lat, long)

    except ValueError:
        print("Data received by IP " + str(ip) + " not valid, ignoring")
        print(a)
    mappings = {
        "mapping": {
            "measurements": {
                "properties": {
                    "ECL": {
                        "type": "string",
                    },
                    "SNR": {
                        "type": "string",
                    },
                    "cell-id": {
                        "type": "string",
                    },
                    "date": {
                        "type": "date",
                    },
                    "device-id": {
                        "type": "string",
                    },
                    "geohash": {
                        "type": "geo_point",
                    },
                    "mode": {
                        "type": "text",
                    }
                }
            }
        }
    }
    es.indices.create(index=index, body=mappings)

    if dev_id in dev_ids:
        msg = msg_date + ' - ' + dev_id + ', ' + 'MODE: ' + mode + ', '
        msg = msg + 'CELLID: ' + cell_id + ', ' + 'ECL: ' + ecl + ', ' + 'SNR: ' + snr + ' '
        msg = msg + 'LAT: ' + str(lat) + ', ' + 'LONG: ' + str(long) + ', ' + 'GEOHASH: ' + str(ghash)
        print(msg)
        bulk_index = {
            "index": {
                "_index": index,
                "_type": 'measurements',
                "_id": 'test'
            }
        }
        values = {
            "measurement": "test",
            "date": msg_date,
            "device-id": dev_id,
            "mode": mode,
            "cell-id": cell_id,
            "ECL": ecl,
            "SNR": re.sub('[\n]', '', snr),
            "geohash": ghash
        }
        datapoint.append(bulk_index)
        datapoint.append(values)
        res = es.bulk(index=index, doc_type='measurements', body=datapoint, refresh=True)
        es.indices.refresh(index=index)

    else:
        print("Unknown device")


while True:
    data, addr = srvSock.recvfrom(1024)
    processmsg(addr, data)

EDIT: here is my elasticsearch.yml:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: localhost
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

The answer is simple: it needs to be "mappings": {... instead of "mapping": {...
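
For clarity, here is a minimal sketch of the corrected request body. It reuses the field list from the question and only changes the top-level key; note that on Elasticsearch 6.x the string type no longer exists and would additionally have to become text or keyword.

# Corrected index-creation body: the top-level key must be "mappings" (plural);
# everything below it is taken from the question's code.
# "es" and "index" are the client and index name defined in the question's script.
mappings = {
    "mappings": {
        "measurements": {              # mapping type, as used in the question
            "properties": {
                "ECL":       {"type": "string"},
                "SNR":       {"type": "string"},
                "cell-id":   {"type": "string"},
                "date":      {"type": "date"},
                "device-id": {"type": "string"},
                "geohash":   {"type": "geo_point"},
                "mode":      {"type": "text"}
            }
        }
    }
}

es.indices.create(index=index, body=mappings)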

The accepted answer did not work for me. What I did was create the ES index first, then apply the mapping, and that worked.

es.indices.create(index = indexname)
es.indices.put_mapping(index = indexname, doc_type='_doc', body = request_body)

FYI, I am using Python 3.7 and ES 6.5.
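
For completeness, request_body above is just the mapping itself. A minimal sketch, assuming the field names from the question; the exact types are an assumption chosen to be valid on ES 6.x, where string no longer exists:

# Hypothetical request_body for put_mapping on ES 6.x: only the "properties"
# object, no surrounding "mappings" key, and 6.x-compatible field types.
request_body = {
    "properties": {
        "ECL":       {"type": "keyword"},
        "SNR":       {"type": "keyword"},
        "cell-id":   {"type": "keyword"},
        "date":      {"type": "date"},
        "device-id": {"type": "keyword"},
        "geohash":   {"type": "geo_point"},
        "mode":      {"type": "text"}
    }
}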