Removing duplicates after a bulk insert with Elasticsearch


I have an index with documents of this form:

{
    "email": email,
    "data": {
        domain: [{
            "purchase_date": date,
            "amount": amount
        }]
    }
}

This is the Python method I wrote to insert the data into ES:

# 1: check if mail exists
mailExists = es.exists(index=index_param, doc_type=doctype_param, id=email)

# if mail does not exist => insert entire doc
if mailExists is False:
    doc = {
        "email": email,
        "data": {
            domain: [{
                "purchase_date": date,
                "amount": amount
            }]
        }
    }

    res = es.index(index=index_param, doc_type=doctype_param, id=email, body=doc)
# 2: check if the domain already exists
else:
    query = es.get(index=index_param, doc_type=doctype_param, id=email)
    # save json content into mydata
    mydata = query['_source']['data']

    # if domain exists => check if 'purchase_date' is the same as the one I'm trying to insert
    if domain in mydata:
        differentPurchaseDate = True
        for element in mydata[domain]:
            if element['purchase_date'] == purchase_date:
                differentPurchaseDate = False
        # if 'purchase_date' does not exist => add it to the current domain
        if differentPurchaseDate:
            es.update(index=index_param, doc_type=doctype_param, id=email,
                body={
                    "script": {
                        "inline": "ctx._source.data['"+domain+"'].add(params.newPurchaseDate)",
                        "params": {
                            "newPurchaseDate": {
                                "purchase_date": purchase_date,
                                "amount": amount
                            }
                        }
                    }
                })

    # add entire domain
    else:
        es.update(index=index_param, doc_type=doctype_param, id=email,
            body={
                "script": {
                    "inline": "ctx._source.data['"+domain+"'] = params.newDomain",
                    "params": {
                        "newDomain": [{
                            "purchase_date": purchase_date,
                            "amount": amount
                        }]
                    }
                }
            })

The problem is that with this algorithm every new row takes about 50 seconds to insert, and I'm working with very large files. So I was wondering: can I reduce the import time by doing one bulk insert per file and then removing the duplicates after each file has been processed? Thanks!

python-2.7 elasticsearch bigdata
1 Answer

Try using parallel_bulk, documentation here:

from elasticsearch import helpers


paramL = []

# 1: check if mail exists
mailExists = es.exists(index=index_param, doc_type=doctype_param, id=email)

# if mail does not exist => insert entire doc
if mailExists is False:
    doc = {
        "email": email,
        "data": {
            domain: [{
                "purchase_date": date,
                "amount": amount
            }]
        }
    }

    ogg={
        '_op_type': 'index',
        '_index': index_param,
        '_type': doctype_param,
        '_id': email,
        '_source': doc
    }

    paramL.append(ogg)


# 2: check if the domain already exists
else:
    query = es.get(index=index_param, doc_type=doctype_param, id=email)
    # save json content into mydata
    mydata = query['_source']['data']

    # if domain exists => check if 'purchase_date' is the same as the one I'm trying to insert
    if domain in mydata:
        differentPurchaseDate = True
        for element in mydata[domain]:
            if element['purchase_date'] == purchase_date:
                differentPurchaseDate = False
        # if 'purchase_date' does not exist => add it to the current domain
        if differentPurchaseDate:
            body = {
                "script": {
                    "inline": "ctx._source.data['"+domain+"'].add(params.newPurchaseDate)",
                    "params": {
                        "newPurchaseDate": {
                            "purchase_date": purchase_date,
                            "amount": amount
                        }
                    }
                }
            }
            ogg = {
                '_op_type': 'update',
                '_index': index_param,
                '_type': doctype_param,
                '_id': email,
                '_source': body
            }

            paramL.append(ogg)

    # add entire domain
    else:
        body = {
            "script": {
                "inline": "ctx._source.data['"+domain+"'] = params.newDomain",
                "params": {
                    "newDomain": [{
                        "purchase_date": purchase_date,
                        "amount": amount
                    }]
                }
            }
        }
        ogg = {
            '_op_type': 'update',
            '_index': index_param,
            '_type': doctype_param,
            '_id': email,
            '_source': body
        }

        paramL.append(ogg)


for success, info in helpers.parallel_bulk(client=es, actions=paramL, thread_count=4):
    if not success: 
        print 'Doc failed', info

If you also want to batch the get and exists queries, you should use the msearch query from elastic - documentation here. In that case you would build an ordered list of queries and would have to change the structure of your script, because you will receive a single output containing an ordered list with the results of all the exists (or get) queries, so you can't use the if-else statements the way you do now. If you give me more information, I will help you implement the multi-search query.
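
As a rough sketch (untested, and assuming es, index_param and doctype_param are defined as in the code above), a batched exists check with msearch could look something like this, using one ids query per e-mail and relying on the responses coming back in the same order as the request lines:

# hypothetical batch of e-mail IDs collected from the current file
emails = ["first@example.com", "second@example.com"]

# msearch body: one header line + one query line per e-mail, in order
search_body = []
for mail in emails:
    search_body.append({"index": index_param, "type": doctype_param})  # header line
    search_body.append({"query": {"ids": {"values": [mail]}}})         # query by _id

responses = es.msearch(body=search_body)

# responses["responses"] keeps the same order as the e-mails above,
# so each hit count tells you whether that e-mail already exists
for mail, resp in zip(emails, responses["responses"]):
    already_exists = resp["hits"]["total"] > 0  # in ES 7+: resp["hits"]["total"]["value"]
    # ... build the matching 'index' or 'update' action for paramL here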

Here is an example of the mget query for the get queries:

emails = [ <list_of_email_ID_values> ]
results = es.mget(index=index_param,
                  doc_type=doctype_param,
                  body={'ids': emails})
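
Just as a sketch of how the mget response could then replace the per-document exists/get calls: results['docs'] has one entry per e-mail, in the same order as emails, and each entry carries a found flag plus the _source.

# 'found' replaces the exists() call and '_source' replaces the get() call
for doc in results['docs']:
    if doc['found']:
        mydata = doc['_source']['data']
        # ... queue the appropriate 'update' (script) action for paramL, as above
    else:
        # ... queue an 'index' action with the full new document
        pass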