How to migrate MySQL data to Elasticsearch using Logstash

Question · votes: -2 · answers: 3

I need a brief explanation of how to move MySQL data into Elasticsearch using Logstash. Can anyone explain the process step by step?

mysql elasticsearch logstash kibana
3 Answers

1 vote

You can use the jdbc input plugin for Logstash.

Here is a configuration example.
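For orientation, a minimal sketch of such a pipeline for MySQL might look like the following. The driver path, database, table, and credentials are placeholders, and the MySQL Connector/J jar has to be downloaded separately:

input {
    jdbc {
        # Path to the MySQL Connector/J jar (placeholder; download it separately)
        jdbc_driver_library => "/path/to/mysql-connector-java-8.0.x.jar"
        jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
        jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"
        jdbc_user => "user"
        jdbc_password => "password"

        # Run once a minute; :sql_last_value remembers the last id seen
        schedule => "* * * * *"
        statement => "SELECT * FROM users WHERE id > :sql_last_value ORDER BY id ASC"
        use_column_value => true
        tracking_column => "id"
    }
}

output {
    elasticsearch {
        hosts => ["http://localhost:9200"]
        index => "users"
        # Reuse the primary key so re-runs update documents instead of duplicating them
        document_id => "%{id}"
    }
}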


0 votes

This is a broad question, and I don't know how familiar you are with MySQL and ES. Say you have a table user: you could simply dump it as CSV and load it into your ES, and that would be fine. But if the data is dynamic, with MySQL feeding it like a pipeline, you will need to write a script to handle that. In any case, before asking how, you can look at the following links to build up the basics (a small sketch follows the links below).

How to dump mysql?

How to load data to ES

Also, you may be wondering how to convert the CSV into a JSON file, which is the format ES understands best.

How to convert CSV to JSON
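As one illustration of the dump-and-load route, Logstash itself can read a CSV dump and index it, handling the JSON serialization for you. A minimal sketch, assuming a hypothetical /tmp/users.csv dump with id, name, and email columns:

input {
    file {
        path => "/tmp/users.csv"
        start_position => "beginning"
        # Disable sincedb so the file is read from the top on every run (one-off load)
        sincedb_path => "/dev/null"
    }
}

filter {
    # Split each CSV line into named fields
    csv {
        separator => ","
        columns => ["id", "name", "email"]
    }
}

output {
    elasticsearch {
        hosts => ["http://localhost:9200"]
        index => "users"
        document_id => "%{id}"
    }
}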


0 votes

Let me give you a high-level set of instructions.

  • Install Logstash and Elasticsearch.
  • Copy the JDBC driver jar into the Logstash bin folder (this example uses Oracle's ojdbc7.jar; see the MySQL note after the run step).
  • For Logstash, create a pipeline configuration file, e.g. config.conf (the pipeline syntax below is Logstash's own, not YAML):
input {
    # Get the data from database, configure fields to get data incrementally
    jdbc {
        jdbc_driver_library => "./ojdbc7.jar"
        jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
        jdbc_connection_string => "jdbc:oracle:thin:@db:1521:instance"
        jdbc_user => "user"
        jdbc_password => "pwd"

        id => "some_id"

        jdbc_validate_connection => true
        jdbc_validation_timeout => 1800
        connection_retry_attempts => 10
        connection_retry_attempts_wait_time => 10

        #fetch the db logs using logid
        statement => "select * from customer.table where logid > :sql_last_value order by logid asc"

        #limit how many results are pre-fetched at a time from the cursor into the client’s cache before retrieving more results from the result-set
        jdbc_fetch_size => 500
        jdbc_default_timezone => "America/New_York"

        use_column_value => true
        tracking_column => "logid"
        tracking_column_type => "numeric"
        record_last_run => true

        schedule => "*/2 * * * *"

        type => "log.customer.table"
        add_field => {"source" => "customer.table"}
        add_field => {"tags" => "customer.table" } 
        add_field => {"logLevel" => "ERROR" }

        last_run_metadata_path => "last_run_metadata_path_table.txt"
    }

}

# Massage the data to store in index
filter {
    if [type] == 'log.customer.table' {
        #assign values from db column to custom fields of index
        ruby{
            code => "event.set( 'errorid', event.get('ssoerrorid') );
                    event.set( 'msg', event.get('errormessage') );
                    event.set( 'logTimeStamp', event.get('date_created'));
                    event.set( '@timestamp', event.get('date_created'));
                    "
        }
        #remove the db columns that were mapped to custom fields of index
        mutate {
            remove_field => ["ssoerrorid","errormessage","date_created" ]
        }
    }#end of [type] == 'log.customer.table' 
} #end of filter

# Insert into index
output {
    if [type] == 'log.customer.table' {
        amazon_es {
            hosts => ["vpc-xxx-es-yyyyyyyyyyyy.us-east-1.es.amazonaws.com"]
            region => "us-east-1"
            aws_access_key_id => '<access key>'
            aws_secret_access_key => '<secret password>'
            index => "production-logs-table-%{+YYYY.MM.dd}"
        }
    }
}
  • Go to the bin folder and run it with logstash -f config.conf
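Note that the sample above connects to an Oracle database. Since the question is about MySQL, the jdbc input would use the MySQL Connector/J driver instead. A minimal sketch of the MySQL variant, with host, schema, and credentials as placeholders:

input {
    jdbc {
        # MySQL Connector/J in place of the Oracle driver
        jdbc_driver_library => "./mysql-connector-java-8.0.x.jar"
        jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
        jdbc_connection_string => "jdbc:mysql://db-host:3306/customer"
        jdbc_user => "user"
        jdbc_password => "pwd"

        # Same incremental pattern as above: track logid between runs
        statement => "select * from customer.table where logid > :sql_last_value order by logid asc"
        use_column_value => true
        tracking_column => "logid"
        tracking_column_type => "numeric"
        schedule => "*/2 * * * *"
    }
}

The filter and output sections can remain unchanged.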