I am using Filebeat to push my logs to Elasticsearch through Logstash, and this setup was working fine for me before. Now I am getting a
Failed to publish events error
.
filebeat | 2020-06-20T06:26:03.832969730Z 2020-06-20T06:26:03.832Z INFO log/harvester.go:254 Harvester started for file: /logs/app-service.log
filebeat | 2020-06-20T06:26:04.837664519Z 2020-06-20T06:26:04.837Z ERROR logstash/async.go:256 Failed to publish events caused by: write tcp YY.YY.YY.YY:40912->XX.XX.XX.XX:5044: write: connection reset by peer
filebeat | 2020-06-20T06:26:05.970506599Z 2020-06-20T06:26:05.970Z ERROR pipeline/output.go:121 Failed to publish events: write tcp YY.YY.YY.YY:40912->XX.XX.XX.XX:5044: write: connection reset by peer
filebeat | 2020-06-20T06:26:05.970749223Z 2020-06-20T06:26:05.970Z INFO pipeline/output.go:95 Connecting to backoff(async(tcp://xx.com:5044))
filebeat | 2020-06-20T06:26:05.972790871Z 2020-06-20T06:26:05.972Z INFO pipeline/output.go:105 Connection to backoff(async(tcp://xx.com:5044)) established
Logstash pipeline
02-beats-input.conf
input {
  beats {
    port => 5044
  }
}
10-syslog-filter.conf
filter {
  json {
    source => "message"
  }
}
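This filter assumes each harvested line in the message field is a complete JSON document. For instance, appending a line like the following to a watched file (the field names are purely illustrative) would be decoded into top-level service, level, and msg fields, while non-JSON lines are kept but tagged with _jsonparsefailure:
echo '{"service":"app-service","level":"INFO","msg":"request handled"}' >> /logs/app-service.log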
30-elasticsearch-output.conf
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "index-%{+YYYY.MM.dd}"
  }
}
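Once events flow end to end, one quick check that this output is really writing into Elasticsearch is to list the daily indices it creates (assuming Elasticsearch is reachable on localhost:9200 as configured above):
curl 'localhost:9200/_cat/indices/index-*?v'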
Filebeat configuration. Sharing my Filebeat config at /usr/share/filebeat/filebeat.yml:
filebeat.inputs:
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /logs/*
#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false
  # Period on which files under path should be checked for changes
  #reload.period: 10s
#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false
#============================== Kibana =====================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"
  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:
#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["xx.com:5044"]
  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"
  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
#================================ Processors =====================================
# Configure processors to enhance or manipulate events generated by the beat.
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
When I run telnet xx.xx 5044, this is what I see in the terminal:
Trying X.X.X.X...
Connected to xx.xx.
Escape character is '^]'
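Besides telnet, Filebeat has built-in checks that validate the configuration file and probe the configured output over an actual connection; roughly:
# validate the YAML and input/output settings
filebeat test config -c /usr/share/filebeat/filebeat.yml
# attempt a real connection to the Logstash endpoint
filebeat test output -c /usr/share/filebeat/filebeat.yml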
I had the same problem. Here are some steps that may help you find the core of the issue. First I tested a setup like this: filebeat (localhost) -> logstash (localhost) -> elastic -> kibana, with every service on the same machine.
My /etc/logstash/conf.d/config.conf:
input {
  beats {
    port => 5044
    ssl => false
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
Here I explicitly disabled SSL (in my case this was the main cause of the problem, even though the certificates were correct, strangely enough). After that, don't forget to restart Logstash and test with the
sudo filebeat -e
command.
If everything works, you will not see the "connection reset by peer" error.
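On a systemd-based host, that restart-and-retest cycle would look roughly like this (assuming Logstash was installed as a systemd service):
# restart Logstash so the new pipeline config is picked up
sudo systemctl restart logstash
# run Filebeat in the foreground, logging to stderr, to watch for publish errors
sudo filebeat -e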
I had the same issue. Starting Filebeat as a sudo user worked for me:
sudo ./filebeat -e
I made some changes to the input plugin configuration, such as specifying
ssl => false
but it would not work without starting Filebeat as a sudo-privileged user or root.
To start Filebeat as a sudo user, the filebeat.yml file must be owned by root. Running
sudo chown -R some_sudo_user:some_group filebeat-7.15.0-linux-x86_64/
changes ownership of the whole Filebeat folder to the sudo-privileged user, and then chown root filebeat.yml changes the ownership of the file, as shown below.
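Put together, the permission fix looks like this (a sketch; some_sudo_user and some_group are placeholders for your own account and group):
# hand the extracted folder to your own user
sudo chown -R some_sudo_user:some_group filebeat-7.15.0-linux-x86_64/
# filebeat.yml itself must be owned by root
sudo chown root filebeat-7.15.0-linux-x86_64/filebeat.yml
# then run Filebeat in the foreground
cd filebeat-7.15.0-linux-x86_64/ && sudo ./filebeat -e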