An In-Depth Look at ELK, the Log Collection and Processing Stack
ELK is an acronym for three open-source projects: Elasticsearch, Logstash, and Kibana. A fourth component, Filebeat, has since been added: a lightweight log collection agent that uses few resources and is well suited to gathering logs on individual servers and shipping them to Logstash.

Logstash is an open-source tool for collecting, parsing, and storing logs. Kibana 4 is the web interface for searching and viewing the logs that Logstash has indexed. Both tools are built on Elasticsearch.
● Logstash: the Logstash server component, which processes incoming logs.
● Elasticsearch: stores all the logs.
● Kibana 4: the web interface for searching and visualizing logs, served through an nginx reverse proxy.
● Logstash Forwarder: installed on every server that will ship logs to Logstash; it acts as a log-forwarding agent and talks to the Logstash server over the lumberjack network protocol.
Note: logstash-forwarder is being replaced by Beats; watch for that in follow-up articles, which will move to a logstash + elasticsearch + beats setup.
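As a preview of that Beats-based setup, the Filebeat agent mentioned at the top is driven by a small YAML file. A minimal sketch (the log paths, type, and Logstash endpoint below are illustrative assumptions, not values from this article's setup):

```yaml
# Minimal Filebeat (1.x) sketch: ship nginx access logs to a Logstash instance.
# Adjust paths and the host:port to your own environment.
filebeat:
  prospectors:
    - paths:
        - /var/log/nginx/*-access.log
      document_type: nginx      # becomes the event "type" field, like the forwarder's
output:
  logstash:
    hosts: ["192.168.28.131:5044"]
```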
The ELK architecture is as follows; these are the package versions used in this walkthrough:
elasticsearch-1.7.2.tar.gz
kibana-4.1.2-linux-x64.tar.gz
logstash-1.5.6-1.noarch.rpm
logstash-forwarder-0.4.0-1.x86_64.rpm
Single-node setup
#OS
CentOS release 6.5 (Final)
#Base and JDK
groupadd elk
useradd -g elk elk
passwd elk
yum install vim lsof man wget ntpdate vixie-cron -y
crontab -e
*/1 * * * * /usr/sbin/ntpdate time.windows.com > /dev/null 2>&1
service crond restart
Disable SELinux and stop iptables:
sed -i "s#SELINUX=enforcing#SELINUX=disabled#" /etc/selinux/config
service iptables stop
reboot
tar -zxvf jdk-8u92-linux-x64.tar.gz -C /usr/local/
vim /etc/profile
export JAVA_HOME=/usr/local/jdk1.8.0_92
export JRE_HOME=/usr/local/jdk1.8.0_92/jre
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH
export CLASSPATH=$CLASSPATH:.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
source /etc/profile
#Elasticsearch
#(For a cluster, install elasticsearch on the other servers as well; give them the same cluster name but different node names.)
Install via RPM:
rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch
wget -c https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.2.noarch.rpm
rpm -ivh elasticsearch-1.7.2.noarch.rpm
Install via tarball:
wget -c https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.2.tar.gz
tar zxvf elasticsearch-1.7.2.tar.gz -C /usr/local/
cd /usr/local/elasticsearch-1.7.2/
mkdir -p /data/{db,logs}
vim config/elasticsearch.yml
#cluster.name: elasticsearch
#node.name: "es-node1"
#node.master: true
#node.data: true
path.data: /data/db
path.logs: /data/logs
network.host: 192.168.28.131
#Install plugins
cd /usr/local/elasticsearch-1.7.2/
bin/plugin -install mobz/elasticsearch-head
#https://github.com/mobz/elasticsearch-head
bin/plugin -install lukas-vlcek/bigdesk
bin/plugin -install lmenezes/elasticsearch-kopf
#This reports that the version is too old.
The workaround is to download the package by hand instead of going through the plugin install command:
cd /usr/local/elasticsearch-1.7.2/plugins
wget https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip
unzip master.zip
mv elasticsearch-kopf-master kopf
The steps above are exactly equivalent to the plugin install command.
cd /usr/local/
chown elk:elk elasticsearch-1.7.2/ -R
chown elk:elk /data/* -R
Install supervisord:
yum install supervisor -y
Append a section for elasticsearch at the end of the configuration file:
vim /etc/supervisord.conf
[program:elasticsearch]
directory = /usr/local/elasticsearch-1.7.2/
;command = su -c "/usr/local/elasticsearch-1.7.2/bin/elasticsearch" elk
command =/usr/local/elasticsearch-1.7.2/bin/elasticsearch
numprocs = 1
autostart = true
startsecs = 5
autorestart = true
startretries = 3
user = elk
;stdout_logfile_maxbytes = 200MB
;stdout_logfile_backups = 20
;stdout_logfile = /var/log/pvs_elasticsearch_stdout.log
#Kibana (mind version compatibility with elasticsearch)
wget -c https://download.elastic.co/kibana/kibana/kibana-4.1.2-linux-x64.tar.gz
tar zxvf kibana-4.1.2-linux-x64.tar.gz -C /usr/local/
cd /usr/local/kibana-4.1.2-linux-x64
vim config/kibana.yml
port: 5601
host: "192.168.28.131"
elasticsearch_url: "http://192.168.28.131:9200"
./bin/kibana -l /var/log/kibana.log # start the service; since 4.0, kibana runs as a standalone service
#cd /etc/init.d && curl -o kibana https://gist.githubusercontent.com/thisismitch/8b15ac909aed214ad04a/raw/fc5025c3fc499ad8262aff34ba7fde8c87ead7c0/kibana-4.x-init
#cd /etc/default && curl -o kibana https://gist.githubusercontent.com/thisismitch/8b15ac909aed214ad04a/raw/fc5025c3fc499ad8262aff34ba7fde8c87ead7c0/kibana-4.x-default
#Edit the settings in both files to match your install, then add execute permission.
Or do it as follows:
cat > /etc/init.d/kibana <<'EOF'
#!/bin/sh
# chkconfig: 2345 80 20
# description: Kibana 4 init script
NAME=kibana
DESC="Kibana 4"
DAEMON=/usr/local/kibana-4.1.2-linux-x64/bin/kibana
PID_FILE=/var/run/$NAME.pid
LOCK_FILE=/var/lock/subsys/$NAME
KIBANA_LOG=/var/log/kibana.log
. /etc/init.d/functions
start() {
echo -n "Starting $DESC : "
pid=`pidofproc -p $PID_FILE node`
if [ -n "$pid" ]; then
echo "Already running."
exit 0
else
nohup $DAEMON >> "$KIBANA_LOG" 2>&1 &
sleep 2
pidofproc node > $PID_FILE
RETVAL=$?
[[ $? -eq 0 ]] && success || failure
echo
[ $RETVAL = 0 ] && touch $LOCK_FILE
return $RETVAL
fi
}
reload()
{
echo "Reload command is not implemented for this service."
return $RETVAL
}
stop() {
echo -n "Stopping $DESC : "
killproc -p $PID_FILE $DAEMON
RETVAL=$?
echo
[ $RETVAL = 0 ] && rm -f $PID_FILE $LOCK_FILE
}
case "$1" in
start)
start
;;
stop)
stop
;;
status)
status -p $PID_FILE $DAEMON
RETVAL=$?
;;
restart)
stop
start
;;
reload)
reload
;;
*)
# Invalid Arguments, print the following message.
echo "Usage: $0 {start|stop|status|restart}" >&2
exit 2
;;
esac
EOF
chmod +x /etc/init.d/kibana
#Nginx
yum install nginx -y
vim /etc/nginx/conf.d/elk.conf
server {
server_name elk.sudo.com;
auth_basic "Restricted Access";
auth_basic_user_file passwd;
location / {
proxy_pass http://192.168.28.131:5601;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
#To manage users with htpasswd instead: yum install httpd-tools -y
echo -n 'sudo:' >> /etc/nginx/passwd # add the user
openssl passwd elk.sudo.com >> /etc/nginx/passwd # append the password hash (the password here is 'elk.sudo.com')
cat /etc/nginx/passwd # inspect the result
chkconfig nginx on && service nginx start
#Logstash--Setup
rpm --import https://packages.elasticsearch.org/GPG-KEY-elasticsearch
vi /etc/yum.repos.d/logstash.repo
[logstash-1.5]
name=Logstash repository for 1.5.x packages
baseurl=http://packages.elasticsearch.org/logstash/1.5/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1
yum install logstash -y
#Create the SSL certificate on the logstash server. There are two ways: bind it to an IP address or to an FQDN (DNS name). Pick either one.
#1. IP address
Set the parameter below in the [ v3_ca ] section; 192.168.28.131 is the address of the logstash server.
vi /etc/pki/tls/openssl.cnf
subjectAltName = IP: 192.168.28.131
cd /etc/pki/tls
openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
#Set -days generously so the certificate does not expire on you.
#2. FQDN
# No changes to openssl.cnf are needed.
cd /etc/pki/tls
openssl req -subj '/CN=logstash.sudo.com/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
logstash.sudo.com is a domain I use only for testing, so there is no need to add an A record for it.
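To confirm the certificate came out as intended, you can run the same openssl command into a scratch directory and then inspect the result with openssl x509 (the /tmp paths below are only for illustration):

```shell
# Generate a throwaway certificate the same way as above, into /tmp.
cd /tmp
openssl req -subj '/CN=logstash.sudo.com/' -x509 -days 3650 -batch -nodes \
  -newkey rsa:2048 -keyout lf.key -out lf.crt 2>/dev/null
# Print the subject and expiry; check the CN and that -days took effect.
openssl x509 -in /tmp/lf.crt -noout -subject -enddate
```

The same x509 invocation works on the real /etc/pki/tls/certs/logstash-forwarder.crt.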
#Logstash-Config
#Add the GeoIP data source
#wget http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz
#gzip -d GeoLiteCity.dat.gz && mv GeoLiteCity.dat /etc/logstash/.
Logstash configuration files use a JSON-like syntax and live under /etc/logstash/conf.d. A configuration consists of three sections: inputs, filters, and outputs.
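Putting the three sections together, the overall shape of a pipeline looks like this (a skeleton only; the plugin names in the comments are examples):

```
input {
  # where events come from (lumberjack, redis, file, ...)
}
filter {
  # how events are parsed and enriched (grok, date, geoip, ...)
}
output {
  # where events go (elasticsearch, file, stdout, ...)
}
```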
First, create a file 01-lumberjack-input.conf that sets up the lumberjack input, the protocol Logstash Forwarder speaks:
vi /etc/logstash/conf.d/01-lumberjack-input.conf
input {
lumberjack {
port => 5043
type => "logs"
ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
}
}
Next, create 11-nginx.conf to filter the nginx logs:
vi /etc/logstash/conf.d/11-nginx.conf
filter {
if [type] == "nginx" {
grok {
match => { "message" => "%{IPORHOST:clientip} - %{NOTSPACE:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:method} %{NOTSPACE:request}(?: %{URIPROTO:proto}/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:status} (?:%{NUMBER:size}|-) %{QS:referrer} %{QS:agent} %{QS:xforwardedfor}" }
add_field => [ "received_at", "%{@timestamp}" ]
add_field => [ "received_from", "%{host}" ]
}
date {
match => [ "timestamp" , "dd/MMM/YYYY:HH:mm:ss Z" ]
}
# geoip {
# source => "clientip"
# add_tag => [ "geoip" ]
# fields => ["country_name", "country_code2","region_name", "city_name", "real_region_name", "latitude", "longitude"]
# remove_field => [ "[geoip][longitude]", "[geoip][latitude]" ]
# }
}
}
This filter looks for logs tagged with type "nginx" (set by logstash-forwarder) and tries to parse the incoming nginx log lines with grok so they become structured and queryable. The type must match the one configured in logstash-forwarder.
Also pay attention to the nginx log format; here I use the default log_format.
#Behind a load-balancing reverse proxy you can change it to something like this:
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $upstream_response_time $request_time $body_bytes_sent '
'"$http_referer" "$http_user_agent" "$http_x_forwarded_for" "$request_body" '
'$scheme $upstream_addr';
If the log format is different, the grok matching rules must be rewritten.
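For the load-balancer log_format above, a grok match along these lines could serve as a starting point (an untested sketch; the field names are my own choices, so verify it against real log lines in the debugger first):

```
match => { "message" => "%{IPORHOST:clientip} - %{NOTSPACE:remote_user} \[%{HTTPDATE:timestamp}\] \"%{WORD:method} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?\" %{NUMBER:status} (?:%{NUMBER:upstream_response_time}|-) %{NUMBER:request_time} %{NUMBER:size} %{QS:referrer} %{QS:agent} %{QS:xforwardedfor} %{QS:request_body} %{WORD:scheme} %{NOTSPACE:upstream_addr}" }
```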
You can debug patterns with the online tool at http://grokdebug.herokuapp.com/. In most cases where ELK shows no data, the problem is here.
#Grok Debug -- http://grokdebug.herokuapp.com/
If grok does not match the logs, do not move on to the tests below; keep at it until it matches. The grok pattern reference at http://grokdebug.herokuapp.com/patterns# is well worth studying and pays off when writing matching rules later.
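A quick offline sanity check is to approximate the grok pattern with a plain extended regex and test it against a sample line (the line and regex below are simplified illustrations, not full grok semantics):

```shell
# A sample line in the default nginx combined-style format used above.
line='1.2.3.4 - - [07/Feb/2016:12:00:00 +0800] "GET /index.html HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"'
# A rough ERE equivalent of the IPORHOST/remote_user/HTTPDATE/status/size portion.
re='^[0-9.]+ - [^ ]+ \[[^]]+\] "[A-Z]+ [^ ]+ HTTP/[0-9.]+" [0-9]+ ([0-9]+|-)'
echo "$line" | grep -Eq "$re" && echo "match" || echo "no match"
```

If your real log lines fail even this rough check, the log_format and the pattern have drifted apart.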
Finally, create one more file to define the output:
vi /etc/logstash/conf.d/99-lumberjack-output.conf
output {
if "_grokparsefailure" in [tags] {
file { path => "/var/log/logstash/grokparsefailure-%{type}-%{+YYYY.MM.dd}.log" }
}
elasticsearch {
host => "192.168.28.131"
protocol => "http"
index => "logstash-%{type}-%{+YYYY.MM.dd}"
document_type => "%{type}"
workers => 5
template_overwrite => true
}
#stdout { codec =>rubydebug }
}
This stores the structured logs in elasticsearch, while logs that fail the grok match are written to a file. Note that filter files added later must be named so they sort between 01 and 99, because logstash reads its configuration files in order.
While debugging, do not send logs to elasticsearch yet; write them to stdout instead so you can spot mistakes. Also keep an eye on the logs: many errors show up there, which makes them easy to pin down.
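For that debugging stage, the output section can be sketched like this, with elasticsearch disabled and the rubydebug codec printing each event to the console:

```
output {
  # elasticsearch { ... }          # commented out while debugging
  stdout { codec => rubydebug }    # pretty-print every event
}
```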
Before starting the logstash service, it is best to test the configuration:
# /opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/*
Configuration OK
You can also test files one by one; keep going until you get OK, otherwise the logstash service will not start. After that, start the logstash service.
#logstash-forwarder
Copy the public certificate (logstash-forwarder.crt) created during the logstash setup to every logstash-forwarder server, i.e. every server whose logs you want to collect.
wget https://download.elastic.co/logstash-forwarder/binaries/logstash-forwarder-0.4.0-1.x86_64.rpm
rpm -ivh logstash-forwarder-0.4.0-1.x86_64.rpm
vi /etc/logstash-forwarder.conf
{
"network": {
"servers": [ "192.168.28.131:5043" ],
"ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt",
"timeout": 30
},
"files": [
{
"paths": [ "/var/log/nginx/*-access.log" ],
"fields": { "type": "nginx" }
}
]
}
The configuration file is JSON; if the format is wrong, the logstash-forwarder service will not start.
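Since a malformed file just silently keeps the service down, it is worth validating the JSON before starting it. One way is Python's json.tool; the sketch below writes a sample config to /tmp for illustration, but the same check applies to /etc/logstash-forwarder.conf:

```shell
# Write a sample forwarder config to /tmp and check that it is valid JSON.
cat > /tmp/logstash-forwarder.conf <<'EOF'
{
  "network": {
    "servers": [ "192.168.28.131:5043" ],
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt",
    "timeout": 30
  },
  "files": [
    { "paths": [ "/var/log/nginx/*-access.log" ], "fields": { "type": "nginx" } }
  ]
}
EOF
# Use whichever python is installed (python2 on CentOS 6, python3 elsewhere).
PY=$(command -v python || command -v python3)
if "$PY" -m json.tool < /tmp/logstash-forwarder.conf > /dev/null 2>&1; then
  echo "valid JSON"
else
  echo "invalid JSON - fix it before starting logstash-forwarder"
fi
```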
After that, start the logstash-forwarder service.
echo -e "192.168.28.131 Test1\n192.168.28.130 Test2\n192.168.28.138 Test3">>/etc/hosts # without these entries elasticsearch fails on startup (it cannot resolve Test*)
su - elk
cd /usr/local/elasticsearch-1.7.2
nohup ./bin/elasticsearch &
(elasticsearch can also be managed by supervisord and started at boot together with the other services.)
On the elk server:
service logstash restart
service kibana restart
Visit http://elk.sudo.com:9200/ to check whether the service came up successfully.
On the clients:
service nginx start && service logstash-forwarder start
#Use redis as a log queue; create the matching configuration files
vi /etc/logstash/conf.d/redis-input.conf
input {
lumberjack {
port => 5043
type => "logs"
ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
}
}
filter {
if [type] == "nginx" {
grok {
match => { "message" => "%{IPORHOST:clientip} - %{NOTSPACE:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:method} %{NOTSPACE:request}(?: %{URIPROTO:proto}/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:status} (?:%{NUMBER:size}|-) %{QS:referrer} %{QS:agent} %{QS:xforwardedfor}" }
add_field => [ "received_at", "%{@timestamp}" ]
add_field => [ "received_from", "%{host}" ]
}
date {
match => [ "timestamp" , "dd/MMM/YYYY:HH:mm:ss Z" ]
}
#test
}
}
output {
#### push the received logs onto a redis message queue ####
redis {
host => "127.0.0.1"
port => 6379
data_type => "list"
key => "logstash:redis"
}
}
vi /etc/logstash/conf.d/redis-output.conf
input {
# read from redis
redis {
data_type => "list"
key => "logstash:redis"
host => "192.168.28.131" #redis-server
port => 6379
#threads => 5
}
}
output {
elasticsearch {
host => "192.168.28.131"
protocol => "http"
index => "logstash-%{type}-%{+YYYY.MM.dd}"
document_type => "%{type}"
workers => 36
template_overwrite => true
}
#stdout { codec =>rubydebug }
}
# /opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/*
Configuration OK
Log in to redis and query: you will see that the corresponding log entries have been written under the key.