
Building an EFK Log Collection System with Spring Boot


Contents

EFK architecture (Elasticsearch / Filebeat / Kibana)

1. Download Elasticsearch, Kibana, and Filebeat

2. Create a user and grant permissions

3. Install and start

3.1 Install and start Elasticsearch as the elasticsearch user

>3.1.1 Extract Elasticsearch

>3.1.2 Configure Elasticsearch

>3.1.3 Start Elasticsearch

>3.1.4 Verify access

3.2 Install and start Kibana

>3.2.1 Extract Kibana

>3.2.2 Configure Kibana

>3.2.3 Start Kibana

>3.2.4 Verify access

3.3 Install and start Filebeat

>3.3.1 Extract & configure Filebeat

>3.3.2 Start Filebeat

3.4 Spring Boot Logback configuration


EFK architecture (Elasticsearch / Filebeat / Kibana)

1. Filebeat collects the logs (it supports many input types: log, http, system, tcp, mq, docker, aws, ...; see the input configuration reference: https://www.elastic.co/guide/en/beats/filebeat/current/configuration-filebeat-options.html)

2. Filebeat parses the collected events and pushes them to Elasticsearch

3. Kibana visualizes the data

If the log volume is very high, consider this variant instead (a minimal sketch of the Filebeat Kafka output follows the list):

1. Filebeat ships logs to Kafka, taking advantage of Kafka's high-throughput buffering; Kafka itself can be clustered

2. Logstash consumes the Kafka topics and writes to the Elasticsearch cluster; Logstash can also be clustered

Other designs are possible; large log systems get complex quickly, so pick the solution that fits your scale.
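For reference, pointing Filebeat at Kafka instead of Elasticsearch is a small change to its output section; a minimal sketch, with hypothetical broker addresses and topic name:

output.kafka:
  # Kafka brokers (hypothetical addresses)
  hosts: ["kafka1:9092", "kafka2:9092"]
  # topic the log events are published to (hypothetical name)
  topic: "app-logs"
  # wait for the partition leader to acknowledge each batch
  required_acks: 1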

This walkthrough deploys version 7.12.0; make sure all three components run the same version.

1. Download Elasticsearch, Kibana, and Filebeat

[root@ecs7 efk]# curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.12.0-linux-x86_64.tar.gz
[root@ecs7 efk]# curl -L -O https://artifacts.elastic.co/downloads/kibana/kibana-7.12.0-linux-x86_64.tar.gz
[root@ecs7 efk]# curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.12.0-linux-x86_64.tar.gz
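Optionally, verify each download against the .sha512 file Elastic publishes next to the artifact, for example:

[root@ecs7 efk]# curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.12.0-linux-x86_64.tar.gz.sha512
[root@ecs7 efk]# shasum -a 512 -c elasticsearch-7.12.0-linux-x86_64.tar.gz.sha512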

2. Create a user and grant permissions

Elasticsearch refuses to run as root, so create a dedicated user and give it ownership of the installation directory:

[root@ecs7 efk]# groupadd elastic
[root@ecs7 efk]# useradd -g elastic elasticsearch
[root@ecs7 efk]# chown -R elasticsearch:elastic /data/efk/

3. Install and start

3.1 Install and start Elasticsearch as the elasticsearch user

>3.1.1 Extract Elasticsearch

[root@ecs7 efk]# su elasticsearch
[elasticsearch@ecs7 efk]$ tar -zxvf elasticsearch-7.12.0-linux-x86_64.tar.gz

>3.1.2 Configure Elasticsearch

[elasticsearch@ecs7 efk]$ cd elasticsearch-7.12.0/config/

Back up the original configuration file:

[elasticsearch@ecs7 config]$ cp elasticsearch.yml elasticsearch.yml.org 

Full elasticsearch.yml (this is a single-node deployment):

# Cluster name
cluster.name: test-efk
# Node name
node.name: master
# Index data directory
path.data: /data/efk/elasticsearch-7.12.0/data
# Log directory
path.logs: /data/efk/elasticsearch-7.12.0/dlogs
# Bind to all interfaces so the node is reachable from outside
network.host: 0.0.0.0

# HTTP port
http.port: 9200
# Index patterns that may be auto-created
action.auto_create_index: .security,.monitoring*,.watches,.triggered_watches,.watcher-history*,app-a-*,app-b-*

# Seed hosts for node discovery
discovery.seed_hosts: ["191.168.0.107"]
# Initial master-eligible nodes
cluster.initial_master_nodes: ["master"]

 

>3.1.3 Start Elasticsearch

[elasticsearch@ecs7 efk]$ cd elasticsearch-7.12.0/bin/
[elasticsearch@ecs7 bin]$ ./elasticsearch -d
[elasticsearch@ecs7 bin]$ ERROR: [1] bootstrap checks failed. You must address the points described in the following [1] lines before starting Elasticsearch.
bootstrap check failure [1] of [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
ERROR: Elasticsearch did not exit normally - check the logs at /data/efk/elasticsearch-7.12.0/dlogs/test-efk.log

Elasticsearch will typically fail this bootstrap check on a fresh machine; for background, see: https://blog.csdn.net/F1004145107/article/details/106279907/
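The check itself names the fix: as root, raise vm.max_map_count and persist it across reboots, then start Elasticsearch again with ./elasticsearch -d:

[root@ecs7 ~]# sysctl -w vm.max_map_count=262144
[root@ecs7 ~]# echo 'vm.max_map_count=262144' >> /etc/sysctl.conf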

>3.1.4 Verify access

http://localhost:9200

[elasticsearch@ecs7 bin]$ curl http://localhost:9200
{
  "name" : "master",
  "cluster_name" : "test-efk",
  "cluster_uuid" : "Hovo67CRTF2zMnygQJ-2NQ",
  "version" : {
    "number" : "7.12.0",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "78722783c38caa25a70982b5b042074cde5d3b3a",
    "build_date" : "2021-03-18T06:17:15.410153305Z",
    "build_snapshot" : false,
    "lucene_version" : "8.8.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

 

3.2 Install and start Kibana

>3.2.1 Extract Kibana

[root@ecs7 efk]# su elasticsearch
[elasticsearch@ecs7 efk]$ tar -zxvf kibana-7.12.0-linux-x86_64.tar.gz 

>3.2.2 Configure Kibana

[elasticsearch@ecs7 efk]$ cd kibana-7.12.0-linux-x86_64/config/

Back up the original configuration file:

[elasticsearch@ecs7 config]$ cp kibana.yml kibana.yml.org

Full kibana.yml:

# Port
server.port: 5601
# Bind to all interfaces
server.host: "0.0.0.0"
# Server name
server.name: "master"
# Elasticsearch cluster addresses
elasticsearch.hosts: ["http://127.0.0.1:9200"]
# Log file
logging.dest: /data/efk/kibana-7.12.0-linux-x86_64/logs/kibana.log
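logging.dest points at a file; if its logs directory does not exist yet, create it before starting Kibana:

[elasticsearch@ecs7 config]$ mkdir -p /data/efk/kibana-7.12.0-linux-x86_64/logs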

 

>3.2.3 Start Kibana

[elasticsearch@ecs7 kibana-7.12.0-linux-x86_64]$ ll
total 1476
drwxr-xr-x   2 elasticsearch elastic    4096 Mar 18 13:56 bin
drwxr-xr-x   2 elasticsearch elastic    4096 Apr 13 09:53 config
drwxr-xr-x   2 elasticsearch elastic    4096 Apr  7 11:26 data
-rw-r--r--   1 elasticsearch elastic    3860 Mar 18 13:55 LICENSE.txt
drwxr-xr-x   2 elasticsearch elastic    4096 Apr  7 11:26 logs
drwxr-xr-x   6 elasticsearch elastic    4096 Mar 18 13:55 node
drwxr-xr-x 831 elasticsearch elastic   36864 Mar 18 13:55 node_modules
-rw-r--r--   1 elasticsearch elastic 1428396 Mar 18 13:55 NOTICE.txt
-rw-r--r--   1 elasticsearch elastic     740 Mar 18 13:55 package.json
drwxr-xr-x   2 elasticsearch elastic    4096 Mar 18 13:55 plugins
-rw-r--r--   1 elasticsearch elastic    3968 Mar 18 13:55 README.txt
drwxr-xr-x  12 elasticsearch elastic    4096 Mar 18 13:55 src
drwxr-xr-x   3 elasticsearch elastic    4096 Mar 18 13:55 x-pack
[elasticsearch@ecs7 kibana-7.12.0-linux-x86_64]$ cd bin
[elasticsearch@ecs7 bin]$ ll
total 16
-rwxr-xr-x 1 elasticsearch elastic 850 Mar 18 13:55 kibana
-rwxr-xr-x 1 elasticsearch elastic 783 Mar 18 13:55 kibana-encryption-keys
-rwxr-xr-x 1 elasticsearch elastic 776 Mar 18 13:55 kibana-keystore
-rwxr-xr-x 1 elasticsearch elastic 813 Mar 18 13:55 kibana-plugin
[elasticsearch@ecs7 bin]$ ./kibana &
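Started with a bare &, Kibana stays tied to this shell session; to keep it running after logout, one common pattern is:

[elasticsearch@ecs7 bin]$ nohup ./kibana > /dev/null 2>&1 &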

>3.2.4 Verify access

Open http://localhost:5601 in a browser.
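It can also be checked from the shell through Kibana's status API:

[elasticsearch@ecs7 bin]$ curl -s http://localhost:5601/api/status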

 

3.3 Install and start Filebeat

Filebeat does not have to sit on the same server as Elasticsearch. It is a lightweight shipper: it runs wherever the logs are produced and pushes the data to Elasticsearch. In this walkthrough it runs on the machine hosting the Spring Boot application, which here is a Windows box (hence the D:/ paths and filebeat.exe below).

>3.3.1 Extract & configure Filebeat

Back up the original configuration file before editing it.
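A minimal sketch of the unpack-and-backup steps, assuming the Windows build (filebeat-7.12.0-windows-x86_64.zip) was fetched from the same artifacts site:

PS D:\> Expand-Archive filebeat-7.12.0-windows-x86_64.zip -DestinationPath D:\
PS D:\> cd D:\filebeat-7.12.0-windows-x86_64
PS D:\filebeat-7.12.0-windows-x86_64> copy filebeat.yml filebeat.yml.org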

Full filebeat.yml:


# ============================== Filebeat inputs ===============================
filebeat.inputs:
- type: log
  enabled: true
  encoding: UTF-8
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
  #需要采集的日志文件
    - D:/data/**/*.log
  json.key_under_root: true
  json.overwrite_keys: true
  json.message_key: message
  json.add_error_key: true
  tags: ["saas"]
    
# ============================== Filebeat modules ==============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false
  # Period on which files under path should be checked for changes
  #reload.period: 10s
# ======================= Elasticsearch template setting =======================
setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

setup.kibana:
# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
# es 地址
  hosts: ["191.168.0.107:9200"]
  
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
# 日志时间处理
  - timestamp:
      field: json.@timestamp
      timezone: Asia/Shanghai
      layouts:
        - '2006-01-02T15:04:05+08:00'
        - '2006-01-02T15:04:05.999+08:00'
      test:
        - '2019-06-22T16:33:51+08:00'
        - '2019-11-18T04:59:51.123+08:00'
# 删除相关字段        
  - drop_fields:
      fields: [json.@version,json.level_value,json.@timestamp]
# 重命名字段
  - rename:
      fields:
        - from: "json.logName"
          to: "json.appName"
      ignore_missing: false
      fail_on_error: true
 

 

>3.3.2 Start Filebeat

Run filebeat.exe from a cmd window.
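Filebeat can validate the configuration and the Elasticsearch connection before shipping anything; a typical sequence from the Filebeat directory:

D:\filebeat-7.12.0-windows-x86_64> filebeat.exe test config -c filebeat.yml
D:\filebeat-7.12.0-windows-x86_64> filebeat.exe test output -c filebeat.yml
D:\filebeat-7.12.0-windows-x86_64> filebeat.exe -e -c filebeat.yml

(-e logs to stderr instead of the default log file, which is convenient for a first run.)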

 

3.4 Spring Boot Logback configuration

Add the logstash-logback-encoder dependency to pom.xml. It renders each log event as a single JSON document, which also spares us from handling multi-line records (stack traces) on the Filebeat side.

<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>5.3</version>
</dependency>

logback-spring.xml (full file; the -spring suffix is what lets Spring Boot resolve <springProperty>):
<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="true" scanPeriod="30 seconds">
    <!-- Some values come from the Spring application properties -->
    <springProperty scope="context" name="logName" source="spring.application.name" defaultValue="localhost.log"/>

    <!-- %m = message, %p = level, %t = thread, %d = date, %c = fully qualified logger name -->
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger - %msg%n</pattern>
            <charset>UTF-8</charset>
        </encoder>
    </appender>

    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>logs/${logName}/${logName}.log</file>    <!-- active log file, named after the application -->
        <append>true</append>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <fileNamePattern>logs/${logName}/${logName}-%d{yyyy-MM-dd}.log.%i</fileNamePattern>
            <maxFileSize>64MB</maxFileSize>
            <maxHistory>30</maxHistory>
            <totalSizeCap>1GB</totalSizeCap>
        </rollingPolicy>
        <encoder class="net.logstash.logback.encoder.LogstashEncoder" >
            <providers>
                <timestamp>
                    <timeZone>Asia/Shanghai</timeZone>
                </timestamp>
                <pattern>
                    <pattern>{"level": "%level","class": "%logger{40}","message": "%message","stack_trace": "%exception"}</pattern>
                </pattern>
            </providers>
        </encoder>
    </appender>

    <!-- Log only ERROR and above from these noisy packages -->
    <logger name="com.netflix" level="ERROR" />
    <logger name="net.sf.json" level="ERROR" />
    <logger name="org.springframework" level="ERROR" />
    <logger name="springfox" level="ERROR" />

    <!-- SQL logging -->
    <logger name="com.github.pagehelper.mapper" level="DEBUG" />
    <logger name="org.apache.ibatis" level="DEBUG" />

    <root level="info">
        <appender-ref ref="STDOUT" />
        <appender-ref ref="FILE" />
    </root>
</configuration>

Start the Spring Boot service; the JSON log files it writes are picked up by Filebeat and pushed to Elasticsearch automatically. Create a filebeat-* index pattern in Kibana to browse them.
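Any log statement exercises the whole pipeline; a minimal sketch for a smoke test (class name and endpoint are illustrative):

package com.example.demo; // hypothetical package

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class LogDemoController {

    private static final Logger log = LoggerFactory.getLogger(LogDemoController.class);

    // GET /ping writes one INFO event; the FILE appender renders it as JSON,
    // Filebeat tails the file, and the document lands in Elasticsearch.
    @GetMapping("/ping")
    public String ping() {
        log.info("ping received");
        return "pong";
    }
}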

Final result: the application logs are searchable in Kibana's Discover view.

References

https://blog.csdn.net/ctypyb2002/article/details/106095377

Filebeat log input: https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-log.html
