
Log System Upgrade: ELK + Kafka

2021-11-25



Configuration notes for upgrading the logging system to ELK + Kafka.
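
Overall flow: the Spring Boot application serializes log events to JSON with a logback KafkaAppender and publishes them to a Kafka topic; Logstash consumes that topic, filters and parses the events with grok, and indexes them into Elasticsearch.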

ZooKeeper installation tutorial

Detailed tutorial for installing ZooKeeper on Windows (standalone) — 风轩雨墨's blog, CSDN

Installing and configuring a Kafka cluster on Windows

Installing and configuring a Kafka cluster on Windows — qq1170993239's blog, CSDN

Building microservice logging with Logback + Kafka + ELK

Logback + Kafka + ELK for microservice logging — a294634473's blog, CSDN

Aggregating distributed logs into Elasticsearch with Spring Boot + logback + Kafka + Logstash — 风轻衣's blog, CSDN

Spring Boot + Kafka + ELK distributed log collection — 聂晨, 博客园 (cnblogs.com)

Logback-to-Kafka connection error

Kafka error: Connection with localhost/127.0.0.1 disconnected java.net.ConnectException: Connection refus — lyxuefeng's blog, CSDN
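
A common cause of that "Connection refused" error: the producer reaches the cluster through bootstrap.servers, but then reconnects using the address each broker advertises. If a broker advertises localhost, clients on other machines cannot connect. The broker configs later in this post therefore advertise the host's LAN IP, e.g.:

    # server.properties: listen on all interfaces, advertise a reachable address
    listeners=PLAINTEXT://:9093
    advertised.listeners=PLAINTEXT://10.11.83.80:9093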

Kafka-related section of the application's logback-spring.xml

Configuration

<!-- Appender that writes JSON-formatted log events to Kafka -->
    <appender name="kafka" class="com.github.danielwegener.logback.kafka.KafkaAppender">
        <!-- composite JSON encoder from logstash-logback-encoder; the class attribute
             and the <providers> wrapper are required for the nested <pattern> below -->
        <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <pattern>
                    <pattern>
                        {
                        "thread": "%thread",
                        "logLevel": "%level",
                        "message": "%message",
                        "class": "%logger{40}",
                        "serviceName": "${springAppName:-}",
                        "trace": "%X{X-B3-TraceId:-}",
                        "span": "%X{X-B3-SpanId:-}",
                        "exportable": "%X{X-Span-Export:-}",
                        "pid": "${PID:-}"
                        }
                    </pattern>
                </pattern>
            </providers>
        </encoder>
        <topic>kafka-log</topic>
        <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy"/>
        <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy"/>
        <!-- Optional parameter to use a fixed partition -->
        <!-- <partition>0</partition> -->
        <!-- Optional parameter to include log timestamps into the kafka message -->
        <!-- <appendTimestamp>true</appendTimestamp> -->
        <!-- each <producerConfig> translates to regular kafka-client config (format: key=value) -->
        <!-- producer configs are documented here: https://kafka.apache.org/documentation.html#newproducerconfigs -->
        <!-- bootstrap.servers is the only mandatory producerConfig -->
        <producerConfig>bootstrap.servers=10.11.83.80:9093</producerConfig>

        <!-- Fallback: if Kafka is unreachable, log events go to the console instead -->
        <appender-ref ref="STDOUT"/>

    </appender>

    <!-- Write to Kafka asynchronously so logging does not block the application threads -->
    <appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
        <!-- drop events instead of blocking when the queue is full -->
        <neverBlock>true</neverBlock>
        <includeCallerData>true</includeCallerData>
        <!-- 0 = never discard events, even when the queue is nearly full -->
        <discardingThreshold>0</discardingThreshold>
        <queueSize>2048</queueSize>
        <appender-ref ref="kafka" />
    </appender>


    <!-- Root log level and attached appenders -->
    <root level="INFO">
        <appender-ref ref="STDOUT" />
        <appender-ref ref="logstash" />
        <appender-ref ref="syslog"/>
        <appender-ref ref="ASYNC"/>
    </root>
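
For reference, the KafkaAppender above comes from the logback-kafka-appender project and the nested <pattern> JSON encoder from logstash-logback-encoder; a minimal Maven sketch (the versions shown are assumptions, pick releases compatible with your Spring Boot version):

    <!-- logback-to-Kafka appender (version is an assumption) -->
    <dependency>
        <groupId>com.github.danielwegener</groupId>
        <artifactId>logback-kafka-appender</artifactId>
        <version>0.2.0-RC2</version>
    </dependency>
    <!-- provides the LoggingEventCompositeJsonEncoder used in <encoder> above -->
    <dependency>
        <groupId>net.logstash.logback</groupId>
        <artifactId>logstash-logback-encoder</artifactId>
        <version>6.6</version>
    </dependency>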


Kafka cluster configuration


kafka1:

############## main configuration: start ##############
# Broker id; must be unique within the cluster
broker.id=1
# Host address
host.name=127.0.0.1
# Port
port=9092
# Listener accepting external connections
listeners=PLAINTEXT://:9092
# Address advertised to external clients
advertised.listeners=PLAINTEXT://10.11.83.80:9092
# Directory for message log segments
log.dirs=C:/install/Kafka/kafka_1/log
# ZooKeeper addresses; separate multiple entries with commas
zookeeper.connect=localhost:2181,localhost:2182,localhost:2183
#zookeeper.connect=localhost:2181
############## main configuration: end ##############

kafka2:

############## main configuration: start ##############
# Broker id; must be unique within the cluster
broker.id=2
# Host address
host.name=127.0.0.1
# Port
port=9093
# Listener accepting external connections
listeners=PLAINTEXT://:9093
# Address advertised to external clients
advertised.listeners=PLAINTEXT://10.11.83.80:9093
# Directory for message log segments
log.dirs=C:/install/Kafka/kafka_2/log
# ZooKeeper addresses; separate multiple entries with commas
zookeeper.connect=localhost:2181,localhost:2182,localhost:2183
#zookeeper.connect=localhost:2181
############## main configuration: end ##############

kafka3:

############## main configuration: start ##############
# Broker id; must be unique within the cluster
broker.id=3
# Host address
host.name=127.0.0.1
# Port
port=9094
# Listener accepting external connections
listeners=PLAINTEXT://:9094
# Address advertised to external clients
advertised.listeners=PLAINTEXT://10.11.83.80:9094
# Directory for message log segments
log.dirs=C:/install/Kafka/kafka_3/log
# ZooKeeper addresses; separate multiple entries with commas
zookeeper.connect=localhost:2181,localhost:2182,localhost:2183
#zookeeper.connect=localhost:2181
############## main configuration: end ##############
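
With all three brokers configured, each can be started from its own installation directory and the topic that the logback appender writes to can be created; a sketch for Windows, assuming Kafka 2.2+ (older releases use --zookeeper localhost:2181 instead of --bootstrap-server), a ZooKeeper ensemble already running on ports 2181-2183, and the directory layout above:

    :: start each broker with its own server.properties (repeat for kafka_2 and kafka_3)
    cd C:\install\Kafka\kafka_1
    .\bin\windows\kafka-server-start.bat .\config\server.properties

    :: create the "kafka-log" topic, replicated across all three brokers
    .\bin\windows\kafka-topics.bat --create --bootstrap-server 10.11.83.80:9092 --replication-factor 3 --partitions 3 --topic kafka-log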

logstash.conf configuration

Configuration

input {
# stdin { }

    kafka {
        # Value type is array
        # A list of topics to subscribe to.
        topics => ["kafka-log"]
        # Value type is string
        # Default value is "localhost:9092"
        # A list of URLs of Kafka instances to use for establishing the initial connection to the cluster. This list should be in the form of host1:port1,host2:port2. These urls are just used for the initial connection to discover the full cluster membership (which may change dynamically), so this list need not contain the full set of servers (you may want more than one, though, in case a server is down).
        bootstrap_servers => "10.11.83.80:9093"
        # Value type is codec
        # Default value is "plain"
        # The codec used for input data. Input codecs are a convenient method for decoding your data before it enters the input, without needing a separate filter in your Logstash pipeline.
        codec => "json"
    }

}

filter {
	# At INFO level, keep only events whose message contains the aspect-log marker
	# "logInfo切面日志"; drop all other INFO events.
	if [logLevel] =~ "INFO" {
		if !("logInfo切面日志" in [message]) {
			drop {}
		}
	}
	
	
  grok {
    # Two candidate patterns: the first parses aspect logs into level/tag/method/api/params,
    # the second parses tag/author/msg lines. The "[\t ]" separator (tab or space) is an
    # assumption; the source text shows "[T ]".
    match => [
      "message", "%{LOGLEVEL:logLevel}%{NOTSPACE:tag}[\t ]%{NOTSPACE:method}[\n]%{NOTSPACE:api}[\n]%{NOTSPACE:params}",
      "message", "%{NOTSPACE:tag}[\t ]%{NOTSPACE:author}[\t ]%{NOTSPACE:msg}"
    ]
  }
  
}

output {

    elasticsearch {
        hosts => ["10.11.53.54:9200"]
        index => "logstash-test-%{+YYYY.MM.dd}"
        action => "index"
        template => "C:/install/elk/logstash-7.10.2-windows-x86_64/logstash-7.10.2/config/logstash-test-.json"
        template_name => "logstash-test-"
        manage_template => true
        template_overwrite => true
    }

    stdout { codec => rubydebug }

}
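
The pipeline can be syntax-checked and then started from the Logstash installation directory; a minimal sketch assuming the paths above:

    :: validate the pipeline definition, then run it
    bin\logstash.bat -f config\logstash.conf --config.test_and_exit
    bin\logstash.bat -f config\logstash.conf

    :: once events flow, the daily index should appear in Elasticsearch
    curl "http://10.11.53.54:9200/_cat/indices?v"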


Other references:

Building a massive-scale log platform on Kafka + ELK — 苍青浪, 博客园 (cnblogs.com)


Source: https://blog.csdn.net/qq_35146059/article/details/121531843
