Log4j Log Collection and Analysis (Simulated)

2019-08-11 17:00:55



Original link: https://class.imooc.com

Under the test folder, create a new directory and mark it as a test sources root via Project Structure (top-right in IDEA), then create a LoggerGenerator class:

import org.apache.log4j.Logger;

public class LoggerGenerator {

    private static Logger logger = Logger.getLogger(LoggerGenerator.class.getName());

    public static void main(String[] args) throws InterruptedException {
        int index = 0;
        while (true) {
            // emit one log line per second
            Thread.sleep(1000);
            logger.info("current value is " + index++);
        }
    }
}

Also under test, create a resources directory, mark it as Test Resources, and add a log4j.properties file:

log4j.rootLogger=INFO,stdout

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} [%t] [%c] [%p] - %m%n

Run the class and it keeps printing a new log line every second.

The Flume configuration:

#streaming.conf
# component names
agent1.sources=avro-source
agent1.channels=logger-channel
agent1.sinks=log-sink

#define source
agent1.sources.avro-source.type=avro
agent1.sources.avro-source.bind=0.0.0.0
agent1.sources.avro-source.port=41414

#define channel
agent1.channels.logger-channel.type=memory

#define sink
agent1.sinks.log-sink.type=logger

agent1.sources.avro-source.channels=logger-channel
agent1.sinks.log-sink.channel=logger-channel

flume-ng agent \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/streaming.conf \
--name agent1 \
-Dflume.root.logger=INFO,console

 

To integrate Log4j with Flume, modify the Log4j configuration file:

log4j.rootLogger=INFO,stdout,flume

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} [%t] [%c] [%p] - %m%n

log4j.appender.flume=org.apache.flume.clients.log4jappender.Log4jAppender
log4j.appender.flume.Hostname=hadoop000
log4j.appender.flume.Port=41414
log4j.appender.flume.UnsafeMode=true
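
The Log4jAppender class referenced above is not part of Log4j itself; it ships with Flume's Log4j appender client, which must be on the application's classpath. A minimal Maven dependency sketch, assuming Flume 1.6.0 (match the version to the Flume installation):

<dependency>
    <groupId>org.apache.flume.flume-ng-clients</groupId>
    <artifactId>flume-ng-log4jappender</artifactId>
    <version>1.6.0</version>
</dependency>

With UnsafeMode=true, the application keeps running even if the Flume agent is unreachable.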

To integrate Flume with Kafka, modify the Flume configuration file; a sketch is shown below.
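
The original post does not show this configuration. A minimal sketch, assuming Flume 1.6's built-in KafkaSink, the broker list from the Spark code below, and a pre-created spark_01 topic (newer Flume releases use kafka.bootstrap.servers and kafka.topic instead of brokerList and topic):

#streaming2.conf
agent1.sources=avro-source
agent1.channels=logger-channel
agent1.sinks=kafka-sink

#define source
agent1.sources.avro-source.type=avro
agent1.sources.avro-source.bind=0.0.0.0
agent1.sources.avro-source.port=41414

#define channel
agent1.channels.logger-channel.type=memory

#define sink
agent1.sinks.kafka-sink.type=org.apache.flume.sink.kafka.KafkaSink
agent1.sinks.kafka-sink.brokerList=192.168.213.8:9092,192.168.213.9:9092,192.168.213.10:9092
agent1.sinks.kafka-sink.topic=spark_01
agent1.sinks.kafka-sink.batchSize=20
agent1.sinks.kafka-sink.requiredAcks=1

agent1.sources.avro-source.channels=logger-channel
agent1.sinks.kafka-sink.channel=logger-channel

Start this agent with the same flume-ng command as above, pointing --conf-file at the new file.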

Consuming Kafka from Spark Streaming:

package Kafka2SparkStreaming

import kafka.serializer.StringDecoder
import org.apache.spark.streaming.dstream.{DStream, InputDStream}
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.{SparkConf, SparkContext}

object LowApi_CreateDirectDstream {
  /* low-level (direct) API */
  def main(args: Array[String]): Unit = {
    // appName set to the current class name LowApi_CreateDirectDstream
    val sparkConf: SparkConf = new SparkConf().setAppName("LowApi_CreateDirectDstream").setMaster("local[2]")
    val sc: SparkContext = new SparkContext(sparkConf)
    sc.setLogLevel("WARN")
    // create the StreamingContext with a 10-second batch interval
    val ssc: StreamingContext = new StreamingContext(sc, Seconds(10))
    ssc.checkpoint("./checkpoint")
    // Kafka connection parameters
    val kafkaParam: Map[String, String] = Map("metadata.broker.list" -> "192.168.213.8:9092,192.168.213.9:9092,192.168.213.10:9092",
      "group.id" -> "Kafka_Direct")
    // topics to consume; data can be pulled from multiple topics
    val topics: Set[String] = Set("spark_01")
    // create the direct DStream
    val dstream: InputDStream[(String, String)] = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](ssc, kafkaParam, topics)
    // take the message value (the second element of each tuple) and count records per batch
    val topicData: DStream[String] = dstream.map(_._2)
    val result: DStream[Long] = topicData.count()
    // print the output
    result.print()
    ssc.start()
    ssc.awaitTermination()
  }
}
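
The direct stream above relies on the Kafka 0.8 integration (kafka.serializer.StringDecoder and KafkaUtils.createDirectStream). A minimal sketch of the Maven dependencies it assumes, with Scala 2.11 and Spark 2.2.0 as placeholder versions to be matched to the cluster:

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming_2.11</artifactId>
    <version>2.2.0</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-8_2.11</artifactId>
    <version>2.2.0</version>
</dependency>

The example runs locally with master local[2] as written; for a cluster, remove setMaster and submit it with spark-submit.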

 

Source: https://blog.csdn.net/someInNeed/article/details/99185756
