
Installing ELK (Single-Node)

2022-01-26


Preface:

First, let's get an overall picture: what is ELK?

Elastic Stack is what ELK has become. ELK is a collection of three products: Elasticsearch, Logstash, and Kibana. As the stack evolved, a new member, Beats, joined, and the result is the Elastic Stack. In other words, ELK is the old name and Elastic Stack is the new one.

First, Beats collects all kinds of data, such as log files, network traffic, Windows event logs, service metrics, and health checks, and ships it to Elasticsearch for storage; it can also send the data to Logstash for processing before forwarding it to Elasticsearch. Finally, the Kibana component presents the data visually.

Elasticsearch
Elasticsearch is a Java-based, open-source, distributed search engine. Its features include: distributed operation, zero configuration, automatic discovery, automatic index sharding, an index replica mechanism, a RESTful interface, multiple data sources, and automatic search load balancing.

Logstash
Also Java-based, Logstash is an open-source tool for collecting, analyzing, and storing logs.

Kibana
Kibana is based on Node.js and is likewise open-source and free. It provides a friendly web UI for log analysis on top of Logstash and Elasticsearch, letting you summarize, analyze, and search important log data.

Beats
Beats is Elastic's open-source family of agents for collecting system and monitoring data. It is the umbrella term for the collectors that run as clients on monitored servers. Beats can send data directly to Elasticsearch, or route it through Logstash to Elasticsearch, for subsequent analysis.

ELK official site:

Download all the ELK packages directly from the official site:

Free and Open Search: The Creators of Elasticsearch, ELK & Kibana | Elastic

Note: how should ELK be deployed?

ELK should be deployed on a dedicated machine. Why? ELK is not part of the business workload itself, and the log volume is large enough to eat into business resources, so it should run separately.

This article installs ELK plus Kafka and ZooKeeper, with Kafka serving as the data source for Logstash.

1. Install single-node Elasticsearch

1. Basic environment configuration

Add a user

For security reasons, Elasticsearch will not start as root, so create a new user and grant that account the permissions needed to start Elasticsearch (the permission step comes later, when ES is started).

  #useradd es

Raise the open-file limit

(1).vim /etc/security/limits.conf

root soft nofile 65535  (no need to change this to 262144; leave it as-is)
root hard nofile 65535  (no need to change this to 262144; leave it as-is)
  *  soft nofile 262144
  *  hard nofile 262144

# The following two lines were not added here, but many guides include them:

* soft memlock unlimited  
* hard memlock unlimited

Prevent the process from running short of virtual memory areas:

(2).vim /etc/sysctl.conf

   # add at the end of the file
    vm.max_map_count=262144

Apply the settings above:

sysctl -p
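
A quick, hedged check that these settings took effect (log out and back in as the es user first, since limits.conf is applied at login):

ulimit -n                  # should print 262144
sysctl vm.max_map_count    # should print vm.max_map_count = 262144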

2. Edit the ES configuration file elasticsearch.yml

cluster.name: wit
node.name: node-es
node.master: true
node.data: true
network.host: 0.0.0.0
path.data: /usr/local/elasticsearch-7.8.1/data
path.logs: /usr/local/elasticsearch-7.8.1/logs
http.port: 9200
transport.tcp.port: 9300

# List the device addresses of master-eligible nodes; once the service is up they can be elected master.
# With three master-eligible nodes you would list three addresses; for a single master, list just one.
# Write only IPs here, without ports; including ports prevents the cluster from forming.
# Since this is not a cluster, listing this machine alone is enough.

discovery.seed_hosts: ["10.17.64.172"]

# This takes node names, not node IP addresses; since this is not a cluster, the local node alone is enough.
cluster.initial_master_nodes: ["node-es"]

# The settings below are not yet used in this setup; enable them as needed.

http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true

3. Other configuration files

(1). ES ships with a bundled JDK, so installing Java on the system is not required. But if the system already has Java, which one does ES use? In ES 7.x, if the JAVA_HOME environment variable is set, ES uses that JDK; otherwise it falls back to the bundled one.

(2). Since Elasticsearch is written in Java, the JVM can be tuned through the ../elasticsearch/config/jvm.options file. The defaults are fine unless you have special needs. The two most important settings are -Xms1g and -Xmx1g, the JVM's minimum and maximum heap sizes: too small, and Elasticsearch stops right after starting; too large, and it drags down the system itself.

4. Start ES

1. Give the es user ownership of the elasticsearch directory:

chown -R es:es <elasticsearch directory>

2. Switch to the es user: su - es

3. Start ES:

bin/elasticsearch       # start in the foreground
bin/elasticsearch -d    # start in the background
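
To stop a background instance cleanly, one approach (a small sketch using ES's own pid-file option) is:

bin/elasticsearch -d -p pid    # -p records the process ID in a file named "pid"
pkill -F pid                   # later, terminate the process recorded in that file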
 

5. Verify ES:

Access from the shell:

Although this is a single node rather than a cluster, the cluster health API can still be used for inspection:

 curl localhost:9200/_cluster/health?pretty

Command output:

[root@ecs-zhihuiyingxiao-dev config]# curl localhost:9200/_cluster/health?pretty
{
  "cluster_name" : "wit",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 7,
  "active_shards" : 7,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 1,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 87.5
}
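
The yellow status means every primary shard is allocated but at least one replica is not (unassigned_shards is 1 above); on a single node, a replica can never be placed on the same node as its primary. If the node will stay single, a hedged way to turn the cluster green is to drop replicas to zero through the standard index-settings API:

curl -X PUT "localhost:9200/_all/_settings" -H 'Content-Type: application/json' -d '{"index":{"number_of_replicas":0}}'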

Local HTTP access:

curl  http://localhost:9200

External HTTP access:

Open the firewall port:

firewall-cmd --zone=public --add-port=9200/tcp --permanent

firewall-cmd --reload


Import data into ES:

Take a JSON data file data.json like this:

The first line defines _index, _type, _id, and so on; the second line holds the document's fields. (For the details of the ES bulk format, such as which fields are available, see the official docs.) Note that the bulk file must end with a trailing newline.

{ "index" : { "_index" : "test", "_type" : "type1", "_id" : "1" } }
{ "field1" : "value1" }

curl -H "Content-Type: application/json" -X POST localhost:9200/_bulk --data-binary @data.json
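
To confirm the import, a quick hedged check (the index name test and ID 1 come from the example above; in ES 7.x documents are read back through the typeless _doc endpoint):

curl "localhost:9200/test/_doc/1?pretty"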


2. Install Kibana

Kibana is an open-source analytics and visualization platform for Elasticsearch. With Kibana you can query, view, and interact with data stored in ES indices, perform advanced analysis, and present the data as charts, tables, and maps.

1. Get the package

Download kibana-7.8.1-linux-x86_64.tar.gz from the official site (not provided here), then extract it:

tar zxvf kibana-7.8.1-linux-x86_64.tar.gz

2. Configure a yum source (an alternative to the tarball):

Under /etc/yum.repos.d/, create a file named kibana.repo to configure the yum repository:

cat >> kibana.repo

Once created, add the following content to it:

[kibana-7.x]
name=Kibana repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
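
If you take the yum route instead of the tarball, installation is then a single command (a sketch; the kibana package comes from the repository configured above):

yum install -y kibana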

3. Edit the configuration file kibana.yml:

Edit kibana.yml under the config directory and add:

server.port: 5601
server.host: "0.0.0.0"
server.name: tanklog
elasticsearch.hosts: ["http://127.0.0.1:9200/"]

Note: use your own server's IP address here.

4. Open port 5601:

If firewalld is stopped, start it first; otherwise there is no way to open port 5601:

1. Start the firewall:

systemctl start firewalld.service

2. With the firewall running, add port 5601:

firewall-cmd --permanent --zone=public --add-port=5601/tcp

If the command prints success, the port was opened successfully.

3. Reload the firewall:

firewall-cmd --reload

5. Start Kibana:

1. Before starting Kibana, start Elasticsearch. In the bin directory of the ES installation, run:

./elasticsearch -d    # run in the background

2. Start Kibana from the bin directory of the Kibana installation:

(1). Create a kibana user (recommended):

  #useradd kibana

Grant ownership:

chown -R kibana:kibana <kibana directory>

Switch user:

su - kibana 

Start Kibana:

nohup ./kibana & 

(2). To start Kibana as root instead, add the --allow-root flag:

nohup ./kibana --allow-root &    # --allow-root permits starting as root
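
A hedged way to confirm Kibana came up (startup can take a minute; nohup writes the log to nohup.out in the directory you launched from):

tail -f nohup.out       # watch the startup log
ss -lntp | grep 5601    # confirm a process is listening on port 5601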

3. Stop the firewall:

Once Kibana has started successfully, the firewall can be stopped (alternatively, leave firewalld running with port 5601 open):

systemctl stop firewalld.service

4. Access Kibana:

In a browser, open: http://IP:5601/app/kibana
 

3. Install ZooKeeper

Reference: "centos7上安装zookeeper" by web_bird on cnblogs.com

1. Install the JDK first:

Extract the JDK archive (download it yourself):

tar zxvf jdk1.8.0_181.tar.gz

Configure environment variables:

Edit /etc/profile and append at the end:

export JAVA_HOME=/usr/local/jdk1.8.0_181
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

Apply the configuration:

source /etc/profile

Check the Java version:

[root@ecs-dev-0003 ~]# java -version
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)
[root@ecs-dev-0003 ~]# javac -version
javac 1.8.0_181
[root@ecs-dev-0003 ~]# 

2. Install ZooKeeper:

ZooKeeper download: https://downloads.apache.org/zookeeper/

Extract the package:

tar zxvf  zookeeper-3.4.10.tar.gz

Enter the conf directory and copy the sample config:

cp  zoo_sample.cfg zoo.cfg

Edit the configuration file zoo.cfg:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper-3.4.10/data
dataLogDir=/usr/local/zookeeper-3.4.10/logs
clientPort=2181
autopurge.purgeInterval=1
server.1=10.17.****.****:2888:3888

Pay attention to the directories referenced by dataDir and dataLogDir: if they do not exist, create them, as sketched below. Note that with only a single server.N entry, ZooKeeper 3.4 ignores it and runs standalone; if you later add more entries to form an ensemble, each node will also need a myid file in its dataDir.
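
A minimal sketch of that preparation (paths taken from zoo.cfg above; the myid line is only needed in ensemble mode, with the ID matching the server.1 entry):

mkdir -p /usr/local/zookeeper-3.4.10/data /usr/local/zookeeper-3.4.10/logs    # dataDir and dataLogDir
echo 1 > /usr/local/zookeeper-3.4.10/data/myid                                # only needed for an ensemble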

Start ZooKeeper:

  ./bin/zkServer.sh start     # start zookeeper
  ./bin/zkServer.sh stop      # stop zookeeper
  ./bin/zkServer.sh status    # check zookeeper status

You can also tell whether ZooKeeper started by checking for its process:

ps -ef | grep zookeeper
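
For a functional check rather than a process check, the bundled CLI can connect to the server (2181 is the clientPort from zoo.cfg):

./bin/zkCli.sh -server 127.0.0.1:2181    # inside the shell, "ls /" should list [zookeeper]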

3. Configure ZooKeeper to start at boot (not yet verified):

We can register ZooKeeper as a service and set it to start automatically, so it comes up every time the machine boots, completely hands-free. Setting this up takes the following steps:

1. Enter the /etc/init.d directory:

[root@ecs-dev conf]# cd /etc/init.d/
[root@ecs-dev init.d]# ls
functions  hostguard  multi-queue-hw  netconsole  network  README

2. Create a file named zookeeper and add the script:

vi zookeeper

Script content:

#!/bin/bash
#chkconfig:2345 20 90
#description:zookeeper
#processname:zookeeper
ZK_PATH=/usr/local/zookeeper-3.4.10       # path to the ZooKeeper root directory; adjust to your install
export JAVA_HOME=/usr/local/jdk1.8.0_181  # path to the JDK root directory; adjust to your install
case $1 in
         start)   sh $ZK_PATH/bin/zkServer.sh start;;
         stop)    sh $ZK_PATH/bin/zkServer.sh stop;;
         status)  sh $ZK_PATH/bin/zkServer.sh status;;
         restart) sh $ZK_PATH/bin/zkServer.sh restart;;
         *)  echo "require start|stop|status|restart"  ;;
esac

3. Register zookeeper as a service:

chkconfig --add zookeeper
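
A quick hedged check that the registration took effect:

chkconfig --list zookeeper    # should show the service switched on for runlevels 2-5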

4. Test that it works

Here we stop the service first, then start it with the service command. Note that the newly created zookeeper script needs execute permission:

[root@zhiyou init.d]# service zookeeper start
env: /etc/init.d/zookeeper: Permission denied
[root@zhiyou init.d]# 
[root@zhiyou init.d]# chmod +x zookeeper 
[root@zhiyou init.d]# 
[root@zhiyou init.d]# service zookeeper start
ZooKeeper JMX enabled by default
Using config: /opt/soft/zookeeper-3.4.11/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@zhiyou init.d]# 
[root@zhiyou init.d]# service zookeeper status
ZooKeeper JMX enabled by default
Using config: /opt/soft/zookeeper-3.4.11/bin/../conf/zoo.cfg
Mode: standalone
[root@zhiyou init.d]# 

At this point, all the steps for installing ZooKeeper on CentOS 7 are complete, and ZooKeeper is configured to start at boot.

4. Install Kafka

Extract the Kafka archive:

tar zxvf kafka_2.13-2.6.0.tgz

Edit the configuration file server.properties in the config directory.

When configuring the log.dirs parameter, you must create the directory /usr/local/kafka_2.13-2.6.0/logs yourself, as shown below.
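
For example (the path matches log.dirs in the configuration below):

mkdir -p /usr/local/kafka_2.13-2.6.0/logs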

[root@ecs-zhihuiyingxiao-dev config]# cat server.properties 

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

listeners=PLAINTEXT://10.17.64.172:9092

advertised.listeners=PLAINTEXT://10.17.64.172:9092

auto.create.topics.enable=true
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/usr/local/kafka_2.13-2.6.0/logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings  #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=24

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=10.17.64.172:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000

group.initial.rebalance.delay.ms=0
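
With the configuration in place, here is a hedged sketch of starting the broker and smoke-testing it, using the standard Kafka 2.6 CLI tools from the Kafka root directory (10.17.64.172:9092 is the listeners address configured above; the topic name test is only an example):

bin/kafka-server-start.sh -daemon config/server.properties    # ZooKeeper must already be running

bin/kafka-topics.sh --create --bootstrap-server 10.17.64.172:9092 --topic test --partitions 1 --replication-factor 1
bin/kafka-console-producer.sh --bootstrap-server 10.17.64.172:9092 --topic test      # type a message, then Ctrl-C
bin/kafka-console-consumer.sh --bootstrap-server 10.17.64.172:9092 --topic test --from-beginning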

 

Source: https://blog.csdn.net/wdquan19851029/article/details/122702511
