
Chapter 13: Big Data Platform Monitoring Commands




13. Experiment Task 1: Monitor the Big Data Platform Status


Step 1: View Linux system information (uname -a)

[root@master ~]# uname -a 

	Linux master 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux 
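
uname -a reports the kernel build (3.10.0-693.el7 here), not the distribution release. A minimal sketch of two companion checks, assuming a standard CentOS 7 host like this one:

[root@master ~]# cat /etc/redhat-release   # distribution name and release
[root@master ~]# hostnamectl               # hostname plus OS, kernel and architecture in one view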

Step 2: View disk information

(1) List all partitions (fdisk -l)

[root@master ~]# fdisk -l

Disk /dev/sda: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0009e895

   Device Boot      Start         End      Blocks   Id  System
   /dev/sda1   *        2048     2099199     1048576   83  Linux
   /dev/sda2         2099200    83886079    40893440   8e  Linux LVM

Disk /dev/mapper/centos-root: 39.7 GB, 39720058880 bytes, 77578240 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/centos-swap: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

(2) List all swap partitions (swapon -s)

[root@master ~]# swapon -s
Filename                                Type            Size    Used    Priority
/dev/dm-1                               partition       2097148 0       -1

(3) View filesystem usage (df -h)

[root@master ~]# df -h

Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   37G  4.3G   33G  12% /
devtmpfs                 3.9G     0  3.9G   0% /dev
tmpfs                    3.9G     0  3.9G   0% /dev/shm
tmpfs                    3.9G  8.7M  3.9G   1% /run
tmpfs                    3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda1               1014M  143M  872M  15% /boot
tmpfs                    781M     0  781M   0% /run/user/0
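
df reports usage per filesystem; to find which directories consume the space within a filesystem, du is the usual companion. A minimal sketch (the target path is just an example):

[root@master ~]# lsblk                     # block devices and mount points at a glance
[root@master ~]# du -sh /usr/local/src/*   # size of each component installed under /usr/local/src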

Step 3: View the network IP address (ifconfig)

[root@master ~]# ifconfig

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.6  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::6b63:dc78:878e:35f3  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::2e35:1d99:a67d:6df9  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::84a9:35d5:e08d:bfeb  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:f9:05:0e  txqueuelen 1000  (Ethernet)
        RX packets 373  bytes 41380 (40.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 452  bytes 50188 (49.0 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

Step 4: View all listening ports (netstat -lntp)

[root@master ~]# netstat -lntp

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      932/sshd
tcp6       0      0 :::22                   :::*                    LISTEN      932/sshd
tcp6       0      0 :::3306                 :::*                    LISTEN      1074/mysqld

Step 5: View all established connections (netstat -antp)

[hadoop@master bin]$ netstat -antp

(Not all processes could be identified, non-owned process info  will not be shown, you would have to be root to see it all.) 
Active Internet connections (servers and established) 
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:50070           0.0.0.0:*               LISTEN      1453/java
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      -
tcp        0      0 192.168.1.6:9000        0.0.0.0:*               LISTEN      1453/java
tcp        0      0 0.0.0.0:50090           0.0.0.0:*               LISTEN      1628/java
tcp        0      0 192.168.1.6:50608       192.168.1.6:9000        TIME_WAIT   -
tcp        0      0 192.168.1.6:22          192.168.1.1:49438       ESTABLISHED -
tcp        0      0 192.168.1.6:9000        192.168.1.8:39198       ESTABLISHED 1453/java
tcp        0      0 192.168.1.6:9000        192.168.1.7:49666       ESTABLISHED 1453/java
tcp6       0      0 192.168.1.6:3888        :::*                    LISTEN      3925/java
tcp6       0      0 :::22                   :::*                    LISTEN      -
tcp6       0      0 192.168.1.6:8088        :::*                    LISTEN      1821/java
tcp6       0      0 192.168.1.6:8030        :::*                    LISTEN      1821/java
tcp6       0      0 192.168.1.6:8031        :::*                    LISTEN      1821/java
tcp6       0      0 192.168.1.6:8032        :::*                    LISTEN      1821/java
tcp6       0      0 192.168.1.6:8033        :::*                    LISTEN      1821/java
tcp6       0      0 :::2181                 :::*                    LISTEN      3925/java
tcp6       0      0 :::40648                :::*                    LISTEN      3925/java
tcp6       0      0 :::3306                 :::*                    LISTEN      -
tcp6       0      0 192.168.1.6:8031        192.168.1.7:51526       ESTABLISHED 1821/java
tcp6       0      0 192.168.1.6:8031        192.168.1.8:42024       ESTABLISHED 1821/java
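
netstat comes from the legacy net-tools package; the iproute2 tool ss, preinstalled on CentOS 7, reports the same information. A minimal equivalent sketch:

[root@master ~]# ss -lntp   # listening TCP sockets, equivalent to netstat -lntp
[root@master ~]# ss -antp   # all TCP sockets including established ones, equivalent to netstat -antp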

Step 6: Display process status in real time (top). This command shows each process's CPU and memory usage, among other metrics.

[root@master ~]# top

top - 21:32:44 up  1:02,  2 users,  load average: 0.00, 0.02, 0.05
Tasks: 112 total,   1 running, 111 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem :  7994076 total,  7441732 free,   320652 used,   231692 buff/cache
KiB Swap:  2097148 total,  2097148 free,        0 used.  7401476 avail Mem

   PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
   685 root      20   0  305296   6300   4924 S   0.3  0.1   0:08.90 vmtoolsd
     1 root      20   0  190736   3780   2488 S   0.0  0.0   0:04.80 systemd
     2 root      20   0       0      0      0 S   0.0  0.0   0:00.01 kthreadd
     3 root      20   0       0      0      0 S   0.0  0.0   0:00.03 ksoftirqd/0
     5 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kworker/0:0H
     6 root      20   0       0      0      0 S   0.0  0.0   0:00.27 kworker/u256:0
     7 root      rt   0       0      0      0 S   0.0  0.0   0:00.49 migration/0
     8 root      20   0       0      0      0 S   0.0  0.0   0:00.00 rcu_bh
     9 root      20   0       0      0      0 S   0.0  0.0   0:00.63 rcu_sched
    10 root      rt   0       0      0      0 S   0.0  0.0   0:00.02 watchdog/0
    11 root      rt   0       0      0      0 S   0.0  0.0   0:00.01 watchdog/1
    12 root      rt   0       0      0      0 S   0.0  0.0   0:00.77 migration/1
    13 root      20   0       0      0      0 S   0.0  0.0   0:00.09 ksoftirqd/1
    15 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kworker/1:0H
    16 root      rt   0       0      0      0 S   0.0  0.0   0:00.01 watchdog/2
    17 root      rt   0       0      0      0 S   0.0  0.0   0:00.76 migration/2
    18 root      20   0       0      0      0 S   0.0  0.0   0:00.14 ksoftirqd/2
    20 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kworker/2:0H
    21 root      rt   0       0      0      0 S   0.0  0.0   0:00.01 watchdog/3
    22 root      rt   0       0      0      0 S   0.0  0.0   0:00.68 migration/3
    23 root      20   0       0      0      0 S   0.0  0.0   0:00.02 ksoftirqd/3
    25 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kworker/3:0H
    27 root      20   0       0      0      0 S   0.0  0.0   0:00.01 kdevtmpfs
    28 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 netns
    29 root      20   0       0      0      0 S   0.0  0.0   0:00.00 khungtaskd
    30 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 writeback
    31 root       0 -20       0      0      0 S   0.0  0.0   0:00.00 kintegrityd
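
top runs interactively; for capturing a snapshot in a script or a log, batch mode avoids the interactive screen. A minimal sketch, using flags from the procps-ng top shipped with CentOS 7:

[root@master ~]# top -b -n 1 | head -15           # one non-interactive snapshot, first 15 lines
[root@master ~]# top -b -n 1 -o %MEM | head -15   # the same snapshot, sorted by memory usage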

Step 7: View CPU information (cat /proc/cpuinfo)

[root@master ~]# cat /proc/cpuinfo 
processor     : 0 
vendor_id       : GenuineIntel
cpu family      : 6 
model           : 85
model name      : Intel(R) Xeon(R) Gold 5118 CPU @ 2.30GHz 
stepping        : 4
microcode       : 0x2000050 
cpu MHz         : 2294.123 
cache size      : 16896 KB 
physical id     : 0 
siblings        : 2 
core id         : 0
cpu cores       : 2
apicid          : 0 
initial apicid  : 0 
fpu             : yes 
fpu_exception   : yes 
cpuid level     : 22
wp              : yes 
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts nopl xtopology tsc_reliable nonstop_tsc aperfmperf eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch epb fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 invpcid rtm rdseed adx smap xsaveopt dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req
bogomips        : 4589.21 
clflush size    : 64 
cache_alignment : 64
address sizes   : 42 bits physical, 48 bits virtual
power management:

processor       : 1 
vendor_id       : GenuineIntel
cpu family      : 6 
model           : 85 
model name      : Intel(R) Xeon(R) Gold 5118 CPU @ 2.30GHz
stepping        : 4
microcode       : 0x2000050 
cpu MHz         : 2294.123 
cache size      : 16896 KB
physical id     : 0
siblings        : 2 
core id         : 1 
cpu cores       : 2
apicid          : 1 
initial apicid  : 1
fpu             : yes
fpu_exception   : yes
cpuid level     : 22
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts nopl xtopology tsc_reliable nonstop_tsc aperfmperf eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch epb fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 invpcid rtm rdseed adx smap xsaveopt dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req
bogomips        : 4589.21
clflush size    : 64
cache_alignment : 64
address sizes   : 42 bits physical, 48 bits virtual
power management:
 
processor       : 2
vendor_id       : GenuineIntel
cpu family      : 6
model           : 85
model name      : Intel(R) Xeon(R) Gold 5118 CPU @ 2.30GHz
stepping        : 4
microcode       : 0x2000050
cpu MHz         : 2294.123
cache size      : 16896 KB
physical id     : 1
siblings        : 2
core id         : 0
cpu cores       : 2
apicid          : 2
initial apicid  : 2
fpu             : yes
fpu_exception   : yes
cpuid level     : 22
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts nopl xtopology tsc_reliable nonstop_tsc aperfmperf eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch epb fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 invpcid rtm rdseed adx smap xsaveopt dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req
bogomips        : 4589.21
clflush size    : 64
cache_alignment : 64
address sizes   : 42 bits physical, 48 bits virtual
power management:
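
/proc/cpuinfo prints one block per logical CPU, so the output gets long; a couple of one-liners summarize it. A minimal sketch:

[root@master ~]# grep -c ^processor /proc/cpuinfo   # count of logical CPUs
[root@master ~]# lscpu                              # sockets, cores per socket and threads, summarized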

Step 8: View memory information (cat /proc/meminfo)

[root@master ~]# cat /proc/meminfo
MemTotal:        7994076 kB
MemFree:         7441996 kB
MemAvailable:    7401740 kB 
Buffers:            2112 kB 
Cached:           176408 kB 
SwapCached:            0 kB 
Active:           265072 kB
Inactive:         137936 kB
Active(anon):     224980 kB 
Inactive(anon):     8332 kB 
Active(file):      40092 kB 
Inactive(file):   129604 kB
Unevictable:           0 kB 
Mlocked:               0 kB 
SwapTotal:       2097148 kB 

SwapFree:        2097148 kB 
Dirty:                 0 kB 
Writeback:             0 kB 
AnonPages:        224516 kB 
Mapped:            29664 kB 
Shmem:              8824 kB 
Slab:              53172 kB 
SReclaimable:      22956 kB 
SUnreclaim:        30216 kB 
KernelStack:        4464 kB 
PageTables:         3948 kB 
NFS_Unstable:          0 kB 
Bounce:                0 kB 
WritebackTmp:          0 kB
CommitLimit:     6094184 kB 
Committed_AS:     780596 kB 
VmallocTotal:   34359738367 kB
VmallocUsed:      191112 kB
VmallocChunk:   34359310332 kB
HardwareCorrupted:     0 kB 
AnonHugePages:    180224 kB
HugePages_Total:       0 
HugePages_Free:        0 
HugePages_Rsvd:        0
HugePages_Surp:        0 
Hugepagesize:       2048 kB
DirectMap4k:       81728 kB 
DirectMap2M:     3063808 kB
DirectMap1G:     7340032 kB
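
free reads the same counters from /proc/meminfo and presents them in summarized form. A minimal sketch:

[root@master ~]# free -h   # total, used and free memory plus swap, in human-readable units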

Experiment Task 2: Check Hadoop Status via Commands

Step 1: Switch to the hadoop user

[root@master ~]# su - hadoop 

Step 2: Switch to the Hadoop installation directory

[hadoop@master ~]$ cd /usr/local/src/hadoop/ 

Step 3: Start Hadoop

[hadoop@master hadoop]$ start-all.sh 
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]

master: starting namenode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-namenode-master.out
192.168.1.7: starting datanode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-datanode-slave1.out
192.168.1.8: starting datanode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-datanode-slave2.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-secondarynamenode-master.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/src/hadoop/logs/yarn-hadoop-resourcemanager-master.out
192.168.1.8: starting nodemanager, logging to /usr/local/src/hadoop/logs/yarn-hadoop-nodemanager-slave2.out
192.168.1.7: starting nodemanager, logging to /usr/local/src/hadoop/logs/yarn-hadoop-nodemanager-slave1.out

Step 4: Stop Hadoop

[hadoop@master hadoop]$ stop-all.sh 

This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [master]
master: stopping namenode 
192.168.1.8: stopping datanode 
192.168.1.7: stopping datanode 
Stopping secondary namenodes [0.0.0.0] 
0.0.0.0: stopping secondarynamenode 
stopping yarn daemons
stopping resourcemanager
192.168.1.7: stopping nodemanager 
192.168.1.8: stopping nodemanager
no proxyserver to stop 

Experiment Task 3: Monitor Big Data Platform Resource Status via Commands

Step 1: Make sure you are in the directory /usr/local/src/hadoop

[hadoop@master hadoop]$ cd /usr/local/src/hadoop

Step 2: Return to the host terminal; start ZooKeeper on each node, then run start-all.sh on the master host

Start ZooKeeper on the master node:
[hadoop@master hadoop]$ zkServer.sh start

Start ZooKeeper on the slave1 node:
[hadoop@slave1 hadoop]$ zkServer.sh start

Start ZooKeeper on the slave2 node:
[hadoop@slave2 hadoop]$ zkServer.sh start

On the master node:
[hadoop@master hadoop]$ start-all.sh

Step 3: Run the jps command

If the master shows the NodeManager and ResourceManager processes, YARN has started successfully.

[hadoop@master hadoop]$ jps
2817 NameNode 
3681 ResourceManager
3477 NodeManager 
3909 Jps 
2990 SecondaryNameNode 
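
jps only lists JVM processes on the local host. To check the DataNode and NodeManager processes on the slaves without logging in to each one, you can reuse the passwordless SSH that start-all.sh already relies on; a minimal sketch, assuming jps is on the PATH for non-interactive shells:

[hadoop@master hadoop]$ for host in master slave1 slave2; do echo "== $host =="; ssh $host jps; done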

Step 4: View the HDFS directory

[hadoop@master hadoop]$ ./bin/hdfs dfs -ls /

Step 5: View the HDFS report

Run the command bin/hdfs dfsadmin -report:

[hadoop@master hadoop]$ bin/hdfs dfsadmin -report 

Configured Capacity: 79401328640 (73.95 GB) 
Present Capacity: 75129376768 (69.97 GB)
DFS Remaining: 75129131008 (69.97 GB)
DFS Used: 245760 (240 KB)
DFS Used%: 0.00%
Under replicated blocks: 8 
Blocks with corrupt replicas: 0 
Missing blocks: 0 
Missing blocks (with replication factor 1): 0 
 
------------------------------------------------- 
Live datanodes (2): 
 
Name: 192.168.1.8:50010 (slave2)
Hostname: slave2 
Decommission Status : Normal 
Configured Capacity: 39700664320 (36.97 GB) 
DFS Used: 122880 (120 KB) 
Non DFS Used: 2135302144 (1.99 GB) 
DFS Remaining: 37565239296 (34.99 GB) 
DFS Used%: 0.00%
DFS Remaining%: 94.62%
Configured Cache Capacity: 0 (0 B) 
Cache Used: 0 (0 B) 
Cache Remaining: 0 (0 B) 
Cache Used%: 100.00%

Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Mon May 04 21:54:13 CST 2020 
 
 
Name: 192.168.1.7:50010 (slave1)
Hostname: slave1
Decommission Status : Normal
Configured Capacity: 39700664320 (36.97 GB)
DFS Used: 122880 (120 KB)
Non DFS Used: 2136649728 (1.99 GB)
DFS Remaining: 37563891712 (34.98 GB)
DFS Used%: 0.00%
DFS Remaining%: 94.62%
Configured Cache Capacity: 0 (0 B) 
Cache Used: 0 (0 B) 
Cache Remaining: 0 (0 B) 
Cache Used%: 100.00% 
Cache Remaining%: 0.00% 
Xceivers: 1 
Last contact: Mon May 04 21:54:13 CST 2020
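
Beyond -report, dfsadmin and fsck offer further HDFS health checks. A minimal sketch of two common ones:

[hadoop@master hadoop]$ bin/hdfs dfsadmin -safemode get   # confirm the NameNode has left safe mode
[hadoop@master hadoop]$ bin/hdfs fsck /                   # verify block health across the namespace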

Experiment Task 4: Check HBase Status via Commands

Step 1: Check the HBase version. Switch to the HBase installation directory first.

[hadoop@master hadoop]$ cd /usr/local/src/hbase

[hadoop@master hbase]$ hbase version

HBase 1.2.1
Source code repository git://asf-dev/home/busbey/projects/hbase revision=8d8a7107dc4ccbf36a92f64675dc60392f85c015
Compiled by busbey on Wed Mar 30 11:19:21 CDT 2016 
From source with checksum f4bb4a14bb4e0b72b46f729dae98a772 

The output shows HBase 1.2.1, which means HBase is running and its version is 1.2.1.

If HBase has not been started, start it with start-hbase.sh.

[hadoop@master hbase]$ start-hbase.sh 
starting master, logging to /usr/local/src/hbase/logs/hbase-hadoop-master-master.out
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
master: starting regionserver, logging to /usr/local/src/hbase/logs/hbase-hadoop-regionserver-master.out
slave1: starting regionserver, logging to /usr/local/src/hbase/logs/hbase-hadoop-regionserver-slave1.out
slave2: starting regionserver, logging to /usr/local/src/hbase/logs/hbase-hadoop-regionserver-slave2.out
  master: Java HotSpot(TM) 64-Bit Server VM warning: ignoring
  option PermSize=128m; support was removed in 8.0 
  master: Java HotSpot(TM) 64-Bit Server VM warning: ignoring 
  option MaxPermSize=128m; support was removed in 8.0 
  slave1: Java HotSpot(TM) 64-Bit Server VM warning: ignoring 
  option PermSize=128m; support was removed in 8.0 
  slave1: Java HotSpot(TM) 64-Bit Server VM warning: ignoring 
  option MaxPermSize=128m; support was removed in 8.0 
  slave2: Java HotSpot(TM) 64-Bit Server VM warning: ignoring
  option PermSize=128m; support was removed in 8.0 
  slave2: Java HotSpot(TM) 64-Bit Server VM warning: ignoring 
  option MaxPermSize=128m; support was removed in 8.0    

Step 2: Check the HBase version from the HBase shell

Run hbase shell to enter the HBase interactive command line.

[hadoop@master hadoop]$ hbase shell

SLF4J: Class path contains multiple SLF4J bindings. 
SLF4J: Found binding in [jar:file:/usr/local/src/hbase/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/src/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.1, r8d8a7107dc4ccbf36a92f64675dc60392f85c015, Wed Mar 30 11:19:21 CDT 2016
 
hbase(main):001:0> 

Enter version to query the HBase version:

hbase(main):001:0> version 
1.2.1, r8d8a7107dc4ccbf36a92f64675dc60392f85c015, Wed Mar 30 11:19:21 CDT 2016 

Step 3: Query the HBase status from the HBase command line

hbase(main):002:0> status 
1 active master, 0 backup masters, 3 servers, 0 dead, 0.6667 average load 

For a "simple" status query, run status 'simple':

hbase(main):003:0> status 'simple' 

active master:  master:16000 1589125905790
0 backup masters
3 live servers
    master:16020 1589125908065
        requestsPerSecond=0.0, numberOfOnlineRegions=1, usedHeapMB=28, maxHeapMB=1918, numberOfStores=1, numberOfStorefiles=1, storefileUncompressedSizeMB=0, storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeMB=0, readRequestsCount=5, writeRequestsCount=1, rootIndexSizeKB=0, totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, coprocessors=[MultiRowMutationEndpoint]
    slave1:16020 1589125915820
        requestsPerSecond=0.0, numberOfOnlineRegions=0, usedHeapMB=17, maxHeapMB=440, numberOfStores=0, numberOfStorefiles=0, storefileUncompressedSizeMB=0, storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeMB=0, readRequestsCount=0, writeRequestsCount=0, rootIndexSizeKB=0, totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, coprocessors=[]
    slave2:16020 1589125917741
        requestsPerSecond=0.0, numberOfOnlineRegions=1, usedHeapMB=15, maxHeapMB=440, numberOfStores=1, numberOfStorefiles=1, storefileUncompressedSizeMB=0, storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeMB=0, readRequestsCount=4, writeRequestsCount=0, rootIndexSizeKB=0, totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, coprocessors=[]
0 dead servers
Aggregate load: 0, regions: 2

This shows more detail about the master, slave1 and slave2 hosts, such as their service ports and request rates.

For more status query options, run help 'status':

hbase(main):004:0> help 'status'
Show cluster status. Can be 'summary', 'simple', 'detailed', or 'replication'. The default is 'summary'. Examples: 
 
  hbase> status  
  hbase> status 'simple'  
  hbase> status 'summary'   
  hbase> status 'detailed'  
  hbase> status 'replication'   
  hbase> status 'replication', 'source'   
  hbase> status 'replication', 'sink' 
 
hbase(main):005:0> quit    

Step 4: Stop the HBase service

[hadoop@master hbase]$ stop-hbase.sh
stopping hbase.........

Experiment Task 5: Check Hive Status via Commands

Step 1: Start Hive

Switch to the /usr/local/src/hive directory, type hive, and press Enter.

[hadoop@master hadoop]$ cd /usr/local/src/hive 
 
[hadoop@master hive]$ hive 
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/src/hive/lib/hive-jdbc-2.0.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/src/hive/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/src/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Logging initialized using configuration in jar:file:/usr/local/src/hive/lib/hive-common-2.0.0.jar!/hive-log4j2.properties
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive>

Step 2: Basic Hive commands

Note: every Hive command-line statement must end with a semicolon.

(1) List the databases

hive> show databases; 
OK 
default 
Time taken: 0.011 seconds, Fetched: 1 row(s) 

The default database, default, is shown.

(2) List all tables in the default database

hive> use default;   

hive> show tables; 
OK
test 
Time taken: 0.026 seconds

The default database currently contains a single table, test.

(3) Create a table stu, with an integer id column and a string name column

hive> create table stu(id int,name string);
OK 
Time taken: 0.53 seconds 

(4) Insert a row into table stu, with id 1001 and name zhangsan

hive> insert into stu values (1001,"zhangsan"); 

WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = hadoop_20200515102811_1bccf3d2-88e3-4403-b25b-1e51e6e215b5
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1588987665170_0001, Tracking URL = http://master:8088/proxy/application_1588987665170_0001/
Kill Command = /usr/local/src/hadoop/bin/hadoop job  -kill job_1588987665170_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2020-05-15 10:34:16,557 Stage-1 map = 0%,  reduce = 0%
2020-05-15 10:34:37,656 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 5.63 sec
MapReduce Total cumulative CPU time: 5 seconds 630 msec
Ended Job = job_1588987665170_0001
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Moving data to: hdfs://192.168.1.6:9000/user/hive/warehouse/stu/.hive-staging_hive_2020-05-15_10-33-51_327_8147862916316704428-1/-ext-10000
Loading data to table default.stu
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1   Cumulative CPU: 5.63 sec   HDFS Read: 4177 HDFS Write: 78 SUCCESS
Total MapReduce CPU Time Spent: 5 seconds 630 msec
OK
Time taken: 47.769 seconds

Following the same steps, insert two more rows: id 1002 with name lisi, and id 1003 with name wangwu.

(5) List the tables again after inserting the data

 hive> show tables; 
 
 OK
 stu
 test 
values__tmp__table__1
Time taken: 0.019 seconds, Fetched: 3 row(s) 

(6) Describe the structure of table stu

hive> desc stu;

OK 
id                      int                                       
name                    string                                    
Time taken: 0.031 seconds, Fetched: 2 row(s) 

(7) View the contents of table stu

hive> select * from stu;

OK 
1001       zhangsan 
2002       lisi 
3003       wangwu
Time taken: 0.101 seconds, Fetched: 3 row(s) 

Step 3: View the file systems and the command history

(1) View the local file system by running ! ls /usr/local/src;

hive> ! ls /usr/local/src; 

flume 
hadoop 
hbase
hive 
jdk1.8.0_152 
sqoop 
zookeeper 

(2) View the HDFS file system by running dfs -ls /;

hive> dfs -ls /;

Found 5 items
drwxr-xr-x   - hadoop supergroup          0 2020-05-04 22:06 /bigdata
-rw-r--r--   3 hadoop supergroup         12 2020-05-04 22:12 /bigdatafile
drwxr-xr-x   - hadoop supergroup          0 2020-05-10 23:51 /hbase
drwx-wx-wx   - hadoop supergroup          0 2020-05-15 10:33 /tmp
drwxrwxrwx   - hadoop supergroup          0 2020-04-23 14:08 /user
 
hive> exit; 

(3) View every command entered in Hive

Go to the hadoop user's home directory, /home/hadoop, and view the .hivehistory file.

[hadoop@master home]$ cd /home/hadoop 
 
[hadoop@master ~]$ cat .hivehistory  

create database sample;
show databases;
create database sample;
use sample;  
create table student(number STRING, name STRING)  

row format delimited  
fields terminated by "|" 
stored as textfile;
exit; 
show databases;
use default;  
show tables; 
create table stu(id int,name string);
insert into stu values (1001,"zhangsan"); 
show tables;
desc stu;
select * from stu;
! ls /usr/local/src;
dfs -ls /;
exit;

The output shows every command previously run in the Hive command line, including the incorrect ones, which is helpful for maintenance and troubleshooting.

Experiment Task 6: Check ZooKeeper Status

Step 1: Check the ZooKeeper status

[hadoop@master ~]$ zkServer.sh status

ZooKeeper JMX enabled by default 
Using config: /usr/local/src/zookeeper/bin/../conf/zoo.cfg 
Mode: follower
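
zkServer.sh status reports the local server's role (leader or follower). ZooKeeper 3.4 also answers "four-letter word" probes on its client port, which works for remote checks as well; a minimal sketch, assuming nc (netcat) is installed:

[hadoop@master ~]$ echo ruok | nc master 2181   # prints "imok" if the server is healthy
[hadoop@master ~]$ echo stat | nc master 2181   # version, connected clients, mode and node counts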

Step 2: Check the running processes

QuorumPeerMain is the entry class for starting a ZooKeeper cluster member: it loads the configuration and launches the QuorumPeer thread.

Run jps to check the process list.

[hadoop@master ~]$ jps
3987 Jps
 
3925 QuorumPeerMain 
1628 SecondaryNameNode 
1453 NameNode
1821 ResourceManager 

The QuorumPeerMain process is now running.

Step 3: Run zkCli.sh to connect to the ZooKeeper service.

[hadoop@master ~]$ zkCli.sh 

Connecting to localhost:2181
2020-05-15 14:47:11,157 [myid:] - INFO  [main:Environment@100] - Client environment:zookeeper.version=3.4.8--1, built on 02/06/2016 03:18 GMT
2020-05-15 14:47:11,160 [myid:] - INFO  [main:Environment@100] - Client environment:host.name=master
2020-05-15 14:47:11,160 [myid:] - INFO  [main:Environment@100] - Client environment:java.version=1.8.0_152
2020-05-15 14:47:11,162 [myid:] - INFO  [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
2020-05-15 14:47:11,162 [myid:] - INFO  [main:Environment@100] - Client environment:java.home=/usr/local/src/java/jre
2020-05-15 14:47:11,162 [myid:] - INFO  [main:Environment@100] - Client environment:java.class.path=/usr/local/src/zookeeper/bin/../build/classes:/usr/local/src/zookeeper/bin/../build/lib/*.jar:/usr/local/src/zookeeper/bin/../lib/slf4j-log4j12-1.6.1.jar:/usr/local/src/zookeeper/bin/../lib/slf4j-api-1.6.1.jar:/usr/local/src/zookeeper/bin/../lib/netty-3.7.0.Final.jar:/usr/local/src/zookeeper/bin/../lib/log4j-1.2.16.jar:/usr/local/src/zookeeper/bin/../lib/jline-0.9.94.jar:/usr/local/src/zookeeper/bin/../zookeeper-3.4.8.jar:/usr/local/src/zookeeper/bin/../src/java/lib/*.jar:/usr/local/src/zookeeper/bin/../conf:.::/usr/local/src/java/lib:/usr/local/src/java/jre/lib:/usr/local/src/sqoop/lib
2020-05-15 14:47:11,162 [myid:] - INFO  [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2020-05-15 14:47:11,162 [myid:] - INFO  [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
2020-05-15 14:47:11,163 [myid:] - INFO  [main:Environment@100] - Client environment:java.compiler=<NA>
2020-05-15 14:47:11,163 [myid:] - INFO  [main:Environment@100] - Client environment:os.name=Linux
2020-05-15 14:47:11,163 [myid:] - INFO  [main:Environment@100] - Client environment:os.arch=amd64
2020-05-15 14:47:11,163 [myid:] - INFO  [main:Environment@100] - Client environment:os.version=3.10.0-693.el7.x86_64
2020-05-15 14:47:11,163 [myid:] - INFO  [main:Environment@100] - Client environment:user.name=hadoop
2020-05-15 14:47:11,163 [myid:] - INFO  [main:Environment@100] - Client environment:user.home=/home/hadoop
2020-05-15 14:47:11,163 [myid:] - INFO  [main:Environment@100] - Client environment:user.dir=/usr/local/src/hadoop
2020-05-15 14:47:11,164 [myid:] - INFO  [main:ZooKeeper@438] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@42110406
Welcome to ZooKeeper!
2020-05-15 14:47:11,191 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1032] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
2020-05-15 14:47:11,249 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@876] - Socket connection established to localhost/127.0.0.1:2181, initiating session
2020-05-15 14:47:11,260 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1299] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x171f70f3bda20ea, negotiated timeout = 30000
 
WATCHER:: 
 
WatchedEvent state:SyncConnected type:None path:null

[zk: localhost:2181(CONNECTED) 0]

The connection succeeded: the client prints ZooKeeper's environment configuration and the "Welcome to ZooKeeper!" banner.

After typing help, the screen lists the available ZooKeeper commands:
[zk: localhost:2181(CONNECTED) 0] help 
ZooKeeper -server host:port cmd args        
stat path [watch]        
set path data [version]     
ls path [watch]      
delquota [-n|-b] path     
ls2 path [watch]        
setAcl path acl    
setquota -n|-b val path  
history   
redo cmdno         
printwatches on|off        
delete path [version]      
sync path       
listquota path       
rmr path         
get path [watch]       
create [-s] [-e] path data acl       
addauth scheme auth      
quit        
getAcl path       
close      
connect host:port
[zk: localhost:2181(CONNECTED) 1] 

Step 4: Watch the /hbase directory

Once the content of /hbase changes, a notification is raised. Set the watch by running get /hbase 1.

[zk: localhost:2181(CONNECTED) 0] get /hbase 1

cZxid = 0x100000002
ctime = Thu Apr 23 16:02:29 CST 2020
mZxid = 0x100000002
mtime = Thu Apr 23 16:02:29 CST 2020
pZxid = 0x20000008d
cversion = 26
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 0
numChildren = 16
[zk: localhost:2181(CONNECTED) 1] set /hbase value-update

WATCHER::

WatchedEvent state:SyncConnected type:NodeDataChanged path:/hbase
cZxid = 0x100000002
ctime = Thu Apr 23 16:02:29 CST 2020
mZxid = 0x20000c6d3
mtime = Fri May 15 15:03:41 CST 2020
pZxid = 0x20000008d
cversion = 26
dataVersion = 1
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 12
numChildren = 16
[zk: localhost:2181(CONNECTED) 2] get /hbase
value-update
cZxid = 0x100000002
ctime = Thu Apr 23 16:02:29 CST 2020
mZxid = 0x20000c6d3
mtime = Fri May 15 15:03:41 CST 2020
pZxid = 0x20000008d
cversion = 26
dataVersion = 1
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 12
numChildren = 16
 
[zk: localhost:2181(CONNECTED) 3] quit 

The output shows that after set /hbase value-update runs, dataVersion changes from 0 to 1, confirming that /hbase is being watched.
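
Note that a watch set this way is one-shot: it fires on the first change and must be re-registered afterwards. If you want the watch without printing the node's data, stat (listed in the help output above) sets one too; a minimal sketch:

[zk: localhost:2181(CONNECTED) 0] stat /hbase 1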

Experiment Task 7: Check Sqoop Status

Step 1: Query the Sqoop version

Verify that Sqoop is working properly.

First switch to the /usr/local/src/sqoop directory and run ./bin/sqoop-version:

[hadoop@master ~]$ cd /usr/local/src/sqoop 
[hadoop@master sqoop]$ ./bin/sqoop-version

Warning: /usr/local/src/sqoop/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/local/src/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
20/05/06 17:40:16 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7
Sqoop 1.4.7
git commit id 2328971411f57f0cb683dfb79d19d4d19d185dd8
Compiled by maugli on Thu Dec 21 15:59:58 STD 2017

Step 2: Test whether Sqoop can connect to the database

Switch to the Sqoop directory and run:

bin/sqoop list-databases --connect jdbc:mysql://master:3306/ --username root --password Password123$

In this command, "master:3306" is the database host name and port.

[hadoop@master hadoop]$ cd /usr/local/src/sqoop
[hadoop@master sqoop]$ bin/sqoop list-databases --connect jdbc:mysql://master:3306/ --username root --password Password123$ 

Warning: /usr/local/src/sqoop/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/local/src/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
20/05/15 12:15:57 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7
20/05/15 12:15:57 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
20/05/15 12:15:57 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
Fri May 15 12:15:57 CST 2020 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.

information_schema 
hive
mysql
performance_schema
sys 

The output shows that Sqoop can connect to MySQL and lists every database instance on the master host, including information_schema, hive, mysql, performance_schema and sys.
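
The same connection arguments work with Sqoop's other listing commands, such as list-tables (see the command list below). A minimal sketch that uses -P to prompt for the password, as the warning above recommends:

[hadoop@master sqoop]$ bin/sqoop list-tables --connect jdbc:mysql://master:3306/hive --username root -P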

Step 3: Run Sqoop and list its available commands.

[hadoop@master sqoop]$ sqoop help

Warning: /usr/local/src/sqoop/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/local/src/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
20/05/15 13:42:02 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7
usage: sqoop COMMAND [ARGS]
 
Available commands:
  codegen            Generate code to interact with database records
  create-hive-table  Import a table definition into Hive
  eval               Evaluate a SQL statement and display the results
  export             Export an HDFS directory to a database table
  help               List available commands
  import             Import a table from a database to HDFS
  import-all-tables  Import tables from a database to HDFS
  import-mainframe   Import datasets from a mainframe server to HDFS
  job                Work with saved jobs
  list-databases     List available databases on a server
  list-tables        List available tables in a database
  merge              Merge results of incremental imports
  metastore          Run a standalone Sqoop metastore
  version            Display version information
 
See 'sqoop help COMMAND' for information on a specific command. 

Experiment Task 8: Check Flume Status via Commands

Step 1: Check the Flume version.

[hadoop@master sqoop]$ cd /usr/local/src/flume 
 
[hadoop@master flume]$ flume-ng version 

Flume 1.6.0 
Source code repository: https://git-wip-us.apache.org/repos/asf/flume.git
Revision: 2561a23240a71ba20bf288c7c2cda88f443c2080
Compiled by hshreedharan on Mon May 11 11:15:44 PDT 2015
From source with checksum b29e416802ce9ece3269d34233baf43f

Step 2: Create example.conf

[hadoop@master flume]$ vim /usr/local/src/flume/example.conf

# Write the following into the file.
# a1 is the agent name; r1, k1 and c1 are its three components.
a1.sources=r1
a1.sinks=k1 
a1.channels=c1 
 
# Configure the r1 source: type, spool directory and file header attribute
a1.sources.r1.type=spooldir 
a1.sources.r1.spoolDir=/usr/local/src/flume/ 
a1.sources.r1.fileHeader=true 
 
# Configure the k1 sink
a1.sinks.k1.type=hdfs                            # sink type: HDFS
a1.sinks.k1.hdfs.path=hdfs://master:9000/flume   # target storage location
a1.sinks.k1.hdfs.rollSize=1048760                # roll the temporary file into a target file once it reaches 1048760 bytes
a1.sinks.k1.hdfs.rollCount=0                     # 0 = never roll based on the number of events
a1.sinks.k1.hdfs.rollInterval=900                # roll the temporary file into a target file every 900 seconds
a1.sinks.k1.hdfs.useLocalTimeStamp=true          # use the local timestamp
 
# Configure the c1 channel
a1.channels.c1.type=file                         # use a file-backed channel as the buffer
a1.channels.c1.capacity=1000 
a1.channels.c1.transactionCapacity=100 
 
# Use c1 as the transfer channel between the source and the sink
a1.sources.r1.channels = c1 
a1.sinks.k1.channel = c1   

Step 3: Start Flume agent a1 with logging to the console

[hadoop@master flume]$ /usr/local/src/flume/bin/flume-ng agent --conf ./conf --conf-file ./example.conf --name a1 -Dflume.root.logger=INFO,console

Step 4: Check the result

[hadoop@master flume]$ hdfs dfs -lsr /flume

drwxr-xr-x   - hadoop supergroup          0 2020-05-15 15:16 /flume/20200515
-rw-r--r--   2 hadoop supergroup         11 2020-05-15 15:16 /flume/20200515/events-.1545376595231
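
To confirm end-to-end delivery, you can read one of the generated event files back out of HDFS. A minimal sketch (the file name is taken from the listing above; -ls -R is the non-deprecated spelling of -lsr):

[hadoop@master flume]$ hdfs dfs -ls -R /flume
[hadoop@master flume]$ hdfs dfs -cat /flume/20200515/events-.1545376595231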
