
Spark Cluster Installation and Deployment

1. Download the Spark package from the official site

# wget https://archive.apache.org/dist/spark/spark-2.4.8/spark-2.4.8-bin-hadoop2.7.tgz
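
It is worth verifying the download before extracting it. A minimal check, assuming the usual Apache archive convention of publishing a .sha512 file next to the tarball (compare the two digests by eye, since the published format is not always compatible with sha512sum -c):

# wget https://archive.apache.org/dist/spark/spark-2.4.8/spark-2.4.8-bin-hadoop2.7.tgz.sha512
# sha512sum spark-2.4.8-bin-hadoop2.7.tgz
# cat spark-2.4.8-bin-hadoop2.7.tgz.sha512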

2. Extract the archive

# tar -zxvf spark-2.4.8-bin-hadoop2.7.tgz -C /home/hadoop/app
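
Optionally, a version-agnostic symlink keeps later paths stable across upgrades; this is a local convention, not something Spark requires:

# cd /home/hadoop/app
# ln -s spark-2.4.8-bin-hadoop2.7 spark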

3. Edit the configuration

# cd /home/hadoop/app/spark-2.4.8-bin-hadoop2.7/conf/
# cp spark-env.sh.template spark-env.sh
# cp slaves.template slaves
# cp spark-defaults.conf.template spark-defaults.conf
# vim spark-env.sh

Add:
export HADOOP_CONF_DIR=/home/hadoop/app/hadoop-2.7.5/etc/hadoop
export HADOOP_HOME=/home/hadoop/app/hadoop-2.7.5
export JAVA_HOME=/opt/jdk1.8.0_202
export SPARK_HOME=/home/hadoop/app/spark-2.4.8-bin-hadoop2.7
export SCALA_HOME=/home/hadoop/app/scala-2.11.8
export SPARK_LOG_DIR=/home/hadoop/app/spark-2.4.8-bin-hadoop2.7/logs
export SPARK_PID_DIR=/home/hadoop/app/spark-2.4.8-bin-hadoop2.7/logs/pid
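
For the standalone cluster started in step 6, it can also help to pin the master host and per-worker resources explicitly in spark-env.sh. The values below are illustrative assumptions (dba-01 as master, 4 cores and 8g per worker), not part of the original setup:

export SPARK_MASTER_HOST=dba-01
export SPARK_WORKER_CORES=4
export SPARK_WORKER_MEMORY=8g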

Edit spark-defaults.conf
# vim spark-defaults.conf

Add:
spark.eventLog.enabled                             true
spark.eventLog.dir                                 hdfs://ns1/spark/eventLog
spark.rdd.compress                                 true
spark.driver.memory                                4G
spark.yarn.historyServer.address                   dba-01:18080
spark.history.ui.port                              18080
spark.history.fs.logDirectory                      hdfs://ns1/spark/eventLog
spark.yarn.maxAppAttempts                          4
spark.yarn.stagingDir                              hdfs://ns1/spark/stagingDir

spark.yarn.singleContainerPerNode                  false
spark.yarn.allocator.waitTime                      60s
spark.logConf                                      true
spark.ui.killEnabled                               false
spark.streaming.backpressure.initialRate           1000
spark.streaming.kafka.maxRatePerPartition         10000
spark.streaming.blockInterval                     1000
spark.streaming.backpressure.enabled              true
spark.streaming.receiver.maxRate                  10000
spark.streaming.kafka.maxRetries                  10
spark.default.parallelism                         64
spark.streaming.dynamicAllocation.enabled         false
spark.streaming.dynamicAllocation.minExecutors    1
spark.streaming.dynamicAllocation.maxExecutors    50
spark.shuffle.service.enabled             true
spark.dynamicAllocation.enabled           true
spark.dynamicAllocation.minExecutors      1
spark.dynamicAllocation.maxExecutors      20
spark.driver.maxResultSize  4g
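
The event log and staging directories use the HDFS nameservice ns1, so ns1 must already be defined in the Hadoop configuration that HADOOP_CONF_DIR points to. A quick sanity check, which should print ns1:

# hdfs getconf -confKey dfs.nameservices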

Edit slaves
# vim slaves
Add the worker hostnames:
dba-01
dba-02
dba-03

4. Create the required directories

# cd /home/hadoop/app/spark-2.4.8-bin-hadoop2.7
# mkdir -p logs/pid
# hdfs dfs -mkdir -p /spark/stagingDir
# hdfs dfs -mkdir -p /spark/eventLog
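
To confirm the HDFS directories exist:

# hdfs dfs -ls /spark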

5. Copy the installation to the other nodes

# cd /home/hadoop/app
# scp -r spark-2.4.8-bin-hadoop2.7 hadoop@dba-02:/home/hadoop/app
# scp -r spark-2.4.8-bin-hadoop2.7 hadoop@dba-03:/home/hadoop/app
# scp -r spark-2.4.8-bin-hadoop2.7 hadoop@dba-04:/home/hadoop/app
# scp -r spark-2.4.8-bin-hadoop2.7 hadoop@dba-05:/home/hadoop/app
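
Both scp here and start-all.sh in step 6 assume passwordless SSH from this node to the others as the hadoop user. If that is not set up yet, a minimal sketch (only generate a key if none exists; repeat ssh-copy-id for each target host):

# ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# ssh-copy-id hadoop@dba-02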

6. Start the Spark cluster. Run start-all.sh on the node that should act as the master (dba-01 here); it starts a master locally and launches workers on the hosts listed in slaves.

# cd /home/hadoop/app/spark-2.4.8-bin-hadoop2.7/sbin
# ./start-all.sh
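
If startup succeeded, jps should show a Master process on this node and Worker processes on the hosts listed in slaves, and the master web UI is served on port 8080 by default. Note that start-all.sh does not start the history server configured above at dba-01:18080; it has to be started separately:

# jps
# ./start-history-server.sh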

7. Add the Spark environment variables (repeat on each node)

# vim /etc/profile
export SPARK_HOME=/home/hadoop/app/spark-2.4.8-bin-hadoop2.7
export PATH=$PATH:$SPARK_HOME/bin

# source /etc/profile
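
As a smoke test, the bundled SparkPi example can be submitted to the cluster. This sketch assumes dba-01 is the standalone master on the default port 7077, and that the examples jar name matches the Scala 2.11 build shipped in this tarball; with HADOOP_CONF_DIR set, --master yarn should work as well:

# spark-submit --class org.apache.spark.examples.SparkPi \
    --master spark://dba-01:7077 \
    $SPARK_HOME/examples/jars/spark-examples_2.11-2.4.8.jar 100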
