Deploying DolphinScheduler on AWS EMR

2021-05-18 19:59:50

Tags: hdfs, s3, dolphinscheduler, aws, hadoop, sh, emr, config


1. Related links

1. DolphinScheduler standalone deployment guide
2. DolphinScheduler source on GitHub
3. AWS S3 access documentation
4. AWS S3 endpoint documentation

2. Deployment goal

The goal of this deployment is full separation of storage and compute: data and resources are stored on S3, while computation runs on an EMR cluster that can be scaled dynamically.

3. Procedure

3.1 Download and modify the source

Download the source:

git clone -b 1.3.6-release https://github.com/apache/dolphinscheduler.git

cp -r dolphinscheduler dolphinscheduler_debug
cd dolphinscheduler_debug

Modify the source. The Hadoop property names behind s3AccessKey and s3SecretKey may not match on EMR (EMRFS uses fs.s3.awsAccessKeyId / fs.s3.awsSecretAccessKey rather than the s3a keys), so rename them:

sed -i "s/fs.s3a.access.key/fs.s3.awsAccessKeyId/g" `grep fs.s3a.access.key -rl /home/hadoop/workspace/dolphinscheduler_debug`

sed -i "s/fs.s3a.secret.key/fs.s3.awsSecretAccessKey/g" `grep fs.s3a.secret.key -rl /home/hadoop/workspace/dolphinscheduler_debug`

Why the change: the S3 configuration key names differ between stock Hadoop (s3a) and EMR's EMRFS.
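The two sed renames above can be sanity-checked on a throwaway file before touching the real source tree. Everything under /tmp in this sketch is illustrative, not part of the DolphinScheduler codebase:

```shell
# Create a sample file containing the old s3a property names
mkdir -p /tmp/ds_sed_demo
cat > /tmp/ds_sed_demo/Sample.java <<'EOF'
conf.set("fs.s3a.access.key", accessKey);
conf.set("fs.s3a.secret.key", secretKey);
EOF

# Same rename pattern as the commands above, pointed at the demo directory
sed -i "s/fs.s3a.access.key/fs.s3.awsAccessKeyId/g" $(grep fs.s3a.access.key -rl /tmp/ds_sed_demo)
sed -i "s/fs.s3a.secret.key/fs.s3.awsSecretAccessKey/g" $(grep fs.s3a.secret.key -rl /tmp/ds_sed_demo)

# Confirm the rename took effect
grep -c "fs.s3.awsAccessKeyId" /tmp/ds_sed_demo/Sample.java   # prints 1
cat /tmp/ds_sed_demo/Sample.java
```

The same `grep -rl | sed -i` pattern then applies to the real tree once the output looks right.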

Whether to apply the following change depends on your setup:

vi ./dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/HadoopUtils.java


public static final String resourceUploadPath = PropertyUtils.getString(RESOURCE_UPLOAD_PATH, "/dolphinscheduler");

Build:

./mvnw clean install -Prelease

3.2 Configuration

The DolphinScheduler runtime user must be EMR's hadoop account; this saves permission headaches.

Follow the DolphinScheduler standalone deployment guide as the main reference.

  • Key steps
    1. Upload mysql-connector-java-5.1.49.jar to the lib directory
    2. Configure conf/datasource.properties:
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.datasource.url=jdbc:mysql://xxxxxxxxx:3306/dolphinscheduler?characterEncoding=UTF-8&allowMultiQueries=true
spring.datasource.username=dolphinscheduler
spring.datasource.password=yyyyyy
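The datasource above assumes the dolphinscheduler database and user already exist on the MySQL host. A sketch of the bootstrap SQL, written to a file here; the charset and GRANT follow the standalone deployment guide, and 'yyyyyy' is the placeholder password from the config above:

```shell
# Write the initialization SQL to a scratch file (path is illustrative)
cat > /tmp/dolphinscheduler_init.sql <<'EOF'
CREATE DATABASE IF NOT EXISTS dolphinscheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
GRANT ALL PRIVILEGES ON dolphinscheduler.* TO 'dolphinscheduler'@'%' IDENTIFIED BY 'yyyyyy';
FLUSH PRIVILEGES;
EOF

# Run it against the DB host with your mysql client, e.g.:
# mysql -h xxxxxxxxx -uroot -p < /tmp/dolphinscheduler_init.sql
echo "wrote $(wc -l < /tmp/dolphinscheduler_init.sql) statements"
```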

3. Configure conf/env/dolphinscheduler_env.sh

EMR already comes with Hadoop, Spark, and Hive configured, so only the Python path is set:

export PYTHON_HOME=/usr/bin/python3.7

export PATH=$PYTHON_HOME:$PATH
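A quick sanity check, as a sketch: DolphinScheduler treats PYTHON_HOME as the path to the Python interpreter itself (not a directory), so it should point at an executable. The path below is the one from the article and may differ on your EMR image:

```shell
# Path taken from the env file above; adjust for your EMR release
PYTHON_HOME=/usr/bin/python3.7
if [ -x "$PYTHON_HOME" ]; then
  "$PYTHON_HOME" --version
else
  echo "interpreter not found at $PYTHON_HOME"
fi
```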

4. Configure conf/config/install_config.conf:


# NOTICE :  If the following config has special characters in the variable `.*[]^${}\+?|()@#&`, Please escape, for example, `[` escape to `\[`
# postgresql or mysql
dbtype="mysql"

# db config
# db address and port
dbhost="xxxxxxxxx:3306"

# db username
username="dolphinscheduler"

# database name
dbname="dolphinscheduler"

# db password
# NOTICE: if there are special characters, please use the \ to escape, for example, `[` escape to `\[`
password="yyyyyy"

# zk cluster
zkQuorum="localhost:2181"

# Note: the target installation path for dolphinscheduler, please not config as the same as the current path (pwd)
installPath="/opt/soft/dolphinscheduler"

# deployment user
# Note: the deployment user needs to have sudo privileges and permissions to operate hdfs. If hdfs is enabled, the root directory needs to be created by itself
deployUser="hadoop"


# alert config
# mail server host
mailServerHost="smtp.exmail.qq.com"

# mail server port
# note: Different protocols and encryption methods correspond to different ports, when SSL/TLS is enabled, make sure the port is correct.
mailServerPort="25"

# sender
mailSender="xxxxxxxxxx"

# user
mailUser="xxxxxxxxxx"

# sender password
# note: The mail.passwd is email service authorization code, not the email login password.
mailPassword="xxxxxxxxxx"

# TLS mail protocol support
starttlsEnable="true"

# SSL mail protocol support
# only one of TLS and SSL can be in the true state.
sslEnable="false"

#note: sslTrust is the same as mailServerHost
sslTrust="smtp.exmail.qq.com"


# resource storage type: HDFS, S3, NONE
resourceStorageType="S3"

# if resourceStorageType is HDFS, set defaultFS to the namenode address; for HA, put core-site.xml and hdfs-site.xml in the conf directory.
# if S3, set the S3 address, for example: s3a://dolphinscheduler
# Note: for S3, be sure to create the root directory /dolphinscheduler
# defaultFS="hdfs://mycluster:8020"
defaultFS="s3a://cccc-dolphinscheduler"

# if resourceStorageType is S3, the following three configuration is required, otherwise please ignore
# s3Endpoint="http://192.168.xx.xx:9010"
s3Endpoint="http://s3.us-west-2.amazonaws.com"
s3AccessKey="awddsasdaasd"
s3SecretKey="aaa/aqw/9hiA5ExnY"

# if resourcemanager HA is enabled, please set the HA IPs; if resourcemanager is single, keep this value empty
yarnHaIps="172.10.170.26"

# if resourcemanager HA is enabled or not use resourcemanager, please keep the default value; If resourcemanager is single, you only need to replace ds1 to actual resourcemanager hostname
singleYarnIp="172.10.170.26"

# resource store on HDFS/S3 path, resource file will store to this hadoop hdfs path, self configuration, please make sure the directory exists on hdfs and have read write permissions. "/dolphinscheduler" is recommended
resourceUploadPath="/cccc-dolphinscheduler/"

# who have permissions to create directory under HDFS/S3 root path
# Note: if kerberos is enabled, please config hdfsRootUser=
hdfsRootUser="hdfs"

# kerberos config
# whether kerberos starts, if kerberos starts, following four items need to config, otherwise please ignore
kerberosStartUp="false"
# kdc krb5 config file path
krb5ConfPath="$installPath/conf/krb5.conf"
# keytab username
keytabUserName="hdfs-mycluster@ESZ.COM"
# username keytab path
keytabPath="$installPath/conf/hdfs.headless.keytab"


# api server port
apiServerPort="12345"


# install hosts
# Note: install the scheduled hostname list. If it is pseudo-distributed, just write a pseudo-distributed hostname
ips="localhost"

# ssh port, default 22
# Note: if ssh port is not default, modify here
sshPort="22"

# run master machine
# Note: list of hosts hostname for deploying master
masters="localhost"

# run worker machine
# note: need to write the worker group name of each worker, the default value is "default"
workers="localhost:default"

# run alert machine
# note: list of machine hostnames for deploying alert server
alertServer="localhost"

# run api machine
# note: list of machine hostnames for deploying api server
apiServers="localhost"
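Before running install.sh it can help to sanity-check a few of the storage values. The sketch below parses a throwaway copy of the three S3-related settings; the file path and helper name are illustrative:

```shell
# Throwaway copy of the storage settings from install_config.conf
cat > /tmp/install_config_check.conf <<'EOF'
resourceStorageType="S3"
defaultFS="s3a://cccc-dolphinscheduler"
resourceUploadPath="/cccc-dolphinscheduler/"
EOF

# Extract a quoted value by key name
get() { sed -n "s/^$1=\"\(.*\)\"/\1/p" /tmp/install_config_check.conf; }

[ "$(get resourceStorageType)" = "S3" ] && echo "storage type ok"
case "$(get defaultFS)" in s3a://*) echo "defaultFS uses s3a scheme" ;; esac
case "$(get resourceUploadPath)" in /*) echo "upload path is absolute" ;; esac
```

The same pattern extends to any other key worth checking (zkQuorum, dbhost, and so on).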


5. Other related steps


1. File upload/download (rz/sz):

sudo yum -y install lrzsz



2. Passwordless setup (hadoop user)

#echo 'hadoop ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers


ssh-keygen -t rsa

cd .ssh/
cat id_rsa.pub >> authorized_keys

cd ..

chmod 700 -R .ssh/

ssh localhost


3. Files


sudo bash

mkdir -p /opt/dolphinscheduler;
cd /opt/dolphinscheduler;

rz   # upload the DolphinScheduler bin tarball (apache-dolphinscheduler-1.3.6-bin.tar.gz)

tar -zxvf apache-dolphinscheduler-1.3.6-bin.tar.gz -C /opt/dolphinscheduler;
 
mv apache-dolphinscheduler-1.3.6-bin  dolphinscheduler-bin

chown -R hadoop:hadoop dolphinscheduler-bin

mkdir -p /data/dolphinscheduler
chown -R hadoop:hadoop /data/dolphinscheduler

4. Configuration

rz   # upload the MySQL JDBC driver (mysql-connector-java-5.1.49.jar)

5. Shell commands
One-click deploy: sh install.sh
Stop everything: sh ./bin/stop-all.sh
Start everything: sh ./bin/start-all.sh
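After start-all.sh, one lightweight way to confirm the API server came up is to probe its configured port (12345 per apiServerPort above). This sketch uses bash's /dev/tcp redirection, which is a bashism; checking `jps` output or the logs under the install path works just as well:

```shell
# Return success if a TCP connection to host:port can be opened
port_open() { (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; }

if port_open 127.0.0.1 12345; then
  echo "api server up"
else
  echo "api server down"
fi
```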

4. Web UI

http://xx.xx.xx.xxx:12345/dolphinscheduler/
Default credentials: admin / dolphinscheduler123

After logging in, create tenants as needed (for EMR, create a hadoop tenant),
then create regular users and assign them the hadoop tenant.

Source: https://blog.csdn.net/qq_27297393/article/details/116998595
