Java kerberos hdfs

2022-03-03 14:33:53

Tags: hdfs, Java, kerberos, hadoop, conf, fileSystem, FileSystem



Application configuration (YAML):

    hadoop:
      hdfs:
        host: hdfs://192.168.0.161:8020
        path: /app-logs
        user: hdfs
        batch-size: 105267200           # ~100 MB
        batch-rollover-interval: 60000  # 60 * 1000 ms = 1 minute
        kerberos:
          keytab: C:\ProgramData\MIT\Kerberos5\hdfs.headless.keytab
          user: hdfs-test@EMERGEN.COM
          kerber-conf: C:\ProgramData\MIT\Kerberos5\krb5.conf
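The code below reads this configuration through an hdfsConf object and its nested Kerbers object. The actual binding classes are not shown in the post, so here is a minimal sketch of what they might look like, with class and field names inferred from the getters used later (in Spring Boot these would typically be @ConfigurationProperties beans; the relaxed binding of kebab-case keys like kerber-conf to kerberConf is an assumption):

```java
// Hypothetical binding POJOs for the YAML above; names are inferred
// from the getters used in getConf() and getfileSystem().
class HdfsConf {
    private String host;                 // hadoop.hdfs.host
    private String path;                 // hadoop.hdfs.path
    private String user;                 // hadoop.hdfs.user
    private long batchSize;              // hadoop.hdfs.batch-size
    private long batchRolloverInterval;  // hadoop.hdfs.batch-rollover-interval
    private Kerbers kerberos;            // hadoop.hdfs.kerberos.*

    public String getHost() { return host; }
    public void setHost(String host) { this.host = host; }
    public String getUser() { return user; }
    public void setUser(String user) { this.user = user; }
    public Kerbers getKerberos() { return kerberos; }
    public void setKerberos(Kerbers kerberos) { this.kerberos = kerberos; }

    static class Kerbers {
        private String keytab;      // path to the keytab file
        private String user;        // Kerberos principal
        private String kerberConf;  // path to krb5.conf

        public String getKeytab() { return keytab; }
        public void setKeytab(String keytab) { this.keytab = keytab; }
        public String getUser() { return user; }
        public void setUser(String user) { this.user = user; }
        public String getKerberConf() { return kerberConf; }
        public void setKerberConf(String kerberConf) { this.kerberConf = kerberConf; }
    }
}
```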

Maven dependencies:

    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>3.2.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>3.2.2</version>
    </dependency>

Building the Configuration:

public static Configuration getConf() {
        Configuration conf = new Configuration();
        Kerbers kerberos = hdfsConf.getKerberos();
        String user = kerberos.getUser();
        String keytab = kerberos.getKeytab();
        String krbConf = kerberos.getKerberConf();
        conf.set("fs.defaultFS", hdfsConf.getHost());
        // Enable Kerberos authentication for Hadoop
        conf.set("hadoop.security.authentication", "kerberos");
        // Principal of the NameNode; note that in most clusters this is the
        // NameNode's service principal (e.g. nn/_HOST@REALM), not the client user
        conf.set("dfs.namenode.kerberos.principal", user);
        // Principal used to access the cluster
        conf.set("kerberos.principal", user);
        // Path to the keytab file for that principal
        conf.set("kerberos.keytab", keytab);
        // krb5.conf is the Kerberos client configuration file; it is mandatory,
        // since without it the client cannot locate the Kerberos KDC
        System.setProperty("java.security.krb5.conf", krbConf);
        UserGroupInformation.setConfiguration(conf);
        try {
            // Authenticate the principal against the KDC using the keytab
            UserGroupInformation.loginUserFromKeytab(user, keytab);
        } catch (IOException ex) {
            log.error("Kerberos login failed: " + ex.getMessage(), ex);
        }
        return conf;
    }
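The krb5.conf file pointed to by java.security.krb5.conf is the standard MIT Kerberos client configuration. A minimal sketch for the EMERGEN.COM realm used above might look as follows (the KDC and admin-server hostnames are placeholders, not taken from the original post):

```ini
[libdefaults]
    default_realm = EMERGEN.COM

[realms]
    EMERGEN.COM = {
        # placeholder hostnames; use your cluster's actual KDC
        kdc = kdc.emergen.com
        admin_server = kdc.emergen.com
    }

[domain_realm]
    .emergen.com = EMERGEN.COM
    emergen.com = EMERGEN.COM
```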

Creating the HDFS connection:

public static FileSystem getfileSystem() {
        // FileSystem is the key client-side class for HDFS access;
        // the URI comes from fs.defaultFS
        URI uri = URI.create(hdfsConf.getHost());
        Configuration conf = getConf();
        // The replication factor can also be set here,
        // e.g. conf.set("dfs.replication", "3");
        FileSystem fileSystem = null;
        try {
            fileSystem = FileSystem.get(uri, conf);
        } catch (IOException e) {
            log.error("Failed to connect to HDFS: " + e.getMessage(), e);
        }
        return fileSystem;
    }
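FileSystem.get(uri, conf) selects the client implementation from the URI's scheme, and the host/port identify the NameNode RPC endpoint. A quick stdlib check of how the fs.defaultFS value from the configuration above decomposes:

```java
import java.net.URI;

class UriParts {
    public static void main(String[] args) {
        // fs.defaultFS value from the YAML configuration above
        URI uri = URI.create("hdfs://192.168.0.161:8020");
        // The scheme ("hdfs") is what FileSystem.get uses to pick the HDFS
        // client; host and port locate the NameNode RPC endpoint.
        System.out.println(uri.getScheme()); // prints "hdfs"
        System.out.println(uri.getHost());   // prints "192.168.0.161"
        System.out.println(uri.getPort());   // prints 8020
    }
}
```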


Test case:

    void testHdfs() throws Exception {
        log.info("===============testHdfs======================");
        // Local source and remote destination for an upload
        // (declared but not used in this listing test)
        Path src = new Path("D:\\eee.txt");
        Path dst = new Path("/app-logs/");
        FileSystem fileSystem = HdfsOperator.getfileSystem();
        // List the files under /app-logs, non-recursively
        RemoteIterator<LocatedFileStatus> iterator =
                HdfsOperator.listFiles(fileSystem, new Path("/app-logs"), false);
        while (iterator.hasNext()) {
            LocatedFileStatus next = iterator.next();
            System.out.println(next.getPath().toString());
        }
    }

Output:

 

Source: https://www.cnblogs.com/younger5/p/15959651.html
