Configuring Kerberos on a Hadoop 3.3.1 Cluster

How do I set up Kerberos on a Hadoop 3.3.1 cluster?

I have built a Hadoop cluster and can already run some workloads on it (Avro, Spark, Kafka). Now I want to set up Kerberos to secure it, but after many attempts I have gotten nowhere.

Has anyone managed this, or does anyone know how to proceed? Any distribution is fine (CentOS or Debian).

I get an error like this:

base    | Authenticating as principal root/admin@EXAMPLE.COM with password.
base    | kadmin: Client 'root/admin@EXAMPLE.COM' not found in Kerberos database while initializing kadmin interface
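
When kadmin is run without an explicit -p flag, MIT Kerberos derives the admin principal from the current user (the Dockerfile below sets ENV USER=root), so it tries root/admin@EXAMPLE.COM; the error means no such principal exists in the KDC database. The first admin principal is normally bootstrapped with kadmin.local, which reads the database directly on the KDC host and needs no authentication. A minimal sketch, reusing the names defined later in this post:

# On the container that owns the KDC database: create the admin principal,
# grant */admin full rights, and (re)start the admin daemon.
kadmin.local -q "addprinc -pw ${KERBEROS_ADMIN_PASSWORD} admin/admin@${KRB_REALM}"
echo "*/admin@${KRB_REALM} *" > /var/kerberos/krb5kdc/kadm5.acl
pkill kadmind 2>/dev/null; kadmind

After that, kadmin -p admin/admin@${KRB_REALM} -w ${KERBEROS_ADMIN_PASSWORD} can authenticate remotely.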

Dockerfile

FROM centos:7

RUN yum clean all; \
    rpm --rebuilddb; \
    yum install -y initscripts curl nano cmake git which tar sudo rsync openssh-server openssh-clients

RUN yum update -y libselinux

RUN yum install -y java-1.8.0-openjdk

# RUN ssh-keygen && \
#     ssh-copy-id -i localhost

ENV JAVA_HOME=/usr/lib/jvm/java-1.8.0/jre

RUN curl -O https://dist.apache.org/repos/dist/release/hadoop/common/KEYS

RUN gpg --import KEYS

ENV HADOOP_VERSION 3.3.1
ENV HADOOP_URL https://www.apache.org/dist/hadoop/common/hadoop-$HADOOP_VERSION/hadoop-$HADOOP_VERSION.tar.gz

RUN set -x \
    && curl -fSL "$HADOOP_URL" -o /tmp/hadoop.tar.gz \
    && curl -fSL "$HADOOP_URL.asc" -o /tmp/hadoop.tar.gz.asc \
    && gpg --verify /tmp/hadoop.tar.gz.asc \
    && tar -xvf /tmp/hadoop.tar.gz -C /opt/ \
    && rm /tmp/hadoop.tar.gz*

RUN ln -s /opt/hadoop-$HADOOP_VERSION/etc/hadoop /etc/hadoop

RUN mkdir /opt/hadoop-$HADOOP_VERSION/logs

RUN mkdir /hadoop-data

ENV HADOOP_HOME=/opt/hadoop-$HADOOP_VERSION
ENV HADOOP_PREFIX=/opt/hadoop-$HADOOP_VERSION
ENV HADOOP_CONF_DIR=/etc/hadoop
ENV MULTIHOMED_NETWORK=1
ENV USER=root
ENV PATH $HADOOP_HOME/bin/:$PATH

# Kerberos server and client packages
RUN yum -y install krb5-server krb5-libs krb5-auth-dialog krb5-workstation
RUN yum -y install apache-commons-daemon-jsvc net-tools telnet telnet-server

RUN mkdir -p /var/log/kerberos
RUN touch /var/log/kerberos/kadmind.log

ENV HADOOP_COMMON_HOME $HADOOP_HOME
ENV HADOOP_HDFS_HOME $HADOOP_HOME
ENV HADOOP_MAPRED_HOME $HADOOP_HOME
ENV HADOOP_YARN_HOME $HADOOP_HOME
ENV HADOOP_CONF_DIR $HADOOP_HOME/etc/hadoop
ENV YARN_CONF_DIR $HADOOP_HOME/etc/hadoop
ENV NM_CONTAINER_EXECUTOR_PATH $HADOOP_HOME/bin/container-executor
ENV HADOOP_BIN_HOME $HADOOP_HOME/bin
ENV PATH $PATH:$HADOOP_BIN_HOME

# Kerberos settings consumed by entrypoint.sh
ENV KRB_REALM EXAMPLE.COM
ENV DOMAIN_REALM EXAMPLE.COM
ENV KERBEROS_ADMIN admin/admin
ENV KERBEROS_ADMIN_PASSWORD admin
ENV KERBEROS_ROOT_USER_PASSWORD password
ENV KEYTAB_DIR /etc/security/keytabs
ENV FQDN hadoop.docker.com

RUN mkdir $HADOOP_HOME/input
RUN cp $HADOOP_HOME/etc/hadoop/*.xml $HADOOP_HOME/input

ADD config_files/hadoop-env.sh $HADOOP_HOME/etc/hadoop/hadoop-env.sh
ADD config_files/krb5.conf /etc/krb5.conf
ADD config_files/core-site.xml $HADOOP_HOME/etc/hadoop/core-site.xml
ADD config_files/hdfs-site.xml $HADOOP_HOME/etc/hadoop/hdfs-site.xml
ADD config_files/mapred-site.xml $HADOOP_HOME/etc/hadoop/mapred-site.xml
ADD config_files/yarn-site.xml $HADOOP_HOME/etc/hadoop/yarn-site.xml
ADD config_files/container-executor.cfg $HADOOP_HOME/etc/hadoop/container-executor.cfg
RUN mkdir $HADOOP_HOME/nm-local-dirs \
    && mkdir $HADOOP_HOME/nm-log-dirs 
ADD config_files/ssl-server.xml $HADOOP_HOME/etc/hadoop/ssl-server.xml
ADD config_files/ssl-client.xml $HADOOP_HOME/etc/hadoop/ssl-client.xml
ADD config_files/keystore.jks $HADOOP_HOME/lib/keystore.jks

ADD entrypoint.sh /entrypoint.sh

RUN chmod a+x /entrypoint.sh

EXPOSE 8188 9864 9870 8042 8088 9866 22

ENTRYPOINT ["/entrypoint.sh"]
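
The build copies several files from config_files/ that are not shown here. For reference, a minimal /etc/krb5.conf consistent with the realm and KDC hostname used in this post would look roughly like the heredoc below (the real file may well differ):

# sketch: write a minimal krb5.conf pointing at the kdc container
cat > /etc/krb5.conf <<'EOF'
[libdefaults]
    default_realm = EXAMPLE.COM
    dns_lookup_realm = false
    dns_lookup_kdc = false

[realms]
    EXAMPLE.COM = {
        kdc = kdc.kerberos.com
        admin_server = kdc.kerberos.com
    }

[domain_realm]
    .docker.com = EXAMPLE.COM
    docker.com = EXAMPLE.COM
EOF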

entrypoint.sh

#!/bin/bash

# grant */admin full rights (a bare `sudo echo ... >` would not elevate the redirection)
echo "*/admin@${KRB_REALM} *" | sudo tee /var/kerberos/krb5kdc/kadm5.acl > /dev/null

# create the KDC database; kdb5_util -r expects a realm name, not a principal
sudo kdb5_util create -r ${KRB_REALM} -s -P changeme

# the KDC and the admin daemon must be running before Hadoop (or a remote kadmin) can use them
sudo krb5kdc
sudo kadmind

# create the Hadoop service principals; use kadmin.local, since the database is
# local and no admin principal exists yet (kadmin's password flag is -pw, not -p)
sudo kadmin.local -q "addprinc -pw ${KERBEROS_ROOT_USER_PASSWORD} root@${KRB_REALM}"
sudo kadmin.local -q "addprinc -pw ${KERBEROS_ROOT_USER_PASSWORD} nn/$(hostname -f)@${KRB_REALM}"
sudo kadmin.local -q "addprinc -pw ${KERBEROS_ROOT_USER_PASSWORD} dn/$(hostname -f)@${KRB_REALM}"
sudo kadmin.local -q "addprinc -pw ${KERBEROS_ROOT_USER_PASSWORD} HTTP/$(hostname -f)@${KRB_REALM}"
sudo kadmin.local -q "addprinc -pw ${KERBEROS_ROOT_USER_PASSWORD} jhs/$(hostname -f)@${KRB_REALM}"
sudo kadmin.local -q "addprinc -pw ${KERBEROS_ROOT_USER_PASSWORD} yarn/$(hostname -f)@${KRB_REALM}"
sudo kadmin.local -q "addprinc -pw ${KERBEROS_ROOT_USER_PASSWORD} rm/$(hostname -f)@${KRB_REALM}"
sudo kadmin.local -q "addprinc -pw ${KERBEROS_ROOT_USER_PASSWORD} nm/$(hostname -f)@${KRB_REALM}"

sudo kadmin -q "xst -k nn.service.keytab nn/$(hostname -f)@${KRB_REALM}"
sudo kadmin -q "xst -k dn.service.keytab dn/$(hostname -f)@${KRB_REALM}"
sudo kadmin -q "xst -k spnego.service.keytab HTTP/$(hostname -f)@${KRB_REALM}"
sudo kadmin -q "xst -k jhs.service.keytab jhs/$(hostname -f)@${KRB_REALM}"
sudo kadmin -q "xst -k yarn.service.keytab yarn/$(hostname -f)@${KRB_REALM}"
sudo kadmin -q "xst -k rm.service.keytab rm/$(hostname -f)@${KRB_REALM}"
sudo kadmin -q "xst -k nm.service.keytab nm/$(hostname -f)@${KRB_REALM}"

# mkdir -p ${KEYTAB_DIR}
# mv nn.service.keytab ${KEYTAB_DIR}
# mv dn.service.keytab ${KEYTAB_DIR}
# mv spnego.service.keytab ${KEYTAB_DIR}
# mv jhs.service.keytab ${KEYTAB_DIR}
# mv yarn.service.keytab ${KEYTAB_DIR}
# mv rm.service.keytab ${KEYTAB_DIR}
# mv nm.service.keytab ${KEYTAB_DIR}
# chmod 400 ${KEYTAB_DIR}/nn.service.keytab
# chmod 400 ${KEYTAB_DIR}/dn.service.keytab
# chmod 400 ${KEYTAB_DIR}/spnego.service.keytab
# chmod 400 ${KEYTAB_DIR}/jhs.service.keytab
# chmod 400 ${KEYTAB_DIR}/yarn.service.keytab
# chmod 400 ${KEYTAB_DIR}/rm.service.keytab
# chmod 400 ${KEYTAB_DIR}/nm.service.keytab

if [[ $1 == "-d" ]]; then
  while true; do sleep 1000; done
fi

if [[ $1 == "-bash" ]]; then
  /bin/bash
fi
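
Note that the docker-compose file below also starts a dedicated KDC container (sequenceiq/kerberos), while this entrypoint creates a second, independent Kerberos database inside the base container. If the external KDC is the intended one, the kdb5_util bootstrap above should be dropped and the principals created against that KDC instead, roughly as sketched below (that the image bootstraps ${KERBEROS_ADMIN} with ${KERBEROS_ADMIN_PASSWORD} is an assumption to verify):

# from the base container, create a principal on the remote KDC
kadmin -p ${KERBEROS_ADMIN}@${KRB_REALM} -w ${KERBEROS_ADMIN_PASSWORD} \
       -q "addprinc -randkey nn/$(hostname -f)@${KRB_REALM}"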

docker-compose.yml

version: "3"

networks:
  custom:
    driver: bridge
    ipam:
      driver: default
      config:
      - subnet: 172.22.0.0/16
        gateway: 172.22.0.1

services:
  kdc:
    networks:
      custom:
        ipv4_address: 172.22.0.2
    image: sequenceiq/kerberos
    hostname: kdc.kerberos.com
    environment:
      REALM: EXAMPLE.COM
      DOMAIN_REALM: kdc.kerberos.com
    volumes:
      - "./config_files/krb5.conf:/etc/krb5.conf"
      - "/dev/urandom:/dev/random"
      - "/etc/localtime:/etc/localtime:ro"

  base:
    networks:
      custom:
        ipv4_address: 172.22.0.3
    build: ./base
    container_name: base
    restart: always
    ports:
      - 9870:9870
      - 9000:9000
    depends_on: 
      - kdc
    hostname: hadoop
    domainname: docker.com
    tty: true
    extra_hosts:
      - "kdc.kerberos.com:172.22.0.2"
      - "kdc:172.22.0.2"
    environment:
      CLUSTER_NAME: test
      TZ: Europe/Paris
      KRB_REALM: EXAMPLE.COM
      DOMAIN_REALM: kdc.kerberos.com
      FQDN: hadoop.docker.com
    volumes:
      - "./config_files/krb5.conf:/etc/krb5.conf"
      - "/etc/localtime:/etc/localtime:ro"

Thanks a lot!
