1. Copy the new ld-2.xx.so and libc-2.xx.so into the /lib directory together, and make sure both files have execute permission.
2. Delete the symlinks that point at the old version: rm ld-linux-aarch64.so.1 libc.so.6
3. At this point no dynamically linked command works. Immediately restore the ld-linux and libc symlinks, pointing them at the new version:
LD_PRELOAD="/lib/libc-2.26.so /lib/ld-2.26.so" /bin/ln -s /lib/ld-2.26.so /lib/ld-linux-aarch64.so.1
LD_PRELOAD="/lib/libc-2.26.so /lib/ld-2.26.so" /bin/ln -s /lib/libc-2.26.so /lib/libc.so.6
Most busybox commands now work again, but programs such as dropbear and dmesg are still partly broken, because the companion libraries libnss_dns, libnss_files, libpthread, libresolv, libdl, libanl, libcrypt, libm, libnsl, librt and libutil must be upgraded in step with libc.
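The same re-linking pattern applies to each companion library. A dry-run sketch follows: it only prints the commands rather than running them, and the 2.26 file names and per-library soname suffixes are assumptions to verify against the actual contents of /lib first.

```shell
# Print (not run) the re-link command for each companion library.
# File names assume glibc 2.26; the soname suffix after ':' varies per library.
for pair in nss_dns:2 nss_files:2 pthread:0 resolv:2 dl:2 anl:1 \
            crypt:1 m:6 nsl:1 rt:1 util:1; do
  name=${pair%:*}; so=${pair#*:}
  echo "LD_PRELOAD='/lib/libc-2.26.so /lib/ld-2.26.so'" \
       "/bin/ln -sf /lib/lib${name}-2.26.so /lib/lib${name}.so.${so}"
done
```

Once the printed commands look right for your library set, run them with the same LD_PRELOAD trick used for libc above.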
I. The web UI is still unreachable after starting the NameNode
1. Start the services.
Command: start-all.sh
'''
[root@hadoop1 hadoop-3.2.1]# start-all.sh
Starting namenodes on [hadoop1]
Starting datanodes
Starting secondary namenodes [hadoop1]
Starting resourcemanager
Starting nodemanagers
ERROR: Refusing to run as root: roo account is not found. Aborting.
'''
2. Check whether the NameNode process is running:
'''
[root@hadoop1 hadoop-3.2.1]# jps
8130 Jps
7494 ResourceManager
6871 NameNode
7244 SecondaryNameNode
'''
3. Check which ports the NameNode process is listening on:
'''
[root@hadoop1 hadoop-3.2.1]# netstat -nltp |grep 6871
tcp 0 0 192.168.43.250:9000 0.0.0.0:* LISTEN 6871/java
tcp 0 0 0.0.0.0:9870 0.0.0.0:* LISTEN 6871/java
'''
4. Try the web UI: the page turns out to be unreachable.
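One quick way to test the UI from another machine is a curl probe; this is a hedged sketch, with the host and port taken from the netstat output above:

```shell
# Probe the NameNode web UI port; a firewalled host typically
# produces "no route to host" or a connect timeout.
curl -sS --connect-timeout 5 http://192.168.43.250:9870/ >/dev/null 2>&1 \
  && echo reachable || echo unreachable
```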
5. Check the firewall settings. The iptables rules on hadoop1 reject access from every host except the machine itself:
[root@hadoop1 hadoop-3.2.1]# service iptables status
Table: filter
Chain INPUT (policy ACCEPT)
num target prot opt source destination
1 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
2 ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0
3 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
4 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22
5 REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT)
num target prot opt source destination
1 REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
Chain OUTPUT (policy ACCEPT)
num target prot opt source destination
6. Stop the firewall and disable it at boot.
CentOS 6:
Stop the firewall: service iptables stop
Disable it at boot: chkconfig iptables off
CentOS 7:
Stop the firewall: systemctl stop firewalld.service
Disable it at boot: systemctl disable firewalld.service
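The two cases can be folded into one sketch that picks the right pair of commands by probing for systemctl; this is a dry run that only prints the commands, to be run as root on the real host:

```shell
# Print the firewall-disable commands appropriate to this init system.
if command -v systemctl >/dev/null 2>&1; then
  echo "systemctl stop firewalld.service && systemctl disable firewalld.service"
else
  echo "service iptables stop && chkconfig iptables off"
fi
```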
7. Confirm the web UI now loads correctly.
8. If the page is still unreachable after the steps above, check whether the host's hosts file maps the hostname to the right address.
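For reference, a minimal /etc/hosts mapping for this cluster might look like the following; hadoop1's address comes from the netstat output above, and the remaining entries are placeholders to fill in:

```
# /etc/hosts on every node: hostname -> IP mapping
192.168.43.250  hadoop1
# 192.168.43.x  hadoop2   <- replace with the real address of hadoop2
```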
II. A warning appears when starting the datanode
[root@hadoop1 hadoop-3.2.1]# hadoop-daemon.sh start datanode
WARNING: Use of this script to start HDFS daemons is deprecated.
WARNING: Attempting to execute replacement "hdfs --daemon start" instead.
The hadoop-daemon.sh script is deprecated: newer Hadoop releases (the 3.x line) drive the daemons through the hdfs command instead, so use:
Command: hdfs --daemon start datanode
'''
[root@hadoop2 hadoop-3.2.1]# hdfs --daemon start datanode
[root@hadoop2 hadoop-3.2.1]# jps
4064 Jps
4033 DataNode
2922 ResourceManager
'''
III. Errors when Hadoop is configured and started as root
Error:
Starting namenodes on [master]
ERROR: Attempting to operate on hdfs namenode as root
ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation.
Starting datanodes
ERROR: Attempting to operate on hdfs datanode as root
ERROR: but there is no HDFS_DATANODE_USER defined. Aborting operation.
Starting secondary namenodes [slave1]
ERROR: Attempting to operate on hdfs secondarynamenode as root
ERROR: but there is no HDFS_SECONDARYNAMENODE_USER defined. Aborting operation.
Cause: start-dfs.sh and stop-dfs.sh refuse to operate HDFS daemons as root unless the corresponding *_USER variables are defined. Add the following at the top of both scripts:
HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs   # later versions rename this to HDFS_DATANODE_SECURE_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
start-yarn.sh and stop-yarn.sh also need the following at the top:
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn   # later versions rename this to HDFS_DATANODE_SECURE_USER=hdfs
YARN_NODEMANAGER_USER=root
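An alternative sketch, a common convention rather than something from these notes, is to export the same user variables once in $HADOOP_HOME/etc/hadoop/hadoop-env.sh instead of patching every start/stop script; the root values mirror those above:

```shell
# Exports that make start-dfs.sh / start-yarn.sh accept running as root
# when placed in hadoop-env.sh (sourced by the Hadoop shell scripts).
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
```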
IV. A WARN message appears when running hdfs commands:
2019-11-13 00:07:58,517 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
This warning is caused by the native dependency library. Check whether the shared library's dependencies all resolve:
Command: ldd libhadoop.so.1.0.0
'''
./libhadoop.so.1.0.0: /lib64/libc.so.6: version `GLIBC_2.14' not found (required by ./libhadoop.so.1.0.0)
linux-vdso.so.1 => (0x00007fff369ff000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007f3caa7ea000)
libc.so.6 => /lib64/libc.so.6 (0x00007f3caa455000)
/lib64/ld-linux-x86-64.so.2 (0x00007f3caac1b000)
'''
So this is a glibc version problem: GLIBC_2.14 cannot be found. To confirm, check the system's glibc version with ldd --version:
'''
ldd --version
ldd (GNU libc) 2.12
Copyright (C) 2010 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Written by Roland McGrath and Ulrich Drepper.
'''
You can also list the version tags the installed glibc supports directly:
'''
strings /lib64/libc.so.6|grep GLIBC
GLIBC_2.2.5
GLIBC_2.2.6
GLIBC_2.3
GLIBC_2.3.2
GLIBC_2.3.3
GLIBC_2.3.4
GLIBC_2.4
GLIBC_2.5
GLIBC_2.6
GLIBC_2.7
GLIBC_2.8
GLIBC_2.9
GLIBC_2.10
GLIBC_2.11
GLIBC_2.12
GLIBC_PRIVATE
'''
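The same strings probe can be scripted for the specific version libhadoop needs; the libc path below matches the ldd output above and may differ on other distributions:

```shell
# Report whether the installed libc exports the GLIBC_2.14 version tag.
if strings /lib64/libc.so.6 2>/dev/null | grep -qx 'GLIBC_2.14'; then
  echo "GLIBC_2.14 available"
else
  echo "GLIBC_2.14 missing"
fi
```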
The installed glibc only supports up to GLIBC_2.12, while libhadoop.so needs GLIBC_2.14.
There are two solutions:
1. Upgrade the glibc library.
2. Suppress the warning in Hadoop's logging.
For option 2, remove the warning via log4j: add the following line to the $HADOOP_HOME/etc/hadoop/log4j.properties file:
log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR