I changed the HDFS port: fs.defaultFS was hdfs://master:8020 and I changed it to hdfs://master:9000.
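To rule out a stale client config, I can read the effective value back with hdfs getconf (assuming HADOOP_CONF_DIR on that node points at the updated core-site.xml, this should print hdfs://master:9000):

# hdfs getconf -confKey fs.defaultFS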
After that, submitting my Spark job started failing:
org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:java.net.ConnectException: Call From slave203/10.10.22.203 to master123:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused);
Running hive directly fails the same way:
#hive
log4j:WARN No such property [maxFileSize] in org.apache.log4j.DailyRollingFileAppender.
Logging initialized using configuration in file:/etc/hive/2.6.5.0-292/0/hive-log4j.properties
hive> show tables;
FAILED: SemanticException MetaException(message:java.net.ConnectException: Call From slave203/10.10.22.203 to master123:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused)
I'm fairly sure it's Hive that can't connect, but I never configured anything HDFS-related in Hive itself, so where is the old 8020 address coming from, and how do I fix this?
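My current guess: even without any explicit HDFS settings, Hive picks up fs.defaultFS from the core-site.xml on its classpath, and the metastore stores each database and table location as an absolute hdfs:// URI, so the old port survives the config change. If that's right, would the fix be to rewrite the stored locations with Hive's metatool? A sketch of what I have in mind (URIs taken from my change above; -listFSRoot should show the exact recorded value, e.g. whether it's master or master123; I'd back up the metastore database before running -updateLocation):

# hive --service metatool -listFSRoot
# hive --service metatool -updateLocation hdfs://master:9000 hdfs://master:8020

As far as I know, -updateLocation takes the new URI first and the old one second, and it also accepts -dryRun to preview what would change.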