When running wordcount on Hadoop 3.2.0, the following error appeared:

[root@server1 jars]# hadoop jar bigdata.jar demo.wordcountmain /data/txt/math.txt /out
2020-03-09 23:14:00,216 INFO client.RMProxy: Connecting to ResourceManager at server1/172.16.115.189:8032
2020-03-09 23:14:00,861 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/root/.staging/job_1583809329756_0001
2020-03-09 23:14:01,996 INFO input.FileInputFormat: Total input files to process : 1
2020-03-09 23:14:02,529 INFO mapreduce.JobSubmitter: number of splits:1
2020-03-09 23:14:02,672 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
2020-03-09 23:14:03,215 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1583809329756_0001
2020-03-09 23:14:03,216 INFO mapreduce.JobSubmitter: Executing with tokens:
2020-03-09 23:14:03,446 INFO conf.Configuration: resource-types.xml not found
2020-03-09 23:14:03,446 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2020-03-09 23:14:04,151 INFO mapreduce.Job: Running job: job_1583809329756_0001
2020-03-09 23:14:17,547 INFO mapreduce.Job: Job job_1583809329756_0001 running in uber mode : false
2020-03-09 23:14:17,549 INFO mapreduce.Job:  map 0% reduce 0%
2020-03-09 23:14:17,596 INFO mapreduce.Job: Job job_1583809329756_0001 failed with state FAILED due to:
2020-03-09 23:14:17,642 INFO mapreduce.Job: Counters: 0
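Note that the "due to:" line above is empty, so the client output alone does not say why the job failed. If YARN log aggregation is enabled, the ApplicationMaster's own stderr can be pulled with yarn logs; the application ID here is simply the job ID with the job_ prefix replaced by application_:

[root@server1 jars]# yarn logs -applicationId application_1583809329756_0001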
After a careful look, the real problem is that the main class could not be found. The cause was that I had left parameters out of mapred-site.xml when configuring it; the XML parameters still differ considerably between Hadoop 2.x and Hadoop 3.x. The corrected mapred-site.xml is:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <!-- fill in your own master host here -->
    <value>server1:49001</value>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/root/hadoop/var</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.env</name>
    <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
  </property>
  <property>
    <name>mapreduce.map.env</name>
    <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
  </property>
  <property>
    <name>mapreduce.reduce.env</name>
    <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
  </property>
  <property>
    <name>mapreduce.application.classpath</name>
    <value>
      ${HADOOP_HOME}/etc/hadoop,
      ${HADOOP_HOME}/share/hadoop/common/*,
      ${HADOOP_HOME}/share/hadoop/common/lib/*,
      ${HADOOP_HOME}/share/hadoop/hdfs/*,
      ${HADOOP_HOME}/share/hadoop/hdfs/lib/*,
      ${HADOOP_HOME}/share/hadoop/mapreduce/*,
      ${HADOOP_HOME}/share/hadoop/mapreduce/lib/*,
      ${HADOOP_HOME}/share/hadoop/yarn/*,
      ${HADOOP_HOME}/share/hadoop/yarn/lib/*
    </value>
  </property>
</configuration>
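After changing mapred-site.xml, a quick sanity check is worthwhile before resubmitting the job. The sketch below (the second host server2 is purely illustrative; substitute your actual worker nodes) verifies that the MapReduce jars now appear on the Hadoop classpath, pushes the file to the other nodes, and restarts YARN so the change takes effect:

[root@server1 ~]# hadoop classpath | tr ':' '\n' | grep mapreduce    # the share/hadoop/mapreduce jars should be listed
[root@server1 ~]# scp $HADOOP_HOME/etc/hadoop/mapred-site.xml server2:$HADOOP_HOME/etc/hadoop/
[root@server1 ~]# stop-yarn.sh && start-yarn.sh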
You also need to make sure that the environment variables referenced in the configuration above are actually set. My Hadoop environment variables are:

export HADOOP_HOME=/opt/hadoop/hadoop-3.2.0
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin

In addition, the hostnames of all nodes must be added to the $HADOOP_HOME/etc/hadoop/workers file.
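These exports only help if they are loaded by the shell that launches the Hadoop daemons. As a minimal sketch, assuming the exports above were appended to /etc/profile, and with server2 and server3 as illustrative worker hostnames, reload the profile, confirm the installation resolves, and list every node in the workers file:

[root@server1 ~]# source /etc/profile
[root@server1 ~]# hadoop version    # should report Hadoop 3.2.0 if HADOOP_HOME and PATH are correct
[root@server1 ~]# cat $HADOOP_HOME/etc/hadoop/workers
server1
server2
server3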