Overview:
Use the WordCount example to benchmark Hadoop, analyzing and evaluating performance as the size of the data set being counted grows.
Version (bin/hadoop version):
Hadoop 2.3.0-cdh5.0.0
Test steps:
1. Use randomtextwriter to generate a test data set of the required size (a sketch of the command follows).
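A minimal sketch of this generation step, using the randomtextwriter program from the same examples jar. The total-bytes property name is the one used by the Hadoop 2.x example (mapreduce.randomtextwriter.totalbytes), the 50 GB figure and the input path are assumptions chosen to match the wordcount input used in step 2:

# Sketch: generate roughly 50 GB of random text as wordcount input
bin/hadoop jar share/hadoop/mapreduce2/hadoop-mapreduce-examples-2.3.0-cdh5.0.0.jar randomtextwriter \
  -D mapreduce.randomtextwriter.totalbytes=53687091200 \
  /home/test/mrinput50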
2. Run wordcount:
nohup bin/hadoop jar share/hadoop/mapreduce2/hadoop-mapreduce-examples-2.3.0-cdh5.0.0.jar wordcount /home/test/mrinput50 /home/test/mroutput50 > wc.log 2>&1 &
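Because the job is launched in the background with nohup, progress can be followed from the redirected log and from the cluster's job list; the commands below are just one way to monitor it:

# Follow the wordcount job's console output as it runs
tail -f wc.log

# List running MapReduce jobs (the job id is also needed later to query counters)
bin/mapred job -list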
3. Metrics evaluated (taken from the job's counter summary):
Total time spent by all maps in occupied slots (ms)=1504892
Total time spent by all reduces in occupied slots (ms)=84038
Total time spent by all map tasks (ms)=1504892
Total time spent by all reduce tasks (ms)=84038
GC time elapsed (ms)=17285
CPU time spent (ms)=1812107
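These values come from the counter summary that the job prints on completion, which ends up in wc.log. A minimal sketch of collecting them, assuming the log name from step 2 and an illustrative job id:

# Pull the timing counters out of the captured job log
grep -iE "time spent|gc time|cpu time" wc.log

# A single counter can also be queried by job id (job id shown here is illustrative)
bin/mapred job -counter job_1400000000000_0001 org.apache.hadoop.mapreduce.TaskCounter CPU_MILLISECONDS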