The map-reduce process starts with a client submitting a job.
Job submission is done mainly through the static method JobClient.runJob(JobConf):
public static RunningJob runJob(JobConf job) throws IOException { ... }
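For context, here is a minimal driver sketch of how a client typically reaches runJob with the old org.apache.hadoop.mapred API. The class name SubmitJobExample and the use of the stock identity mapper and reducer are illustrative assumptions, not part of the original walkthrough.

import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.IdentityMapper;
import org.apache.hadoop.mapred.lib.IdentityReducer;

public class SubmitJobExample {
  public static void main(String[] args) throws IOException {
    JobConf conf = new JobConf(SubmitJobExample.class);
    conf.setJobName("identity-example");
    // the default TextInputFormat produces LongWritable offsets and Text lines,
    // which the identity mapper and reducer pass through unchanged
    conf.setOutputKeyClass(LongWritable.class);
    conf.setOutputValueClass(Text.class);
    conf.setMapperClass(IdentityMapper.class);
    conf.setReducerClass(IdentityReducer.class);
    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));
    // submits the job and blocks until it completes, printing progress as it goes
    JobClient.runJob(conf);
  }
}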
On the JobTracker side, one TaskInProgress object is created for each reduce task:
// create the reduce tasks
this.reduces = new TaskInProgress[numReduceTasks];
for (int i = 0; i < numReduceTasks; i++) {
  // one TaskInProgress per reduce task
  ...
}
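Continuing the driver sketch above, the numReduceTasks used here comes straight from the job configuration; a one-line sketch of how a client sets it (the value 4 is arbitrary):

// equivalent to setting the mapred.reduce.tasks property
conf.setNumReduceTasks(4);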
On the map side, when the in-memory output buffer is spilled and a combiner has been configured, the combiner is applied to the buffered records before each spill file is written (a single record too large for the buffer raises MapBufferTooSmallException and is spilled on its own):
combineCollector.setWriter(writer);
combineAndSpill(kvIter, combineInputCounter);
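The combiner that combineAndSpill runs is simply a Reducer registered by the client. Continuing the driver sketch, a small example using the stock LongSumReducer from org.apache.hadoop.mapred.lib as a stand-in combiner class (it sums LongWritable values, so it only fits jobs whose map output values are LongWritable):

// the class registered here is what combineAndSpill applies to each spill's records
conf.setCombinerClass(org.apache.hadoop.mapred.lib.LongSumReducer.class);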
ReduceTask's run method is shown below; two short client-side sketches (the user's Reducer and the copy/sort settings) follow it:
public void run(JobConf job, final TaskUmbilicalProtocol umbilical)
    throws IOException {
  job.setBoolean("mapred.skip.on", isSkipping());

  // a reduce task goes through three phases: copy, sort, reduce
  if (isMapOrReduce()) {
    copyPhase = getProgress().addPhase("copy");
    sortPhase = getProgress().addPhase("sort");
    reducePhase = getProgress().addPhase("reduce");
  }
  startCommunicationThread(umbilical);
  final Reporter reporter = getReporter(umbilical);
  initialize(job, reporter);

  // copy phase: ReduceCopier.fetchOutputs() fetches the map outputs; it starts
  // several MapOutputCopier threads, whose copyOutput() does the actual copying
  boolean isLocal = "local".equals(job.get("mapred.job.tracker", "local"));
  if (!isLocal) {
    reduceCopier = new ReduceCopier(umbilical, job);
    if (!reduceCopier.fetchOutputs()) {
      throw new IOException(getTaskID() + ": the reduce copier failed");
    }
  }
  copyPhase.complete();

  // sort phase: merge the fetched map outputs until the number of files drops
  // below io.sort.factor, and return an iterator over the sorted key/value pairs
  setPhase(TaskStatus.Phase.SORT);
  statusUpdate(umbilical);
  final FileSystem rfs = FileSystem.getLocal(job).getRaw();
  RawKeyValueIterator rIter = isLocal
      ? Merger.merge(job, rfs, job.getMapOutputKeyClass(),
          job.getMapOutputValueClass(), codec, getMapFiles(rfs, true),
          !conf.getKeepFailedTaskFiles(), job.getInt("io.sort.factor", 100),
          new Path(getTaskID().toString()), job.getOutputKeyComparator(),
          reporter)
      : reduceCopier.createKVIterator(job, rfs, reporter);
  mapOutputFilesOnDisk.clear();
  sortPhase.complete();

  // reduce phase
  setPhase(TaskStatus.Phase.REDUCE);
  Reducer reducer = ReflectionUtils.newInstance(job.getReducerClass(), job);
  Class keyClass = job.getMapOutputKeyClass();
  Class valClass = job.getMapOutputValueClass();
  ReduceValuesIterator values = isSkipping() ?
      new SkippingReduceValuesIterator(rIter,
          job.getOutputValueGroupingComparator(), keyClass, valClass,
          job, reporter, umbilical) :
      new ReduceValuesIterator(rIter,
          job.getOutputValueGroupingComparator(), keyClass, valClass,
          job, reporter);
  // read out each key with its list of values and hand them to the Reducer's reduce()
  while (values.more()) {
    reduceInputKeyCounter.increment(1);
    reducer.reduce(values.getKey(), values, collector, reporter);
    values.nextKey();
    values.informReduceProgress();
  }
  reducer.close();
  out.close(reporter);
  done(umbilical);
}
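To connect the loop above with user code: reducer.reduce(values.getKey(), values, collector, reporter) calls into an old-API Reducer such as the following sketch; the class name and the summing logic are illustrative only.

import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class SumReducer extends MapReduceBase
    implements Reducer<Text, IntWritable, Text, IntWritable> {
  public void reduce(Text key, Iterator<IntWritable> values,
                     OutputCollector<Text, IntWritable> output,
                     Reporter reporter) throws IOException {
    // the framework hands in one key together with an iterator over all of its values
    int sum = 0;
    while (values.hasNext()) {
      sum += values.next().get();
    }
    output.collect(key, new IntWritable(sum));
  }
}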
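The copy and sort phases can also be tuned from the job configuration; a hedged sketch (the values are arbitrary, and mapred.reduce.parallel.copies is assumed to be the property that sizes the MapOutputCopier thread pool in this generation of Hadoop):

// number of parallel MapOutputCopier fetch threads used in the copy phase
conf.setInt("mapred.reduce.parallel.copies", 10);
// maximum number of streams merged at once in the sort phase (the io.sort.factor above)
conf.setInt("io.sort.factor", 100);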
The overall map-reduce process is summarized in the figure below: