./hadoop fs
1. List the contents of a directory: hadoop fs -ls [path]
[root@cdh01 tmp]# hadoop fs -ls -h /tmp
Found 2 items
drwxrwxrwx - hdfs supergroup 0 2016-01-21 10:24 /tmp/.cloudera_health_monitoring_canary_files
drwx-wx-wx - hive supergroup 0 2016-01-21 10:02 /tmp/hive
[root@cdh01 tmp]# hadoop fs -ls -h /
Found 2 items
drwxrwxrwx - hdfs supergroup 0 2016-01-21 10:02 /tmp
drwxrwxr-x - hdfs supergroup 0 2016-01-21 10:01 /user
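A side note, not from the original session: -ls also takes -R to list a tree recursively; a minimal sketch against the same /tmp as above:
hadoop fs -ls -R /tmp   # recurse into subdirectories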
2. Upload a local directory to HDFS: hadoop fs -put [local dir] [HDFS dir]
[root@cdh01 /]# mkdir test_put_dir #create the directory
[root@cdh01 /]# chown hdfs:hadoop test_put_dir #hand the directory over to the hadoop user
[root@cdh01 /]# su hdfs #switch to the hdfs user
[hdfs@cdh01 /]$ ls
bin boot dev dfs dfs_bak etc home lib lib64 lost+found media misc mnt net opt proc root sbin selinux srv sys test_put_dir tmp usr var wawa.txt wbwb.txt wyp.txt
[hdfs@cdh01 /]$ hadoop fs -put test_put_dir /
[hdfs@cdh01 /]$ hadoop fs -ls /
Found 4 items
drwxr-xr-x - hdfs supergroup 0 2016-01-21 11:07 /hff
drwxr-xr-x - hdfs supergroup 0 2016-01-21 15:25 /test_put_dir
drwxrwxrwt - hdfs supergroup 0 2016-01-21 10:39 /tmp
drwxr-xr-x - hdfs supergroup 0 2016-01-21 10:39 /user
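Another note: -put accepts several local sources in one call; a sketch reusing two files from the local listing above:
hadoop fs -put wawa.txt wbwb.txt /test_put_dir   # upload two files at once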
3. Create a new directory on HDFS: hadoop fs -mkdir [path]
[root@cdh01 /]# su hdfs
[hdfs@cdh01 /]$ hadoop fs -mkdir /hff
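By default -mkdir fails when the parent directory does not exist; -p creates missing parents. A sketch (the nested path is made up for illustration):
hadoop fs -mkdir -p /hff/2016/01/21   # also creates /hff/2016 and /hff/2016/01 along the way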
4. Create an empty file on HDFS with the touchz command:
[hdfs@cdh01 /]$ hadoop fs -touchz /test_put_dir/test_new_file.txt
[hdfs@cdh01 /]$ hadoop fs -ls /test_put_dir
Found 1 items
-rw-r--r-- 3 hdfs supergroup 0 2016-01-21 15:29 /test_put_dir/test_new_file.txt
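Note that -touchz errors out if the target already exists with non-zero length, and it accepts several paths at once; a hedged sketch with made-up names:
hadoop fs -touchz /test_put_dir/a.txt /test_put_dir/b.txt   # create two empty files in one call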
5. Upload a local file to HDFS: hadoop fs -put [local file] [HDFS dir]
[hdfs@cdh01 /]$ hadoop fs -put wyp.txt /hff #destination given as a plain path
[hdfs@cdh01 /]$ hadoop fs -put wyp.txt hdfs: #destination given as a server URI
Note: wyp.txt sits in the local root directory /, which looks like:
bin dfs_bak lib64 mnt root sys var
boot etc lost+found net sbin test_put_dir wawa2.txt
dev home media opt selinux tmp wbwb.txt
dfs lib misc proc srv usr wyp.txt
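Re-running -put against an existing destination fails with a "File exists" error; recent 2.x releases accept -f to overwrite (check your version). A sketch:
hadoop fs -put -f wyp.txt /hff   # overwrite /hff/wyp.txt if it is already there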
6. Print an existing file: hadoop fs -cat [file_path]
[hdfs@cdh01 /]$ hadoop fs -cat /hff/wawa.txt
1 張三 男 135
2 劉麗 女 235
3 王五 男 335
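For big files, dumping everything with -cat is impractical; the output can be piped into local tools, and -tail prints the last kilobyte. A sketch on the same file:
hadoop fs -cat /hff/wawa.txt | head -n 2   # first two lines only
hadoop fs -tail /hff/wawa.txt              # last 1 KB of the file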
7. Rename a file on HDFS: hadoop fs -mv [old name] [new name]
[hdfs@cdh01 /]$ hadoop fs -mv /tmp /tmp_bak #renaming a directory works too
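-mv also moves a file into another directory (within the same HDFS instance); a sketch with illustrative paths:
hadoop fs -mv /hff/wyp.txt /tmp/wyp.txt   # move rather than rename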
8. Download a file from HDFS into an existing local directory: hadoop fs -get [HDFS path] [local dir]
[hdfs@cdh01 /]$ hadoop fs -get /hff/wawa.txt /test_put_dir
[hdfs@cdh01 /]$ ls -l /test_put_dir/
total 4
-rw-r--r-- 1 hdfs hdfs 42 Jan 21 15:39 wawa.txt
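-copyToLocal does the same job and only accepts a local destination; a sketch:
hadoop fs -copyToLocal /hff/wawa.txt /test_put_dir   # equivalent to -get here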
9. Delete a file on HDFS: hadoop fs -rm [file path]
[hdfs@cdh01 /]$ hadoop fs -ls /test_put_dir/
Found 2 items
-rw-r--r-- 3 hdfs supergroup 0 2016-01-21 15:41 /test_put_dir/new2.txt
-rw-r--r-- 3 hdfs supergroup 0 2016-01-21 15:29 /test_put_dir/test_new_file.txt
[hdfs@cdh01 /]$ hadoop fs -rm /test_put_dir/new2.txt
16/01/21 15:42:24 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 1440 minutes, Emptier interval = 0 minutes.
Moved: 'hdfs:' to trash at: hdfs:
[hdfs@cdh01 /]$ hadoop fs -ls /test_put_dir/
Found 1 items
-rw-r--r-- 3 hdfs supergroup 0 2016-01-21 15:29 /test_put_dir/test_new_file.txt
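As the log above shows, -rm moves files to the trash (kept for 1440 minutes on this cluster) rather than deleting them outright; -skipTrash bypasses it. A sketch (irreversible, use with care):
hadoop fs -rm -skipTrash /test_put_dir/test_new_file.txt   # deleted immediately, no trash copy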
10. Delete a directory on HDFS, including its subdirectories: hadoop fs -rm -r [path]
[hdfs@cdh01 /]$ hadoop fs -rmr /test_put_dir
16/01/21 15:50:59 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 1440 minutes, Emptier interval = 0 minutes.
Moved: 'hdfs:' to trash at: hdfs:
[hdfs@cdh01 /]$ hadoop fs -ls /
Found 3 items
drwxr-xr-x - hdfs supergroup 0 2016-01-21 11:07 /hff
drwxrwxrwt - hdfs supergroup 0 2016-01-21 10:39 /tmp
drwxr-xr-x - hdfs supergroup 0 2016-01-21 15:42 /user
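The -rmr used in the transcript is the old shorthand; on Hadoop 2.x it still works but prints a deprecation warning. The current spelling matches the heading:
hadoop fs -rm -r /test_put_dir   # preferred form of the same delete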
11. Merge everything under an HDFS directory into one file and download it to the local filesystem:
hadoop dfs -getmerge /user /home/t
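The hadoop dfs entry point is deprecated in favor of hadoop fs (or hdfs dfs). A sketch of the same merge in the current form; -nl, where supported, appends a newline after each merged file:
hadoop fs -getmerge -nl /user /home/t   # merge everything under /user into local /home/t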
12. Kill a running Hadoop job:
hadoop job -kill [job-id]
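The job id can be looked up with hadoop job -list; on YARN clusters the application-level equivalent is yarn application -kill. A sketch with placeholder ids:
hadoop job -list                                        # look up the running job's id
hadoop job -kill job_201601211025_0001                  # placeholder id
yarn application -kill application_1453343047577_0001   # YARN equivalent, placeholder id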