[root@node01 hadoop-2.6.0-cdh5.14.0]# bin/hdfs dfs
Usage: hadoop fs [generic options]
        [-cat [-ignoreCrc] <src> ...]
        [-checksum <src> ...]
        [-chgrp [-R] GROUP PATH...]
        [-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
        [-chown [-R] [OWNER][:[GROUP]] PATH...]
        [-copyFromLocal [-f] [-p] [-l] <localsrc> ... <dst>]
        [-copyToLocal [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
        [-count [-q] [-h] [-v] [-x] <path> ...]
        [-cp [-f] [-p | -p[topax]] <src> ... <dst>]
        [-createSnapshot <snapshotDir> [<snapshotName>]]
        [-deleteSnapshot <snapshotDir> <snapshotName>]
        [-df [-h] [<path> ...]]
        [-du [-s] [-h] [-x] <path> ...]
        [-expunge]
        [-find <path> ... <expression> ...]
        [-get [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
        [-getfacl [-R] <path>]
        [-getfattr [-R] {-n name | -d} [-e en] <path>]
        [-getmerge [-nl] <src> <localdst>]
        [-help [cmd ...]]
        [-ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [<path> ...]]
        [-mkdir [-p] <path> ...]
        [-moveFromLocal <localsrc> ... <dst>]
        [-moveToLocal <src> <localdst>]
        [-mv <src> ... <dst>]
        [-put [-f] [-p] [-l] <localsrc> ... <dst>]
        [-renameSnapshot <snapshotDir> <oldName> <newName>]
        [-rm [-f] [-r|-R] [-skipTrash] <src> ...]
        [-rmdir [--ignore-fail-on-non-empty] <dir> ...]
        [-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
        [-setfattr {-n name [-v value] | -x name} <path>]
        [-setrep [-R] [-w] <rep> <path> ...]
        [-stat [format] <path> ...]
        [-tail [-f] <file>]
        [-test -[defsz] <path>]
        [-text [-ignoreCrc] <src> ...]
        [-touchz <path> ...]
        [-usage [cmd ...]]
hdfs dfs -chmod 666 /hello.txt
hdfs dfs -chown someuser:somegrp /hello.txt
-chgrp, -chmod, -chown: these work the same way as in the Linux file system, changing a file's group, permission bits, and owner respectively.
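Since -chmod accepts the same octal notation as Linux, mode 666 in the example above means rw-rw-rw-. As a quick local illustration of how each octal digit maps to three permission bits (the mode_to_rwx helper below is hypothetical, written only for this sketch, and is not part of any HDFS tooling):

```shell
# Hypothetical helper: render an octal mode such as 666 in rwx notation,
# to illustrate the Linux-style permission semantics that hdfs dfs -chmod reuses.
mode_to_rwx() {
  local mode=$1 bits="rwx" out="" digit i b
  for (( i = 0; i < ${#mode}; i++ )); do
    digit=${mode:$i:1}
    # each octal digit encodes read (4), write (2), execute (1)
    for (( b = 0; b < 3; b++ )); do
      if (( (digit >> (2 - b)) & 1 )); then
        out+=${bits:$b:1}
      else
        out+="-"
      fi
    done
  done
  printf '%s\n' "$out"
}

mode_to_rwx 666   # rw-rw-rw-
mode_to_rwx 755   # rwxr-xr-x
```

So `hdfs dfs -chmod 666 /hello.txt` grants read and write to owner, group, and others, with no execute bit anywhere.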
The replication factor set here (with -setrep) is only recorded in the NameNode's metadata; whether that many replicas actually exist depends on the number of DataNodes. With only 3 machines in the cluster there can be at most 3 replicas, and a requested factor of 10 is only reached once the cluster grows to 10 nodes.
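The cap described above boils down to a one-line rule: effective replicas = min(requested factor, live DataNodes), since HDFS places at most one replica of a block per DataNode. The effective_replicas function below is a hypothetical illustration of that rule, not part of HDFS:

```shell
# Illustrative only: the NameNode records the requested replication factor,
# but the replica count actually achieved is capped by the number of live DataNodes.
effective_replicas() {
  local requested=$1 live_datanodes=$2
  echo $(( requested < live_datanodes ? requested : live_datanodes ))
}

effective_replicas 10 3    # 3  -> with only 3 DataNodes, only 3 replicas exist
effective_replicas 10 10   # 10 -> reached once the cluster grows to 10 nodes
```

On a real cluster you would request the factor with `hdfs dfs -setrep 10 /hello.txt` and inspect actual block placement with `hdfs fsck /hello.txt -files -blocks`.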