Note: this simulation really amounts to having Flume write its log output into HDFS, and then pointing a Hive external table at the corresponding HDFS path.
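The flow being simulated, end to end (names match the config below):
tail -f log.log -> exec source (r1) -> memory channel (c1) -> HDFS sink (k1) -> hdfs://hdp39:8020/tmp/flumetest -> Hive external table flume1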
# name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -f /usr/hdp/2.5.3.0-37/flume/conf/demo/test/log.log
# describe the sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://hdp39:8020/tmp/flumetest
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.writeFormat = Text
a1.sinks.k1.hdfs.rollSize = 10240
a1.sinks.k1.hdfs.idleTimeout = 60
# use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
flume-ng agent -n a1 -c ../conf -f exec2hive -Dflume.root.logger=DEBUG,console
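Once the agent is up, a quick sanity check is to list the sink path on HDFS and confirm that files are being created (by default the HDFS sink names them FlumeData.<timestamp>, with a .tmp suffix while a file is still open):
hdfs dfs -ls /tmp/flumetest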
Append data to log.log to simulate log generation:
echo hello,flume! >> log.log
echo hello,flume!! >> log.log
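Before wiring up Hive, you can read the events straight off HDFS to confirm the appends arrived (the FlumeData.* glob assumes the sink's default file prefix):
hdfs dfs -cat /tmp/flumetest/FlumeData.*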
Create the Hive external table (the log lines contain no tab character, so each whole line lands in the single info column):
create external table flume1(info string)
row format delimited
fields terminated by '\t'
location '/tmp/flumetest/';
Query Hive:
hive> select * from flume1;
OK
hello,flume!
hello,flume!!
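Since flume1 is an external table over the directory the sink writes to, further appends show up on the next query with no reload step; for example (hello,again is just an illustrative line):
echo hello,again >> log.log
hive> select * from flume1;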