Install the JDK, install ZooKeeper, and make sure the ZooKeeper service is running before you start.
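A quick way to confirm ZooKeeper is up on each node (the zkServer.sh path below depends on where ZooKeeper was installed and is only an assumption):
jps | grep QuorumPeerMain
/export/servers/zookeeper/bin/zkServer.sh status   # adjust the path to your ZooKeeper install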
cd /export/softwares
wget
tar -zxvf kafka_2.11-1.0.0.tgz -C /export/servers/
On node01, run the following commands to create the data directory, then enter the Kafka configuration directory and edit the configuration file:
mkdir -p /export/servers/kafka_2.11-1.0.0/logs
cd /export/servers/kafka_2.11-1.0.0/config
vim server.properties
# Unique id of this broker within the cluster (node02 uses 1, node03 uses 2)
broker.id=0
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
# Directory for Kafka's data (log segments), created above
log.dirs=/export/servers/kafka_2.11-1.0.0/logs
num.partitions=2
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.flush.interval.messages=10000
log.flush.interval.ms=1000
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
# ZooKeeper ensemble that manages the cluster
zookeeper.connect=node01:2181,node02:2181,node03:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
# Allow topics to be deleted via the delete-topic command
delete.topic.enable=true
# Hostname of this broker (node02 and node03 use their own hostnames)
host.name=node01
On node01, run the following commands to copy the Kafka installation directory to node02 and node03:
cd /export/servers/
scp -r kafka_2.11-1.0.0/ node02:$PWD
scp -r kafka_2.11-1.0.0/ node03:$PWD
Modify the Kafka configuration file on node02 and node03
On node02, edit the Kafka configuration file with the following commands:
cd /export/servers/kafka_2.11-1.0.0/config
vim server.properties
broker.id=1
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/export/servers/kafka_2.11-1.0.0/logs
num.partitions=2
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.flush.interval.messages=10000
log.flush.interval.ms=1000
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=node01:2181,node02:2181,node03:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
delete.topic.enable=true
host.name=node02
On node03, edit the Kafka configuration file with the following commands:
cd /export/servers/kafka_2.11-1.0.0/config
vim server.properties
broker.id=2
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/export/servers/kafka_2.11-1.0.0/logs
num.partitions=2
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.flush.interval.messages=10000
log.flush.interval.ms=1000
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=node01:2181,node02:2181,node03:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
delete.topic.enable=true
host.name=node03
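The only two values that differ between nodes are broker.id and host.name. As a sketch of a shortcut (assuming passwordless ssh from node01 and the same install path on every node), the two settings could also be patched with sed instead of editing each file by hand:
ssh node02 "sed -i 's/^broker.id=0/broker.id=1/; s/^host.name=node01/host.name=node02/' /export/servers/kafka_2.11-1.0.0/config/server.properties"
ssh node03 "sed -i 's/^broker.id=0/broker.id=2/; s/^host.name=node01/host.name=node03/' /export/servers/kafka_2.11-1.0.0/config/server.properties"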
Note: ZooKeeper must be running before Kafka is started.
On node01, run the following commands to start the Kafka process in the background:
cd /export/servers/kafka_2.11-1.0.0
nohup bin/kafka-server-start.sh config/server.properties 2>&1 &
On node02, run the following commands to start the Kafka process in the background:
cd /export/servers/kafka_2.11-1.0.0
nohup bin/kafka-server-start.sh config/server.properties 2>&1 &
On node03, run the following commands to start the Kafka process in the background:
cd /export/servers/kafka_2.11-1.0.0
nohup bin/kafka-server-start.sh config/server.properties 2>&1 &
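Once all three brokers are up, you can verify the cluster from any node; the topic name test below is just an example:
jps | grep Kafka
cd /export/servers/kafka_2.11-1.0.0
bin/kafka-topics.sh --zookeeper node01:2181 --create --topic test --partitions 3 --replication-factor 3
bin/kafka-topics.sh --zookeeper node01:2181 --list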
To stop the Kafka cluster, run the following commands on all three machines:
cd /export/servers/kafka_2.11-1.0.0
bin/kafka-server-stop.sh
Create a kf.sh script in Kafka's bin directory:
cd /export/servers/kafka_2.11-1.0.0/bin
vim kf.sh
#!/bin/bash
# Start or stop the Kafka broker on all three nodes (assumes passwordless ssh between the nodes)
KAFKA_HOME=/export/servers/kafka_2.11-1.0.0
for host in node01 node02 node03; do
  case $1 in
  "start") ssh $host "cd $KAFKA_HOME && nohup bin/kafka-server-start.sh config/server.properties > /dev/null 2>&1 &";;
  "stop")  ssh $host "cd $KAFKA_HOME && bin/kafka-server-stop.sh";;
  esac
done
Make the script executable:
chmod 751 kf.sh
Start the Kafka cluster:
cd /export/servers/kafka_2.11-1.0.0
bin/kf.sh start
Stop the Kafka cluster:
cd /export/servers/kafka_2.11-1.0.0
bin/kf.sh stop
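After the cluster is running, a simple end-to-end smoke test is to produce and consume a few messages with the console clients (test is the example topic created above):
cd /export/servers/kafka_2.11-1.0.0
bin/kafka-console-producer.sh --broker-list node01:9092,node02:9092,node03:9092 --topic test
bin/kafka-console-consumer.sh --bootstrap-server node01:9092 --topic test --from-beginning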