Install Kafka
cd /soft
tar -zxvf kafka_2.11-1.0.0.tgz -C /usr/local/
mv /usr/local/kafka_2.11-1.0.0/ /usr/local/kafka_2.11
Set the environment variables:
echo "export KAFKA_HOME=/usr/local/kafka_2.11" >> /etc/profile
echo 'export PATH=$PATH:$KAFKA_HOME/bin' >> /etc/profile
source /etc/profile
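To confirm the variables took effect in the current shell (a quick check, not strictly required):
echo $KAFKA_HOME
which kafka-topics.sh    # should resolve to /usr/local/kafka_2.11/bin/kafka-topics.sh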
Create the log directory:
mkdir /usr/local/kafka_2.11/kafka-logs
Copy Kafka to the node2 and node3 nodes with scp (i.e. copy the kafka_2.11 directory into /usr/local/):
sudo scp -r /usr/local/kafka_2.11 node2:/usr/local/
sudo scp -r /usr/local/kafka_2.11 node3:/usr/local/
Following the same steps as above, configure the Kafka environment variables on node2 and node3.
chown -R hadoop /usr/local/kafka_2.11
chgrp -R hadoop /usr/local/kafka_2.11
Run the same ownership commands on node2 and node3 after copying, so the hadoop user owns the directory on every node.
Edit the configuration file:
sudo vim /usr/local/kafka_2.11/config/server.properties
node1 configuration:
broker.id=1
port=9092 # added
host.name=node1 # added
log.dirs=/usr/local/kafka_2.11/kafka-logs
zookeeper.connect=node1:2181,node2:2181,node3:2181 # ZooKeeper hosts and ports
node2 configuration:
broker.id=2
port=9092 # added
host.name=node2 # added
log.dirs=/usr/local/kafka_2.11/kafka-logs
zookeeper.connect=node1:2181,node2:2181,node3:2181 # ZooKeeper hosts and ports
node3 configuration:
broker.id=3
port=9092 # added
host.name=node3 # added
log.dirs=/usr/local/kafka_2.11/kafka-logs
zookeeper.connect=node1:2181,node2:2181,node3:2181 # ZooKeeper hosts and ports
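Note: on Kafka 1.0 the port and host.name keys are legacy settings; the same effect can usually be achieved with the listeners property instead, each node filling in its own hostname (shown here only as an alternative, not part of the configuration above):
listeners=PLAINTEXT://node1:9092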
Start Kafka; this must be done on each node:
/usr/local/kafka_2.11/bin/kafka-server-start.sh /usr/local/kafka_2.11/config/server.properties
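To keep the shell free, the start script can also be run with the -daemon flag, and broker registration can then be checked through the ZooKeeper CLI (a quick sanity check, assuming zkCli.sh from the ZooKeeper installation is on the PATH):
/usr/local/kafka_2.11/bin/kafka-server-start.sh -daemon /usr/local/kafka_2.11/config/server.properties
zkCli.sh -server node1:2181
ls /brokers/ids    # inside the zkCli shell; should list [1, 2, 3] once all three brokers are up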
Example 1:
Create a new topic with a replication factor of 3:
/usr/local/kafka_2.11/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic
Describe the topic:
/usr/local/kafka_2.11/bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
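To list every topic known to the cluster (an extra check, not part of the original steps):
/usr/local/kafka_2.11/bin/kafka-topics.sh --list --zookeeper localhost:2181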
Publish and consume messages:
Produce: /usr/local/kafka_2.11/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my-replicated-topic
Consume: /usr/local/kafka_2.11/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic my-replicated-topic
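A message can also be piped into the console producer for a non-interactive smoke test (assuming a broker is reachable at localhost:9092):
echo "hello kafka" | /usr/local/kafka_2.11/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my-replicated-topic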
Example 2:
Create a new topic with a replication factor of 1 and 3 partitions:
/usr/local/kafka_2.11/bin/kafka-topics.sh --create --zookeeper 192.168.209.129:2181,192.168.209.130:2181,192.168.209.131:2181 --replication-factor 1 --partitions 3 --topic first
Send messages from the shell:
/usr/local/kafka_2.11/bin/kafka-console-producer.sh --broker-list 192.168.209.129:9092,192.168.209.130:9092,192.168.209.131:9092 --topic first
Consume messages from the shell:
/usr/local/kafka_2.11/bin/kafka-console-consumer.sh --zookeeper 192.168.209.129:2181,192.168.209.130:2181,192.168.209.131:2181 --from-beginning --topic first
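Note that --zookeeper selects the deprecated old consumer in Kafka 1.0; the same consumption can be done through the brokers with the newer consumer (an equivalent alternative):
/usr/local/kafka_2.11/bin/kafka-console-consumer.sh --bootstrap-server 192.168.209.129:9092,192.168.209.130:9092,192.168.209.131:9092 --from-beginning --topic first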