若泽大数据 www.ruozedata.com

08 Production Alert Platform Project: Flume Agent (aggregation node) sink to Kafka cluster

Published 2018-09-07 | Edited 2019-05-31 | in Production Alert Platform Project

1. Create the topic logtopic

[root@sht-sgmhadoopdn-01 kafka]# bin/kafka-topics.sh --create --zookeeper 172.16.101.58:2181,172.16.101.59:2181,172.16.101.60:2181/kafka --replication-factor 3 --partitions 1 --topic logtopic
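It is worth confirming that the topic actually landed with the expected layout before wiring the sink to it. A minimal sketch, assuming the same ZooKeeper connect string and `/kafka` chroot as the create command; the broker ids and sample output below are illustrative, not captured from this cluster:

```shell
# Verify the new topic (run on the Kafka host):
#
#   bin/kafka-topics.sh --describe \
#     --zookeeper 172.16.101.58:2181,172.16.101.59:2181,172.16.101.60:2181/kafka \
#     --topic logtopic
#
# --describe prints one summary line plus one line per partition.
# A sample of that output is embedded here so the check can be tried offline:
describe_output='Topic:logtopic	PartitionCount:1	ReplicationFactor:3	Configs:
	Topic: logtopic	Partition: 0	Leader: 58	Replicas: 58,59,60	Isr: 58,59,60'
# Exactly one partition line should appear, matching --partitions 1 above:
echo "$describe_output" | grep -c 'Partition: '
```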

2. Create avro_memory_kafka.properties (Kafka sink)

[root@sht-sgmhadoopcm-01 ~]# cd /tmp/flume-ng/conf
[root@sht-sgmhadoopcm-01 conf]# cp avro_memory_hdfs.properties avro_memory_kafka.properties
[root@sht-sgmhadoopcm-01 conf]# vi avro_memory_kafka.properties
#Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.bind = 172.16.101.54
a1.sources.r1.port = 4545


#Describe the sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.topic = logtopic
a1.sinks.k1.kafka.bootstrap.servers = 172.16.101.58:9092,172.16.101.59:9092,172.16.101.60:9092
a1.sinks.k1.kafka.flumeBatchSize = 6000
a1.sinks.k1.kafka.producer.acks = 1
a1.sinks.k1.kafka.producer.linger.ms = 1
a1.sinks.k1.kafka.producer.compression.type = snappy


#Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.keep-alive = 90
a1.channels.c1.capacity = 2000000
a1.channels.c1.transactionCapacity = 6000


#Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
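One easy mistake in this file is a mistyped component name: a property addressed to an undeclared sink (say `ki` instead of `k1`) is simply not applied to `k1`, and is easy to miss in the logs. A hedged sketch of a quick grep check; the sample file below is deliberately broken for illustration:

```shell
# Hypothetical sanity check: list sink properties whose component name
# does not match the declared sink k1.
conf=$(mktemp)
cat > "$conf" <<'EOF'
a1.sinks = k1
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.ki.kafka.producer.compression.type = snappy
EOF
# Property lines under a1.sinks.<name>. where <name> is not k1:
bad_lines=$(grep -E '^a1\.sinks\.[^.=[:space:]]+\.' "$conf" | grep -v '^a1\.sinks\.k1\.')
echo "$bad_lines"
rm -f "$conf"
```

Run against the real config, an empty result means every sink property targets `k1`. Note also that `transactionCapacity = 6000` matches `flumeBatchSize = 6000`: the channel transaction must be able to hold one full sink batch.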

3. Start the flume-ng agent (aggregation node) in the background and check nohup.out

[root@sht-sgmhadoopcm-01 ~]# source /etc/profile
[root@sht-sgmhadoopcm-01 ~]# cd /tmp/flume-ng/
[root@sht-sgmhadoopcm-01 flume-ng]# nohup flume-ng agent -c conf -f /tmp/flume-ng/conf/avro_memory_kafka.properties -n a1 -Dflume.root.logger=INFO,console &
[1] 4971
[root@sht-sgmhadoopcm-01 flume-ng]# nohup: ignoring input and appending output to `nohup.out'

[root@sht-sgmhadoopcm-01 flume-ng]#
[root@sht-sgmhadoopcm-01 flume-ng]# cat nohup.out
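What to look for in nohup.out: the agent is healthy once the source, channel, and sink have all reported started. A minimal sketch of a grep; the sample log lines below are illustrative, not verbatim Flume output, and a sample file stands in for /tmp/flume-ng/nohup.out so the check runs offline:

```shell
# Illustrative check: count 'started' lifecycle lines in the agent log.
log=$(mktemp)
cat > "$log" <<'EOF'
INFO source.AvroSource: Avro source r1 started.
INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: c1 started
INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: k1 started
EOF
# Expect one line each for r1, c1, and k1:
started=$(grep -c 'started' "$log")
echo "$started"
rm -f "$log"
```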

4. Check that the three log-collection (collection tier) nodes are up

[hdfs@flume-agent-01 flume-ng]$ . ~/.bash_profile 
[hdfs@flume-agent-02 flume-ng]$ . ~/.bash_profile
[hdfs@flume-agent-03 flume-ng]$ . ~/.bash_profile


[hdfs@flume-agent-01 flume-ng]$ nohup flume-ng agent -c /tmp/flume-ng/conf -f /tmp/flume-ng/conf/exec_memory_avro.properties -n a1 -Dflume.root.logger=INFO,console &
[hdfs@flume-agent-02 flume-ng]$ nohup flume-ng agent -c /tmp/flume-ng/conf -f /tmp/flume-ng/conf/exec_memory_avro.properties -n a1 -Dflume.root.logger=INFO,console &
[hdfs@flume-agent-03 flume-ng]$ nohup flume-ng agent -c /tmp/flume-ng/conf -f /tmp/flume-ng/conf/exec_memory_avro.properties -n a1 -Dflume.root.logger=INFO,console &
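A quick way to confirm each collection node actually has its agent running is to count matching processes. A hedged sketch; sample `ps` output is embedded so it runs offline, and on the real nodes you would pipe `ps -ef` instead:

```shell
# Count running agents for this config file; expect exactly 1 per node.
# Real usage:  ps -ef | grep -v grep | grep -c 'exec_memory_avro.properties'
ps_sample='hdfs  4971     1  java ... -f /tmp/flume-ng/conf/exec_memory_avro.properties -n a1
hdfs  5100  4900  grep exec_memory_avro.properties'
# Drop the grep process itself before counting:
running=$(echo "$ps_sample" | grep -v grep | grep -c 'exec_memory_avro.properties')
echo "$running"
```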

5. Open the Kafka Manager monitoring UI

http://172.16.101.55:9999
