
Open Source Big Data Platform E-MapReduce: Connecting Spark to RocketMQ

Last updated: Aug 15, 2023

This topic describes how to use Spark Streaming to consume data from Message Queue for Apache RocketMQ (MQ) and count the words in each batch.

Access MQ from Spark

The sample code is as follows.

// Imports assume the E-MapReduce Spark SDK (emr-ons) and the ONS client
// library are on the classpath.
import com.aliyun.openservices.ons.api.Message
import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.aliyun.ons.OnsUtils
import org.apache.spark.streaming.{Milliseconds, StreamingContext}

object TestOnsStreaming {
  def main(args: Array[String]): Unit = {
    // Expected arguments: consumer ID, topic, subscription expression,
    // number of parallel receivers, and batch interval in milliseconds.
    val Array(cId, topic, subExpression, parallelism, interval) = args
    // Read the AccessKey pair from environment variables instead of hard-coding it.
    val accessKeyId = System.getenv("ALIBABA_CLOUD_ACCESS_KEY_ID")
    val accessKeySecret = System.getenv("ALIBABA_CLOUD_ACCESS_KEY_SECRET")
    val numStreams = parallelism.toInt
    val batchInterval = Milliseconds(interval.toInt)
    val conf = new SparkConf().setAppName("Test ONS Streaming")
    val ssc = new StreamingContext(conf, batchInterval)
    // Extract the raw body bytes from each MQ message.
    def func: Message => Array[Byte] = msg => msg.getBody
    // Create one receiver stream per degree of parallelism.
    val onsStreams = (0 until numStreams).map { i =>
      println(s"starting stream $i")
      OnsUtils.createStream(
        ssc,
        cId,
        topic,
        subExpression,
        accessKeyId,
        accessKeySecret,
        StorageLevel.MEMORY_AND_DISK_2,
        func)
    }
    // Union all receiver streams, then count the words in each batch.
    val unionStreams = ssc.union(onsStreams)
    unionStreams.foreachRDD(rdd => {
      rdd.map(bytes => new String(bytes)).flatMap(line => line.split(" "))
        .map(word => (word, 1))
        .reduceByKey(_ + _).collect().foreach(e => println(s"word: ${e._1}, cnt: ${e._2}"))
    })
    ssc.start()
    ssc.awaitTermination()
  }
}
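To illustrate the per-batch word count that the foreachRDD call performs, the following standalone sketch applies the same transformation to an in-memory batch of message bodies using plain Scala collections instead of an RDD. The BatchWordCount object and its sample data are illustrative only and are not part of the SDK.

```scala
object BatchWordCount {
  // Count words in one batch of raw message bodies, mirroring the
  // map/flatMap/reduceByKey pipeline above on plain collections.
  def countWords(batch: Seq[Array[Byte]]): Map[String, Int] =
    batch
      .map(bytes => new String(bytes))
      .flatMap(_.split(" "))
      .groupBy(identity)
      .map { case (word, occurrences) => (word, occurrences.size) }

  def main(args: Array[String]): Unit = {
    val batch = Seq("hello rocketmq".getBytes, "hello spark".getBytes)
    countWords(batch).toSeq.sortBy(_._1).foreach {
      case (word, cnt) => println(s"word: $word, cnt: $cnt")
    }
    // Prints:
    // word: hello, cnt: 2
    // word: rocketmq, cnt: 1
    // word: spark, cnt: 1
  }
}
```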
Note

Before you run the sample code, you must configure environment variables. For more information, see Configure environment variables.
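The sample code reads the AccessKey pair from the two environment variables below. A minimal sketch of setting them in a shell session follows; the placeholder values are yours to replace:

```shell
# Replace the placeholders with your own AccessKey pair.
export ALIBABA_CLOUD_ACCESS_KEY_ID=<yourAccessKeyId>
export ALIBABA_CLOUD_ACCESS_KEY_SECRET=<yourAccessKeySecret>
```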

Appendix

For the complete sample code, see Access RocketMQ from Spark.