Derive required #cores for Spark from #partitions of Kafka input topic
miguno committed Sep 30, 2014
1 parent 657c09d commit 891d50b
Showing 1 changed file with 2 additions and 1 deletion.
@@ -55,7 +55,8 @@ class KafkaSparkStreamingSpec extends FeatureSpec with Matchers with BeforeAndAf
 // DStream, and each receiver occupies 1 core. If all your cores are occupied by receivers then no data will be
 // processed!
 // https://spark.apache.org/docs/1.1.0/streaming-programming-guide.html
-conf.setMaster("local[2]")
+val cores = inputTopic.partitions + 1
+conf.setMaster(s"local[$cores]")
 // Use Kryo to speed up serialization, recommended as default setup for Spark Streaming
 // http://spark.apache.org/docs/1.1.0/tuning.html#data-serialization
 conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
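
Why partitions + 1: the spec consumes the input topic through Kafka receivers, and as the in-code comment warns, each receiver permanently occupies one core. If the topic has one receiver per partition, a local master needs at least partitions + 1 cores to leave one free for actually processing the received data. Below is a minimal, self-contained sketch of that sizing rule; the KafkaTopic case class is a hypothetical stand-in, since the spec's real inputTopic type is not shown in this diff.

import org.apache.spark.SparkConf

// Hypothetical stand-in for the spec's input topic descriptor.
case class KafkaTopic(name: String, partitions: Int)

object CoreSizingSketch {
  def main(args: Array[String]): Unit = {
    val inputTopic = KafkaTopic("testing-input", partitions = 1)

    // One receiver per Kafka input DStream pins 1 core, so reserve
    // `partitions` cores for receiving plus at least 1 for processing.
    val cores = inputTopic.partitions + 1

    val conf = new SparkConf()
      .setAppName("kafka-spark-streaming-sketch")
      .setMaster(s"local[$cores]")
      // Kryo serialization, as recommended by the Spark tuning guide.
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")

    println(s"Spark master: ${conf.get("spark.master")}") // e.g. local[2]
  }
}

With partitions = 1 this reproduces the previously hard-coded local[2]; a 3-partition topic yields local[4], where the old fixed value would have left all cores occupied by receivers and no data processed.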
