@yanakad
Last active September 9, 2015 19:16
Multiple workers per node

spark-env.sh

export SPARK_WORKER_INSTANCES=2
export SPARK_WORKER_CORES=2   # cores per worker; my machine has 4 total
export SPARK_WORKER_MEMORY=3g # memory per worker; my machine has 6G total

Start the master: sbin/start-master.sh

Start the workers:

sbin/start-slave.sh 1 spark://yana-Ubuntu:7077
sbin/start-slave.sh 2 spark://yana-Ubuntu:7077
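
Both workers should then appear in the master's web UI (http://localhost:8080 by default). To run an application against the cluster, point it at the master URL; a minimal sketch using spark-shell (the executor memory setting here is illustrative, not from the config above):

bin/spark-shell --master spark://yana-Ubuntu:7077 --conf spark.executor.memory=1g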
@vptech20nn

Hi, thanks for posting this sample. I have a follow-up question: what will the behavior be when an application is submitted with spark.executor.cores=3? Will it spawn 6 executors (3 for each worker) for the application?

And with spark.cores.max=12, will it allocate 2 CPU cores on each of these workers?
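
For reference, here is how I would pass both settings at submit time (the class name and jar are placeholders):

bin/spark-submit --master spark://yana-Ubuntu:7077 \
  --conf spark.executor.cores=3 \
  --conf spark.cores.max=12 \
  --class com.example.MyApp myapp.jar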
