In conf/spark-env.sh:
export SPARK_WORKER_INSTANCES=2
export SPARK_WORKER_CORES=2   # per worker instance; my machine has 4 cores total
export SPARK_WORKER_MEMORY=3g # per worker instance; my machine has 6G total
Start the master:
sbin/start-master.sh
Start the workers:
sbin/start-slave.sh 1 spark://yana-Ubuntu:7077
sbin/start-slave.sh 2 spark://yana-Ubuntu:7077
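With both workers registered, you can sanity-check the layout by submitting a job against this master. A minimal sketch using the bundled SparkPi example (the examples jar path varies by Spark version, and the memory/core values below are just illustrative):

bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://yana-Ubuntu:7077 \
  --executor-memory 1g \
  --total-executor-cores 4 \
  examples/jars/spark-examples_*.jar 100   # adjust the jar path for your Spark release

The master web UI (http://localhost:8080 by default) shows both workers and how executors are placed on them.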
Hi, thanks for posting this sample. I have a follow-up question: what will the behavior be when an application is submitted with spark.executor.cores=3? Will it spawn 6 executors (3 for each worker) for the application?
And with spark.cores.max=12, will it allocate 2 CPU cores on each of these workers?
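For anyone who wants to try this out, both properties can be passed at submit time; a minimal sketch against the master above (the application jar name is just a placeholder):

bin/spark-submit \
  --master spark://yana-Ubuntu:7077 \
  --conf spark.executor.cores=3 \
  --conf spark.cores.max=12 \
  your-app.jar   # placeholder for your application jar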