hadoop - Where are logs in Spark on YARN?


I'm new to Spark. I can run Spark 0.9.1 on YARN (2.0.0-cdh4.2.1), but there are no logs after the execution.

The following command is used to run the Spark example, but the logs cannot be found in the history server the way they can for a normal MapReduce job.

SPARK_JAR=./assembly/target/scala-2.10/spark-assembly-0.9.1-hadoop2.0.0-cdh4.2.1.jar \
./bin/spark-class org.apache.spark.deploy.yarn.Client --jar ./spark-example-1.0.0.jar \
--class SimpleApp --args yarn-standalone --num-workers 3 --master-memory 1g \
--worker-memory 1g --worker-cores 1

Where can I find the logs/stderr/stdout?

Is there someplace to set the configuration? I did find output on the console saying:

14/04/14 18:51:52 INFO Client: Command for the ApplicationMaster: $JAVA_HOME/bin/java -server -Xmx640m -Djava.io.tmpdir=$PWD/tmp org.apache.spark.deploy.yarn.ApplicationMaster --class SimpleApp --jar ./spark-example-1.0.0.jar --args 'yarn-standalone' --worker-memory 1024 --worker-cores 1 --num-workers 3 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr

In that line, notice 1> $LOG_DIR/stdout 2> $LOG_DIR/stderr.

Where can LOG_DIR be set?

A pretty good article for this question:

Running Spark on YARN - see the section "Debugging your Application". It gives a decent explanation with all the required examples.
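If log aggregation is turned on in YARN (the yarn.log-aggregation-enable property), that section describes how the container logs can be pulled to the command line after the run finishes; the application ID below is a placeholder you get from the ResourceManager UI or the client output:

yarn logs -applicationId <app ID>

When aggregation is off, the stdout/stderr files presumably stay on each node under the NodeManager's local log directory (yarn.nodemanager.log-dirs), which is where the <LOG_DIR> placeholder in the question points.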

The only thing you need to do to get a correctly working history server for Spark is to close the Spark context in your application. Otherwise, the application history server does not see the application as COMPLETE and does not show anything (the history UI is accessible, but the application is not visible there).
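For illustration, a minimal sketch of what the end of such an application might look like; only the class name SimpleApp and the yarn-standalone argument come from the command above, the rest is an assumed skeleton:

import org.apache.spark.SparkContext

object SimpleApp {
  def main(args: Array[String]) {
    // args(0) is "yarn-standalone", passed via --args in the launch command
    val sc = new SparkContext(args(0), "SimpleApp")

    // ... the actual job ...
    val count = sc.parallelize(1 to 1000).count()
    println("count = " + count)

    // Closing the context is what lets the history server
    // record the application as complete.
    sc.stop()
  }
}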

