16/05/03 23:19:33 INFO security.UserGroupInformation: Using the configured user name and password, user name is dt_dataminers 16/05/03 23:19:33 INFO security.UserGroupInformation: Truncate the ugi cache file. 16/05/03 23:19:38 INFO spark.SparkContext: Running Spark version 1.6.1 16/05/03 23:19:38 WARN spark.SparkConf: Detected deprecated memory fraction settings: [spark.storage.memoryFraction]. As of Spark 1.6, execution and storage memory management are unified. All memory fractions used in the old model are now deprecated and no longer read. If you wish to use the old memory management, you may explicitly enable `spark.memory.useLegacyMode` (not recommended). 16/05/03 23:19:38 WARN spark.SparkConf: SPARK_CLASSPATH was detected (set to ':/etc/hadoop/conf:/usr/lib/hadoop/lib/*:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/*:/usr/lib/hadoop-hdfs/.//*:/usr/lib/hadoop-yarn/.//*:/usr/lib/hadoop-mapreduce/lib/*:/usr/lib/hadoop-mapreduce/contrib/streaming/*:/etc/hbase/conf:/usr/lib/hbase/hbase-client-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/hbase-client-sogou.jar:/usr/lib/hbase/hbase-common-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/hbase-common-0.98.6-cdh5.2.0-sogou-tests.jar:/usr/lib/hbase/hbase-common-sogou.jar:/usr/lib/hbase/hbase-common-sogou-tests.jar:/usr/lib/hbase/hbase-examples-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/hbase-examples-sogou.jar:/usr/lib/hbase/hbase-hadoop2-compat-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/hbase-hadoop2-compat-0.98.6-cdh5.2.0-sogou-tests.jar:/usr/lib/hbase/hbase-hadoop2-compat-sogou.jar:/usr/lib/hbase/hbase-hadoop2-compat-sogou-tests.jar:/usr/lib/hbase/hbase-hadoop-compat-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/hbase-hadoop-compat-0.98.6-cdh5.2.0-sogou-tests.jar:/usr/lib/hbase/hbase-hadoop-compat-sogou.jar:/usr/lib/hbase/hbase-hadoop-compat-sogou-tests.jar:/usr/lib/hbase/hbase-it-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/hbase-it-0.98.6-cdh5.2.0-sogou-tests.jar:/usr/lib/hbase/hbase-it-sogou.jar:/usr/lib/hbase/hbase-it-sogou-tests.jar:/usr/lib/hbase/hbase-prefix-tree-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/hbase-prefix-tree-sogou.jar:/usr/lib/hbase/hbase-protocol-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/hbase-protocol-sogou.jar:/usr/lib/hbase/hbase-server-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/hbase-server-0.98.6-cdh5.2.0-sogou-tests.jar:/usr/lib/hbase/hbase-server-sogou.jar:/usr/lib/hbase/hbase-server-sogou-tests.jar:/usr/lib/hbase/hbase-shell-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/hbase-shell-sogou.jar:/usr/lib/hbase/hbase-testing-util-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/hbase-testing-util-sogou.jar:/usr/lib/hbase/hbase-thrift-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/hbase-thrift-sogou.jar:/usr/lib/hbase/lib/activation-1.1.jar:/usr/lib/hbase/lib/apacheds-i18n-2.0.0-M15.jar:/usr/lib/hbase/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/lib/hbase/lib/api-asn1-api-1.0.0-M20.jar:/usr/lib/hbase/lib/api-util-1.0.0-M20.jar:/usr/lib/hbase/lib/asm-3.2.jar:/usr/lib/hbase/lib/avro.jar:/usr/lib/hbase/lib/commons-beanutils-1.7.0.jar:/usr/lib/hbase/lib/commons-beanutils-core-1.8.0.jar:/usr/lib/hbase/lib/commons-cli-1.2.jar:/usr/lib/hbase/lib/commons-codec-1.7.jar:/usr/lib/hbase/lib/commons-collections-3.2.1.jar:/usr/lib/hbase/lib/commons-compress-1.4.1.jar:/usr/lib/hbase/lib/commons-configuration-1.6.jar:/usr/lib/hbase/lib/commons-daemon-1.0.3.jar:/usr/lib/hbase/lib/commons-digester-1.8.jar:/usr/lib/hbase/lib/commons-el-1.0.jar:/usr/lib/hbase/lib/commons-httpclient-3.1.jar:/usr/lib/hbase/lib/commons-io-2.4.jar:/usr/lib/hbase/lib/commons-lang-2.6.jar:/usr/lib/hba
se/lib/commons-logging-1.1.1.jar:/usr/lib/hbase/lib/commons-math-2.1.jar:/usr/lib/hbase/lib/commons-math3-3.1.1.jar:/usr/lib/hbase/lib/commons-net-3.1.jar:/usr/lib/hbase/lib/commons-pool-1.5.4.jar:/usr/lib/hbase/lib/core-3.1.1.jar:/usr/lib/hbase/lib/curator-client-2.6.0.jar:/usr/lib/hbase/lib/curator-framework-2.6.0.jar:/usr/lib/hbase/lib/curator-recipes-2.6.0.jar:/usr/lib/hbase/lib/embedded-jmxtrans-1.0.6.jar:/usr/lib/hbase/lib/findbugs-annotations-1.3.9-1.jar:/usr/lib/hbase/lib/gson-2.2.4.jar:/usr/lib/hbase/lib/guava-12.0.1.jar:/usr/lib/hbase/lib/hamcrest-core-1.3.jar:/usr/lib/hbase/lib/hbase-client-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/lib/hbase-common-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/lib/hbase-common-0.98.6-cdh5.2.0-sogou-tests.jar:/usr/lib/hbase/lib/hbase-examples-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/lib/hbase-hadoop2-compat-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/lib/hbase-hadoop2-compat-0.98.6-cdh5.2.0-sogou-tests.jar:/usr/lib/hbase/lib/hbase-hadoop-compat-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/lib/hbase-hadoop-compat-0.98.6-cdh5.2.0-sogou-tests.jar:/usr/lib/hbase/lib/hbase-it-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/lib/hbase-it-0.98.6-cdh5.2.0-sogou-tests.jar:/usr/lib/hbase/lib/hbase-prefix-tree-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/lib/hbase-protocol-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/lib/hbase-server-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/lib/hbase-server-0.98.6-cdh5.2.0-sogou-tests.jar:/usr/lib/hbase/lib/hbase-shell-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/lib/hbase-testing-util-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/lib/hbase-thrift-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/lib/high-scale-lib-1.1.1.jar:/usr/lib/hbase/lib/hsqldb-1.8.0.10.jar:/usr/lib/hbase/lib/htrace-core-2.04.jar:/usr/lib/hbase/lib/htrace-core.jar:/usr/lib/hbase/lib/httpclient-4.2.5.jar:/usr/lib/hbase/lib/httpcore-4.2.5.jar:/usr/lib/hbase/lib/jamon-runtime-2.3.1.jar:/usr/lib/hbase/lib/jasper-compiler-5.5.23.jar:/usr/lib/hbase/lib/jasper-runtime-5.5.23.jar:/usr/lib/hbase/lib/java-xmlbuilder-0.4.jar:/usr/lib/hbase/lib/jaxb-api-2.1.jar:/usr/lib/hbase/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hbase/lib/jersey-core-1.8.jar:/usr/lib/hbase/lib/jersey-json-1.8.jar:/usr/lib/hbase/lib/jersey-server-1.8.jar:/usr/lib/hbase/lib/jets3t-0.9.0.jar:/usr/lib/hbase/lib/jettison-1.3.1.jar:/usr/lib/hbase/lib/jetty-6.1.26.cloudera.2.jar:/usr/lib/hbase/lib/jetty-sslengine-6.1.26.cloudera.2.jar:/usr/lib/hbase/lib/jetty-util-6.1.26.cloudera.2.jar:/usr/lib/hbase/lib/jsch-0.1.42.jar:/usr/lib/hbase/lib/jsp-2.1-6.1.14.jar:/usr/lib/hbase/lib/jsp-api-2.1-6.1.14.jar:/usr/lib/hbase/lib/jsp-api-2.1.jar:/usr/lib/hbase/lib/jsr250-api-1.0.jar:/usr/lib/hbase/lib/jsr305-1.3.9.jar:/usr/lib/hbase/lib/junit-4.11.jar:/usr/lib/hbase/lib/log4j-1.2.17.jar:/usr/lib/hbase/lib/metrics-core-2.2.0.jar:/usr/lib/hbase/lib/netty-3.6.6.Final.jar:/usr/lib/hbase/lib/paranamer-2.3.jar:/usr/lib/hbase/lib/protobuf-java-2.5.0.jar:/usr/lib/hbase/lib/servlet-api-2.5-6.1.14.jar:/usr/lib/hbase/lib/servlet-api-2.5.jar:/usr/lib/hbase/lib/slf4j-api-1.7.5.jar:/usr/lib/hbase/lib/slf4j-log4j12.jar:/usr/lib/hbase/lib/snappy-java-1.0.4.1.jar:/usr/lib/hbase/lib/xmlenc-0.52.jar:/usr/lib/hbase/lib/xz-1.0.jar:/usr/lib/hbase/lib/zookeeper.jar:::/usr/lib/hadoop-mapreduce/.//activation-1.1.jar:/usr/lib/hadoop-mapreduce/.//apacheds-i18n-2.0.0-M15.jar:/usr/lib/hadoop-mapreduce/.//apacheds-kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop-mapreduce/.//api-asn1-api-1.0.0-M20.jar:/usr/lib/hadoop-mapreduce/.//api-util-1.0.0-M20.jar:/usr/lib/hadoop-mapreduce/.//asm-3.2.jar:/usr/
lib/avro/avro.jar:/usr/lib/hadoop-mapreduce/.//aws-java-sdk-1.7.4.jar:/usr/lib/hadoop-mapreduce/.//commons-beanutils-1.7.0.jar:/usr/lib/hadoop-mapreduce/.//commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop-mapreduce/.//commons-cli-1.2.jar:/usr/lib/hadoop-mapreduce/.//commons-codec-1.4.jar:/usr/lib/hadoop-mapreduce/.//commons-collections-3.2.1.jar:/usr/lib/hadoop-mapreduce/.//commons-compress-1.4.1.jar:/usr/lib/hadoop-mapreduce/.//commons-configuration-1.6.jar:/usr/lib/hadoop-mapreduce/.//commons-digester-1.8.jar:/usr/lib/hadoop-mapreduce/.//commons-el-1.0.jar:/usr/lib/hadoop-mapreduce/.//commons-httpclient-3.1.jar:/usr/lib/hadoop-mapreduce/.//commons-io-2.4.jar:/usr/lib/hadoop-mapreduce/.//commons-lang-2.6.jar:/usr/lib/hadoop-mapreduce/.//commons-logging-1.1.3.jar:/usr/lib/hadoop-mapreduce/.//commons-math3-3.1.1.jar:/usr/lib/hadoop-mapreduce/.//commons-net-3.1.jar:/usr/lib/hadoop-mapreduce/.//curator-client-2.6.0.jar:/usr/lib/hadoop-mapreduce/.//curator-framework-2.6.0.jar:/usr/lib/hadoop-mapreduce/.//curator-recipes-2.6.0.jar:/usr/lib/hadoop-mapreduce/.//gson-2.2.4.jar:/usr/lib/hadoop-mapreduce/.//guava-11.0.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-archives-2.5.0-cdh5.3.2.jar:hadoop-archives-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-auth-2.5.0-cdh5.3.2.jar:hadoop-auth-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-aws-2.5.0-cdh5.3.2.jar:hadoop-aws-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-azure-2.5.0-cdh5.3.2.jar:hadoop-azure-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-datajoin-2.5.0-cdh5.3.2.jar:hadoop-datajoin-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-distcp-2.5.0-cdh5.3.2.jar:hadoop-distcp-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-extras-2.5.0-cdh5.3.2.jar:hadoop-extras-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-gridmix-2.5.0-cdh5.3.2.jar:hadoop-gridmix-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-app-2.5.0-cdh5.3.2.jar:hadoop-mapreduce-client-app-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-common-2.5.0-cdh5.3.2.jar:hadoop-mapreduce-client-common-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-core-2.5.0-cdh5.3.2.jar:hadoop-mapreduce-client-core-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-2.5.0-cdh5.3.2.jar:hadoop-mapreduce-client-hs-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins-2.5.0-cdh5.3.2.jar:hadoop-mapreduce-client-hs-plugins-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.5.0-cdh5.3.2-tests.jar:hadoop-mapreduce-client-jobclient-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-nativetask-2.5.0-cdh5.3.2.jar:hadoop-mapreduce-client-nativetask-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle-2.5.0-cdh5.3.2.jar:hadoop-mapreduce-client-shuffle-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-examples-2.5.0-cdh5.3.2.jar:hadoop-mapreduce-examples-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-rumen-2.5.0-cdh5.3.2.jar:hadoop-rumen-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-sls-2.5.0-cdh5.3.2.jar:hadoop-sls-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-streaming-2.5.0-cdh5.3.2.jar:hadoop-streaming-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hamcrest-core-1.3.jar:/usr/lib/hadoop-mapreduce/.//httpclient-4.2.5.jar:/usr/lib/hadoop-mapreduce/.//httpcor
e-4.2.5.jar:/usr/lib/hadoop-mapreduce/.//jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-mapreduce/.//jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop-mapreduce/.//jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-mapreduce/.//jackson-xc-1.8.8.jar:/usr/lib/hadoop-mapreduce/.//jasper-compiler-5.5.23.jar:/usr/lib/hadoop-mapreduce/.//jasper-runtime-5.5.23.jar:/usr/lib/hadoop-mapreduce/.//java-xmlbuilder-0.4.jar:/usr/lib/hadoop-mapreduce/.//jaxb-api-2.2.2.jar:/usr/lib/hadoop-mapreduce/.//jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-mapreduce/.//jersey-core-1.9.jar:/usr/lib/hadoop-mapreduce/.//jersey-json-1.9.jar:/usr/lib/hadoop-mapreduce/.//jersey-server-1.9.jar:/usr/lib/hadoop-mapreduce/.//jets3t-0.9.0.jar:/usr/lib/hadoop-mapreduce/.//jettison-1.1.jar:/usr/lib/hadoop-mapreduce/.//jetty-6.1.26.cloudera.4.jar:/usr/lib/hadoop-mapreduce/.//jetty-util-6.1.26.cloudera.4.jar:/usr/lib/hadoop-mapreduce/.//jsch-0.1.42.jar:/usr/lib/hadoop-mapreduce/.//jsp-api-2.1.jar:/usr/lib/hadoop-mapreduce/.//jsr305-1.3.9.jar:/usr/lib/hadoop-mapreduce/.//junit-4.11.jar:/usr/lib/hadoop-mapreduce/.//log4j-1.2.17.jar:/usr/lib/hadoop-mapreduce/.//metrics-core-3.0.1.jar:/usr/lib/hadoop-mapreduce/.//microsoft-windowsazure-storage-sdk-0.6.0.jar:/usr/lib/hadoop-mapreduce/.//paranamer-2.3.jar:/usr/lib/hadoop-mapreduce/.//protobuf-java-2.5.0.jar:/usr/lib/hadoop-mapreduce/.//servlet-api-2.5.jar:/usr/lib/hadoop-mapreduce/.//snappy-java-1.0.4.1.jar:/usr/lib/hadoop-mapreduce/.//stax-api-1.0-2.jar:/usr/lib/hadoop-mapreduce/.//xmlenc-0.52.jar:/usr/lib/hadoop-mapreduce/.//xz-1.0.jar:/usr/lib/hadoop-mapreduce/.//zookeeper-3.4.5-cdh5.3.2.jar::/usr/lib/hadoop/.//hadoop-annotations-2.5.0-cdh5.3.2.jar:hadoop-annotations-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop/.//hadoop-auth-2.5.0-cdh5.3.2.jar:hadoop-auth-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop/.//hadoop-common-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop/.//hadoop-common-2.5.0-cdh5.3.2-tests.jar:hadoop-common-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop/.//hadoop-nfs-2.5.0-cdh5.3.2.jar:hadoop-nfs-2.5.0-cdh5.3.2.jar::/usr/lib/hadoop/.//hadoop-annotations-2.5.0-cdh5.3.2.jar:hadoop-annotations-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop/.//hadoop-auth-2.5.0-cdh5.3.2.jar:hadoop-auth-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop/.//hadoop-common-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop/.//hadoop-common-2.5.0-cdh5.3.2-tests.jar:hadoop-common-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop/.//hadoop-nfs-2.5.0-cdh5.3.2.jar:hadoop-nfs-2.5.0-cdh5.3.2.jar:../parquet/parquet-avro.jar:../parquet/parquet-avro-javadoc.jar:../parquet/parquet-avro-sources.jar:../parquet/parquet-cascading.jar:../parquet/parquet-cascading-javadoc.jar:../parquet/parquet-cascading-sources.jar:../parquet/parquet-column.jar:../parquet/parquet-column-javadoc.jar:../parquet/parquet-column-sources.jar:../parquet/parquet-common.jar:../parquet/parquet-common-javadoc.jar:../parquet/parquet-common-sources.jar:../parquet/parquet-encoding.jar:../parquet/parquet-encoding-javadoc.jar:../parquet/parquet-encoding-sources.jar:../parquet/parquet-format.jar:../parquet/parquet-format-javadoc.jar:../parquet/parquet-format-sources.jar:../parquet/parquet-generator.jar:../parquet/parquet-generator-javadoc.jar:../parquet/parquet-generator-sources.jar:../parquet/parquet-hadoop-bundle.jar:../parquet/parquet-hadoop-bundle-sources.jar:../parquet/parquet-hadoop.jar:../parquet/parquet-hadoop-javadoc.jar:../parquet/parquet-hadoop-sources.jar:../parquet/parquet-pig-bundle.jar:../parquet/parquet-pig-bundle-sources.jar:../parquet/parquet-pig.jar:../parquet/parquet-pig-javadoc.jar:../parquet/parquet-pig-sources.jar:../parquet/parquet-scrooge.jar:../parqu
et/parquet-scrooge-javadoc.jar:../parquet/parquet-scrooge-sources.jar:../parquet/parquet-test-hadoop2.jar:../parquet/parquet-thrift.jar:../parquet/parquet-thrift-javadoc.jar:../parquet/parquet-thrift-sources.jar::/usr/lib/hadoop-yarn/lib/activation-1.1.jar:/usr/lib/hadoop-yarn/lib/aopalliance-1.0.jar:/usr/lib/hadoop-yarn/lib/asm-3.2.jar:/usr/lib/hadoop-yarn/lib/commons-cli-1.2.jar:/usr/lib/hadoop-yarn/lib/commons-codec-1.4.jar:/usr/lib/hadoop-yarn/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop-yarn/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop-yarn/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop-yarn/lib/commons-io-2.4.jar:/usr/lib/hadoop-yarn/lib/commons-lang-2.6.jar:/usr/lib/hadoop-yarn/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop-yarn/lib/fst-2.24.jar:/usr/lib/hadoop-yarn/lib/guava-11.0.2.jar:/usr/lib/hadoop-yarn/lib/guice-3.0.jar:/usr/lib/hadoop-yarn/lib/guice-servlet-3.0.jar:/usr/lib/hadoop-yarn/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-yarn/lib/jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop-yarn/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-yarn/lib/jackson-xc-1.8.8.jar:/usr/lib/hadoop-yarn/lib/javassist-3.18.1-GA.jar:/usr/lib/hadoop-yarn/lib/javax.inject-1.jar:/usr/lib/hadoop-yarn/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop-yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-yarn/lib/jersey-client-1.9.jar:/usr/lib/hadoop-yarn/lib/jersey-core-1.9.jar:/usr/lib/hadoop-yarn/lib/jersey-guice-1.9.jar:/usr/lib/hadoop-yarn/lib/jersey-json-1.9.jar:/usr/lib/hadoop-yarn/lib/jersey-server-1.9.jar:/usr/lib/hadoop-yarn/lib/jettison-1.1.jar:/usr/lib/hadoop-yarn/lib/jetty-6.1.26.cloudera.4.jar:/usr/lib/hadoop-yarn/lib/jetty-util-6.1.26.cloudera.4.jar:/usr/lib/hadoop-yarn/lib/jsr305-1.3.9.jar:/usr/lib/hadoop-yarn/lib/leveldbjni-all-1.8.jar:/usr/lib/hadoop-yarn/lib/log4j-1.2.17.jar:/usr/lib/hadoop-yarn/lib/objenesis-2.1.jar:/usr/lib/hadoop-yarn/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop-yarn/lib/servlet-api-2.5.jar:/usr/lib/hadoop-yarn/lib/stax-api-1.0-2.jar:/usr/lib/hadoop-yarn/lib/xz-1.0.jar:/usr/lib/hadoop-yarn/lib/zookeeper-3.4.5-cdh5.3.2.jar:'). This is deprecated in Spark 1.0+. 
Please instead use: - ./spark-submit with --driver-class-path to augment the driver classpath - spark.executor.extraClassPath to augment the executor classpath 16/05/03 23:19:38 WARN spark.SparkConf: Setting 'spark.executor.extraClassPath' to ':/etc/hadoop/conf:/usr/lib/hadoop/lib/*:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/*:/usr/lib/hadoop-hdfs/.//*:/usr/lib/hadoop-yarn/.//*:/usr/lib/hadoop-mapreduce/lib/*:/usr/lib/hadoop-mapreduce/contrib/streaming/*:/etc/hbase/conf:/usr/lib/hbase/hbase-client-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/hbase-client-sogou.jar:/usr/lib/hbase/hbase-common-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/hbase-common-0.98.6-cdh5.2.0-sogou-tests.jar:/usr/lib/hbase/hbase-common-sogou.jar:/usr/lib/hbase/hbase-common-sogou-tests.jar:/usr/lib/hbase/hbase-examples-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/hbase-examples-sogou.jar:/usr/lib/hbase/hbase-hadoop2-compat-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/hbase-hadoop2-compat-0.98.6-cdh5.2.0-sogou-tests.jar:/usr/lib/hbase/hbase-hadoop2-compat-sogou.jar:/usr/lib/hbase/hbase-hadoop2-compat-sogou-tests.jar:/usr/lib/hbase/hbase-hadoop-compat-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/hbase-hadoop-compat-0.98.6-cdh5.2.0-sogou-tests.jar:/usr/lib/hbase/hbase-hadoop-compat-sogou.jar:/usr/lib/hbase/hbase-hadoop-compat-sogou-tests.jar:/usr/lib/hbase/hbase-it-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/hbase-it-0.98.6-cdh5.2.0-sogou-tests.jar:/usr/lib/hbase/hbase-it-sogou.jar:/usr/lib/hbase/hbase-it-sogou-tests.jar:/usr/lib/hbase/hbase-prefix-tree-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/hbase-prefix-tree-sogou.jar:/usr/lib/hbase/hbase-protocol-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/hbase-protocol-sogou.jar:/usr/lib/hbase/hbase-server-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/hbase-server-0.98.6-cdh5.2.0-sogou-tests.jar:/usr/lib/hbase/hbase-server-sogou.jar:/usr/lib/hbase/hbase-server-sogou-tests.jar:/usr/lib/hbase/hbase-shell-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/hbase-shell-sogou.jar:/usr/lib/hbase/hbase-testing-util-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/hbase-testing-util-sogou.jar:/usr/lib/hbase/hbase-thrift-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/hbase-thrift-sogou.jar:/usr/lib/hbase/lib/activation-1.1.jar:/usr/lib/hbase/lib/apacheds-i18n-2.0.0-M15.jar:/usr/lib/hbase/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/lib/hbase/lib/api-asn1-api-1.0.0-M20.jar:/usr/lib/hbase/lib/api-util-1.0.0-M20.jar:/usr/lib/hbase/lib/asm-3.2.jar:/usr/lib/hbase/lib/avro.jar:/usr/lib/hbase/lib/commons-beanutils-1.7.0.jar:/usr/lib/hbase/lib/commons-beanutils-core-1.8.0.jar:/usr/lib/hbase/lib/commons-cli-1.2.jar:/usr/lib/hbase/lib/commons-codec-1.7.jar:/usr/lib/hbase/lib/commons-collections-3.2.1.jar:/usr/lib/hbase/lib/commons-compress-1.4.1.jar:/usr/lib/hbase/lib/commons-configuration-1.6.jar:/usr/lib/hbase/lib/commons-daemon-1.0.3.jar:/usr/lib/hbase/lib/commons-digester-1.8.jar:/usr/lib/hbase/lib/commons-el-1.0.jar:/usr/lib/hbase/lib/commons-httpclient-3.1.jar:/usr/lib/hbase/lib/commons-io-2.4.jar:/usr/lib/hbase/lib/commons-lang-2.6.jar:/usr/lib/hbase/lib/commons-logging-1.1.1.jar:/usr/lib/hbase/lib/commons-math-2.1.jar:/usr/lib/hbase/lib/commons-math3-3.1.1.jar:/usr/lib/hbase/lib/commons-net-3.1.jar:/usr/lib/hbase/lib/commons-pool-1.5.4.jar:/usr/lib/hbase/lib/core-3.1.1.jar:/usr/lib/hbase/lib/curator-client-2.6.0.jar:/usr/lib/hbase/lib/curator-framework-2.6.0.jar:/usr/lib/hbase/lib/curator-recipes-2.6.0.jar:/usr/lib/hbase/lib/embedded-jmxtrans-1.0.6.jar:/usr/lib/hbase/lib/findbugs-annotations-1.3.9-1.jar:/usr/lib/hbase/lib/gson-2.2.4.jar:/usr
/lib/hbase/lib/guava-12.0.1.jar:/usr/lib/hbase/lib/hamcrest-core-1.3.jar:/usr/lib/hbase/lib/hbase-client-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/lib/hbase-common-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/lib/hbase-common-0.98.6-cdh5.2.0-sogou-tests.jar:/usr/lib/hbase/lib/hbase-examples-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/lib/hbase-hadoop2-compat-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/lib/hbase-hadoop2-compat-0.98.6-cdh5.2.0-sogou-tests.jar:/usr/lib/hbase/lib/hbase-hadoop-compat-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/lib/hbase-hadoop-compat-0.98.6-cdh5.2.0-sogou-tests.jar:/usr/lib/hbase/lib/hbase-it-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/lib/hbase-it-0.98.6-cdh5.2.0-sogou-tests.jar:/usr/lib/hbase/lib/hbase-prefix-tree-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/lib/hbase-protocol-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/lib/hbase-server-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/lib/hbase-server-0.98.6-cdh5.2.0-sogou-tests.jar:/usr/lib/hbase/lib/hbase-shell-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/lib/hbase-testing-util-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/lib/hbase-thrift-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/lib/high-scale-lib-1.1.1.jar:/usr/lib/hbase/lib/hsqldb-1.8.0.10.jar:/usr/lib/hbase/lib/htrace-core-2.04.jar:/usr/lib/hbase/lib/htrace-core.jar:/usr/lib/hbase/lib/httpclient-4.2.5.jar:/usr/lib/hbase/lib/httpcore-4.2.5.jar:/usr/lib/hbase/lib/jamon-runtime-2.3.1.jar:/usr/lib/hbase/lib/jasper-compiler-5.5.23.jar:/usr/lib/hbase/lib/jasper-runtime-5.5.23.jar:/usr/lib/hbase/lib/java-xmlbuilder-0.4.jar:/usr/lib/hbase/lib/jaxb-api-2.1.jar:/usr/lib/hbase/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hbase/lib/jersey-core-1.8.jar:/usr/lib/hbase/lib/jersey-json-1.8.jar:/usr/lib/hbase/lib/jersey-server-1.8.jar:/usr/lib/hbase/lib/jets3t-0.9.0.jar:/usr/lib/hbase/lib/jettison-1.3.1.jar:/usr/lib/hbase/lib/jetty-6.1.26.cloudera.2.jar:/usr/lib/hbase/lib/jetty-sslengine-6.1.26.cloudera.2.jar:/usr/lib/hbase/lib/jetty-util-6.1.26.cloudera.2.jar:/usr/lib/hbase/lib/jsch-0.1.42.jar:/usr/lib/hbase/lib/jsp-2.1-6.1.14.jar:/usr/lib/hbase/lib/jsp-api-2.1-6.1.14.jar:/usr/lib/hbase/lib/jsp-api-2.1.jar:/usr/lib/hbase/lib/jsr250-api-1.0.jar:/usr/lib/hbase/lib/jsr305-1.3.9.jar:/usr/lib/hbase/lib/junit-4.11.jar:/usr/lib/hbase/lib/log4j-1.2.17.jar:/usr/lib/hbase/lib/metrics-core-2.2.0.jar:/usr/lib/hbase/lib/netty-3.6.6.Final.jar:/usr/lib/hbase/lib/paranamer-2.3.jar:/usr/lib/hbase/lib/protobuf-java-2.5.0.jar:/usr/lib/hbase/lib/servlet-api-2.5-6.1.14.jar:/usr/lib/hbase/lib/servlet-api-2.5.jar:/usr/lib/hbase/lib/slf4j-api-1.7.5.jar:/usr/lib/hbase/lib/slf4j-log4j12.jar:/usr/lib/hbase/lib/snappy-java-1.0.4.1.jar:/usr/lib/hbase/lib/xmlenc-0.52.jar:/usr/lib/hbase/lib/xz-1.0.jar:/usr/lib/hbase/lib/zookeeper.jar:::/usr/lib/hadoop-mapreduce/.//activation-1.1.jar:/usr/lib/hadoop-mapreduce/.//apacheds-i18n-2.0.0-M15.jar:/usr/lib/hadoop-mapreduce/.//apacheds-kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop-mapreduce/.//api-asn1-api-1.0.0-M20.jar:/usr/lib/hadoop-mapreduce/.//api-util-1.0.0-M20.jar:/usr/lib/hadoop-mapreduce/.//asm-3.2.jar:/usr/lib/avro/avro.jar:/usr/lib/hadoop-mapreduce/.//aws-java-sdk-1.7.4.jar:/usr/lib/hadoop-mapreduce/.//commons-beanutils-1.7.0.jar:/usr/lib/hadoop-mapreduce/.//commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop-mapreduce/.//commons-cli-1.2.jar:/usr/lib/hadoop-mapreduce/.//commons-codec-1.4.jar:/usr/lib/hadoop-mapreduce/.//commons-collections-3.2.1.jar:/usr/lib/hadoop-mapreduce/.//commons-compress-1.4.1.jar:/usr/lib/hadoop-mapreduce/.//commons-configuration-1.6.jar:/usr/lib/hadoop-mapreduce/.//commons-dige
ster-1.8.jar:/usr/lib/hadoop-mapreduce/.//commons-el-1.0.jar:/usr/lib/hadoop-mapreduce/.//commons-httpclient-3.1.jar:/usr/lib/hadoop-mapreduce/.//commons-io-2.4.jar:/usr/lib/hadoop-mapreduce/.//commons-lang-2.6.jar:/usr/lib/hadoop-mapreduce/.//commons-logging-1.1.3.jar:/usr/lib/hadoop-mapreduce/.//commons-math3-3.1.1.jar:/usr/lib/hadoop-mapreduce/.//commons-net-3.1.jar:/usr/lib/hadoop-mapreduce/.//curator-client-2.6.0.jar:/usr/lib/hadoop-mapreduce/.//curator-framework-2.6.0.jar:/usr/lib/hadoop-mapreduce/.//curator-recipes-2.6.0.jar:/usr/lib/hadoop-mapreduce/.//gson-2.2.4.jar:/usr/lib/hadoop-mapreduce/.//guava-11.0.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-archives-2.5.0-cdh5.3.2.jar:hadoop-archives-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-auth-2.5.0-cdh5.3.2.jar:hadoop-auth-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-aws-2.5.0-cdh5.3.2.jar:hadoop-aws-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-azure-2.5.0-cdh5.3.2.jar:hadoop-azure-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-datajoin-2.5.0-cdh5.3.2.jar:hadoop-datajoin-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-distcp-2.5.0-cdh5.3.2.jar:hadoop-distcp-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-extras-2.5.0-cdh5.3.2.jar:hadoop-extras-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-gridmix-2.5.0-cdh5.3.2.jar:hadoop-gridmix-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-app-2.5.0-cdh5.3.2.jar:hadoop-mapreduce-client-app-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-common-2.5.0-cdh5.3.2.jar:hadoop-mapreduce-client-common-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-core-2.5.0-cdh5.3.2.jar:hadoop-mapreduce-client-core-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-2.5.0-cdh5.3.2.jar:hadoop-mapreduce-client-hs-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins-2.5.0-cdh5.3.2.jar:hadoop-mapreduce-client-hs-plugins-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.5.0-cdh5.3.2-tests.jar:hadoop-mapreduce-client-jobclient-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-nativetask-2.5.0-cdh5.3.2.jar:hadoop-mapreduce-client-nativetask-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle-2.5.0-cdh5.3.2.jar:hadoop-mapreduce-client-shuffle-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-examples-2.5.0-cdh5.3.2.jar:hadoop-mapreduce-examples-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-rumen-2.5.0-cdh5.3.2.jar:hadoop-rumen-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-sls-2.5.0-cdh5.3.2.jar:hadoop-sls-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-streaming-2.5.0-cdh5.3.2.jar:hadoop-streaming-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hamcrest-core-1.3.jar:/usr/lib/hadoop-mapreduce/.//httpclient-4.2.5.jar:/usr/lib/hadoop-mapreduce/.//httpcore-4.2.5.jar:/usr/lib/hadoop-mapreduce/.//jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-mapreduce/.//jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop-mapreduce/.//jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-mapreduce/.//jackson-xc-1.8.8.jar:/usr/lib/hadoop-mapreduce/.//jasper-compiler-5.5.23.jar:/usr/lib/hadoop-mapreduce/.//jasper-runtime-5.5.23.jar:/usr/lib/hadoop-mapreduce/.//java-xmlbuilder-0.4.jar:/usr/lib/hadoop-mapreduce/.//jaxb-api-2.2.2.jar:/usr/lib/hadoop-mapreduce/.//jaxb-impl-2.2.3-1.jar:/usr/lib/had
oop-mapreduce/.//jersey-core-1.9.jar:/usr/lib/hadoop-mapreduce/.//jersey-json-1.9.jar:/usr/lib/hadoop-mapreduce/.//jersey-server-1.9.jar:/usr/lib/hadoop-mapreduce/.//jets3t-0.9.0.jar:/usr/lib/hadoop-mapreduce/.//jettison-1.1.jar:/usr/lib/hadoop-mapreduce/.//jetty-6.1.26.cloudera.4.jar:/usr/lib/hadoop-mapreduce/.//jetty-util-6.1.26.cloudera.4.jar:/usr/lib/hadoop-mapreduce/.//jsch-0.1.42.jar:/usr/lib/hadoop-mapreduce/.//jsp-api-2.1.jar:/usr/lib/hadoop-mapreduce/.//jsr305-1.3.9.jar:/usr/lib/hadoop-mapreduce/.//junit-4.11.jar:/usr/lib/hadoop-mapreduce/.//log4j-1.2.17.jar:/usr/lib/hadoop-mapreduce/.//metrics-core-3.0.1.jar:/usr/lib/hadoop-mapreduce/.//microsoft-windowsazure-storage-sdk-0.6.0.jar:/usr/lib/hadoop-mapreduce/.//paranamer-2.3.jar:/usr/lib/hadoop-mapreduce/.//protobuf-java-2.5.0.jar:/usr/lib/hadoop-mapreduce/.//servlet-api-2.5.jar:/usr/lib/hadoop-mapreduce/.//snappy-java-1.0.4.1.jar:/usr/lib/hadoop-mapreduce/.//stax-api-1.0-2.jar:/usr/lib/hadoop-mapreduce/.//xmlenc-0.52.jar:/usr/lib/hadoop-mapreduce/.//xz-1.0.jar:/usr/lib/hadoop-mapreduce/.//zookeeper-3.4.5-cdh5.3.2.jar::/usr/lib/hadoop/.//hadoop-annotations-2.5.0-cdh5.3.2.jar:hadoop-annotations-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop/.//hadoop-auth-2.5.0-cdh5.3.2.jar:hadoop-auth-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop/.//hadoop-common-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop/.//hadoop-common-2.5.0-cdh5.3.2-tests.jar:hadoop-common-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop/.//hadoop-nfs-2.5.0-cdh5.3.2.jar:hadoop-nfs-2.5.0-cdh5.3.2.jar::/usr/lib/hadoop/.//hadoop-annotations-2.5.0-cdh5.3.2.jar:hadoop-annotations-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop/.//hadoop-auth-2.5.0-cdh5.3.2.jar:hadoop-auth-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop/.//hadoop-common-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop/.//hadoop-common-2.5.0-cdh5.3.2-tests.jar:hadoop-common-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop/.//hadoop-nfs-2.5.0-cdh5.3.2.jar:hadoop-nfs-2.5.0-cdh5.3.2.jar:../parquet/parquet-avro.jar:../parquet/parquet-avro-javadoc.jar:../parquet/parquet-avro-sources.jar:../parquet/parquet-cascading.jar:../parquet/parquet-cascading-javadoc.jar:../parquet/parquet-cascading-sources.jar:../parquet/parquet-column.jar:../parquet/parquet-column-javadoc.jar:../parquet/parquet-column-sources.jar:../parquet/parquet-common.jar:../parquet/parquet-common-javadoc.jar:../parquet/parquet-common-sources.jar:../parquet/parquet-encoding.jar:../parquet/parquet-encoding-javadoc.jar:../parquet/parquet-encoding-sources.jar:../parquet/parquet-format.jar:../parquet/parquet-format-javadoc.jar:../parquet/parquet-format-sources.jar:../parquet/parquet-generator.jar:../parquet/parquet-generator-javadoc.jar:../parquet/parquet-generator-sources.jar:../parquet/parquet-hadoop-bundle.jar:../parquet/parquet-hadoop-bundle-sources.jar:../parquet/parquet-hadoop.jar:../parquet/parquet-hadoop-javadoc.jar:../parquet/parquet-hadoop-sources.jar:../parquet/parquet-pig-bundle.jar:../parquet/parquet-pig-bundle-sources.jar:../parquet/parquet-pig.jar:../parquet/parquet-pig-javadoc.jar:../parquet/parquet-pig-sources.jar:../parquet/parquet-scrooge.jar:../parquet/parquet-scrooge-javadoc.jar:../parquet/parquet-scrooge-sources.jar:../parquet/parquet-test-hadoop2.jar:../parquet/parquet-thrift.jar:../parquet/parquet-thrift-javadoc.jar:../parquet/parquet-thrift-sources.jar::/usr/lib/hadoop-yarn/lib/activation-1.1.jar:/usr/lib/hadoop-yarn/lib/aopalliance-1.0.jar:/usr/lib/hadoop-yarn/lib/asm-3.2.jar:/usr/lib/hadoop-yarn/lib/commons-cli-1.2.jar:/usr/lib/hadoop-yarn/lib/commons-codec-1.4.jar:/usr/lib/hadoop-yarn/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop-ya
rn/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop-yarn/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop-yarn/lib/commons-io-2.4.jar:/usr/lib/hadoop-yarn/lib/commons-lang-2.6.jar:/usr/lib/hadoop-yarn/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop-yarn/lib/fst-2.24.jar:/usr/lib/hadoop-yarn/lib/guava-11.0.2.jar:/usr/lib/hadoop-yarn/lib/guice-3.0.jar:/usr/lib/hadoop-yarn/lib/guice-servlet-3.0.jar:/usr/lib/hadoop-yarn/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-yarn/lib/jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop-yarn/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-yarn/lib/jackson-xc-1.8.8.jar:/usr/lib/hadoop-yarn/lib/javassist-3.18.1-GA.jar:/usr/lib/hadoop-yarn/lib/javax.inject-1.jar:/usr/lib/hadoop-yarn/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop-yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-yarn/lib/jersey-client-1.9.jar:/usr/lib/hadoop-yarn/lib/jersey-core-1.9.jar:/usr/lib/hadoop-yarn/lib/jersey-guice-1.9.jar:/usr/lib/hadoop-yarn/lib/jersey-json-1.9.jar:/usr/lib/hadoop-yarn/lib/jersey-server-1.9.jar:/usr/lib/hadoop-yarn/lib/jettison-1.1.jar:/usr/lib/hadoop-yarn/lib/jetty-6.1.26.cloudera.4.jar:/usr/lib/hadoop-yarn/lib/jetty-util-6.1.26.cloudera.4.jar:/usr/lib/hadoop-yarn/lib/jsr305-1.3.9.jar:/usr/lib/hadoop-yarn/lib/leveldbjni-all-1.8.jar:/usr/lib/hadoop-yarn/lib/log4j-1.2.17.jar:/usr/lib/hadoop-yarn/lib/objenesis-2.1.jar:/usr/lib/hadoop-yarn/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop-yarn/lib/servlet-api-2.5.jar:/usr/lib/hadoop-yarn/lib/stax-api-1.0-2.jar:/usr/lib/hadoop-yarn/lib/xz-1.0.jar:/usr/lib/hadoop-yarn/lib/zookeeper-3.4.5-cdh5.3.2.jar:' as a work-around. 16/05/03 23:19:38 WARN spark.SparkConf: Setting 'spark.driver.extraClassPath' to ':/etc/hadoop/conf:/usr/lib/hadoop/lib/*:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/*:/usr/lib/hadoop-hdfs/.//*:/usr/lib/hadoop-yarn/.//*:/usr/lib/hadoop-mapreduce/lib/*:/usr/lib/hadoop-mapreduce/contrib/streaming/*:/etc/hbase/conf:/usr/lib/hbase/hbase-client-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/hbase-client-sogou.jar:/usr/lib/hbase/hbase-common-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/hbase-common-0.98.6-cdh5.2.0-sogou-tests.jar:/usr/lib/hbase/hbase-common-sogou.jar:/usr/lib/hbase/hbase-common-sogou-tests.jar:/usr/lib/hbase/hbase-examples-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/hbase-examples-sogou.jar:/usr/lib/hbase/hbase-hadoop2-compat-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/hbase-hadoop2-compat-0.98.6-cdh5.2.0-sogou-tests.jar:/usr/lib/hbase/hbase-hadoop2-compat-sogou.jar:/usr/lib/hbase/hbase-hadoop2-compat-sogou-tests.jar:/usr/lib/hbase/hbase-hadoop-compat-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/hbase-hadoop-compat-0.98.6-cdh5.2.0-sogou-tests.jar:/usr/lib/hbase/hbase-hadoop-compat-sogou.jar:/usr/lib/hbase/hbase-hadoop-compat-sogou-tests.jar:/usr/lib/hbase/hbase-it-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/hbase-it-0.98.6-cdh5.2.0-sogou-tests.jar:/usr/lib/hbase/hbase-it-sogou.jar:/usr/lib/hbase/hbase-it-sogou-tests.jar:/usr/lib/hbase/hbase-prefix-tree-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/hbase-prefix-tree-sogou.jar:/usr/lib/hbase/hbase-protocol-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/hbase-protocol-sogou.jar:/usr/lib/hbase/hbase-server-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/hbase-server-0.98.6-cdh5.2.0-sogou-tests.jar:/usr/lib/hbase/hbase-server-sogou.jar:/usr/lib/hbase/hbase-server-sogou-tests.jar:/usr/lib/hbase/hbase-shell-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/hbase-shell-sogou.jar:/usr/lib/hbase/hbase-testing-util-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/hbase-testing-util-sogou.jar:/usr/lib/hbase/hbase-thrift-0.98.6-cdh5.2.0-sogo
u.jar:/usr/lib/hbase/hbase-thrift-sogou.jar:/usr/lib/hbase/lib/activation-1.1.jar:/usr/lib/hbase/lib/apacheds-i18n-2.0.0-M15.jar:/usr/lib/hbase/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/lib/hbase/lib/api-asn1-api-1.0.0-M20.jar:/usr/lib/hbase/lib/api-util-1.0.0-M20.jar:/usr/lib/hbase/lib/asm-3.2.jar:/usr/lib/hbase/lib/avro.jar:/usr/lib/hbase/lib/commons-beanutils-1.7.0.jar:/usr/lib/hbase/lib/commons-beanutils-core-1.8.0.jar:/usr/lib/hbase/lib/commons-cli-1.2.jar:/usr/lib/hbase/lib/commons-codec-1.7.jar:/usr/lib/hbase/lib/commons-collections-3.2.1.jar:/usr/lib/hbase/lib/commons-compress-1.4.1.jar:/usr/lib/hbase/lib/commons-configuration-1.6.jar:/usr/lib/hbase/lib/commons-daemon-1.0.3.jar:/usr/lib/hbase/lib/commons-digester-1.8.jar:/usr/lib/hbase/lib/commons-el-1.0.jar:/usr/lib/hbase/lib/commons-httpclient-3.1.jar:/usr/lib/hbase/lib/commons-io-2.4.jar:/usr/lib/hbase/lib/commons-lang-2.6.jar:/usr/lib/hbase/lib/commons-logging-1.1.1.jar:/usr/lib/hbase/lib/commons-math-2.1.jar:/usr/lib/hbase/lib/commons-math3-3.1.1.jar:/usr/lib/hbase/lib/commons-net-3.1.jar:/usr/lib/hbase/lib/commons-pool-1.5.4.jar:/usr/lib/hbase/lib/core-3.1.1.jar:/usr/lib/hbase/lib/curator-client-2.6.0.jar:/usr/lib/hbase/lib/curator-framework-2.6.0.jar:/usr/lib/hbase/lib/curator-recipes-2.6.0.jar:/usr/lib/hbase/lib/embedded-jmxtrans-1.0.6.jar:/usr/lib/hbase/lib/findbugs-annotations-1.3.9-1.jar:/usr/lib/hbase/lib/gson-2.2.4.jar:/usr/lib/hbase/lib/guava-12.0.1.jar:/usr/lib/hbase/lib/hamcrest-core-1.3.jar:/usr/lib/hbase/lib/hbase-client-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/lib/hbase-common-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/lib/hbase-common-0.98.6-cdh5.2.0-sogou-tests.jar:/usr/lib/hbase/lib/hbase-examples-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/lib/hbase-hadoop2-compat-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/lib/hbase-hadoop2-compat-0.98.6-cdh5.2.0-sogou-tests.jar:/usr/lib/hbase/lib/hbase-hadoop-compat-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/lib/hbase-hadoop-compat-0.98.6-cdh5.2.0-sogou-tests.jar:/usr/lib/hbase/lib/hbase-it-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/lib/hbase-it-0.98.6-cdh5.2.0-sogou-tests.jar:/usr/lib/hbase/lib/hbase-prefix-tree-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/lib/hbase-protocol-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/lib/hbase-server-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/lib/hbase-server-0.98.6-cdh5.2.0-sogou-tests.jar:/usr/lib/hbase/lib/hbase-shell-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/lib/hbase-testing-util-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/lib/hbase-thrift-0.98.6-cdh5.2.0-sogou.jar:/usr/lib/hbase/lib/high-scale-lib-1.1.1.jar:/usr/lib/hbase/lib/hsqldb-1.8.0.10.jar:/usr/lib/hbase/lib/htrace-core-2.04.jar:/usr/lib/hbase/lib/htrace-core.jar:/usr/lib/hbase/lib/httpclient-4.2.5.jar:/usr/lib/hbase/lib/httpcore-4.2.5.jar:/usr/lib/hbase/lib/jamon-runtime-2.3.1.jar:/usr/lib/hbase/lib/jasper-compiler-5.5.23.jar:/usr/lib/hbase/lib/jasper-runtime-5.5.23.jar:/usr/lib/hbase/lib/java-xmlbuilder-0.4.jar:/usr/lib/hbase/lib/jaxb-api-2.1.jar:/usr/lib/hbase/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hbase/lib/jersey-core-1.8.jar:/usr/lib/hbase/lib/jersey-json-1.8.jar:/usr/lib/hbase/lib/jersey-server-1.8.jar:/usr/lib/hbase/lib/jets3t-0.9.0.jar:/usr/lib/hbase/lib/jettison-1.3.1.jar:/usr/lib/hbase/lib/jetty-6.1.26.cloudera.2.jar:/usr/lib/hbase/lib/jetty-sslengine-6.1.26.cloudera.2.jar:/usr/lib/hbase/lib/jetty-util-6.1.26.cloudera.2.jar:/usr/lib/hbase/lib/jsch-0.1.42.jar:/usr/lib/hbase/lib/jsp-2.1-6.1.14.jar:/usr/lib/hbase/lib/jsp-api-2.1-6.1.14.jar:/usr/lib/hbase/lib/jsp-api-2.1.jar:/usr/lib/hbase/lib/
jsr250-api-1.0.jar:/usr/lib/hbase/lib/jsr305-1.3.9.jar:/usr/lib/hbase/lib/junit-4.11.jar:/usr/lib/hbase/lib/log4j-1.2.17.jar:/usr/lib/hbase/lib/metrics-core-2.2.0.jar:/usr/lib/hbase/lib/netty-3.6.6.Final.jar:/usr/lib/hbase/lib/paranamer-2.3.jar:/usr/lib/hbase/lib/protobuf-java-2.5.0.jar:/usr/lib/hbase/lib/servlet-api-2.5-6.1.14.jar:/usr/lib/hbase/lib/servlet-api-2.5.jar:/usr/lib/hbase/lib/slf4j-api-1.7.5.jar:/usr/lib/hbase/lib/slf4j-log4j12.jar:/usr/lib/hbase/lib/snappy-java-1.0.4.1.jar:/usr/lib/hbase/lib/xmlenc-0.52.jar:/usr/lib/hbase/lib/xz-1.0.jar:/usr/lib/hbase/lib/zookeeper.jar:::/usr/lib/hadoop-mapreduce/.//activation-1.1.jar:/usr/lib/hadoop-mapreduce/.//apacheds-i18n-2.0.0-M15.jar:/usr/lib/hadoop-mapreduce/.//apacheds-kerberos-codec-2.0.0-M15.jar:/usr/lib/hadoop-mapreduce/.//api-asn1-api-1.0.0-M20.jar:/usr/lib/hadoop-mapreduce/.//api-util-1.0.0-M20.jar:/usr/lib/hadoop-mapreduce/.//asm-3.2.jar:/usr/lib/avro/avro.jar:/usr/lib/hadoop-mapreduce/.//aws-java-sdk-1.7.4.jar:/usr/lib/hadoop-mapreduce/.//commons-beanutils-1.7.0.jar:/usr/lib/hadoop-mapreduce/.//commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop-mapreduce/.//commons-cli-1.2.jar:/usr/lib/hadoop-mapreduce/.//commons-codec-1.4.jar:/usr/lib/hadoop-mapreduce/.//commons-collections-3.2.1.jar:/usr/lib/hadoop-mapreduce/.//commons-compress-1.4.1.jar:/usr/lib/hadoop-mapreduce/.//commons-configuration-1.6.jar:/usr/lib/hadoop-mapreduce/.//commons-digester-1.8.jar:/usr/lib/hadoop-mapreduce/.//commons-el-1.0.jar:/usr/lib/hadoop-mapreduce/.//commons-httpclient-3.1.jar:/usr/lib/hadoop-mapreduce/.//commons-io-2.4.jar:/usr/lib/hadoop-mapreduce/.//commons-lang-2.6.jar:/usr/lib/hadoop-mapreduce/.//commons-logging-1.1.3.jar:/usr/lib/hadoop-mapreduce/.//commons-math3-3.1.1.jar:/usr/lib/hadoop-mapreduce/.//commons-net-3.1.jar:/usr/lib/hadoop-mapreduce/.//curator-client-2.6.0.jar:/usr/lib/hadoop-mapreduce/.//curator-framework-2.6.0.jar:/usr/lib/hadoop-mapreduce/.//curator-recipes-2.6.0.jar:/usr/lib/hadoop-mapreduce/.//gson-2.2.4.jar:/usr/lib/hadoop-mapreduce/.//guava-11.0.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-archives-2.5.0-cdh5.3.2.jar:hadoop-archives-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-auth-2.5.0-cdh5.3.2.jar:hadoop-auth-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-aws-2.5.0-cdh5.3.2.jar:hadoop-aws-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-azure-2.5.0-cdh5.3.2.jar:hadoop-azure-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-datajoin-2.5.0-cdh5.3.2.jar:hadoop-datajoin-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-distcp-2.5.0-cdh5.3.2.jar:hadoop-distcp-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-extras-2.5.0-cdh5.3.2.jar:hadoop-extras-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-gridmix-2.5.0-cdh5.3.2.jar:hadoop-gridmix-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-app-2.5.0-cdh5.3.2.jar:hadoop-mapreduce-client-app-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-common-2.5.0-cdh5.3.2.jar:hadoop-mapreduce-client-common-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-core-2.5.0-cdh5.3.2.jar:hadoop-mapreduce-client-core-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-2.5.0-cdh5.3.2.jar:hadoop-mapreduce-client-hs-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins-2.5.0-cdh5.3.2.jar:hadoop-mapreduce-client-hs-plugins-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//h
adoop-mapreduce-client-jobclient-2.5.0-cdh5.3.2-tests.jar:hadoop-mapreduce-client-jobclient-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-nativetask-2.5.0-cdh5.3.2.jar:hadoop-mapreduce-client-nativetask-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle-2.5.0-cdh5.3.2.jar:hadoop-mapreduce-client-shuffle-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-mapreduce-examples-2.5.0-cdh5.3.2.jar:hadoop-mapreduce-examples-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-rumen-2.5.0-cdh5.3.2.jar:hadoop-rumen-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-sls-2.5.0-cdh5.3.2.jar:hadoop-sls-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hadoop-streaming-2.5.0-cdh5.3.2.jar:hadoop-streaming-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop-mapreduce/.//hamcrest-core-1.3.jar:/usr/lib/hadoop-mapreduce/.//httpclient-4.2.5.jar:/usr/lib/hadoop-mapreduce/.//httpcore-4.2.5.jar:/usr/lib/hadoop-mapreduce/.//jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-mapreduce/.//jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop-mapreduce/.//jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-mapreduce/.//jackson-xc-1.8.8.jar:/usr/lib/hadoop-mapreduce/.//jasper-compiler-5.5.23.jar:/usr/lib/hadoop-mapreduce/.//jasper-runtime-5.5.23.jar:/usr/lib/hadoop-mapreduce/.//java-xmlbuilder-0.4.jar:/usr/lib/hadoop-mapreduce/.//jaxb-api-2.2.2.jar:/usr/lib/hadoop-mapreduce/.//jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-mapreduce/.//jersey-core-1.9.jar:/usr/lib/hadoop-mapreduce/.//jersey-json-1.9.jar:/usr/lib/hadoop-mapreduce/.//jersey-server-1.9.jar:/usr/lib/hadoop-mapreduce/.//jets3t-0.9.0.jar:/usr/lib/hadoop-mapreduce/.//jettison-1.1.jar:/usr/lib/hadoop-mapreduce/.//jetty-6.1.26.cloudera.4.jar:/usr/lib/hadoop-mapreduce/.//jetty-util-6.1.26.cloudera.4.jar:/usr/lib/hadoop-mapreduce/.//jsch-0.1.42.jar:/usr/lib/hadoop-mapreduce/.//jsp-api-2.1.jar:/usr/lib/hadoop-mapreduce/.//jsr305-1.3.9.jar:/usr/lib/hadoop-mapreduce/.//junit-4.11.jar:/usr/lib/hadoop-mapreduce/.//log4j-1.2.17.jar:/usr/lib/hadoop-mapreduce/.//metrics-core-3.0.1.jar:/usr/lib/hadoop-mapreduce/.//microsoft-windowsazure-storage-sdk-0.6.0.jar:/usr/lib/hadoop-mapreduce/.//paranamer-2.3.jar:/usr/lib/hadoop-mapreduce/.//protobuf-java-2.5.0.jar:/usr/lib/hadoop-mapreduce/.//servlet-api-2.5.jar:/usr/lib/hadoop-mapreduce/.//snappy-java-1.0.4.1.jar:/usr/lib/hadoop-mapreduce/.//stax-api-1.0-2.jar:/usr/lib/hadoop-mapreduce/.//xmlenc-0.52.jar:/usr/lib/hadoop-mapreduce/.//xz-1.0.jar:/usr/lib/hadoop-mapreduce/.//zookeeper-3.4.5-cdh5.3.2.jar::/usr/lib/hadoop/.//hadoop-annotations-2.5.0-cdh5.3.2.jar:hadoop-annotations-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop/.//hadoop-auth-2.5.0-cdh5.3.2.jar:hadoop-auth-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop/.//hadoop-common-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop/.//hadoop-common-2.5.0-cdh5.3.2-tests.jar:hadoop-common-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop/.//hadoop-nfs-2.5.0-cdh5.3.2.jar:hadoop-nfs-2.5.0-cdh5.3.2.jar::/usr/lib/hadoop/.//hadoop-annotations-2.5.0-cdh5.3.2.jar:hadoop-annotations-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop/.//hadoop-auth-2.5.0-cdh5.3.2.jar:hadoop-auth-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop/.//hadoop-common-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop/.//hadoop-common-2.5.0-cdh5.3.2-tests.jar:hadoop-common-2.5.0-cdh5.3.2.jar:/usr/lib/hadoop/.//hadoop-nfs-2.5.0-cdh5.3.2.jar:hadoop-nfs-2.5.0-cdh5.3.2.jar:../parquet/parquet-avro.jar:../parquet/parquet-avro-javadoc.jar:../parquet/parquet-avro-sources.jar:../parquet/parquet-cascading.jar:../parquet/parquet-cascading-javadoc.jar:../parquet/parquet-cascading-sources.jar:../parquet/parquet-column.j
ar:../parquet/parquet-column-javadoc.jar:../parquet/parquet-column-sources.jar:../parquet/parquet-common.jar:../parquet/parquet-common-javadoc.jar:../parquet/parquet-common-sources.jar:../parquet/parquet-encoding.jar:../parquet/parquet-encoding-javadoc.jar:../parquet/parquet-encoding-sources.jar:../parquet/parquet-format.jar:../parquet/parquet-format-javadoc.jar:../parquet/parquet-format-sources.jar:../parquet/parquet-generator.jar:../parquet/parquet-generator-javadoc.jar:../parquet/parquet-generator-sources.jar:../parquet/parquet-hadoop-bundle.jar:../parquet/parquet-hadoop-bundle-sources.jar:../parquet/parquet-hadoop.jar:../parquet/parquet-hadoop-javadoc.jar:../parquet/parquet-hadoop-sources.jar:../parquet/parquet-pig-bundle.jar:../parquet/parquet-pig-bundle-sources.jar:../parquet/parquet-pig.jar:../parquet/parquet-pig-javadoc.jar:../parquet/parquet-pig-sources.jar:../parquet/parquet-scrooge.jar:../parquet/parquet-scrooge-javadoc.jar:../parquet/parquet-scrooge-sources.jar:../parquet/parquet-test-hadoop2.jar:../parquet/parquet-thrift.jar:../parquet/parquet-thrift-javadoc.jar:../parquet/parquet-thrift-sources.jar::/usr/lib/hadoop-yarn/lib/activation-1.1.jar:/usr/lib/hadoop-yarn/lib/aopalliance-1.0.jar:/usr/lib/hadoop-yarn/lib/asm-3.2.jar:/usr/lib/hadoop-yarn/lib/commons-cli-1.2.jar:/usr/lib/hadoop-yarn/lib/commons-codec-1.4.jar:/usr/lib/hadoop-yarn/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop-yarn/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop-yarn/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop-yarn/lib/commons-io-2.4.jar:/usr/lib/hadoop-yarn/lib/commons-lang-2.6.jar:/usr/lib/hadoop-yarn/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop-yarn/lib/fst-2.24.jar:/usr/lib/hadoop-yarn/lib/guava-11.0.2.jar:/usr/lib/hadoop-yarn/lib/guice-3.0.jar:/usr/lib/hadoop-yarn/lib/guice-servlet-3.0.jar:/usr/lib/hadoop-yarn/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-yarn/lib/jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop-yarn/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-yarn/lib/jackson-xc-1.8.8.jar:/usr/lib/hadoop-yarn/lib/javassist-3.18.1-GA.jar:/usr/lib/hadoop-yarn/lib/javax.inject-1.jar:/usr/lib/hadoop-yarn/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop-yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop-yarn/lib/jersey-client-1.9.jar:/usr/lib/hadoop-yarn/lib/jersey-core-1.9.jar:/usr/lib/hadoop-yarn/lib/jersey-guice-1.9.jar:/usr/lib/hadoop-yarn/lib/jersey-json-1.9.jar:/usr/lib/hadoop-yarn/lib/jersey-server-1.9.jar:/usr/lib/hadoop-yarn/lib/jettison-1.1.jar:/usr/lib/hadoop-yarn/lib/jetty-6.1.26.cloudera.4.jar:/usr/lib/hadoop-yarn/lib/jetty-util-6.1.26.cloudera.4.jar:/usr/lib/hadoop-yarn/lib/jsr305-1.3.9.jar:/usr/lib/hadoop-yarn/lib/leveldbjni-all-1.8.jar:/usr/lib/hadoop-yarn/lib/log4j-1.2.17.jar:/usr/lib/hadoop-yarn/lib/objenesis-2.1.jar:/usr/lib/hadoop-yarn/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop-yarn/lib/servlet-api-2.5.jar:/usr/lib/hadoop-yarn/lib/stax-api-1.0-2.jar:/usr/lib/hadoop-yarn/lib/xz-1.0.jar:/usr/lib/hadoop-yarn/lib/zookeeper-3.4.5-cdh5.3.2.jar:' as a work-around. 16/05/03 23:19:38 INFO spark.SecurityManager: Changing view acls to: root,dt_dataminers 16/05/03 23:19:38 INFO spark.SecurityManager: Changing modify acls to: root,dt_dataminers 16/05/03 23:19:38 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root, dt_dataminers); users with modify permissions: Set(root, dt_dataminers) 16/05/03 23:19:40 INFO util.Utils: Successfully started service 'sparkDriver' on port 44957. 
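
The SparkConf warnings above are configuration nits rather than errors: Spark 1.6 ignores the old spark.storage.memoryFraction setting unless legacy mode is enabled, and SPARK_CLASSPATH has been superseded by --driver-class-path / spark.executor.extraClassPath, which Spark copies over automatically here as a work-around. A minimal Scala sketch of how the same intent could be expressed explicitly (the short classpath string is a placeholder, not the full list printed above):

    import org.apache.spark.SparkConf

    // Sketch only: express the deprecated settings with their Spark 1.6 equivalents.
    // The classpath value below is a placeholder for the long list logged above.
    // The driver-side classpath must still be supplied at launch time (e.g.
    // --driver-class-path on spark-submit, or spark-defaults.conf), because the
    // driver JVM is already running by the time this SparkConf is built.
    val conf = new SparkConf()
      .set("spark.executor.extraClassPath", "/etc/hadoop/conf:/usr/lib/hadoop/lib/*:/etc/hbase/conf")
      // Unified memory management is the 1.6 default; only enable legacy mode if the
      // old spark.storage.memoryFraction behaviour is genuinely required.
      .set("spark.memory.useLegacyMode", "false")
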
16/05/03 23:19:42 INFO slf4j.Slf4jLogger: Slf4jLogger started
16/05/03 23:19:42 INFO Remoting: Starting remoting
16/05/03 23:19:43 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@10.134.71.181:59047]
16/05/03 23:19:43 INFO util.Utils: Successfully started service 'sparkDriverActorSystem' on port 59047.
16/05/03 23:19:43 INFO spark.SparkEnv: Registering MapOutputTracker
16/05/03 23:19:44 INFO spark.SparkEnv: Registering BlockManagerMaster
16/05/03 23:19:44 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-59b08e36-b5a9-4ecc-b640-e1d5f7fa37b6
16/05/03 23:19:44 INFO storage.MemoryStore: MemoryStore started with capacity 4.1 GB
16/05/03 23:19:45 INFO spark.SparkEnv: Registering OutputCommitCoordinator
16/05/03 23:19:46 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/05/03 23:19:46 WARN component.AbstractLifeCycle: FAILED SelectChannelConnector@0.0.0.0:4040: java.net.BindException: Address already in use
java.net.BindException: Address already in use
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:444)
        at sun.nio.ch.Net.bind(Net.java:436)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.spark-project.jetty.server.nio.SelectChannelConnector.open(SelectChannelConnector.java:187)
        at org.spark-project.jetty.server.AbstractConnector.doStart(AbstractConnector.java:316)
        at org.spark-project.jetty.server.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:265)
        at org.spark-project.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
        at org.spark-project.jetty.server.Server.doStart(Server.java:293)
        at org.spark-project.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
        at org.apache.spark.ui.JettyUtils$.org$apache$spark$ui$JettyUtils$$connect$1(JettyUtils.scala:252)
        at org.apache.spark.ui.JettyUtils$$anonfun$5.apply(JettyUtils.scala:262)
        at org.apache.spark.ui.JettyUtils$$anonfun$5.apply(JettyUtils.scala:262)
        at org.apache.spark.util.Utils$$anonfun$startServiceOnPort$1.apply$mcVI$sp(Utils.scala:2025)
        at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
        at org.apache.spark.util.Utils$.startServiceOnPort(Utils.scala:2016)
        at org.apache.spark.ui.JettyUtils$.startJettyServer(JettyUtils.scala:262)
        at org.apache.spark.ui.WebUI.bind(WebUI.scala:136)
        at org.apache.spark.SparkContext$$anonfun$13.apply(SparkContext.scala:481)
        at org.apache.spark.SparkContext$$anonfun$13.apply(SparkContext.scala:481)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:481)
        at com.github.cloudml.zen.examples.ml.LDADriver$.main(LDADriver.scala:102)
        at com.github.cloudml.zen.examples.ml.LDADriver.main(LDADriver.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
16/05/03 23:19:46 WARN component.AbstractLifeCycle: FAILED org.spark-project.jetty.server.Server@70480c1e: java.net.BindException: Address already in use
java.net.BindException: Address already in use
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:444)
        at sun.nio.ch.Net.bind(Net.java:436)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.spark-project.jetty.server.nio.SelectChannelConnector.open(SelectChannelConnector.java:187)
        at org.spark-project.jetty.server.AbstractConnector.doStart(AbstractConnector.java:316)
        at org.spark-project.jetty.server.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:265)
        at org.spark-project.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
        at org.spark-project.jetty.server.Server.doStart(Server.java:293)
        at org.spark-project.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
        at org.apache.spark.ui.JettyUtils$.org$apache$spark$ui$JettyUtils$$connect$1(JettyUtils.scala:252)
        at org.apache.spark.ui.JettyUtils$$anonfun$5.apply(JettyUtils.scala:262)
        at org.apache.spark.ui.JettyUtils$$anonfun$5.apply(JettyUtils.scala:262)
        at org.apache.spark.util.Utils$$anonfun$startServiceOnPort$1.apply$mcVI$sp(Utils.scala:2025)
        at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
        at org.apache.spark.util.Utils$.startServiceOnPort(Utils.scala:2016)
        at org.apache.spark.ui.JettyUtils$.startJettyServer(JettyUtils.scala:262)
        at org.apache.spark.ui.WebUI.bind(WebUI.scala:136)
        at org.apache.spark.SparkContext$$anonfun$13.apply(SparkContext.scala:481)
        at org.apache.spark.SparkContext$$anonfun$13.apply(SparkContext.scala:481)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:481)
        at com.github.cloudml.zen.examples.ml.LDADriver$.main(LDADriver.scala:102)
        at com.github.cloudml.zen.examples.ml.LDADriver.main(LDADriver.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
16/05/03 23:19:46 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null}
16/05/03 23:19:46 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/api,null}
16/05/03 23:19:46 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/,null}
16/05/03 23:19:46 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/static,null}
16/05/03 23:19:46 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}
16/05/03 23:19:46 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump,null}
16/05/03 23:19:46 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/json,null}
16/05/03 23:19:46 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors,null}
16/05/03 23:19:46 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment/json,null}
16/05/03 23:19:46 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment,null}
16/05/03 23:19:46 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd/json,null}
16/05/03 23:19:46 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd,null}
16/05/03 23:19:46 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/json,null}
16/05/03 23:19:46 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage,null}
16/05/03 23:19:46 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool/json,null}
16/05/03 23:19:46 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool,null}
16/05/03 23:19:46 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/json,null}
16/05/03 23:19:46 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage,null}
16/05/03 23:19:46 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/json,null}
16/05/03 23:19:46 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages,null}
16/05/03 23:19:46 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null}
16/05/03 23:19:46 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job,null}
16/05/03 23:19:46 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/json,null}
16/05/03 23:19:46 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs,null}
16/05/03 23:19:46 WARN util.Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
16/05/03 23:19:46 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/05/03 23:19:46 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4041
16/05/03 23:19:46 INFO util.Utils: Successfully started service 'SparkUI' on port 4041.
16/05/03 23:19:46 INFO ui.SparkUI: Started SparkUI at http://10.134.71.181:4041
16/05/03 23:19:46 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-d9ab49dc-4f40-4168-a99f-db78a68fe5a4/httpd-868a7b7a-e0f8-439e-922a-55755cc5a9e6
16/05/03 23:19:46 INFO spark.HttpServer: Starting HTTP Server
16/05/03 23:19:47 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/05/03 23:19:47 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:36833
16/05/03 23:19:47 INFO util.Utils: Successfully started service 'HTTP file server' on port 36833.
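
The BindException and the two FAILED warnings above are benign here: another driver on the same gateway host already holds port 4040, so Spark retried and bound the UI to 4041, as the lines that follow show. If a predictable port is preferred, it can be pinned with spark.ui.port; a hedged sketch (4050 is an arbitrary, assumed-free port):

    import org.apache.spark.SparkConf

    // Sketch: pin the web UI to a known port instead of relying on the 4040, 4041, ...
    // retry loop. 4050 is an arbitrary example; spark.port.maxRetries (default 16)
    // still bounds how many successive ports Spark will try if this one is taken too.
    val conf = new SparkConf().set("spark.ui.port", "4050")
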
16/05/03 23:19:47 INFO spark.SparkContext: Added JAR file:/search/odin/yulei/spark_models/zen/ml/target/zen-ml_2.10-0.3-SNAPSHOT.jar at http://10.134.71.181:36833/jars/zen-ml_2.10-0.3-SNAPSHOT.jar with timestamp 1462288787257
16/05/03 23:19:47 INFO spark.SparkContext: Added JAR file:/search/odin/yulei/spark_models/zen/examples/target/zen-examples_2.10-0.3-SNAPSHOT.jar at http://10.134.71.181:36833/jars/zen-examples_2.10-0.3-SNAPSHOT.jar with timestamp 1462288787260
16/05/03 23:19:47 INFO scheduler.FairSchedulableBuilder: Created default pool default, schedulingMode: FIFO, minShare: 0, weight: 1
16/05/03 23:19:50 INFO impl.TimelineClientImpl: Timeline service address: http://master06.sunshine.nm.ted:48188/ws/v1/timeline/
16/05/03 23:19:50 INFO client.AHSProxy: Connecting to Application History server at master06.sunshine.nm.ted/10.141.49.81:40200
16/05/03 23:19:50 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
16/05/03 23:19:50 INFO yarn.Client: Requesting a new application from cluster with 698 NodeManagers
16/05/03 23:19:51 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (25600 MB per container)
16/05/03 23:19:51 INFO yarn.Client: Will allocate AM container, with 2432 MB memory including 384 MB overhead
16/05/03 23:19:51 INFO yarn.Client: Setting up container launch context for our AM
16/05/03 23:19:51 INFO yarn.Client: Setting up the launch environment for our AM container
16/05/03 23:19:51 INFO yarn.Client: Preparing resources for our AM container
16/05/03 23:19:51 INFO yarn.Client: Source and destination file systems are the same. Not copying viewfs://nsX/user/spark/lib/spark-assembly-1.6.2-SNAPSHOT-hadoop2.5.0-cdh5.3.2.jar
16/05/03 23:20:08 INFO yarn.Client: Uploading resource file:/tmp/spark-d9ab49dc-4f40-4168-a99f-db78a68fe5a4/__spark_conf__4349972245024789305.zip -> viewfs://nsX/user/dt_dataminers/.sparkStaging/application_1459849169969_971497/__spark_conf__4349972245024789305.zip
16/05/03 23:20:10 INFO spark.SecurityManager: Changing view acls to: root,dt_dataminers
16/05/03 23:20:10 INFO spark.SecurityManager: Changing modify acls to: root,dt_dataminers
16/05/03 23:20:10 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root, dt_dataminers); users with modify permissions: Set(root, dt_dataminers)
16/05/03 23:20:10 INFO yarn.Client: Submitting application 971497 to ResourceManager
16/05/03 23:20:10 INFO impl.YarnClientImpl: Submitted application application_1459849169969_971497
16/05/03 23:20:11 INFO yarn.Client: Application report for application_1459849169969_971497 (state: ACCEPTED)
16/05/03 23:20:11 INFO yarn.Client:
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: root.dt_dataminers
     start time: 1462288810715
     final status: UNDEFINED
     tracking URL: http://master06.sunshine.nm.ted:23188/proxy/application_1459849169969_971497/
     user: dt_dataminers
16/05/03 23:20:12 INFO yarn.Client: Application report for application_1459849169969_971497 (state: ACCEPTED)
16/05/03 23:20:13 INFO yarn.Client: Application report for application_1459849169969_971497 (state: ACCEPTED)
16/05/03 23:20:14 INFO yarn.Client: Application report for application_1459849169969_971497 (state: ACCEPTED)
16/05/03 23:20:15 INFO yarn.Client: Application report for application_1459849169969_971497 (state: ACCEPTED)
16/05/03 23:20:16 INFO yarn.Client: Application report for application_1459849169969_971497 (state: ACCEPTED)
16/05/03 23:20:17 INFO yarn.Client: Application report for application_1459849169969_971497 (state: ACCEPTED)
16/05/03 23:20:18 INFO yarn.Client: Application report for application_1459849169969_971497 (state: ACCEPTED)
16/05/03 23:20:19 INFO yarn.Client: Application report for application_1459849169969_971497 (state: ACCEPTED)
16/05/03 23:20:20 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(null)
16/05/03 23:20:20 INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> master04.sunshine.nm.ted,master06.sunshine.nm.ted, PROXY_URI_BASES -> http://master04.sunshine.nm.ted:23188/proxy/application_1459849169969_971497,http://master06.sunshine.nm.ted:23188/proxy/application_1459849169969_971497), /proxy/application_1459849169969_971497
16/05/03 23:20:20 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
16/05/03 23:20:20 INFO yarn.Client: Application report for application_1459849169969_971497 (state: RUNNING)
16/05/03 23:20:20 INFO yarn.Client:
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: 10.141.44.34
     ApplicationMaster RPC port: 0
     queue: root.dt_dataminers
     start time: 1462288810715
     final status: UNDEFINED
     tracking URL: http://master06.sunshine.nm.ted:23188/proxy/application_1459849169969_971497/
     user: dt_dataminers
16/05/03 23:20:20 INFO cluster.YarnClientSchedulerBackend: Application application_1459849169969_971497 has started running.
16/05/03 23:20:20 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 46355.
16/05/03 23:20:20 INFO netty.NettyBlockTransferService: Server created on 46355
16/05/03 23:20:20 INFO storage.BlockManagerMaster: Trying to register BlockManager
16/05/03 23:20:20 INFO storage.BlockManagerMasterEndpoint: Registering block manager 10.134.71.181:46355 with 4.1 GB RAM, BlockManagerId(driver, 10.134.71.181, 46355)
16/05/03 23:20:20 INFO storage.BlockManagerMaster: Registered BlockManager
16/05/03 23:20:22 INFO scheduler.EventLoggingListener: Logging events to viewfs://nsX/tmp/spark-events/application_1459849169969_971497.snappy
16/05/03 23:20:22 INFO cluster.YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
16/05/03 23:20:22 WARN spark.SparkContext: Checkpoint directory must be non-local if Spark is running on a cluster: /user/dt_dataminers/yulei/distml/ldatest/output2.checkpoint
start LDA training
appId: application_1459849169969_971497
numTopics = 1000, totalIteration = 50
alpha = 0.1, beta = 0.01, alphaAS = 0.01
inputDataPath = /user/dt_dataminers/yulei/distml/ldatest/input4
outputPath = /user/dt_dataminers/yulei/distml/ldatest/output2
using ZenLDA sampling algorithm.
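[Editor's note] Two things in the block above are worth flagging: the checkpoint-directory WARN (the path has no filesystem scheme, so Spark treats it as a potentially local path and warns), and the input path the driver is about to read (later in this log FileInputFormat reports "Total input paths to process : 0"). A minimal sketch, assuming the viewfs://nsX namespace seen elsewhere in this log is the intended cluster filesystem; the guard code is illustrative and not part of LDADriver:

    import org.apache.hadoop.fs.{FileSystem, Path}

    // Fully qualify the checkpoint dir so the non-local warning goes away and there
    // is no ambiguity about which filesystem it lands on. The scheme/authority here
    // is an assumption based on the viewfs://nsX URIs in this log.
    sc.setCheckpointDir("viewfs://nsX/user/dt_dataminers/yulei/distml/ldatest/output2.checkpoint")

    // Fail fast if the corpus directory is missing or empty instead of launching
    // an empty job against the cluster.
    val inputDataPath = "/user/dt_dataminers/yulei/distml/ldatest/input4"
    val fs = FileSystem.get(sc.hadoopConfiguration)
    require(fs.exists(new Path(inputDataPath)) && fs.listStatus(new Path(inputDataPath)).nonEmpty,
      s"no input files found under $inputDataPath")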
16/05/03 23:20:26 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (rsync.cloud193.wd.nm.ted:47531) with ID 169 16/05/03 23:20:26 INFO storage.BlockManagerMasterEndpoint: Registering block manager rsync.cloud193.wd.nm.ted:9091 with 4.3 GB RAM, BlockManagerId(169, rsync.cloud193.wd.nm.ted, 9091) 16/05/03 23:20:26 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (rsync.cloud177.wd.nm.ted:14915) with ID 75 16/05/03 23:20:26 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud186.wd.nm.ted:20068) with ID 159 16/05/03 23:20:26 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (rsync.cloud195.wd.nm.ted:42918) with ID 134 16/05/03 23:20:26 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud148.wd.nm.ted:34939) with ID 100 16/05/03 23:20:26 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud155.wd.nm.ted:29786) with ID 98 16/05/03 23:20:26 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (rsync.cloud214.wd.nm.ted:11174) with ID 67 16/05/03 23:20:26 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud186.wd.nm.ted:28961 with 4.3 GB RAM, BlockManagerId(159, cloud186.wd.nm.ted, 28961) 16/05/03 23:20:26 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud218.wd.s2.nm.ted:39862) with ID 58 16/05/03 23:20:26 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (rsync.cloud196.wd.nm.ted:49323) with ID 49 16/05/03 23:20:26 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud155.wd.nm.ted:31457 with 4.3 GB RAM, BlockManagerId(98, cloud155.wd.nm.ted, 31457) 16/05/03 23:20:26 INFO storage.BlockManagerMasterEndpoint: Registering block manager rsync.cloud195.wd.nm.ted:36509 with 4.3 GB RAM, BlockManagerId(134, rsync.cloud195.wd.nm.ted, 36509) 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager rsync.cloud177.wd.nm.ted:34714 with 4.3 GB RAM, BlockManagerId(75, rsync.cloud177.wd.nm.ted, 34714) 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud148.wd.nm.ted:37011 with 4.3 GB RAM, BlockManagerId(100, cloud148.wd.nm.ted, 37011) 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager rsync.cloud214.wd.nm.ted:35476 with 4.3 GB RAM, BlockManagerId(67, rsync.cloud214.wd.nm.ted, 35476) 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud141.wd.nm.ted:9598) with ID 181 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (rsync.cloud229.wd.s2.nm.ted:29943) with ID 80 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud204.wd.nm.ted:47790) with ID 8 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud218.wd.s2.nm.ted:29175 with 4.3 GB RAM, BlockManagerId(58, cloud218.wd.s2.nm.ted, 29175) 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud10141066104.wd.nm.nop.ted:30750) with ID 50 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager rsync.cloud229.wd.s2.nm.ted:23560 with 4.3 GB RAM, BlockManagerId(80, rsync.cloud229.wd.s2.nm.ted, 23560) 
16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager rsync.cloud196.wd.nm.ted:29978 with 4.3 GB RAM, BlockManagerId(49, rsync.cloud196.wd.nm.ted, 29978) 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (rsync.cloud10141073108.wd.nm.nop.ted:40207) with ID 28 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (rsync.cloud187.wd.nm.ted:47568) with ID 146 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud141.wd.nm.ted:37814 with 4.3 GB RAM, BlockManagerId(181, cloud141.wd.nm.ted, 37814) 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud204.wd.nm.ted:36972 with 4.3 GB RAM, BlockManagerId(8, cloud204.wd.nm.ted, 36972) 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager rsync.cloud10141073108.wd.nm.nop.ted:46058 with 4.3 GB RAM, BlockManagerId(28, rsync.cloud10141073108.wd.nm.nop.ted, 46058) 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101419282.wd.nm.ss.nop.ted:22487) with ID 25 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud169.wd.nm.ted:45789) with ID 198 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager rsync.cloud187.wd.nm.ted:21252 with 4.3 GB RAM, BlockManagerId(146, rsync.cloud187.wd.nm.ted, 21252) 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud10141066104.wd.nm.nop.ted:36293 with 4.3 GB RAM, BlockManagerId(50, cloud10141066104.wd.nm.nop.ted, 36293) 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101411244.wd.nm.ss.nop.ted:32625) with ID 12 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (rsync.cloud201.wd.nm.ted:11096) with ID 149 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud169.wd.nm.ted:45607 with 4.3 GB RAM, BlockManagerId(198, cloud169.wd.nm.ted, 45607) 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101419282.wd.nm.ss.nop.ted:28636 with 4.3 GB RAM, BlockManagerId(25, cloud101419282.wd.nm.ss.nop.ted, 28636) 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager rsync.cloud201.wd.nm.ted:8860 with 4.3 GB RAM, BlockManagerId(149, rsync.cloud201.wd.nm.ted, 8860) 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101411244.wd.nm.ss.nop.ted:15205 with 4.3 GB RAM, BlockManagerId(12, cloud101411244.wd.nm.ss.nop.ted, 15205) 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud10141065116.wd.nm.nop.ted:43248) with ID 92 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101411364.wd.nm.ss.nop.ted:30189) with ID 29 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (rsync.cloud10141066108.wd.nm.nop.ted:47996) with ID 157 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101417560.wd.nm.ss.nop.ted:16599) with ID 33 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) 
(cloud101414094.wd.nm.ss.nop.ted:26129) with ID 65 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101416090.wd.nm.ss.nop.ted:35553) with ID 93 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101411280.wd.nm.ss.nop.ted:43430) with ID 166 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud10141065116.wd.nm.nop.ted:27666 with 4.3 GB RAM, BlockManagerId(92, cloud10141065116.wd.nm.nop.ted, 27666) 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager rsync.cloud10141066108.wd.nm.nop.ted:37121 with 4.3 GB RAM, BlockManagerId(157, rsync.cloud10141066108.wd.nm.nop.ted, 37121) 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101411364.wd.nm.ss.nop.ted:48299 with 4.3 GB RAM, BlockManagerId(29, cloud101411364.wd.nm.ss.nop.ted, 48299) 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (rsync.cloud10141051020.wd.nm.nop.ted:46186) with ID 85 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101417560.wd.nm.ss.nop.ted:29945 with 4.3 GB RAM, BlockManagerId(33, cloud101417560.wd.nm.ss.nop.ted, 29945) 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101411250.wd.nm.ss.nop.ted:32713) with ID 11 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud1014117124.wd.nm.ss.nop.ted:49323) with ID 177 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (rsync.cloud029.wd.nm.nop.ted:44178) with ID 71 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud1014143124.wd.nm.ss.nop.ted:10631) with ID 141 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101414094.wd.nm.ss.nop.ted:11057 with 4.3 GB RAM, BlockManagerId(65, cloud101414094.wd.nm.ss.nop.ted, 11057) 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101411280.wd.nm.ss.nop.ted:37372 with 4.3 GB RAM, BlockManagerId(166, cloud101411280.wd.nm.ss.nop.ted, 37372) 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101416090.wd.nm.ss.nop.ted:49918 with 4.3 GB RAM, BlockManagerId(93, cloud101416090.wd.nm.ss.nop.ted, 49918) 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (rsync.cloud10141051026.wd.nm.nop.ted:34707) with ID 19 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud1014118122.wd.nm.ss.nop.ted:27353) with ID 165 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101411250.wd.nm.ss.nop.ted:31935 with 4.3 GB RAM, BlockManagerId(11, cloud101411250.wd.nm.ss.nop.ted, 31935) 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud1014117124.wd.nm.ss.nop.ted:47896 with 4.3 GB RAM, BlockManagerId(177, cloud1014117124.wd.nm.ss.nop.ted, 47896) 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud1014143124.wd.nm.ss.nop.ted:14899 with 4.3 GB RAM, BlockManagerId(141, cloud1014143124.wd.nm.ss.nop.ted, 14899) 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block 
manager rsync.cloud029.wd.nm.nop.ted:15348 with 4.3 GB RAM, BlockManagerId(71, rsync.cloud029.wd.nm.nop.ted, 15348) 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager rsync.cloud10141051020.wd.nm.nop.ted:28319 with 4.3 GB RAM, BlockManagerId(85, rsync.cloud10141051020.wd.nm.nop.ted, 28319) 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager rsync.cloud10141051026.wd.nm.nop.ted:44083 with 4.3 GB RAM, BlockManagerId(19, rsync.cloud10141051026.wd.nm.nop.ted, 44083) 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud1014118122.wd.nm.ss.nop.ted:14364 with 4.3 GB RAM, BlockManagerId(165, cloud1014118122.wd.nm.ss.nop.ted, 14364) 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud1014119124.wd.nm.ss.nop.ted:26730) with ID 21 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud083.wd.nm.nop.ted:39840) with ID 108 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101412022.wd.nm.ss.nop.ted:28961) with ID 183 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101411358.wd.nm.ss.nop.ted:42710) with ID 195 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud1014111120.wd.nm.ss.nop.ted:38344) with ID 55 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101411270.wd.nm.ss.nop.ted:14314) with ID 35 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101417546.wd.nm.ss.nop.ted:17616) with ID 13 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101419370.wd.nm.ss.nop.ted:38849) with ID 36 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101417832.wd.nm.ss.nop.ted:29915) with ID 70 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101411384.wd.nm.ss.nop.ted:32340) with ID 96 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101412858.wd.nm.ss.nop.ted:19162) with ID 3 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101412866.wd.nm.ss.nop.ted:40208) with ID 95 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101411358.wd.nm.ss.nop.ted:25470 with 4.3 GB RAM, BlockManagerId(195, cloud101411358.wd.nm.ss.nop.ted, 25470) 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud1014119124.wd.nm.ss.nop.ted:23949 with 4.3 GB RAM, BlockManagerId(21, cloud1014119124.wd.nm.ss.nop.ted, 23949) 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud072.wd.nm.nop.ted:9924) with ID 10 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud1014150108.wd.nm.ss.nop.ted:40848) with ID 114 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud083.wd.nm.nop.ted:44070 with 4.3 GB RAM, BlockManagerId(108, cloud083.wd.nm.nop.ted, 44070) 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: 
Registering block manager cloud1014111120.wd.nm.ss.nop.ted:16282 with 4.3 GB RAM, BlockManagerId(55, cloud1014111120.wd.nm.ss.nop.ted, 16282) 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101411270.wd.nm.ss.nop.ted:14771 with 4.3 GB RAM, BlockManagerId(35, cloud101411270.wd.nm.ss.nop.ted, 14771) 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud10141049124.wd.nm.nop.ted:43160) with ID 78 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101417546.wd.nm.ss.nop.ted:29787 with 4.3 GB RAM, BlockManagerId(13, cloud101417546.wd.nm.ss.nop.ted, 29787) 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (rsync.cloud10141043060.wd.nm.nop.ted:46404) with ID 15 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101419370.wd.nm.ss.nop.ted:48467 with 4.3 GB RAM, BlockManagerId(36, cloud101419370.wd.nm.ss.nop.ted, 48467) 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101412022.wd.nm.ss.nop.ted:25388 with 4.3 GB RAM, BlockManagerId(183, cloud101412022.wd.nm.ss.nop.ted, 25388) 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101411384.wd.nm.ss.nop.ted:17116 with 4.3 GB RAM, BlockManagerId(96, cloud101411384.wd.nm.ss.nop.ted, 17116) 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101411282.wd.nm.ss.nop.ted:47857) with ID 54 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101412866.wd.nm.ss.nop.ted:47235 with 4.3 GB RAM, BlockManagerId(95, cloud101412866.wd.nm.ss.nop.ted, 47235) 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101412858.wd.nm.ss.nop.ted:25665 with 4.3 GB RAM, BlockManagerId(3, cloud101412858.wd.nm.ss.nop.ted, 25665) 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud1014127116.wd.nm.ss.nop.ted:22865) with ID 31 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101417832.wd.nm.ss.nop.ted:49167 with 4.3 GB RAM, BlockManagerId(70, cloud101417832.wd.nm.ss.nop.ted, 49167) 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud10141049124.wd.nm.nop.ted:14149 with 4.3 GB RAM, BlockManagerId(78, cloud10141049124.wd.nm.nop.ted, 14149) 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud072.wd.nm.nop.ted:19689 with 4.3 GB RAM, BlockManagerId(10, cloud072.wd.nm.nop.ted, 19689) 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager rsync.cloud10141043060.wd.nm.nop.ted:24251 with 4.3 GB RAM, BlockManagerId(15, rsync.cloud10141043060.wd.nm.nop.ted, 24251) 16/05/03 23:20:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud1014150108.wd.nm.ss.nop.ted:37671 with 4.3 GB RAM, BlockManagerId(114, cloud1014150108.wd.nm.ss.nop.ted, 37671) 16/05/03 23:20:27 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud10141066100.wd.nm.nop.ted:49743) with ID 161 16/05/03 23:20:28 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101411272.wd.nm.ss.nop.ted:26819) with ID 127 16/05/03 23:20:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager 
cloud1014127116.wd.nm.ss.nop.ted:41914 with 4.3 GB RAM, BlockManagerId(31, cloud1014127116.wd.nm.ss.nop.ted, 41914) 16/05/03 23:20:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101411282.wd.nm.ss.nop.ted:35426 with 4.3 GB RAM, BlockManagerId(54, cloud101411282.wd.nm.ss.nop.ted, 35426) 16/05/03 23:20:28 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud1014113124.wd.nm.ss.nop.ted:26377) with ID 46 16/05/03 23:20:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud10141066100.wd.nm.nop.ted:15833 with 4.3 GB RAM, BlockManagerId(161, cloud10141066100.wd.nm.nop.ted, 15833) 16/05/03 23:20:28 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101414096.wd.nm.ss.nop.ted:41716) with ID 81 16/05/03 23:20:28 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud1014117114.wd.nm.ss.nop.ted:17497) with ID 91 16/05/03 23:20:28 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud10141050068.wd.nm.nop.ted:45385) with ID 167 16/05/03 23:20:28 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (rsync.cloud094.wd.nm.nop.ted:23052) with ID 77 16/05/03 23:20:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101411272.wd.nm.ss.nop.ted:33182 with 4.3 GB RAM, BlockManagerId(127, cloud101411272.wd.nm.ss.nop.ted, 33182) 16/05/03 23:20:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud1014113124.wd.nm.ss.nop.ted:38190 with 4.3 GB RAM, BlockManagerId(46, cloud1014113124.wd.nm.ss.nop.ted, 38190) 16/05/03 23:20:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101414096.wd.nm.ss.nop.ted:22630 with 4.3 GB RAM, BlockManagerId(81, cloud101414096.wd.nm.ss.nop.ted, 22630) 16/05/03 23:20:28 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (rsync.cloud093.wd.nm.nop.ted:31532) with ID 101 16/05/03 23:20:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud1014117114.wd.nm.ss.nop.ted:22646 with 4.3 GB RAM, BlockManagerId(91, cloud1014117114.wd.nm.ss.nop.ted, 22646) 16/05/03 23:20:28 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud1014118124.wd.nm.ss.nop.ted:9792) with ID 66 16/05/03 23:20:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud10141050068.wd.nm.nop.ted:24423 with 4.3 GB RAM, BlockManagerId(167, cloud10141050068.wd.nm.nop.ted, 24423) 16/05/03 23:20:28 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101411268.wd.nm.ss.nop.ted:43844) with ID 133 16/05/03 23:20:28 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (rsync.cloud036.wd.nm.nop.ted:35912) with ID 82 16/05/03 23:20:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager rsync.cloud094.wd.nm.nop.ted:44896 with 4.3 GB RAM, BlockManagerId(77, rsync.cloud094.wd.nm.nop.ted, 44896) 16/05/03 23:20:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager rsync.cloud093.wd.nm.nop.ted:12811 with 4.3 GB RAM, BlockManagerId(101, rsync.cloud093.wd.nm.nop.ted, 12811) 16/05/03 23:20:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud1014118124.wd.nm.ss.nop.ted:45375 with 4.3 GB RAM, BlockManagerId(66, cloud1014118124.wd.nm.ss.nop.ted, 45375) 16/05/03 23:20:28 INFO 
cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (rsync.cloud108.wd.nm.nop.ted:34955) with ID 16 16/05/03 23:20:28 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101414434.wd.nm.ss.nop.ted:33658) with ID 162 16/05/03 23:20:28 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (rsync.cloud10141080021.wd.nm.nop.ted:27735) with ID 113 16/05/03 23:20:28 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud10141065122.wd.nm.nop.ted:8944) with ID 83 16/05/03 23:20:28 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101411844.wd.nm.ss.nop.ted:34192) with ID 111 16/05/03 23:20:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101411268.wd.nm.ss.nop.ted:36871 with 4.3 GB RAM, BlockManagerId(133, cloud101411268.wd.nm.ss.nop.ted, 36871) 16/05/03 23:20:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager rsync.cloud036.wd.nm.nop.ted:10086 with 4.3 GB RAM, BlockManagerId(82, rsync.cloud036.wd.nm.nop.ted, 10086) 16/05/03 23:20:28 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101414092.wd.nm.ss.nop.ted:29393) with ID 155 16/05/03 23:20:28 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (rsync.cloud10141040037.wd.nm.nop.ted:44044) with ID 148 16/05/03 23:20:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager rsync.cloud108.wd.nm.nop.ted:33924 with 4.3 GB RAM, BlockManagerId(16, rsync.cloud108.wd.nm.nop.ted, 33924) 16/05/03 23:20:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager rsync.cloud10141080021.wd.nm.nop.ted:10597 with 4.3 GB RAM, BlockManagerId(113, rsync.cloud10141080021.wd.nm.nop.ted, 10597) 16/05/03 23:20:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101414434.wd.nm.ss.nop.ted:23828 with 4.3 GB RAM, BlockManagerId(162, cloud101414434.wd.nm.ss.nop.ted, 23828) 16/05/03 23:20:28 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101412752.wd.nm.ss.nop.ted:38406) with ID 24 16/05/03 23:20:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud10141065122.wd.nm.nop.ted:30218 with 4.3 GB RAM, BlockManagerId(83, cloud10141065122.wd.nm.nop.ted, 30218) 16/05/03 23:20:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101411844.wd.nm.ss.nop.ted:10867 with 4.3 GB RAM, BlockManagerId(111, cloud101411844.wd.nm.ss.nop.ted, 10867) 16/05/03 23:20:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101414092.wd.nm.ss.nop.ted:36276 with 4.3 GB RAM, BlockManagerId(155, cloud101414092.wd.nm.ss.nop.ted, 36276) 16/05/03 23:20:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager rsync.cloud10141040037.wd.nm.nop.ted:24111 with 4.3 GB RAM, BlockManagerId(148, rsync.cloud10141040037.wd.nm.nop.ted, 24111) 16/05/03 23:20:28 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (rsync.cloud019.wd.nm.nop.ted:49538) with ID 7 16/05/03 23:20:28 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (rsync.cloud10141073120.wd.nm.nop.ted:17493) with ID 109 16/05/03 23:20:28 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101412054.wd.nm.ss.nop.ted:18737) with ID 150 16/05/03 23:20:28 INFO 
storage.BlockManagerMasterEndpoint: Registering block manager cloud101412752.wd.nm.ss.nop.ted:37546 with 4.3 GB RAM, BlockManagerId(24, cloud101412752.wd.nm.ss.nop.ted, 37546) 16/05/03 23:20:28 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101417570.wd.nm.ss.nop.ted:48670) with ID 122 16/05/03 23:20:28 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud1014119122.wd.nm.ss.nop.ted:9496) with ID 139 16/05/03 23:20:28 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101411834.wd.nm.ss.nop.ted:45654) with ID 124 16/05/03 23:20:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager rsync.cloud019.wd.nm.nop.ted:26675 with 4.3 GB RAM, BlockManagerId(7, rsync.cloud019.wd.nm.nop.ted, 26675) 16/05/03 23:20:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager rsync.cloud10141073120.wd.nm.nop.ted:48981 with 4.3 GB RAM, BlockManagerId(109, rsync.cloud10141073120.wd.nm.nop.ted, 48981) 16/05/03 23:20:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101412054.wd.nm.ss.nop.ted:12667 with 4.3 GB RAM, BlockManagerId(150, cloud101412054.wd.nm.ss.nop.ted, 12667) 16/05/03 23:20:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101417570.wd.nm.ss.nop.ted:28961 with 4.3 GB RAM, BlockManagerId(122, cloud101417570.wd.nm.ss.nop.ted, 28961) 16/05/03 23:20:28 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud057.wd.nm.nop.ted:42424) with ID 132 16/05/03 23:20:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101411834.wd.nm.ss.nop.ted:48361 with 4.3 GB RAM, BlockManagerId(124, cloud101411834.wd.nm.ss.nop.ted, 48361) 16/05/03 23:20:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud1014119122.wd.nm.ss.nop.ted:19128 with 4.3 GB RAM, BlockManagerId(139, cloud1014119122.wd.nm.ss.nop.ted, 19128) 16/05/03 23:20:28 INFO storage.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 249.3 KB, free 249.3 KB) 16/05/03 23:20:28 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud088.wd.nm.nop.ted:31523) with ID 170 16/05/03 23:20:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud057.wd.nm.nop.ted:36991 with 4.3 GB RAM, BlockManagerId(132, cloud057.wd.nm.nop.ted, 36991) 16/05/03 23:20:28 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud1014143118.wd.nm.ss.nop.ted:38289) with ID 126 16/05/03 23:20:28 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101414438.wd.nm.ss.nop.ted:26444) with ID 62 16/05/03 23:20:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud088.wd.nm.nop.ted:28118 with 4.3 GB RAM, BlockManagerId(170, cloud088.wd.nm.nop.ted, 28118) 16/05/03 23:20:28 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101412224.wd.nm.ss.nop.ted:36945) with ID 53 16/05/03 23:20:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud1014143118.wd.nm.ss.nop.ted:37038 with 4.3 GB RAM, BlockManagerId(126, cloud1014143118.wd.nm.ss.nop.ted, 37038) 16/05/03 23:20:28 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud1014121102.wd.nm.ss.nop.ted:14595) with ID 143 16/05/03 23:20:28 INFO storage.BlockManagerMasterEndpoint: Registering block 
manager cloud101414438.wd.nm.ss.nop.ted:25433 with 4.3 GB RAM, BlockManagerId(62, cloud101414438.wd.nm.ss.nop.ted, 25433) 16/05/03 23:20:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101412224.wd.nm.ss.nop.ted:46180 with 4.3 GB RAM, BlockManagerId(53, cloud101412224.wd.nm.ss.nop.ted, 46180) 16/05/03 23:20:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud1014121102.wd.nm.ss.nop.ted:16704 with 4.3 GB RAM, BlockManagerId(143, cloud1014121102.wd.nm.ss.nop.ted, 16704) 16/05/03 23:20:28 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud10141050074.wd.nm.nop.ted:37452) with ID 168 16/05/03 23:20:28 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud10141073118.wd.nm.nop.ted:43469) with ID 152 16/05/03 23:20:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud10141050074.wd.nm.nop.ted:40299 with 4.3 GB RAM, BlockManagerId(168, cloud10141050074.wd.nm.nop.ted, 40299) 16/05/03 23:20:28 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101412194.wd.nm.ss.nop.ted:36750) with ID 160 16/05/03 23:20:28 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (rsync.cloud089.wd.nm.nop.ted:17553) with ID 34 16/05/03 23:20:28 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud1014150122.wd.nm.ss.nop.ted:39716) with ID 56 16/05/03 23:20:28 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (rsync.cloud005.wd.nm.nop.ted:37506) with ID 200 16/05/03 23:20:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud10141073118.wd.nm.nop.ted:9021 with 4.3 GB RAM, BlockManagerId(152, cloud10141073118.wd.nm.nop.ted, 9021) 16/05/03 23:20:28 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud096.wd.nm.nop.ted:10021) with ID 27 16/05/03 23:20:28 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (rsync.cloud10141049112.wd.nm.nop.ted:30463) with ID 88 16/05/03 23:20:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101412194.wd.nm.ss.nop.ted:49808 with 4.3 GB RAM, BlockManagerId(160, cloud101412194.wd.nm.ss.nop.ted, 49808) 16/05/03 23:20:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager rsync.cloud089.wd.nm.nop.ted:12685 with 4.3 GB RAM, BlockManagerId(34, rsync.cloud089.wd.nm.nop.ted, 12685) 16/05/03 23:20:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud1014150122.wd.nm.ss.nop.ted:35108 with 4.3 GB RAM, BlockManagerId(56, cloud1014150122.wd.nm.ss.nop.ted, 35108) 16/05/03 23:20:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager rsync.cloud005.wd.nm.nop.ted:41728 with 4.3 GB RAM, BlockManagerId(200, rsync.cloud005.wd.nm.nop.ted, 41728) 16/05/03 23:20:28 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101412182.wd.nm.ss.nop.ted:45531) with ID 104 16/05/03 23:20:28 INFO storage.BlockManagerMasterEndpoint: Registering block manager rsync.cloud10141049112.wd.nm.nop.ted:30906 with 4.3 GB RAM, BlockManagerId(88, rsync.cloud10141049112.wd.nm.nop.ted, 30906) 16/05/03 23:20:28 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101412792.wd.nm.ss.nop.ted:11426) with ID 106 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager 
cloud096.wd.nm.nop.ted:35384 with 4.3 GB RAM, BlockManagerId(27, cloud096.wd.nm.nop.ted, 35384) 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (rsync.cloud111.wd.nm.nop.ted:15018) with ID 18 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101417548.wd.nm.ss.nop.ted:23153) with ID 158 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (rsync.cloud10141066090.wd.nm.nop.ted:31956) with ID 119 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101414324.wd.nm.ss.nop.ted:32404) with ID 84 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (rsync.cloud058.wd.nm.nop.ted:17331) with ID 190 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (rsync.cloud061.wd.nm.nop.ted:23709) with ID 196 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101412182.wd.nm.ss.nop.ted:29105 with 4.3 GB RAM, BlockManagerId(104, cloud101412182.wd.nm.ss.nop.ted, 29105) 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101412792.wd.nm.ss.nop.ted:39242 with 4.3 GB RAM, BlockManagerId(106, cloud101412792.wd.nm.ss.nop.ted, 39242) 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager rsync.cloud10141066090.wd.nm.nop.ted:24564 with 4.3 GB RAM, BlockManagerId(119, rsync.cloud10141066090.wd.nm.nop.ted, 24564) 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101414366.wd.nm.ss.nop.ted:32478) with ID 182 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101417548.wd.nm.ss.nop.ted:18741 with 4.3 GB RAM, BlockManagerId(158, cloud101417548.wd.nm.ss.nop.ted, 18741) 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager rsync.cloud111.wd.nm.nop.ted:29738 with 4.3 GB RAM, BlockManagerId(18, rsync.cloud111.wd.nm.nop.ted, 29738) 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101414324.wd.nm.ss.nop.ted:39077 with 4.3 GB RAM, BlockManagerId(84, cloud101414324.wd.nm.ss.nop.ted, 39077) 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud028.wd.nm.nop.ted:22790) with ID 107 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager rsync.cloud058.wd.nm.nop.ted:15777 with 4.3 GB RAM, BlockManagerId(190, rsync.cloud058.wd.nm.nop.ted, 15777) 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager rsync.cloud061.wd.nm.nop.ted:14722 with 4.3 GB RAM, BlockManagerId(196, rsync.cloud061.wd.nm.nop.ted, 14722) 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (rsync.cloud113.wd.nm.nop.ted:16217) with ID 187 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101414366.wd.nm.ss.nop.ted:9749 with 4.3 GB RAM, BlockManagerId(182, cloud101414366.wd.nm.ss.nop.ted, 9749) 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud028.wd.nm.nop.ted:28993 with 4.3 GB RAM, BlockManagerId(107, cloud028.wd.nm.nop.ted, 28993) 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor 
NettyRpcEndpointRef(null) (cloud101412098.wd.nm.ss.nop.ted:38791) with ID 64 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud090.wd.nm.nop.ted:10773) with ID 163 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager rsync.cloud113.wd.nm.nop.ted:23966 with 4.3 GB RAM, BlockManagerId(187, rsync.cloud113.wd.nm.nop.ted, 23966) 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101412786.wd.nm.ss.nop.ted:47720) with ID 135 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud120.wd.nm.nop.ted:19328) with ID 176 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (rsync.cloud017.wd.nm.nop.ted:47723) with ID 79 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (rsync.cloud10141040053.wd.nm.nop.ted:28148) with ID 147 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101419290.wd.nm.ss.nop.ted:25292) with ID 41 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud090.wd.nm.nop.ted:17464 with 4.3 GB RAM, BlockManagerId(163, cloud090.wd.nm.nop.ted, 17464) 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101412786.wd.nm.ss.nop.ted:15430 with 4.3 GB RAM, BlockManagerId(135, cloud101412786.wd.nm.ss.nop.ted, 15430) 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101412098.wd.nm.ss.nop.ted:24728 with 4.3 GB RAM, BlockManagerId(64, cloud101412098.wd.nm.ss.nop.ted, 24728) 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud121.wd.nm.nop.ted:44092) with ID 48 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager rsync.cloud10141040053.wd.nm.nop.ted:9518 with 4.3 GB RAM, BlockManagerId(147, rsync.cloud10141040053.wd.nm.nop.ted, 9518) 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud120.wd.nm.nop.ted:41245 with 4.3 GB RAM, BlockManagerId(176, cloud120.wd.nm.nop.ted, 41245) 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager rsync.cloud017.wd.nm.nop.ted:27709 with 4.3 GB RAM, BlockManagerId(79, rsync.cloud017.wd.nm.nop.ted, 27709) 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101419290.wd.nm.ss.nop.ted:48517 with 4.3 GB RAM, BlockManagerId(41, cloud101419290.wd.nm.ss.nop.ted, 48517) 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud121.wd.nm.nop.ted:36043 with 4.3 GB RAM, BlockManagerId(48, cloud121.wd.nm.nop.ted, 36043) 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (rsync.cloud007.wd.nm.nop.ted:49656) with ID 86 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud137.wd.nm.nop.ted:33912) with ID 44 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud1014121120.wd.nm.ss.nop.ted:24004) with ID 51 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud10141065106.wd.nm.nop.ted:31096) with ID 142 16/05/03 23:20:29 INFO 
storage.BlockManagerMasterEndpoint: Registering block manager rsync.cloud007.wd.nm.nop.ted:42297 with 4.3 GB RAM, BlockManagerId(86, rsync.cloud007.wd.nm.nop.ted, 42297) 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud1014118100.wd.nm.ss.nop.ted:35252) with ID 186 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud1014113114.wd.nm.ss.nop.ted:28247) with ID 180 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud027.wd.nm.nop.ted:14785) with ID 43 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud1014121120.wd.nm.ss.nop.ted:40903 with 4.3 GB RAM, BlockManagerId(51, cloud1014121120.wd.nm.ss.nop.ted, 40903) 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud137.wd.nm.nop.ted:35924 with 4.3 GB RAM, BlockManagerId(44, cloud137.wd.nm.nop.ted, 35924) 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud10141065106.wd.nm.nop.ted:8720 with 4.3 GB RAM, BlockManagerId(142, cloud10141065106.wd.nm.nop.ted, 8720) 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101412024.wd.nm.ss.nop.ted:21428) with ID 178 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101411898.wd.nm.ss.nop.ted:20945) with ID 189 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud1014118100.wd.nm.ss.nop.ted:19531 with 4.3 GB RAM, BlockManagerId(186, cloud1014118100.wd.nm.ss.nop.ted, 19531) 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101411360.wd.nm.ss.nop.ted:14824) with ID 72 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud1014113114.wd.nm.ss.nop.ted:22679 with 4.3 GB RAM, BlockManagerId(180, cloud1014113114.wd.nm.ss.nop.ted, 22679) 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud10141065118.wd.nm.nop.ted:28761) with ID 1 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud027.wd.nm.nop.ted:44416 with 4.3 GB RAM, BlockManagerId(43, cloud027.wd.nm.nop.ted, 44416) 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud015.wd.nm.nop.ted:16018) with ID 52 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101412024.wd.nm.ss.nop.ted:12563 with 4.3 GB RAM, BlockManagerId(178, cloud101412024.wd.nm.ss.nop.ted, 12563) 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud1014117118.wd.nm.ss.nop.ted:37148) with ID 123 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101411898.wd.nm.ss.nop.ted:28153 with 4.3 GB RAM, BlockManagerId(189, cloud101411898.wd.nm.ss.nop.ted, 28153) 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101412080.wd.nm.ss.nop.ted:39109) with ID 191 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101411360.wd.nm.ss.nop.ted:35550 with 4.3 GB RAM, BlockManagerId(72, cloud101411360.wd.nm.ss.nop.ted, 35550) 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering 
block manager cloud10141065118.wd.nm.nop.ted:17163 with 4.3 GB RAM, BlockManagerId(1, cloud10141065118.wd.nm.nop.ted, 17163) 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud122.wd.nm.nop.ted:39956) with ID 39 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud015.wd.nm.nop.ted:46187 with 4.3 GB RAM, BlockManagerId(52, cloud015.wd.nm.nop.ted, 46187) 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud1014117118.wd.nm.ss.nop.ted:29356 with 4.3 GB RAM, BlockManagerId(123, cloud1014117118.wd.nm.ss.nop.ted, 29356) 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101412080.wd.nm.ss.nop.ted:43126 with 4.3 GB RAM, BlockManagerId(191, cloud101412080.wd.nm.ss.nop.ted, 43126) 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud10141064025.wd.nm.nop.ted:32016) with ID 102 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (rsync.cloud095.wd.nm.nop.ted:22932) with ID 130 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud1014140122.wd.nm.ss.nop.ted:47619) with ID 97 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud018.wd.nm.nop.ted:16702) with ID 128 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101418694.wd.nm.ss.nop.ted:23392) with ID 9 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101412766.wd.nm.ss.nop.ted:23277) with ID 137 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud122.wd.nm.nop.ted:25108 with 4.3 GB RAM, BlockManagerId(39, cloud122.wd.nm.nop.ted, 25108) 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud10141064025.wd.nm.nop.ted:42088 with 4.3 GB RAM, BlockManagerId(102, cloud10141064025.wd.nm.nop.ted, 42088) 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager rsync.cloud095.wd.nm.nop.ted:22378 with 4.3 GB RAM, BlockManagerId(130, rsync.cloud095.wd.nm.nop.ted, 22378) 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (rsync.cloud004.wd.nm.nop.ted:46144) with ID 154 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud1014140122.wd.nm.ss.nop.ted:38453 with 4.3 GB RAM, BlockManagerId(97, cloud1014140122.wd.nm.ss.nop.ted, 38453) 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101418694.wd.nm.ss.nop.ted:33993 with 4.3 GB RAM, BlockManagerId(9, cloud101418694.wd.nm.ss.nop.ted, 33993) 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud10141049106.wd.nm.nop.ted:38759) with ID 59 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud018.wd.nm.nop.ted:45728 with 4.3 GB RAM, BlockManagerId(128, cloud018.wd.nm.nop.ted, 45728) 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101412766.wd.nm.ss.nop.ted:37536 with 4.3 GB RAM, BlockManagerId(137, cloud101412766.wd.nm.ss.nop.ted, 37536) 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) 
(cloud10141032033.wd.nm.nop.ted:47212) with ID 17 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101412154.wd.nm.ss.nop.ted:47956) with ID 37 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager rsync.cloud004.wd.nm.nop.ted:34344 with 4.3 GB RAM, BlockManagerId(154, rsync.cloud004.wd.nm.nop.ted, 34344) 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud10141049106.wd.nm.nop.ted:11139 with 4.3 GB RAM, BlockManagerId(59, cloud10141049106.wd.nm.nop.ted, 11139) 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101417730.wd.nm.ss.nop.ted:34874) with ID 175 16/05/03 23:20:29 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud10141032033.wd.nm.nop.ted:44955 with 4.3 GB RAM, BlockManagerId(17, cloud10141032033.wd.nm.nop.ted, 44955) 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud009.wd.nm.nop.ted:14896) with ID 110 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101412156.wd.nm.ss.nop.ted:30141) with ID 69 16/05/03 23:20:29 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101417822.wd.nm.ss.nop.ted:27333) with ID 45 16/05/03 23:20:30 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101417736.wd.nm.ss.nop.ted:42818) with ID 30 16/05/03 23:20:30 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101412154.wd.nm.ss.nop.ted:24590 with 4.3 GB RAM, BlockManagerId(37, cloud101412154.wd.nm.ss.nop.ted, 24590) 16/05/03 23:20:30 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (rsync.cloud10141032025.wd.nm.nop.ted:36391) with ID 184 16/05/03 23:20:30 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101417730.wd.nm.ss.nop.ted:19414 with 4.3 GB RAM, BlockManagerId(175, cloud101417730.wd.nm.ss.nop.ted, 19414) 16/05/03 23:20:30 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud009.wd.nm.nop.ted:47825 with 4.3 GB RAM, BlockManagerId(110, cloud009.wd.nm.nop.ted, 47825) 16/05/03 23:20:30 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101417724.wd.nm.ss.nop.ted:16742) with ID 60 16/05/03 23:20:30 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud1014177124.wd.nm.ss.nop.ted:22786) with ID 171 16/05/03 23:20:30 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101412156.wd.nm.ss.nop.ted:16926 with 4.3 GB RAM, BlockManagerId(69, cloud101412156.wd.nm.ss.nop.ted, 16926) 16/05/03 23:20:30 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101417822.wd.nm.ss.nop.ted:47846 with 4.3 GB RAM, BlockManagerId(45, cloud101417822.wd.nm.ss.nop.ted, 47846) 16/05/03 23:20:30 INFO storage.BlockManagerMasterEndpoint: Registering block manager rsync.cloud10141032025.wd.nm.nop.ted:35591 with 4.3 GB RAM, BlockManagerId(184, rsync.cloud10141032025.wd.nm.nop.ted, 35591) 16/05/03 23:20:30 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101417736.wd.nm.ss.nop.ted:20519 with 4.3 GB RAM, BlockManagerId(30, cloud101417736.wd.nm.ss.nop.ted, 20519) 16/05/03 23:20:30 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) 
(cloud101412160.wd.nm.ss.nop.ted:40937) with ID 125 16/05/03 23:20:30 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud10141040046.wd.nm.nop.ted:43503) with ID 40 16/05/03 23:20:30 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101417724.wd.nm.ss.nop.ted:30929 with 4.3 GB RAM, BlockManagerId(60, cloud101417724.wd.nm.ss.nop.ted, 30929) 16/05/03 23:20:30 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud1014176116.wd.nm.ss.nop.ted:30485) with ID 174 16/05/03 23:20:30 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101417590.wd.nm.ss.nop.ted:40211) with ID 32 16/05/03 23:20:30 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud1014177124.wd.nm.ss.nop.ted:27057 with 4.3 GB RAM, BlockManagerId(171, cloud1014177124.wd.nm.ss.nop.ted, 27057) 16/05/03 23:20:30 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101417770.wd.nm.ss.nop.ted:41746) with ID 153 16/05/03 23:20:30 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101412160.wd.nm.ss.nop.ted:18791 with 4.3 GB RAM, BlockManagerId(125, cloud101412160.wd.nm.ss.nop.ted, 18791) 16/05/03 23:20:30 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud10141040046.wd.nm.nop.ted:45738 with 4.3 GB RAM, BlockManagerId(40, cloud10141040046.wd.nm.nop.ted, 45738) 16/05/03 23:20:30 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud10141072023.wd.nm.nop.ted:20141) with ID 173 16/05/03 23:20:30 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud10141064023.wd.nm.nop.ted:47923) with ID 87 16/05/03 23:20:30 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud1014176116.wd.nm.ss.nop.ted:14440 with 4.3 GB RAM, BlockManagerId(174, cloud1014176116.wd.nm.ss.nop.ted, 14440) 16/05/03 23:20:30 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101417590.wd.nm.ss.nop.ted:43267 with 4.3 GB RAM, BlockManagerId(32, cloud101417590.wd.nm.ss.nop.ted, 43267) 16/05/03 23:20:30 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (rsync.cloud10141032047.wd.nm.nop.ted:37647) with ID 179 16/05/03 23:20:30 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101417732.wd.nm.ss.nop.ted:18828) with ID 73 16/05/03 23:20:30 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101417770.wd.nm.ss.nop.ted:49025 with 4.3 GB RAM, BlockManagerId(153, cloud101417770.wd.nm.ss.nop.ted, 49025) 16/05/03 23:20:30 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101412162.wd.nm.ss.nop.ted:8967) with ID 194 16/05/03 23:20:30 INFO storage.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 22.1 KB, free 271.4 KB) 16/05/03 23:20:30 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud10141072023.wd.nm.nop.ted:48444 with 4.3 GB RAM, BlockManagerId(173, cloud10141072023.wd.nm.nop.ted, 48444) 16/05/03 23:20:30 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.134.71.181:46355 (size: 22.1 KB, free: 4.1 GB) 16/05/03 23:20:30 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud10141064023.wd.nm.nop.ted:22395 with 4.3 GB RAM, BlockManagerId(87, cloud10141064023.wd.nm.nop.ted, 22395) 16/05/03 23:20:30 INFO 
spark.SparkContext: Created broadcast 0 from textFile at LDADriver.scala:151
16/05/03 23:20:30 INFO storage.BlockManagerMasterEndpoint: Registering block manager rsync.cloud10141032047.wd.nm.nop.ted:16226 with 4.3 GB RAM, BlockManagerId(179, rsync.cloud10141032047.wd.nm.nop.ted, 16226)
16/05/03 23:20:30 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101417732.wd.nm.ss.nop.ted:36226 with 4.3 GB RAM, BlockManagerId(73, cloud101417732.wd.nm.ss.nop.ted, 36226)
16/05/03 23:20:30 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101412162.wd.nm.ss.nop.ted:16953 with 4.3 GB RAM, BlockManagerId(194, cloud101412162.wd.nm.ss.nop.ted, 16953)
16/05/03 23:20:31 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
16/05/03 23:20:31 INFO lzo.LzoCodec: Successfully loaded & initialized native-lzo library [hadoop-lzo rev 8e266e052e423af592871e2dfe09d54c03f6a0e8]
16/05/03 23:20:31 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud1014140106.wd.nm.ss.nop.ted:27393) with ID 4
16/05/03 23:20:31 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud1014140106.wd.nm.ss.nop.ted:34605 with 4.3 GB RAM, BlockManagerId(4, cloud1014140106.wd.nm.ss.nop.ted, 34605)
16/05/03 23:20:31 INFO mapred.FileInputFormat: Total input paths to process : 0
16/05/03 23:20:31 INFO spark.SparkContext: Starting job: reduce at EdgeRDDImpl.scala:89
16/05/03 23:20:31 INFO scheduler.DAGScheduler: Job 0 finished: reduce at EdgeRDDImpl.scala:89, took 0.004286 s
16/05/03 23:20:31 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/metrics/json,null}
16/05/03 23:20:31 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null}
16/05/03 23:20:31 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/api,null}
16/05/03 23:20:31 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/,null}
16/05/03 23:20:31 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/static,null}
16/05/03 23:20:31 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}
16/05/03 23:20:31 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump,null}
16/05/03 23:20:31 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/json,null}
16/05/03 23:20:31 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors,null}
16/05/03 23:20:31 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment/json,null}
16/05/03 23:20:31 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment,null}
16/05/03 23:20:31 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd/json,null}
16/05/03 23:20:31 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd,null}
16/05/03 23:20:31 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/json,null}
16/05/03 23:20:31 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage,null}
16/05/03 23:20:31 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool/json,null}
16/05/03 23:20:31 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool,null}
16/05/03 23:20:31 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/json,null}
16/05/03 23:20:31 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage,null}
16/05/03 23:20:31 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/json,null}
16/05/03 23:20:31 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages,null}
16/05/03 23:20:31 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null}
16/05/03 23:20:31 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job,null}
16/05/03 23:20:31 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud10141073106.wd.nm.nop.ted:30823) with ID 145
16/05/03 23:20:31 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/json,null}
16/05/03 23:20:31 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs,null}
16/05/03 23:20:31 INFO ui.SparkUI: Stopped Spark web UI at http://10.134.71.181:4041
16/05/03 23:20:31 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud10141073106.wd.nm.nop.ted:37422 with 4.3 GB RAM, BlockManagerId(145, cloud10141073106.wd.nm.nop.ted, 37422)
16/05/03 23:20:31 INFO cluster.YarnClientSchedulerBackend: Interrupting monitor thread
16/05/03 23:20:31 INFO cluster.YarnClientSchedulerBackend: Shutting down all executors
16/05/03 23:20:31 INFO cluster.YarnClientSchedulerBackend: Asking each executor to shut down
16/05/03 23:20:31 INFO cluster.YarnClientSchedulerBackend: Disabling executor 111.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 149.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 78.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 167.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 84.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 39.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 161.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 155.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 66.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 51.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 194.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 30.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 173.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (rsync.cloud10141043064.wd.nm.nop.ted:44732) with ID 57
16/05/03 23:20:32 INFO storage.BlockManagerMasterEndpoint: Registering block manager rsync.cloud10141043064.wd.nm.nop.ted:30794 with 4.3 GB RAM, BlockManagerId(57, rsync.cloud10141043064.wd.nm.nop.ted, 30794)
16/05/03 23:20:32 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerBlockManagerAdded(1462288832105,BlockManagerId(57, rsync.cloud10141043064.wd.nm.nop.ted, 30794),4588044288)
16/05/03 23:20:32 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerExecutorAdded(1462288832047,57,org.apache.spark.scheduler.cluster.ExecutorData@42da1b4b)
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 176.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 158.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 182.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 48.
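The three entries that matter in the block above are the textFile load at LDADriver.scala:151, mapred.FileInputFormat reporting "Total input paths to process : 0", and Job 0 finishing in about 4 ms: the input glob matched no files, so the corpus RDD is empty and the driver proceeds straight to shutdown. A minimal sketch of that kind of load with an explicit empty-input guard (paths and names here are hypothetical, not the actual LDADriver code):

    import org.apache.spark.{SparkConf, SparkContext}

    object CorpusInputCheck {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("lda-input-check"))
        // Hypothetical glob; the real path is whatever LDADriver.scala:151 hands to textFile.
        val inputPath = args.headOption.getOrElse("hdfs:///path/to/corpus/*")
        val corpus = sc.textFile(inputPath)
        // "Total input paths to process : 0" means the glob matched no files, so the
        // corpus is empty and every downstream GraphX/LDA job finishes immediately.
        if (corpus.isEmpty()) {
          sc.stop()
          sys.error(s"No input files matched $inputPath; nothing to train on")
        }
        // ... tokenize, build term-count vectors, and run LDA here ...
        sc.stop()
      }
    }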
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 45.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 54.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 72.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 12.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 33.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 27.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 170.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 69.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 179.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 152.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 98.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 15.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 60.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 137.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 8.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 143.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 36.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 119.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 21.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 146.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 128.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 125.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 71.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 110.
16/05/03 23:20:32 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Container marked as failed: container_1459849169969_971497_01_000041 on host: cloud122.wd.nm.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1459849169969_971497_01_000041
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:561)
    at org.apache.hadoop.util.Shell.run(Shell.java:478)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:725)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:214)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 92.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 113.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 196.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 18.
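The container-launch stack trace above only records that the executor JVM exited with code 1; the actual cause is in that container's own stdout/stderr. Assuming log aggregation is enabled on this cluster, those logs can be pulled afterwards with the standard YARN CLI, e.g. yarn logs -applicationId application_1459849169969_971497 (the application id embedded in container_1459849169969_971497_01_000041).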
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 134. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 184. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 107. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 101. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 24. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 95. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 200. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 53. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 80. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 122. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud168.wd.nm.ted:31682) with ID 68 16/05/03 23:20:32 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerExecutorAdded(1462288832353,68,org.apache.spark.scheduler.cluster.ExecutorData@e73bf2c7) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 190. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 178. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 35. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 83. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 104. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 41. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 86. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 166. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 62. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 59. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 133. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 77. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 7. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 1. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 187. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 50. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 56. 16/05/03 23:20:32 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud168.wd.nm.ted:24611 with 4.3 GB RAM, BlockManagerId(68, cloud168.wd.nm.ted, 24611) 16/05/03 23:20:32 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerBlockManagerAdded(1462288832448,BlockManagerId(68, cloud168.wd.nm.ted, 24611),4588044288) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 148. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 160. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 44. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 142. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 181. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 65. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 17. 
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 157. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 169. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 163. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 4. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 97. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 32. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 11. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 175. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 139. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 29. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 82. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 145. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 130. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 124. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 189. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101411222.wd.nm.ss.nop.ted:48388) with ID 151 16/05/03 23:20:32 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerExecutorAdded(1462288832500,151,org.apache.spark.scheduler.cluster.ExecutorData@ee811b5b) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 154. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 85. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud003.wd.nm.nop.ted:29300) with ID 118 16/05/03 23:20:32 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerExecutorAdded(1462288832555,118,org.apache.spark.scheduler.cluster.ExecutorData@9d81344a) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 79. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 198. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 180. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 70. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 87. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 64. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 106. 16/05/03 23:20:32 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud101411222.wd.nm.ss.nop.ted:28484 with 4.3 GB RAM, BlockManagerId(151, cloud101411222.wd.nm.ss.nop.ted, 28484) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 46. 16/05/03 23:20:32 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerBlockManagerAdded(1462288832584,BlockManagerId(151, cloud101411222.wd.nm.ss.nop.ted, 28484),4588044288) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 91. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 195. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 183. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 177. 
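From "Stopped Spark web UI" onward the driver is already tearing the SparkContext down, so the errors that fill the rest of this log ("SparkListenerBus has already stopped! Dropping event ...", "Could not find CoarseGrainedScheduler or it has been stopped", and the RemoveExecutor send failures) are late executor registrations and disconnect notifications arriving after the scheduler backend's RPC endpoint and the listener bus have been shut down; they are fallout from the early shutdown, not an independent failure. A minimal sketch of the driver-side ordering that produces this pattern (assumed structure, not the actual LDADriver):

    import org.apache.spark.{SparkConf, SparkContext}

    val sc = new SparkContext(new SparkConf().setAppName("lda"))
    try {
      // ... load the corpus, run LDA, write the model ...
    } finally {
      // stop() unregisters CoarseGrainedScheduler and stops the listener bus; any executor
      // RPC that arrives after this point is logged as the "already stopped" messages seen here.
      sc.stop()
    }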
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 159. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 52. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 88. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 100. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 162. 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Stopped 16/05/03 23:20:32 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped! 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 28. 16/05/03 23:20:32 ERROR netty.Inbox: Ignoring error org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345) at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 34. 16/05/03 23:20:32 ERROR netty.Inbox: Ignoring error org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. 
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345) at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (rsync.cloud105.wd.nm.nop.ted:42423) with ID 140 16/05/03 23:20:32 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerExecutorAdded(1462288832631,140,org.apache.spark.scheduler.cluster.ExecutorData@c426f11c) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 55. 16/05/03 23:20:32 ERROR netty.Inbox: Ignoring error org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. 
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345) at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 INFO storage.BlockManagerMasterEndpoint: Registering block manager cloud003.wd.nm.nop.ted:16531 with 4.3 GB RAM, BlockManagerId(118, cloud003.wd.nm.nop.ted, 16531) 16/05/03 23:20:32 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerBlockManagerAdded(1462288832638,BlockManagerId(118, cloud003.wd.nm.nop.ted, 16531),4588044288) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 49. 16/05/03 23:20:32 ERROR netty.Inbox: Ignoring error org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. 
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345) at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(98,Container marked as failed: container_1459849169969_971497_01_000100 on host: rsync.cloud155.wd.nm.ted. Exit status: 1. Diagnostics: Exception from container-launch. Container id: container_1459849169969_971497_01_000100 Exit code: 1 Stack trace: ExitCodeException exitCode=1: at org.apache.hadoop.util.Shell.runCommand(Shell.java:561) at org.apache.hadoop.util.Shell.run(Shell.java:478) at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:725) at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:214) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Container exited with a non-zero exit code 1 )] in 1 attempts org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. 
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postLocalMessage(Dispatcher.scala:126) at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:227) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:511) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:100) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:148) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:146) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:117) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:115) at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 150. 16/05/03 23:20:32 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(48,Container marked as failed: container_1459849169969_971497_01_000050 on host: cloud121.wd.nm.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch. Container id: container_1459849169969_971497_01_000050 Exit code: 1 Stack trace: ExitCodeException exitCode=1: at org.apache.hadoop.util.Shell.runCommand(Shell.java:561) at org.apache.hadoop.util.Shell.run(Shell.java:478) at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:725) at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:214) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Container exited with a non-zero exit code 1 )] in 1 attempts org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. 
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postLocalMessage(Dispatcher.scala:126) at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:227) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:511) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:100) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:148) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:146) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:117) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:115) at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(44,Container marked as failed: container_1459849169969_971497_01_000046 on host: cloud137.wd.nm.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch. Container id: container_1459849169969_971497_01_000046 Exit code: 1 Stack trace: ExitCodeException exitCode=1: at org.apache.hadoop.util.Shell.runCommand(Shell.java:561) at org.apache.hadoop.util.Shell.run(Shell.java:478) at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:725) at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:214) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Container exited with a non-zero exit code 1 )] in 1 attempts org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. 
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postLocalMessage(Dispatcher.scala:126) at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:227) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:511) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:100) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:148) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:146) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:117) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:115) at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(101,Container marked as failed: container_1459849169969_971497_01_000103 on host: rsync.cloud093.wd.nm.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch. Container id: container_1459849169969_971497_01_000103 Exit code: 1 Stack trace: ExitCodeException exitCode=1: at org.apache.hadoop.util.Shell.runCommand(Shell.java:561) at org.apache.hadoop.util.Shell.run(Shell.java:478) at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:725) at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:214) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Container exited with a non-zero exit code 1 )] in 1 attempts org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. 
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postLocalMessage(Dispatcher.scala:126) at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:227) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:511) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:100) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:148) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:146) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:117) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:115) at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 ERROR netty.Inbox: Ignoring error org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345) at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 73. 16/05/03 23:20:32 ERROR netty.Inbox: Ignoring error org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. 
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345) at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 165. 16/05/03 23:20:32 ERROR netty.Inbox: Ignoring error org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. 
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345) at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(15,Container marked as failed: container_1459849169969_971497_01_000016 on host: rsync.cloud10141043060.wd.nm.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch. Container id: container_1459849169969_971497_01_000016 Exit code: 1 Stack trace: ExitCodeException exitCode=1: at org.apache.hadoop.util.Shell.runCommand(Shell.java:561) at org.apache.hadoop.util.Shell.run(Shell.java:478) at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:725) at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:214) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Container exited with a non-zero exit code 1 )] in 1 attempts org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. 
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postLocalMessage(Dispatcher.scala:126) at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:227) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:511) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:100) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:148) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:146) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:117) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:115) at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 ERROR util.Utils: Uncaught exception in thread driver-revive-thread org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(CoarseGrainedSchedulerBackend.scala:102) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(CoarseGrainedSchedulerBackend.scala:102) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint$$anon$1$$anonfun$run$1.apply$mcV$sp(CoarseGrainedSchedulerBackend.scala:102) at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1229) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint$$anon$1.run(CoarseGrainedSchedulerBackend.scala:101) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 9. 16/05/03 23:20:32 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(18,Container marked as failed: container_1459849169969_971497_01_000019 on host: cloud111.wd.nm.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch. 
Container id: container_1459849169969_971497_01_000019 Exit code: 1 Stack trace: ExitCodeException exitCode=1: at org.apache.hadoop.util.Shell.runCommand(Shell.java:561) at org.apache.hadoop.util.Shell.run(Shell.java:478) at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:725) at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:214) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Container exited with a non-zero exit code 1 )] in 1 attempts org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postLocalMessage(Dispatcher.scala:126) at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:227) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:511) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:100) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:148) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:146) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:117) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:115) at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(11,Container marked as failed: container_1459849169969_971497_01_000012 on host: cloud101411250.wd.nm.ss.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch. 
Container id: container_1459849169969_971497_01_000012 Exit code: 1 Stack trace: ExitCodeException exitCode=1: at org.apache.hadoop.util.Shell.runCommand(Shell.java:561) at org.apache.hadoop.util.Shell.run(Shell.java:478) at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:725) at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:214) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Container exited with a non-zero exit code 1 )] in 1 attempts org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postLocalMessage(Dispatcher.scala:126) at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:227) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:511) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:100) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:148) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:146) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:117) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:115) at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(84,Container marked as failed: container_1459849169969_971497_01_000086 on host: cloud101414324.wd.nm.ss.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch. 
Container id: container_1459849169969_971497_01_000086 Exit code: 1 Stack trace: ExitCodeException exitCode=1: at org.apache.hadoop.util.Shell.runCommand(Shell.java:561) at org.apache.hadoop.util.Shell.run(Shell.java:478) at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:725) at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:214) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Container exited with a non-zero exit code 1 )] in 1 attempts org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postLocalMessage(Dispatcher.scala:126) at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:227) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:511) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:100) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:148) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:146) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:117) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:115) at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 ERROR netty.Inbox: Ignoring error org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. 
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345) at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(110,Container marked as failed: container_1459849169969_971497_01_000112 on host: cloud009.wd.nm.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch. Container id: container_1459849169969_971497_01_000112 Exit code: 1 Stack trace: ExitCodeException exitCode=1: at org.apache.hadoop.util.Shell.runCommand(Shell.java:561) at org.apache.hadoop.util.Shell.run(Shell.java:478) at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:725) at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:214) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Container exited with a non-zero exit code 1 )] in 1 attempts org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. 
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postLocalMessage(Dispatcher.scala:126) at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:227) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:511) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:100) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:148) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:146) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:117) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:115) at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(198,Container marked as failed: container_1459849169969_971497_01_000204 on host: rsync.cloud169.wd.nm.ted. Exit status: 1. Diagnostics: Exception from container-launch. Container id: container_1459849169969_971497_01_000204 Exit code: 1 Stack trace: ExitCodeException exitCode=1: at org.apache.hadoop.util.Shell.runCommand(Shell.java:561) at org.apache.hadoop.util.Shell.run(Shell.java:478) at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:725) at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:214) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Container exited with a non-zero exit code 1 )] in 1 attempts org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. 
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postLocalMessage(Dispatcher.scala:126) at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:227) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:511) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:100) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:148) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:146) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:117) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:115) at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 INFO storage.BlockManagerMasterEndpoint: Registering block manager rsync.cloud105.wd.nm.nop.ted:38206 with 4.3 GB RAM, BlockManagerId(140, rsync.cloud105.wd.nm.nop.ted, 38206) 16/05/03 23:20:32 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(159,Container marked as failed: container_1459849169969_971497_01_000164 on host: rsync.cloud186.wd.nm.ted. Exit status: 1. Diagnostics: Exception from container-launch. Container id: container_1459849169969_971497_01_000164 Exit code: 1 Stack trace: ExitCodeException exitCode=1: at org.apache.hadoop.util.Shell.runCommand(Shell.java:561) at org.apache.hadoop.util.Shell.run(Shell.java:478) at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:725) at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:214) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Container exited with a non-zero exit code 1 )] in 1 attempts org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. 
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postLocalMessage(Dispatcher.scala:126) at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:227) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:511) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:100) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:148) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:146) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:117) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:115) at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 67. 16/05/03 23:20:32 ERROR netty.Inbox: Ignoring error org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345) at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! 
Dropping event SparkListenerBlockManagerAdded(1462288832717,BlockManagerId(140, rsync.cloud105.wd.nm.nop.ted, 38206),4588044288) 16/05/03 23:20:32 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(79,Container marked as failed: container_1459849169969_971497_01_000081 on host: rsync.cloud017.wd.nm.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch. Container id: container_1459849169969_971497_01_000081 Exit code: 1 Stack trace: ExitCodeException exitCode=1: at org.apache.hadoop.util.Shell.runCommand(Shell.java:561) at org.apache.hadoop.util.Shell.run(Shell.java:478) at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:725) at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:214) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Container exited with a non-zero exit code 1 )] in 1 attempts org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postLocalMessage(Dispatcher.scala:126) at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:227) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:511) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:100) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:148) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:146) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:117) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:115) at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(82,Container marked as failed: container_1459849169969_971497_01_000084 on host: cloud036.wd.nm.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch. 
Container id: container_1459849169969_971497_01_000084 Exit code: 1 Stack trace: ExitCodeException exitCode=1: at org.apache.hadoop.util.Shell.runCommand(Shell.java:561) at org.apache.hadoop.util.Shell.run(Shell.java:478) at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:725) at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:214) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Container exited with a non-zero exit code 1 )] in 1 attempts org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postLocalMessage(Dispatcher.scala:126) at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:227) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:511) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:100) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:148) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:146) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:117) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:115) at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 186. 16/05/03 23:20:32 ERROR netty.Inbox: Ignoring error org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. 
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345) at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 58. 16/05/03 23:20:32 ERROR netty.Inbox: Ignoring error org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345) at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 147. 
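
What the repeated records above actually say: each "ERROR netty.Inbox: Ignoring error ... Could not find CoarseGrainedScheduler or it has been stopped" is the YARN scheduler backend trying to tell the driver endpoint (registered under the name CoarseGrainedScheduler) that yet another executor was lost, after that endpoint has already been stopped, which normally means the scheduler backend and SparkContext are already shutting down. Together with the "SparkListenerBus has already stopped! Dropping event ..." record, this is follow-on noise rather than the root cause. The primary symptom is that the YARN containers themselves exit with code 1 right at launch ("Exception from container-launch", ExitCodeException from DefaultContainerExecutor); the reason for that exit is not in this driver log at all, only in the stdout/stderr of the failed containers. Assuming YARN log aggregation is enabled, those can be pulled with the standard YARN CLI; the application id below is inferred from the container ids in this log (container_1459849169969_971497_*), and the output file name is arbitrary:

    yarn logs -applicationId application_1459849169969_971497 > application_1459849169969_971497.log
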
16/05/03 23:20:32 ERROR netty.Inbox: Ignoring error org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345) at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 40. 16/05/03 23:20:32 ERROR netty.Inbox: Ignoring error org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. 
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345) at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 168. 16/05/03 23:20:32 ERROR netty.Inbox: Ignoring error org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345) at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 16. 
16/05/03 23:20:32 ERROR netty.Inbox: Ignoring error org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345) at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 141. 16/05/03 23:20:32 ERROR netty.Inbox: Ignoring error org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. 
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345) at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 3. 16/05/03 23:20:32 ERROR netty.Inbox: Ignoring error org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345) at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 19. 
16/05/03 23:20:32 ERROR netty.Inbox: Ignoring error org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345) at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 132. 16/05/03 23:20:32 ERROR netty.Inbox: Ignoring error org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. 
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345) at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 31. 16/05/03 23:20:32 ERROR netty.Inbox: Ignoring error org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345) at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 25. 
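
If the container logs point at resource limits rather than at an environment or classpath problem, the knobs that usually matter on Spark 1.6 over YARN are the executor memory sizing and how many executor failures the application master tolerates before giving up. A minimal sketch, assuming a Scala driver; the application name and every value below are placeholders, not settings recovered from this job:

    import org.apache.spark.{SparkConf, SparkContext}

    // Sketch only: settings commonly revisited when many executor containers
    // die at launch. Tune them against what the container logs actually show.
    val conf = new SparkConf()
      .setAppName("example-app")                           // hypothetical name
      .set("spark.executor.memory", "4g")                  // executor JVM heap
      .set("spark.yarn.executor.memoryOverhead", "1024")   // extra container headroom, in MB
      .set("spark.yarn.max.executor.failures", "200")      // lost executors tolerated before the app is failed
    val sc = new SparkContext(conf)

The same keys can equally be passed to spark-submit with --conf instead of being hard-coded in the driver.
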
16/05/03 23:20:32 ERROR netty.Inbox: Ignoring error org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345) at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 93. 16/05/03 23:20:32 ERROR netty.Inbox: Ignoring error org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. 
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345) at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 10. 16/05/03 23:20:32 ERROR netty.Inbox: Ignoring error org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345) at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 135. 
16/05/03 23:20:32 ERROR netty.Inbox: Ignoring error org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345) at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 109. 16/05/03 23:20:32 ERROR netty.Inbox: Ignoring error org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. 
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345) at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 114. 16/05/03 23:20:32 ERROR netty.Inbox: Ignoring error org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345) at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 191. 
16/05/03 23:20:32 ERROR netty.Inbox: Ignoring error org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345) at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 153. 16/05/03 23:20:32 ERROR netty.Inbox: Ignoring error org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. 
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345) at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 171. 16/05/03 23:20:32 ERROR netty.Inbox: Ignoring error org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345) at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 75. 
16/05/03 23:20:32 ERROR netty.Inbox: Ignoring error org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345) at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 108. 16/05/03 23:20:32 ERROR netty.Inbox: Ignoring error org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. 
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345) at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 13. 16/05/03 23:20:32 ERROR netty.Inbox: Ignoring error org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345) at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 37. 
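
The RemoveExecutor warnings name the NodeManager host of every failed container, so it is worth checking whether the failures cluster on a few nodes (for instance bad local disks or a broken local install) or are spread evenly across the cluster, which would point back at the submitted job environment itself. A small self-contained Scala sketch that tallies failed containers per host from a saved copy of this driver log; the file name driver.log is hypothetical:

    import scala.io.Source

    // Count "Container marked as failed ... on host: X." occurrences per host.
    val hostRe = """on host: (\S+)\.""".r
    val counts = Source.fromFile("driver.log").getLines()
      .flatMap(line => hostRe.findAllMatchIn(line).map(_.group(1)))
      .toSeq.groupBy(identity).mapValues(_.size)

    // Print hosts with the most failures first.
    counts.toSeq.sortBy(-_._2).foreach { case (host, n) => println(f"$n%5d  $host") }
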
16/05/03 23:20:32 ERROR netty.Inbox: Ignoring error org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345) at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(86,Container marked as failed: container_1459849169969_971497_01_000088 on host: rsync.cloud007.wd.nm.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch. Container id: container_1459849169969_971497_01_000088 Exit code: 1 Stack trace: ExitCodeException exitCode=1: at org.apache.hadoop.util.Shell.runCommand(Shell.java:561) at org.apache.hadoop.util.Shell.run(Shell.java:478) at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:725) at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:214) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Container exited with a non-zero exit code 1 )] in 1 attempts org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. 
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postLocalMessage(Dispatcher.scala:126) at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:227) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:511) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:100) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:148) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:146) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:117) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:115) at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(97,Container marked as failed: container_1459849169969_971497_01_000099 on host: cloud1014140122.wd.nm.ss.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch. Container id: container_1459849169969_971497_01_000099 Exit code: 1 Stack trace: ExitCodeException exitCode=1: at org.apache.hadoop.util.Shell.runCommand(Shell.java:561) at org.apache.hadoop.util.Shell.run(Shell.java:478) at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:725) at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:214) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Container exited with a non-zero exit code 1 )] in 1 attempts org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. 
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postLocalMessage(Dispatcher.scala:126) at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:227) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:511) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:100) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:148) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:146) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:117) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:115) at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(78,Container marked as failed: container_1459849169969_971497_01_000080 on host: rsync.cloud10141049124.wd.nm.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch. Container id: container_1459849169969_971497_01_000080 Exit code: 1 Stack trace: ExitCodeException exitCode=1: at org.apache.hadoop.util.Shell.runCommand(Shell.java:561) at org.apache.hadoop.util.Shell.run(Shell.java:478) at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:725) at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:214) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Container exited with a non-zero exit code 1 )] in 1 attempts org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. 
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postLocalMessage(Dispatcher.scala:126) at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:227) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:511) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:100) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:148) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:146) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:117) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:115) at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(46,Container marked as failed: container_1459849169969_971497_01_000048 on host: cloud1014113124.wd.nm.ss.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch. Container id: container_1459849169969_971497_01_000048 Exit code: 1 Stack trace: ExitCodeException exitCode=1: at org.apache.hadoop.util.Shell.runCommand(Shell.java:561) at org.apache.hadoop.util.Shell.run(Shell.java:478) at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:725) at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:214) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Container exited with a non-zero exit code 1 )] in 1 attempts org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. 
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postLocalMessage(Dispatcher.scala:126) at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:227) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:511) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:100) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:148) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:146) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:117) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:115) at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(113,Container marked as failed: container_1459849169969_971497_01_000115 on host: rsync.cloud10141080021.wd.nm.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch. Container id: container_1459849169969_971497_01_000115 Exit code: 1 Stack trace: ExitCodeException exitCode=1: at org.apache.hadoop.util.Shell.runCommand(Shell.java:561) at org.apache.hadoop.util.Shell.run(Shell.java:478) at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:725) at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:214) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Container exited with a non-zero exit code 1 )] in 1 attempts org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. 
    at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161)
    at org.apache.spark.rpc.netty.Dispatcher.postLocalMessage(Dispatcher.scala:126)
    at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:227)
    at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:511)
    at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:100)
    at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77)
    at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:148)
    at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:146)
    at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:117)
    at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:115)
    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
16/05/03 23:20:32 INFO storage.MemoryStore: MemoryStore cleared
16/05/03 23:20:32 INFO storage.BlockManager: BlockManager stopped
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 102.
16/05/03 23:20:32 ERROR netty.Inbox: Ignoring error
org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped.
    at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161)
    at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131)
    at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192)
    at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516)
    at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345)
    at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497)
    at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290)
    at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121)
    at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120)
    at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142)
    at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204)
    at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
    at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 96.
16/05/03 23:20:32 ERROR netty.Inbox: Ignoring error
org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped.
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345) at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 123. 16/05/03 23:20:32 ERROR netty.Inbox: Ignoring error org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345) at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 81. 
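Note: the "Exit status: 1 ... Exception from container-launch" diagnostics that YARN attaches to the RemoveExecutor messages above only say that the executor process exited with a non-zero code; the actual reason is in that executor's own container logs on the NodeManager. A minimal sketch of pulling those logs, assuming the yarn CLI is on the PATH, log aggregation is enabled on the cluster, and the application id inferred from the container ids above (application_1459849169969_971497) is the right one; the output file name is illustrative:

// Sketch: dump every container's stdout/stderr for the application whose
// executors are failing above, so the real cause behind "Exit status: 1"
// can be inspected. Assumes `yarn logs -applicationId` is available.
import scala.sys.process._
import java.io.File

object FetchFailedContainerLogs {
  def main(args: Array[String]): Unit = {
    // Application id inferred from container_1459849169969_971497_01_* above.
    val appId = "application_1459849169969_971497"
    val out   = new File(s"$appId.log") // illustrative output location
    (Seq("yarn", "logs", "-applicationId", appId) #> out).!
  }
}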
16/05/03 23:20:32 ERROR netty.Inbox: Ignoring error org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345) at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 126. 16/05/03 23:20:32 ERROR netty.Inbox: Ignoring error org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. 
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345) at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 174. 16/05/03 23:20:32 ERROR netty.Inbox: Ignoring error org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345) at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 127. 
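Each failed launch like the ones above is also counted by the Spark ApplicationMaster against spark.yarn.max.executor.failures, and exceeding that limit fails the whole application. This excerpt alone does not show what triggered the shutdown that follows, but if the exit-code-1 launches turn out to be transient, one option is to raise that budget. A minimal sketch, assuming Spark 1.6 on YARN as in this log; the property name is from the Spark running-on-YARN configuration and the value 200 is purely illustrative, not a recommendation:

// Sketch only: tolerate more failed executor launches before the YARN
// ApplicationMaster gives up. The same setting can be passed on spark-submit
// via --conf spark.yarn.max.executor.failures=200 instead of in code.
import org.apache.spark.{SparkConf, SparkContext}

object ExecutorFailureBudget {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("executor-failure-budget-sketch")
      .set("spark.yarn.max.executor.failures", "200") // illustrative value
    val sc = new SparkContext(conf)
    try {
      // placeholder job body
      sc.parallelize(1 to 10).count()
    } finally {
      sc.stop()
    }
  }
}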
16/05/03 23:20:32 ERROR netty.Inbox: Ignoring error
org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped.
    at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161)
    at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131)
    at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192)
    at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516)
    at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345)
    at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497)
    at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290)
    at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121)
    at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120)
    at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142)
    at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204)
    at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
    at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101414436.wd.nm.ss.nop.ted:49601) with ID 61
16/05/03 23:20:32 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerExecutorAdded(1462288832880,61,org.apache.spark.scheduler.cluster.ExecutorData@a1cd4a80)
16/05/03 23:20:32 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
16/05/03 23:20:32 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/05/03 23:20:32 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(85,Container marked as failed: container_1459849169969_971497_01_000087 on host: cloud10141051020.wd.nm.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1459849169969_971497_01_000087 Exit code: 1 Stack trace: ExitCodeException exitCode=1: at org.apache.hadoop.util.Shell.runCommand(Shell.java:561) at org.apache.hadoop.util.Shell.run(Shell.java:478) at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:725) at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:214) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Container exited with a non-zero exit code 1 )] in 1 attempts org.apache.spark.SparkException: Could not find CoarseGrainedScheduler or it has been stopped. at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:161) at org.apache.spark.rpc.netty.Dispatcher.postLocalMessage(Dispatcher.scala:126) at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:227) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:511) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:100) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:148) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:146) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:117) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:115) at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:32 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(87,Container marked as failed: container_1459849169969_971497_01_000089 on host: rsync.cloud10141064023.wd.nm.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch. 
Container id: container_1459849169969_971497_01_000089
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:561)
    at org.apache.hadoop.util.Shell.run(Shell.java:478)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:725)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:214)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
)] in 1 attempts
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@3aee6637 rejected from java.util.concurrent.ScheduledThreadPoolExecutor@7e99f5b2[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
    at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
    at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
    at java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:325)
    at java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:530)
    at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:239)
    at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:511)
    at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:100)
    at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77)
    at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:148)
    at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:146)
    at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:117)
    at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:115)
    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
16/05/03 23:20:32 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16/05/03 23:20:32 INFO cluster.YarnClientSchedulerBackend: Disabling executor 43.
16/05/03 23:20:33 ERROR netty.Inbox: Ignoring error
java.lang.IllegalStateException: RpcEnv already stopped.
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:159) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.reviveOffers(CoarseGrainedSchedulerBackend.scala:345) at org.apache.spark.scheduler.TaskSchedulerImpl.executorLost(TaskSchedulerImpl.scala:497) at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.disableExecutor(CoarseGrainedSchedulerBackend.scala:290) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:121) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint$$anonfun$onDisconnected$1.apply(YarnSchedulerBackend.scala:120) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnDriverEndpoint.onDisconnected(YarnSchedulerBackend.scala:120) at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:142) at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:33 ERROR server.TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message. java.lang.IllegalStateException: RpcEnv already stopped. 
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:159) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:578) at org.apache.spark.network.server.TransportRequestHandler.processOneWayMessage(TransportRequestHandler.java:170) at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:104) at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:104) at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51) at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:86) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:33 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(130,Container marked as failed: container_1459849169969_971497_01_000134 on host: cloud095.wd.nm.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch. 
Container id: container_1459849169969_971497_01_000134 Exit code: 1 Stack trace: ExitCodeException exitCode=1: at org.apache.hadoop.util.Shell.runCommand(Shell.java:561) at org.apache.hadoop.util.Shell.run(Shell.java:478) at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:725) at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:214) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Container exited with a non-zero exit code 1 )] in 1 attempts java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@2425d18f rejected from java.util.concurrent.ScheduledThreadPoolExecutor@7e99f5b2[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821) at java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:325) at java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:530) at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:239) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:511) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:100) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:148) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:146) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:117) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:115) at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:33 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (rsync.cloud183.wd.nm.ted:35353) with ID 193 16/05/03 23:20:33 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerExecutorAdded(1462288833009,193,org.apache.spark.scheduler.cluster.ExecutorData@d8d68fb6) 16/05/03 23:20:33 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(180,Container marked as failed: container_1459849169969_971497_01_000186 on host: cloud1014113114.wd.nm.ss.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch. 
Container id: container_1459849169969_971497_01_000186
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:561)
    at org.apache.hadoop.util.Shell.run(Shell.java:478)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:725)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:214)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
)] in 1 attempts
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@7c43c586 rejected from java.util.concurrent.ScheduledThreadPoolExecutor@7e99f5b2[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
    at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
    at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
    at java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:325)
    at java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:530)
    at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:239)
    at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:511)
    at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:100)
    at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77)
    at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:148)
    at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:146)
    at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:117)
    at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:115)
    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
16/05/03 23:20:33 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
16/05/03 23:20:33 ERROR server.TransportRequestHandler: Error sending result RpcResponse{requestId=4887039326450737701, body=NioManagedBuffer{buf=java.nio.HeapByteBuffer[pos=0 lim=162 cap=162]}} to rsync.cloud183.wd.nm.ted/10.141.28.73:35353; closing connection
java.nio.channels.ClosedChannelException
16/05/03 23:20:33 INFO spark.SparkContext: Successfully stopped SparkContext
16/05/03 23:20:33 ERROR server.TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message.
java.lang.IllegalStateException: RpcEnv already stopped.
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:159) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:578) at org.apache.spark.network.server.TransportRequestHandler.processOneWayMessage(TransportRequestHandler.java:170) at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:104) at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:104) at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51) at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:86) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:33 ERROR cluster.YarnScheduler: Lost executor 39 on cloud122.wd.nm.nop.ted: Container marked as failed: container_1459849169969_971497_01_000041 on host: cloud122.wd.nm.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch. 
Container id: container_1459849169969_971497_01_000041 Exit code: 1 Stack trace: ExitCodeException exitCode=1: at org.apache.hadoop.util.Shell.runCommand(Shell.java:561) at org.apache.hadoop.util.Shell.run(Shell.java:478) at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:725) at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:214) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Container exited with a non-zero exit code 1 16/05/03 23:20:33 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerExecutorRemoved(1462288833038,39,Container marked as failed: container_1459849169969_971497_01_000041 on host: cloud122.wd.nm.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch. Container id: container_1459849169969_971497_01_000041 Exit code: 1 Stack trace: ExitCodeException exitCode=1: at org.apache.hadoop.util.Shell.runCommand(Shell.java:561) at org.apache.hadoop.util.Shell.run(Shell.java:478) at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:725) at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:214) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Container exited with a non-zero exit code 1 ) 16/05/03 23:20:33 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Container marked as failed: container_1459849169969_971497_01_000151 on host: rsync.cloud187.wd.nm.ted. Exit status: 1. Diagnostics: Exception from container-launch. 
Container id: container_1459849169969_971497_01_000151 Exit code: 1 Stack trace: ExitCodeException exitCode=1: at org.apache.hadoop.util.Shell.runCommand(Shell.java:561) at org.apache.hadoop.util.Shell.run(Shell.java:478) at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:725) at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:214) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Container exited with a non-zero exit code 1 16/05/03 23:20:33 ERROR server.TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message. java.lang.IllegalStateException: RpcEnv already stopped. at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:159) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:578) at org.apache.spark.network.server.TransportRequestHandler.processOneWayMessage(TransportRequestHandler.java:170) at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:104) at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:104) at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51) at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:86) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) at 
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
    at java.lang.Thread.run(Thread.java:745)
16/05/03 23:20:33 ERROR server.TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message.
java.lang.IllegalStateException: RpcEnv already stopped.
    at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:159)
    at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131)
    at org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:578)
    at org.apache.spark.network.server.TransportRequestHandler.processOneWayMessage(TransportRequestHandler.java:170)
    at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:104)
    at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:104)
    at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51)
    at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
    at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
    at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:86)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
    at java.lang.Thread.run(Thread.java:745)
Total time consumed: 62.136951095 seconds
16/05/03 23:20:33 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(107,Container marked as failed: container_1459849169969_971497_01_000109 on host: cloud028.wd.nm.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1459849169969_971497_01_000109 Exit code: 1 Stack trace: ExitCodeException exitCode=1: at org.apache.hadoop.util.Shell.runCommand(Shell.java:561) at org.apache.hadoop.util.Shell.run(Shell.java:478) at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:725) at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:214) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Container exited with a non-zero exit code 1 )] in 1 attempts java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@7bf18d80 rejected from java.util.concurrent.ScheduledThreadPoolExecutor@7e99f5b2[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821) at java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:325) at java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:530) at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:239) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:511) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:100) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:148) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:146) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:117) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:115) at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:33 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(152,Container marked as failed: container_1459849169969_971497_01_000157 on host: rsync.cloud10141073118.wd.nm.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch. 
16/05/03 23:20:33 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(152,Container marked as failed: container_1459849169969_971497_01_000157 on host: rsync.cloud10141073118.wd.nm.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1459849169969_971497_01_000157  Exit code: 1
Stack trace: ExitCodeException exitCode=1: ... (frames identical to the ExitCodeException trace above; elided)
Container exited with a non-zero exit code 1 )] in 1 attempts
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@4ade98f5 rejected from java.util.concurrent.ScheduledThreadPoolExecutor@7e99f5b2[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
    ... (frames identical to the RejectedExecutionException trace above; elided)
16/05/03 23:20:33 INFO cluster.YarnClientSchedulerBackend: Asked to remove non-existent executor 39
16/05/03 23:20:33 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(146,Container marked as failed: container_1459849169969_971497_01_000151 on host: rsync.cloud187.wd.nm.ted. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1459849169969_971497_01_000151  Exit code: 1
Stack trace: ExitCodeException exitCode=1: ... (frames identical to the ExitCodeException trace above; elided)
Container exited with a non-zero exit code 1 )] in 1 attempts
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@5ef9f657 rejected from java.util.concurrent.ScheduledThreadPoolExecutor@7e99f5b2[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
    at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
    at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
    at java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:325)
    at java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:530)
    at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:239)
    at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:511)
    at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:100)
    at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77)
    at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.removeExecutor(CoarseGrainedSchedulerBackend.scala:359)
    at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$receive$1.applyOrElse(YarnSchedulerBackend.scala:176)
    at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:116)
    at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204)
    at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
    at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
16/05/03 23:20:33 ERROR server.TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message.
java.lang.IllegalStateException: RpcEnv already stopped.
    ... (frames identical to the "RpcEnv already stopped" trace above; elided)
16/05/03 23:20:33 ERROR cluster.YarnScheduler: Lost executor 149 on rsync.cloud201.wd.nm.ted: Container marked as failed: container_1459849169969_971497_01_000154 on host: rsync.cloud201.wd.nm.ted. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1459849169969_971497_01_000154  Exit code: 1
Stack trace: ExitCodeException exitCode=1: ... (frames identical to the ExitCodeException trace above; elided)
Container exited with a non-zero exit code 1
16/05/03 23:20:34 ERROR util.Utils: Uncaught exception in thread driver-revive-thread
java.lang.IllegalStateException: RpcEnv already stopped.
    at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:159)
    at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131)
    at org.apache.spark.rpc.netty.NettyRpcEnv.send(NettyRpcEnv.scala:192)
    at org.apache.spark.rpc.netty.NettyRpcEndpointRef.send(NettyRpcEnv.scala:516)
    at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(CoarseGrainedSchedulerBackend.scala:102)
    at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(CoarseGrainedSchedulerBackend.scala:102)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint$$anon$1$$anonfun$run$1.apply$mcV$sp(CoarseGrainedSchedulerBackend.scala:102)
    at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1229)
    at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint$$anon$1.run(CoarseGrainedSchedulerBackend.scala:101)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
16/05/03 23:20:33 ERROR server.TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message.
java.lang.IllegalStateException: RpcEnv already stopped.
    ... (frames identical to the "RpcEnv already stopped" trace above; elided)
16/05/03 23:20:34 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(128,Container marked as failed: container_1459849169969_971497_01_000132 on host: rsync.cloud018.wd.nm.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1459849169969_971497_01_000132  Exit code: 1
Stack trace: ExitCodeException exitCode=1: ... (frames identical to the ExitCodeException trace above; elided)
Container exited with a non-zero exit code 1 )] in 1 attempts
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@36759950 rejected from java.util.concurrent.ScheduledThreadPoolExecutor@7e99f5b2[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
    ... (frames identical to the RejectedExecutionException trace above; elided)
16/05/03 23:20:34 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerExecutorRemoved(1462288834068,149,Container marked as failed: container_1459849169969_971497_01_000154 on host: rsync.cloud201.wd.nm.ted. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1459849169969_971497_01_000154  Exit code: 1
Stack trace: ExitCodeException exitCode=1: ... (frames identical to the ExitCodeException trace above; elided)
Container exited with a non-zero exit code 1 )
16/05/03 23:20:34 ERROR cluster.YarnScheduler: Lost executor 179 on rsync.cloud10141032047.wd.nm.nop.ted: Container marked as failed: container_1459849169969_971497_01_000185 on host: cloud10141032047.wd.nm.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1459849169969_971497_01_000185  Exit code: 1
Stack trace: ExitCodeException exitCode=1: ... (frames identical to the ExitCodeException trace above; elided)
Container exited with a non-zero exit code 1
Exception in thread "main" java.lang.UnsupportedOperationException: empty collection
    at org.apache.spark.rdd.RDD$$anonfun$reduce$1$$anonfun$apply$40.apply(RDD.scala:1027)
    at org.apache.spark.rdd.RDD$$anonfun$reduce$1$$anonfun$apply$40.apply(RDD.scala:1027)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD$$anonfun$reduce$1.apply(RDD.scala:1027)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
    at org.apache.spark.rdd.RDD.reduce(RDD.scala:1007)
    at org.apache.spark.graphx2.impl.EdgeRDDImpl.count(EdgeRDDImpl.scala:89)
    at com.github.cloudml.zen.ml.clustering.LDA$.initializeCorpusEdges(LDA.scala:328)
    at com.github.cloudml.zen.examples.ml.LDADriver$.loadCorpus(LDADriver.scala:152)
    at com.github.cloudml.zen.examples.ml.LDADriver$.main(LDADriver.scala:113)
    at com.github.cloudml.zen.examples.ml.LDADriver.main(LDADriver.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
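This is the exception that actually terminates the job: RDD.reduce throws UnsupportedOperationException("empty collection") when the RDD has no elements, and here it is thrown while EdgeRDDImpl.count is counting the edge RDD built by LDA.initializeCorpusEdges, which suggests the loaded corpus produced no edges (for example, an empty or mis-specified input path). The executor-removal noise before and after it is fallout from the resulting shutdown, not the root cause. A small self-contained Scala sketch of the failure mode and of a guard against it (illustrative only; everything except the Spark RDD API is made up):

import org.apache.spark.{SparkConf, SparkContext}

object EmptyReduceDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("empty-reduce-demo").setMaster("local[1]"))

    // An RDD with no elements, standing in for an edge RDD built from an empty corpus.
    val empty = sc.parallelize(Seq.empty[Long])

    // Same failure mode as the count() in the trace above: reduce on an empty RDD throws
    // java.lang.UnsupportedOperationException: empty collection.
    try empty.reduce(_ + _)
    catch { case e: UnsupportedOperationException => println(s"reduce failed: ${e.getMessage}") }

    // A guard that reports the problem instead of blowing up mid-initialisation.
    if (empty.isEmpty()) println("corpus produced no elements; aborting before training")
    else println(s"total = ${empty.reduce(_ + _)}")

    sc.stop()
  }
}

Checking isEmpty (or reducing with fold and a zero value) before counting turns a cryptic empty-collection error deep inside the forked graphx2 EdgeRDDImpl into an explicit, actionable message at corpus-loading time.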
16/05/03 23:20:34 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerExecutorRemoved(1462288834111,179,Container marked as failed: container_1459849169969_971497_01_000185 on host: cloud10141032047.wd.nm.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1459849169969_971497_01_000185  Exit code: 1
Stack trace: ExitCodeException exitCode=1: ... (frames identical to the ExitCodeException trace above; elided)
Container exited with a non-zero exit code 1 )
16/05/03 23:20:34 ERROR server.TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message.
java.lang.IllegalStateException: RpcEnv already stopped.
    ... (frames identical to the "RpcEnv already stopped" trace above; elided)
16/05/03 23:20:34 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(182,Container marked as failed: container_1459849169969_971497_01_000188 on host: cloud101414366.wd.nm.ss.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1459849169969_971497_01_000188  Exit code: 1
Stack trace: ExitCodeException exitCode=1: ... (frames identical to the ExitCodeException trace above; elided)
Container exited with a non-zero exit code 1 )] in 1 attempts
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@2cb6344b rejected from java.util.concurrent.ScheduledThreadPoolExecutor@7e99f5b2[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
    ... (frames identical to the RejectedExecutionException trace above; elided)
16/05/03 23:20:34 ERROR server.TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message.
java.lang.IllegalStateException: RpcEnv already stopped.
    ... (frames identical to the "RpcEnv already stopped" trace above; elided)
16/05/03 23:20:34 ERROR server.TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message.
java.lang.IllegalStateException: RpcEnv already stopped.
    ... (frames identical to the "RpcEnv already stopped" trace above; elided)
16/05/03 23:20:34 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(8,Container marked as failed: container_1459849169969_971497_01_000009 on host: rsync.cloud204.wd.nm.ted. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1459849169969_971497_01_000009  Exit code: 1
Stack trace: ExitCodeException exitCode=1: ... (frames identical to the ExitCodeException trace above; elided)
Container exited with a non-zero exit code 1 )] in 1 attempts
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@6f57cdb5 rejected from java.util.concurrent.ScheduledThreadPoolExecutor@7e99f5b2[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
    ... (frames identical to the RejectedExecutionException trace above; elided)
16/05/03 23:20:34 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(29,Container marked as failed: container_1459849169969_971497_01_000030 on host: cloud101411364.wd.nm.ss.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1459849169969_971497_01_000030  Exit code: 1
Stack trace: ExitCodeException exitCode=1: ... (frames identical to the ExitCodeException trace above; elided)
Container exited with a non-zero exit code 1 )] in 1 attempts
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@7d6b2f63 rejected from java.util.concurrent.ScheduledThreadPoolExecutor@7e99f5b2[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
    ... (frames identical to the RejectedExecutionException trace above; elided)
16/05/03 23:20:34 ERROR cluster.YarnScheduler: Lost executor 146 on rsync.cloud187.wd.nm.ted: Container marked as failed: container_1459849169969_971497_01_000151 on host: rsync.cloud187.wd.nm.ted. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1459849169969_971497_01_000151  Exit code: 1
Stack trace: ExitCodeException exitCode=1: ... (frames identical to the ExitCodeException trace above; elided)
Container exited with a non-zero exit code 1
16/05/03 23:20:34 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerExecutorRemoved(1462288834252,146,Container marked as failed: container_1459849169969_971497_01_000151 on host: rsync.cloud187.wd.nm.ted. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1459849169969_971497_01_000151  Exit code: 1
Stack trace: ExitCodeException exitCode=1: ... (frames identical to the ExitCodeException trace above; elided)
Container exited with a non-zero exit code 1 )
16/05/03 23:20:34 ERROR cluster.YarnScheduler: Lost executor 51 on cloud1014121120.wd.nm.ss.nop.ted: Container marked as failed: container_1459849169969_971497_01_000053 on host: cloud1014121120.wd.nm.ss.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1459849169969_971497_01_000053  Exit code: 1
Stack trace: ExitCodeException exitCode=1: ... (frames identical to the ExitCodeException trace above; elided)
Container exited with a non-zero exit code 1
16/05/03 23:20:34 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerExecutorRemoved(1462288834253,51,Container marked as failed: container_1459849169969_971497_01_000053 on host: cloud1014121120.wd.nm.ss.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1459849169969_971497_01_000053  Exit code: 1
Stack trace: ExitCodeException exitCode=1: ... (frames identical to the ExitCodeException trace above; elided)
Container exited with a non-zero exit code 1 )
16/05/03 23:20:34 ERROR cluster.YarnScheduler: Lost executor 71 on rsync.cloud029.wd.nm.nop.ted: Container marked as failed: container_1459849169969_971497_01_000073 on host: cloud029.wd.nm.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1459849169969_971497_01_000073  Exit code: 1
Stack trace: ExitCodeException exitCode=1: ... (frames identical to the ExitCodeException trace above; elided)
Container exited with a non-zero exit code 1
16/05/03 23:20:34 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerExecutorRemoved(1462288834255,71,Container marked as failed: container_1459849169969_971497_01_000073 on host: cloud029.wd.nm.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1459849169969_971497_01_000073  Exit code: 1
Stack trace: ExitCodeException exitCode=1: ... (frames identical to the ExitCodeException trace above; elided)
Container exited with a non-zero exit code 1 )
16/05/03 23:20:34 ERROR cluster.YarnScheduler: Lost executor 21 on cloud1014119124.wd.nm.ss.nop.ted: Container marked as failed: container_1459849169969_971497_01_000022 on host: cloud1014119124.wd.nm.ss.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1459849169969_971497_01_000022  Exit code: 1
Stack trace: ExitCodeException exitCode=1: ... (frames identical to the ExitCodeException trace above; elided)
Container exited with a non-zero exit code 1
16/05/03 23:20:34 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerExecutorRemoved(1462288834256,21,Container marked as failed: container_1459849169969_971497_01_000022 on host: cloud1014119124.wd.nm.ss.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1459849169969_971497_01_000022  Exit code: 1
Stack trace: ExitCodeException exitCode=1: ... (frames identical to the ExitCodeException trace above; elided)
Container exited with a non-zero exit code 1 )
16/05/03 23:20:34 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud101411828.wd.nm.ss.nop.ted:29937) with ID 99
16/05/03 23:20:34 ERROR server.TransportRequestHandler: Error sending result RpcResponse{requestId=5908618452049789862, body=NioManagedBuffer{buf=java.nio.HeapByteBuffer[pos=0 lim=169 cap=169]}} to cloud101411828.wd.nm.ss.nop.ted/10.141.18.28:29937; closing connection
java.nio.channels.ClosedChannelException
16/05/03 23:20:34 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(100,Container marked as failed: container_1459849169969_971497_01_000102 on host: cloud148.wd.nm.ted. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1459849169969_971497_01_000102  Exit code: 1
Stack trace: ExitCodeException exitCode=1: ... (frames identical to the ExitCodeException trace above; elided)
Container exited with a non-zero exit code 1 )] in 1 attempts
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@81681a6 rejected from java.util.concurrent.ScheduledThreadPoolExecutor@7e99f5b2[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
    ... (frames identical to the RejectedExecutionException trace above; elided)
16/05/03 23:20:34 ERROR server.TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message.
java.lang.IllegalStateException: RpcEnv already stopped.
    ... (frames identical to the "RpcEnv already stopped" trace above; elided)
16/05/03 23:20:34 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerExecutorAdded(1462288834257,99,org.apache.spark.scheduler.cluster.ExecutorData@c1564485)
16/05/03 23:20:34 ERROR server.TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message.
java.lang.IllegalStateException: RpcEnv already stopped.
    ... (frames identical to the "RpcEnv already stopped" trace above; elided)
16/05/03 23:20:34 ERROR server.TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message.
java.lang.IllegalStateException: RpcEnv already stopped.
    ... (frames identical to the "RpcEnv already stopped" trace above; elided)
16/05/03 23:20:34 ERROR server.TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message.
java.lang.IllegalStateException: RpcEnv already stopped.
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:159) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:578) at org.apache.spark.network.server.TransportRequestHandler.processOneWayMessage(TransportRequestHandler.java:170) at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:104) at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:104) at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51) at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:86) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:34 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud1014140100.wd.nm.ss.nop.ted:26861) with ID 20 16/05/03 23:20:34 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(27,Container marked as failed: container_1459849169969_971497_01_000028 on host: rsync.cloud096.wd.nm.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch. 
Container id: container_1459849169969_971497_01_000028 Exit code: 1 Stack trace: ExitCodeException exitCode=1: at org.apache.hadoop.util.Shell.runCommand(Shell.java:561) at org.apache.hadoop.util.Shell.run(Shell.java:478) at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:725) at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:214) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Container exited with a non-zero exit code 1 )] in 1 attempts java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@3f57ab0c rejected from java.util.concurrent.ScheduledThreadPoolExecutor@7e99f5b2[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821) at java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:325) at java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:530) at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:239) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:511) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:100) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:148) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:146) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:117) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:115) at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:34 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(30,Container marked as failed: container_1459849169969_971497_01_000031 on host: cloud101417736.wd.nm.ss.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch. 
Container id: container_1459849169969_971497_01_000031 Exit code: 1 Stack trace: ExitCodeException exitCode=1: at org.apache.hadoop.util.Shell.runCommand(Shell.java:561) at org.apache.hadoop.util.Shell.run(Shell.java:478) at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:725) at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:214) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Container exited with a non-zero exit code 1 )] in 1 attempts java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@42a5b76a rejected from java.util.concurrent.ScheduledThreadPoolExecutor@7e99f5b2[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821) at java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:325) at java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:530) at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:239) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:511) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:100) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:148) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:146) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:117) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:115) at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:34 ERROR server.TransportRequestHandler: Error sending result RpcResponse{requestId=7776401963812070612, body=NioManagedBuffer{buf=java.nio.HeapByteBuffer[pos=0 lim=170 cap=170]}} to cloud1014140100.wd.nm.ss.nop.ted/10.141.40.100:26861; closing connection java.nio.channels.ClosedChannelException 16/05/03 23:20:34 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerExecutorAdded(1462288834276,20,org.apache.spark.scheduler.cluster.ExecutorData@c0920cfc) 16/05/03 23:20:34 ERROR server.TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message. java.lang.IllegalStateException: RpcEnv already stopped. 
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:159) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:578) at org.apache.spark.network.server.TransportRequestHandler.processOneWayMessage(TransportRequestHandler.java:170) at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:104) at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:104) at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51) at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:86) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:34 ERROR server.TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message. java.lang.IllegalStateException: RpcEnv already stopped. 
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:159) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:578) at org.apache.spark.network.server.TransportRequestHandler.processOneWayMessage(TransportRequestHandler.java:170) at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:104) at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:104) at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51) at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:86) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:34 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(17,Container marked as failed: container_1459849169969_971497_01_000018 on host: rsync.cloud10141032033.wd.nm.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch. 
Container id: container_1459849169969_971497_01_000018 Exit code: 1 Stack trace: ExitCodeException exitCode=1: at org.apache.hadoop.util.Shell.runCommand(Shell.java:561) at org.apache.hadoop.util.Shell.run(Shell.java:478) at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:725) at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:214) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Container exited with a non-zero exit code 1 )] in 1 attempts java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@350fd641 rejected from java.util.concurrent.ScheduledThreadPoolExecutor@7e99f5b2[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821) at java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:325) at java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:530) at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:239) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:511) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:100) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:148) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:146) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:117) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:115) at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:34 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(54,Container marked as failed: container_1459849169969_971497_01_000056 on host: cloud101411282.wd.nm.ss.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch. 
Container id: container_1459849169969_971497_01_000056 Exit code: 1 Stack trace: ExitCodeException exitCode=1: at org.apache.hadoop.util.Shell.runCommand(Shell.java:561) at org.apache.hadoop.util.Shell.run(Shell.java:478) at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:725) at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:214) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Container exited with a non-zero exit code 1 )] in 1 attempts java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@4eef4e15 rejected from java.util.concurrent.ScheduledThreadPoolExecutor@7e99f5b2[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821) at java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:325) at java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:530) at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:239) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:511) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:100) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:148) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:146) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:117) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:115) at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:34 ERROR server.TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message. java.lang.IllegalStateException: RpcEnv already stopped. 
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:159) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:578) at org.apache.spark.network.server.TransportRequestHandler.processOneWayMessage(TransportRequestHandler.java:170) at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:104) at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:104) at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51) at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:86) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:34 ERROR server.TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message. java.lang.IllegalStateException: RpcEnv already stopped. 
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:159) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:578) at org.apache.spark.network.server.TransportRequestHandler.processOneWayMessage(TransportRequestHandler.java:170) at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:104) at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:104) at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51) at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:86) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:34 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud119.wd.nm.nop.ted:33187) with ID 22 16/05/03 23:20:34 ERROR server.TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message. java.lang.IllegalStateException: RpcEnv already stopped. 
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:159) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:578) at org.apache.spark.network.server.TransportRequestHandler.processOneWayMessage(TransportRequestHandler.java:170) at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:104) at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:104) at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51) at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:86) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:34 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(173,Container marked as failed: container_1459849169969_971497_01_000179 on host: cloud10141072023.wd.nm.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch. 
Container id: container_1459849169969_971497_01_000179 Exit code: 1 Stack trace: ExitCodeException exitCode=1: at org.apache.hadoop.util.Shell.runCommand(Shell.java:561) at org.apache.hadoop.util.Shell.run(Shell.java:478) at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:725) at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:214) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Container exited with a non-zero exit code 1 )] in 1 attempts java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@2539759f rejected from java.util.concurrent.ScheduledThreadPoolExecutor@7e99f5b2[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821) at java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:325) at java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:530) at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:239) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:511) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:100) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:148) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:146) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:117) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:115) at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:34 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(194,Container marked as failed: container_1459849169969_971497_01_000200 on host: cloud101412162.wd.nm.ss.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch. 
Container id: container_1459849169969_971497_01_000200 Exit code: 1 Stack trace: ExitCodeException exitCode=1: at org.apache.hadoop.util.Shell.runCommand(Shell.java:561) at org.apache.hadoop.util.Shell.run(Shell.java:478) at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:725) at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:214) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Container exited with a non-zero exit code 1 )] in 1 attempts java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@30edda44 rejected from java.util.concurrent.ScheduledThreadPoolExecutor@7e99f5b2[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821) at java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:325) at java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:530) at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:239) at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:511) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:100) at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:148) at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$org$apache$spark$scheduler$cluster$YarnSchedulerBackend$$handleExecutorDisconnectedFromDriver$1.applyOrElse(YarnSchedulerBackend.scala:146) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:117) at scala.concurrent.Future$$anonfun$onSuccess$1.apply(Future.scala:115) at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:34 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerExecutorAdded(1462288834307,22,org.apache.spark.scheduler.cluster.ExecutorData@12acf5e8) 16/05/03 23:20:34 ERROR server.TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message. java.lang.IllegalStateException: RpcEnv already stopped. 
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:159) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:578) at org.apache.spark.network.server.TransportRequestHandler.processOneWayMessage(TransportRequestHandler.java:170) at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:104) at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:104) at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51) at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:86) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:34 INFO util.ShutdownHookManager: Shutdown hook called 16/05/03 23:20:34 ERROR cluster.YarnScheduler: Lost executor 36 on cloud101419370.wd.nm.ss.nop.ted: Container marked as failed: container_1459849169969_971497_01_000038 on host: cloud101419370.wd.nm.ss.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch. 
Container id: container_1459849169969_971497_01_000038 Exit code: 1 Stack trace: ExitCodeException exitCode=1: at org.apache.hadoop.util.Shell.runCommand(Shell.java:561) at org.apache.hadoop.util.Shell.run(Shell.java:478) at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:725) at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:214) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Container exited with a non-zero exit code 1 16/05/03 23:20:34 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerExecutorRemoved(1462288834350,36,Container marked as failed: container_1459849169969_971497_01_000038 on host: cloud101419370.wd.nm.ss.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch. Container id: container_1459849169969_971497_01_000038 Exit code: 1 Stack trace: ExitCodeException exitCode=1: at org.apache.hadoop.util.Shell.runCommand(Shell.java:561) at org.apache.hadoop.util.Shell.run(Shell.java:478) at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:725) at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:214) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Container exited with a non-zero exit code 1 ) 16/05/03 23:20:34 ERROR cluster.YarnScheduler: Lost executor 7 on rsync.cloud019.wd.nm.nop.ted: Container marked as failed: container_1459849169969_971497_01_000008 on host: cloud019.wd.nm.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch. 
Container id: container_1459849169969_971497_01_000008 Exit code: 1 Stack trace: ExitCodeException exitCode=1: at org.apache.hadoop.util.Shell.runCommand(Shell.java:561) at org.apache.hadoop.util.Shell.run(Shell.java:478) at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:725) at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:214) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Container exited with a non-zero exit code 1 16/05/03 23:20:34 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerExecutorRemoved(1462288834355,7,Container marked as failed: container_1459849169969_971497_01_000008 on host: cloud019.wd.nm.nop.ted. Exit status: 1. Diagnostics: Exception from container-launch. Container id: container_1459849169969_971497_01_000008 Exit code: 1 Stack trace: ExitCodeException exitCode=1: at org.apache.hadoop.util.Shell.runCommand(Shell.java:561) at org.apache.hadoop.util.Shell.run(Shell.java:478) at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:725) at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:214) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299) at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Container exited with a non-zero exit code 1 ) 16/05/03 23:20:34 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (cloud10141032037.wd.nm.nop.ted:25033) with ID 6 16/05/03 23:20:34 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerExecutorAdded(1462288834357,6,org.apache.spark.scheduler.cluster.ExecutorData@7dc26555) 16/05/03 23:20:34 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-d9ab49dc-4f40-4168-a99f-db78a68fe5a4/httpd-868a7b7a-e0f8-439e-922a-55755cc5a9e6 16/05/03 23:20:34 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-d9ab49dc-4f40-4168-a99f-db78a68fe5a4 16/05/03 23:20:34 ERROR server.TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message. java.lang.IllegalStateException: RpcEnv already stopped. 
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:159) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:578) at org.apache.spark.network.server.TransportRequestHandler.processOneWayMessage(TransportRequestHandler.java:170) at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:104) at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:104) at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51) at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:86) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) at java.lang.Thread.run(Thread.java:745) 16/05/03 23:20:34 ERROR server.TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message. java.lang.IllegalStateException: RpcEnv already stopped. 
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:159) at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:131) at org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:578) at org.apache.spark.network.server.TransportRequestHandler.processOneWayMessage(TransportRequestHandler.java:170) at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:104) at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:104) at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51) at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:86) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) at java.lang.Thread.run(Thread.java:745)