Flume watching a local directory + Spark Streaming ran fine on the test cluster, but once the job was moved to the production environment it kept failing. The launch command was:
spark-submit --class com.mobicloud.test.FlumeTest \
  --jars /root/mrwork/spark-examples-1.2.0-cdh5.3.0-hadoop2.5.0-cdh5.3.0.jar,/root/mrwork/spark-streaming-flume-sink_2.10-1.3.1.jar,/root/mrwork/flume-ng-sdk-1.5.0-cdh5.3.0.jar,/root/mrwork/avro-ipc.jar \
  --master yarn-cluster \
  wc.jar slave02 33333
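For context, com.mobicloud.test.FlumeTest is a push-based Flume receiver job (the stack trace below ends up in FlumeReceiver/FlumeInputDStream). Its source isn't shown here, but the relevant part presumably looks something like this sketch; the batch interval and the print() step are made up for illustration:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.flume.FlumeUtils

object FlumeTest {
  def main(args: Array[String]): Unit = {
    val Array(host, port) = args    // e.g. "slave02", "33333" from the command line above
    val ssc = new StreamingContext(new SparkConf().setAppName("FlumeTest"), Seconds(10))

    // Push-based receiver: Spark starts an Avro/Netty server that has to bind host:port,
    // and it does so inside whichever executor the receiver task gets scheduled on.
    val flumeStream = FlumeUtils.createStream(ssc, host, port.toInt)
    flumeStream.map(e => new String(e.event.getBody.array())).print()

    ssc.start()
    ssc.awaitTermination()
  }
}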
The error message was as follows:
15/06/17 15:43:08 ERROR scheduler.ReceiverTracker: Deregistered receiver for stream 0: Error starting receiver 0 - org.jboss.netty.channel.ChannelException: Failed to bind to: slave02/192.168.80.15:33333
    at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
    at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:106)
    at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:119)
    at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:74)
    at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:68)
    at org.apache.spark.streaming.flume.FlumeReceiver.initServer(FlumeInputDStream.scala:164)
    at org.apache.spark.streaming.flume.FlumeReceiver.onStart(FlumeInputDStream.scala:171)
    at org.apache.spark.streaming.receiver.ReceiverSupervisor.startReceiver(ReceiverSupervisor.scala:121)
    at org.apache.spark.streaming.receiver.ReceiverSupervisor.start(ReceiverSupervisor.scala:106)
    at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$8.apply(ReceiverTracker.scala:277)
    at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$8.apply(ReceiverTracker.scala:269)
    at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1314)
    at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1314)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
    at org.apache.spark.scheduler.Task.run(Task.scala:56)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.BindException: Cannot assign requested address
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:444)
    at sun.nio.ch.Net.bind(Net.java:436)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.jboss.netty.channel.socket.nio.NioServerBoss$RegisterTask.run(NioServerBoss.java:193)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:366)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:290)
    at org.jboss.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)
    ... 3 more
In plain terms, the port could not be bound!! I tried seven or eight different ports, no luck; I tried other slave nodes, still no luck. Finally I tried the address 127.0.0.1, and the error went away, but the job still wasn't actually doing anything useful.
With no better ideas, I went back to the test cluster to try again, only to find that the test cluster now threw the same error! After a few more attempts it even started reporting something along the lines of "jar does not exist / permission denied", which was stranger still, so I fell back on the sysadmin's universal fix: reboot. After restarting the whole cluster, the error magically disappeared and everything worked again. A small miracle.
Back in production, I waited for a window and restarted the whole cluster, fully expecting the problem to be solved. After resubmitting the job, the same error was still there. More trial and error got me nowhere. Finally I tried 0.0.0.0, which behaved just like 127.0.0.1: no error was reported, but the port was never actually bound where it needed to be. Then I looked at the logs and noticed that apparently only two servers were running the job normally; apart from "slave08" and "slave12", the other 13 machines did not show up in the logs at all. Going back over the logs from the 127.0.0.1 run and from the job that had run successfully on the test cluster, I found that in both cases only two machines were running as well; the test-cluster run only succeeded because one of those two machines happened to be the one the job was submitted from.
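(For anyone wanting to reproduce that check: assuming YARN log aggregation is enabled, the aggregated container logs show which hosts actually ran containers; the application id below is just a placeholder.)

# placeholder application id; list which slave hosts appear in the container logs
yarn logs -applicationId application_1434XXXXXXXXX_0001 | grep -oE 'slave[0-9]+' | sort -u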
At this point the problem was basically clear: when the Spark job started, only two machines were actually running it, and the machine the receiver was supposed to bind the port on was not one of those two, hence the error. With 127.0.0.1 or 0.0.0.0 there was no error because the port simply got bound locally on whichever machine was running the Spark job. To verify this guess I logged into slave08 and saw that port 33333 was not in use, then logged into slave12 and found that port 33333 was indeed bound. The guess was right!
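(The check itself was nothing fancy; on each node something like the following is enough to see whether anything is listening on the port:)

# run on slave08 and slave12: is anything listening on 33333?
netstat -tlnp | grep 33333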
With the problem pinned down, the next step was to figure out why only two machines were running the Spark job. The official Spark docs give only a single launch example, shown below:
$ ./bin/spark-submit --class org.apache.spark.examples.SparkPi \
    --master yarn-cluster \
    --num-executors 3 \
    --driver-memory 4g \
    --executor-memory 2g \
    --executor-cores 1 \
    --queue thequeue \
    lib/spark-examples*.jar \
    10
But there is no explanation of what each parameter actually does. My guess was that num-executors specifies how many nodes run the Spark job (strictly speaking, how many executors, which on this cluster amounts to the same thing) and that it defaults to 2. After changing it to 15 (the size of the cluster), I could finally bind the port on whichever machine I wanted.
The corrected command for launching the Spark job:
spark-submit --class com.mobicloud.test.FlumeTest \
  --jars /root/mrwork/spark-examples-1.2.0-cdh5.3.0-hadoop2.5.0-cdh5.3.0.jar,/root/mrwork/spark-streaming-flume-sink_2.10-1.3.1.jar,/root/mrwork/flume-ng-sdk-1.5.0-cdh5.3.0.jar,/root/mrwork/avro-ipc.jar \
  --master yarn-cluster \
  --num-executors 15 \
  wc.jar slave02 33333
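A side note on why this works: the push-based Flume receiver reports the hostname you pass in as its preferred location, so as long as an executor is actually running on slave02 the receiver lands there and the bind succeeds. If you prefer config properties over flags, my understanding from the YARN docs (not verified on this cluster) is that --num-executors corresponds to the spark.executor.instances property, e.g.:

# equivalent form; --jars list omitted for brevity
spark-submit --class com.mobicloud.test.FlumeTest \
  --master yarn-cluster \
  --conf spark.executor.instances=15 \
  wc.jar slave02 33333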
Rant
Dear Spark docs: would it kill you to spell out what each parameter is actually for?! Argh!