I'm new to Spark. I downloaded the spark-1.5.0-bin-hadoop2.6 binary distribution, and when I start pyspark it fails with the following error:
15/10/15 19:41:31 WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
Traceback (most recent call last):
  File "/tmp/spark-1.5.0-bin-hadoop2.6/python/pyspark/shell.py", line 43, in <module>
    sc = SparkContext(pyFiles=add_files)
  File "/tmp/spark-1.5.0-bin-hadoop2.6/python/pyspark/context.py", line 113, in __init__
    conf, jsc, profiler_cls)
  File "/tmp/spark-1.5.0-bin-hadoop2.6/python/pyspark/context.py", line 174, in _do_init
    self._accumulatorServer = accumulators._start_update_server()
  File "/tmp/spark-1.5.0-bin-hadoop2.6/python/pyspark/accumulators.py", line 259, in _start_update_server
    server = AccumulatorServer(("localhost", 0), _UpdateRequestHandler)
  File "/usr/lib64/python2.6/SocketServer.py", line 412, in __init__
    self.server_bind()
  File "/usr/lib64/python2.6/SocketServer.py", line 423, in server_bind
    self.socket.bind(self.server_address)
  File "<string>", line 1, in bind
socket.gaierror: [Errno -2] Name or service not known
Could someone help me figure out what's going on? The machine has JDK 1.7 and Python 2.6, and the OS is Red Hat 6.2.
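For reference, the failing step in the traceback is pyspark's accumulator server binding a TCP socket to ("localhost", 0). A minimal sketch that reproduces just that step outside of Spark (assuming the gaierror comes from resolving the name "localhost", e.g. a missing /etc/hosts entry):

```python
import socket

# Reproduce what AccumulatorServer does internally: bind a TCP socket
# to ("localhost", 0), letting the OS pick a free port.
# If "localhost" cannot be resolved on this machine (for example,
# /etc/hosts lacks a "127.0.0.1 localhost" line), this raises
# socket.gaierror: [Errno -2] Name or service not known,
# matching the traceback above.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("localhost", 0))
host, port = s.getsockname()
print("bound to %s:%d" % (host, port))
s.close()
```

If this tiny script fails with the same gaierror, the problem is the machine's name resolution rather than Spark itself.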