access from local machine to spark docker in azure vm

Spark docker is installed in an Azure VM (CentOS 7.2), and I want to access HDFS from my local machine (Windows).

On Windows I ran curl -i -v -L http://52.234.XXX.XXX:50070/webhdfs/v1/user/helloworld.txt?op=OPEN, and the exception is:

$ curl -i -v -L http://52.234.XXX.XXX:50070/webhdfs/v1/user/helloworld.txt?op=OPEN
* timeout on name lookup is not supported
*   Trying 52.234.XXX.XXX...
* TCP_NODELAY set
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* Connected to 52.234.XXX.XXX (52.234.XXX.XXX) port 50070 (#0)
> GET /webhdfs/v1/user/helloworld.txt?op=OPEN HTTP/1.1
> Host: 52.234.XXX.XXX:50070
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 307 TEMPORARY_REDIRECT
< Cache-Control: no-cache
< Expires: Fri, 16 Mar 2018 02:16:37 GMT
< Date: Fri, 16 Mar 2018 02:16:37 GMT
< Pragma: no-cache
< Expires: Fri, 16 Mar 2018 02:16:37 GMT
< Date: Fri, 16 Mar 2018 02:16:37 GMT
< Pragma: no-cache
< Location: http://sandbox:50075/webhdfs/v1/user/helloworld.txt?op=OPEN&namenoderpcaddress=sandbox:9000&offset=0
< Content-Type: application/octet-stream
< Content-Length: 0
< Server: Jetty(6.1.26)
<
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* Connection #0 to host 52.234.XXX.XXX left intact
* Issue another request to this URL: 'http://sandbox:50075/webhdfs/v1/user/helloworld.txt?op=OPEN&namenoderpcaddress=sandbox:9000&offset=0'
* timeout on name lookup is not supported
*   Trying 10.122.118.83...
* TCP_NODELAY set
  0     0    0     0    0     0      0      0 --:--:--  0:00:21 --:--:--     0
HTTP/1.1 307 TEMPORARY_REDIRECT
Cache-Control: no-cache
Expires: Fri, 16 Mar 2018 02:16:37 GMT
Date: Fri, 16 Mar 2018 02:16:37 GMT
Pragma: no-cache
Expires: Fri, 16 Mar 2018 02:16:37 GMT
Date: Fri, 16 Mar 2018 02:16:37 GMT
Pragma: no-cache
Location: http://sandbox:50075/webhdfs/v1/user/helloworld.txt?op=OPEN&namenoderpcaddress=sandbox:9000&offset=0
Content-Type: application/octet-stream
Content-Length: 0
Server: Jetty(6.1.26)

* connect to 10.122.118.83 port 50075 failed: Timed out
* Failed to connect to sandbox port 50075: Timed out
* Closing connection 1
curl: (7) Failed to connect to sandbox port 50075: Timed out

The CentOS public IP address is 52.234.XXX.XXX.

Is this caused by the unknown IP '10.122.118.83'? Is that the datanode's IP address? I have already opened these ports in the Azure VM network settings.

I started docker with:

docker run -it -p 8088:8088 -p 8042:8042 -p 9000:9000 -p 8087:8087 -p 50070:50070 -p 50010:50010 -p 50075:50075 -p 50475:50475 --name sparkdocker -h sandbox --network=host sequenceiq/spark:1.6.0 bash

Hadoop's fs.defaultFS is 'hdfs://sandbox:9000'. An Azure machine in the same resource group as the CentOS VM has no problem accessing HDFS (uploading, downloading, and reading files).

Spark docker ifconfig:

docker0   Link encap:Ethernet  HWaddr 02:42:D9:2A:5D:BB
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:53 errors:0 dropped:0 overruns:0 frame:0
          TX packets:57 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:3889 (3.7 KiB)  TX bytes:6674 (6.5 KiB)

eth0      Link encap:Ethernet  HWaddr 00:0D:3A:14:B5:C1
          inet addr:10.0.0.7  Bcast:10.0.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:60543 errors:0 dropped:0 overruns:0 frame:0
          TX packets:68081 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:22930277 (21.8 MiB)  TX bytes:11271703 (10.7 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:14779 errors:0 dropped:0 overruns:0 frame:0
          TX packets:14779 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:4032619 (3.8 MiB)  TX bytes:4032619 (3.8 MiB)

CentOS VM ifconfig:

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:d9:2a:5d:bb  txqueuelen 0  (Ethernet)
        RX packets 53  bytes 3889 (3.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 57  bytes 6674 (6.5 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.7  netmask 255.255.255.0  broadcast 10.0.0.255
        ether 00:0d:3a:14:b5:c1  txqueuelen 1000  (Ethernet)
        RX packets 60750  bytes 23017881 (21.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 68320  bytes 11310643 (10.7 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1  (Local Loopback)
        RX packets 14857  bytes 4042781 (3.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 14857  bytes 4042781 (3.8 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Your remote hostname cannot be sandbox resolving to the local IP 10.0.0.7 if you want to expose it to an external network. Since the various network calls between the datanodes and the namenode are returned to external clients on the remote network, you need externally resolvable IPs or DNS records throughout the entire request.
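You can see this directly in the curl trace: the namenode's 307 redirect hands the client a Location header pointing at the datanode's advertised hostname, which the client must then resolve and connect to itself. A minimal sketch, parsing the Location value from the trace above:

```python
from urllib.parse import urlparse, parse_qs

# Location header returned by the namenode's 307 redirect (copied from the curl trace)
location = ("http://sandbox:50075/webhdfs/v1/user/helloworld.txt"
            "?op=OPEN&namenoderpcaddress=sandbox:9000&offset=0")

parsed = urlparse(location)

# The client must now open its own connection to this host:port --
# but "sandbox" only resolves inside the Azure virtual network,
# so the Windows client times out exactly as shown in the trace.
print(parsed.hostname, parsed.port)                   # sandbox 50075
print(parse_qs(parsed.query)["namenoderpcaddress"])   # ['sandbox:9000']
```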

The same goes for the YARN services, which you can see by looking at the cluster on port 8088.

I believe it is this setting in core-site.xml that needs to be something like hdfs://external.namenode.fqdn:port

fs.default.name

And in hdfs-site.xml, set both of these to true - because in a cloud environment your hostnames are typically static while the IPs can change. Also, within the Azure network the nodes know how to talk to each other, but outside the cluster the internal DNS names cannot be resolved.

dfs.client.use.datanode.hostname 
dfs.datanode.use.datanode.hostname
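A sketch of what those settings could look like, assuming your VM has a public DNS name (the FQDN below is a placeholder you would replace with your own):

```xml
<!-- core-site.xml: namenode address must be resolvable from outside -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://your-vm.westus.cloudapp.azure.com:9000</value>
</property>

<!-- hdfs-site.xml: make clients and datanodes use hostnames, not internal IPs -->
<property>
  <name>dfs.client.use.datanode.hostname</name>
  <value>true</value>
</property>
<property>
  <name>dfs.datanode.use.datanode.hostname</name>
  <value>true</value>
</property>
```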

If you are running in Azure, I would suggest just using HDInsight rather than some single-datanode sandbox.

In any case, you don't need a remote Spark instance. You can develop locally, then deploy that Spark application to the remote YARN (or Spark Standalone) cluster. You don't need HDFS either... Spark can read from Azure Blob Storage and run on the standalone scheduler.
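For example, a locally developed application could be submitted to the remote cluster roughly like this (the master address, jar name, and storage path are placeholders, and reading wasb:// paths additionally requires the hadoop-azure connector on the classpath):

```
# Submit a locally built app to the remote Spark Standalone master
spark-submit --master spark://52.234.XXX.XXX:7077 my-app.jar

# Inside the app, read directly from Azure Blob Storage instead of HDFS, e.g.
#   spark.read.text("wasb://mycontainer@myaccount.blob.core.windows.net/helloworld.txt")
```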

One more suggestion: never open all the ports of an unsecured Hadoop cluster and post its public IP on the network. Instead, use SSH forwarding on your side to connect into the Azure network securely.
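A sketch of that SSH forwarding from the Windows side (user name and address are placeholders); the WebHDFS ports are then reached through the tunnel rather than over the open internet:

```
# Forward the namenode and datanode HTTP ports through SSH
ssh -L 50070:localhost:50070 -L 50075:localhost:50075 azureuser@52.234.XXX.XXX

# Then, on the local machine:
curl -L "http://localhost:50070/webhdfs/v1/user/helloworld.txt?op=OPEN"

# Note: the redirect still points at "sandbox", so that name must also
# resolve locally, e.g. map sandbox -> 127.0.0.1 in the client's hosts file.
```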