Starting of otbr-agent in openthread/otbr docker container fails

I am using a Raspberry Pi 4 Model B and want to run the OpenThread Border Router application as a docker container. I start the container with the command

docker run --sysctl "net.ipv6.conf.all.disable_ipv6=0 net.ipv4.conf.all.forwarding=1 net.ipv6.conf.all.forwarding=1" -p 8080:80 --dns=127.0.0.1 -dit --network test-driver-net --volume /dev/ttyACM0:/dev/ttyACM0 --name ot-br --privileged openthread/otbr --radio-url spinel+hdlc+uart:///dev/ttyACM0

I have tried both the openthread/otbr:latest and the openthread/otbr:reference-device image (both pushed on November 10, 2020), and they run into the same problem:
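(Not from the original post, but a quick sanity check before `docker run`: confirm that the serial device being mapped into the container actually exists on the host and is a character device. The `/dev/ttyACM0` path is taken from the command above.)

```shell
#!/bin/sh
# Check that the RCP serial device is present on the host before mapping
# it into the container with --volume (or --device).
DEV=/dev/ttyACM0
if [ -c "$DEV" ]; then
  echo "OK: $DEV is a character device"
else
  echo "missing: $DEV (check the USB connection before starting the container)"
fi
```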

The container starts successfully, but the web GUI is not reachable and no network activity takes place. Here is the container's log output, obtained with docker logs ot-br:
RADIO_URL: spinel+hdlc+uart:///dev/ttyACM0
TUN_INTERFACE_NAME: wpan0
BACKBONE_INTERFACE:
NAT64_PREFIX: 64:ff9b::/96
AUTO_PREFIX_ROUTE: true
AUTO_PREFIX_SLAAC: true
sed: can't read /etc/tayga.conf: No such file or directory
+++ dirname /app/script/server
++ cd /app/script/..
++ [[ ! -n x ]]
++ echo 'Current platform is ubuntu'
Current platform is ubuntu
++ STAGE_DIR=/app/stage
++ BUILD_DIR=/app/build
++ [[ -d /app/stage ]]
++ mkdir -v -p /app/stage
mkdir: created directory '/app/stage'
++ [[ -d /app/build ]]
++ mkdir -v -p /app/build
mkdir: created directory '/app/build'
++ export PATH=/app/stage/usr/bin:/app/stage/usr/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
++ PATH=/app/stage/usr/bin:/app/stage/usr/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+++ basename /app/script/server
++ TASKNAME=server
++ BEFORE_HOOK=examples/platforms/ubuntu/before_server
++ AFTER_HOOK=examples/platforms/ubuntu/after_server
++ [[ ! -f examples/platforms/ubuntu/before_server ]]
++ BEFORE_HOOK=/dev/null
++ [[ ! -f examples/platforms/ubuntu/after_server ]]
++ AFTER_HOOK=/dev/null
+ . script/_nat64
++ TAYGA_DEFAULT=/etc/default/tayga
++ TAYGA_CONF=/etc/tayga.conf
++ TAYGA_IPV4_ADDR=192.168.255.1
++ TAYGA_IPV6_ADDR=fdaa:bb:1::1
++ TAYGA_TUN_V6_ADDR=fdaa:bb:1::2
++ NAT44_SERVICE=/etc/init.d/otbr-nat44
++ WLAN_IFNAMES=eth0
+ . script/_dns64
++ BIND_CONF_OPTIONS=/etc/bind/named.conf.options
++ NAT64_PREFIX=64:ff9b::/96
++ DNS64_NAMESERVER_ADDR=127.0.0.1
+++ tr '"/"' '"/"'
+++ echo 64:ff9b::/96
++ DNS64_CONF='dns64 64:ff9b::/96 { clients { thread; }; recursive-only yes; };'
++ without NAT64
++ with NAT64
++ local value
+++ printenv NAT64
++ value=0
++ [[ -z 0 ]]
++ [[ 0 == 1 ]]
++ '[' ubuntu = raspbian ']'
++ '[' ubuntu = beagleboneblack ']'
++ '[' ubuntu = ubuntu ']'
++ RESOLV_CONF_HEAD=/etc/resolvconf/resolv.conf.d/head
+ main
+ . /dev/null
+ sudo sysctl --system
* Applying /etc/sysctl.d/10-console-messages.conf ...
kernel.printk = 4 4 1 7
* Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
net.ipv6.conf.all.use_tempaddr = 2
net.ipv6.conf.default.use_tempaddr = 2
* Applying /etc/sysctl.d/10-kernel-hardening.conf ...
kernel.kptr_restrict = 1
* Applying /etc/sysctl.d/10-link-restrictions.conf ...
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/10-magic-sysrq.conf ...
kernel.sysrq = 176
* Applying /etc/sysctl.d/10-network-security.conf ...
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.all.rp_filter = 2
* Applying /etc/sysctl.d/10-ptrace.conf ...
* Applying /etc/sysctl.d/10-zeropage.conf ...
vm.mmap_min_addr = 32768
* Applying /etc/sysctl.d/60-otbr-ip-forward.conf ...
net.ipv6.conf.all.forwarding = 1
net.ipv4.ip_forward = 1
* Applying /usr/lib/sysctl.d/protect-links.conf ...
fs.protected_fifos = 1
fs.protected_hardlinks = 1
fs.protected_regular = 2
fs.protected_symlinks = 1
* Applying /etc/sysctl.conf ...
+ nat64_start
+ with NAT64
+ local value
++ printenv NAT64
+ value=0
+ [[ -z 0 ]]
+ [[ 0 == 1 ]]
+ return 0
+ dns64_start
+ with NAT64
+ local value
++ printenv NAT64
+ value=0
+ [[ -z 0 ]]
+ [[ 0 == 1 ]]
+ return 0
+ have systemctl
+ command -v systemctl
+ have service
+ command -v service
+ sudo service rsyslog status
 * rsyslogd is not running
+ sudo service rsyslog start
 * Starting enhanced syslogd rsyslogd                                    [ OK ]
+ sudo service dbus status
 * dbus is not running
+ sudo service dbus start
 * Starting system message bus dbus                                      [ OK ]
+ sudo service avahi-daemon status
Avahi mDNS/DNS-SD Daemon is not running
+ sudo service avahi-daemon start
 * Starting Avahi mDNS/DNS-SD Daemon avahi-daemon                        [ OK ]
+ sudo service otbr-agent status
otbr-agent: unrecognized service
+ sudo service otbr-agent start
otbr-agent: unrecognized service
+ die 'Failed to start otbr-agent!'
+ echo ' *** ERROR:  Failed to start otbr-agent!'
 *** ERROR:  Failed to start otbr-agent!
+ exit 1
Nov 17 10:16:45 373e52c415dd avahi-daemon[104]: New relevant interface eth0.IPv4 for mDNS.
Nov 17 10:16:45 373e52c415dd avahi-daemon[104]: Joining mDNS multicast group on interface lo.IPv6 with address ::1.
Nov 17 10:16:45 373e52c415dd avahi-daemon[104]: New relevant interface lo.IPv6 for mDNS.
Nov 17 10:16:45 373e52c415dd avahi-daemon[104]: Joining mDNS multicast group on interface lo.IPv4 with address 127.0.0.1.
Nov 17 10:16:45 373e52c415dd avahi-daemon[104]: New relevant interface lo.IPv4 for mDNS.
Nov 17 10:16:45 373e52c415dd avahi-daemon[104]: Network interface enumeration completed.
Nov 17 10:16:45 373e52c415dd avahi-daemon[104]: Registering new address record for fe80::42:acff:fe12:2 on eth0.*.
Nov 17 10:16:45 373e52c415dd avahi-daemon[104]: Registering new address record for 172.18.0.2 on eth0.IPv4.
Nov 17 10:16:45 373e52c415dd avahi-daemon[104]: Registering new address record for ::1 on lo.*.
Nov 17 10:16:45 373e52c415dd avahi-daemon[104]: Registering new address record for 127.0.0.1 on lo.IPv4.
Nov 17 10:16:46 373e52c415dd rsyslogd: rsyslogd's groupid changed to 101
Nov 17 10:16:46 373e52c415dd rsyslogd: rsyslogd's userid changed to 101
Nov 17 10:16:46 373e52c415dd rsyslogd: [origin software="rsyslogd" swVersion="8.2001.0" x-pid="47" x-info="https://www.rsyslog.com"] start
Nov 17 10:16:46 373e52c415dd avahi-daemon[104]: Server startup complete. Host name is 373e52c415dd.local. Local service cookie is 3377707272.

Does anyone know what is going wrong here? Thanks in advance for your answers.
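For context, the step that actually fails in the trace above is `sudo service otbr-agent start` printing `otbr-agent: unrecognized service`, which is what `service` reports when no `/etc/init.d/otbr-agent` script exists in the container. A minimal sketch of that failure mode (the helper below is hypothetical, not the actual code in /app/script/server):

```shell
#!/bin/sh
# Sketch: 'service NAME start' fails with "unrecognized service" when the
# SysV init script /etc/init.d/NAME is missing -- mirroring the trace above.
start_service() {
  name="$1"
  if [ -x "/etc/init.d/$name" ]; then
    echo "starting $name"
  else
    echo "$name: unrecognized service"
    return 1
  fi
}

start_service otbr-agent || echo " *** ERROR:  Failed to start otbr-agent!"
```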

This issue was recently fixed by openthread/ot-br-posix#614, and new Docker images have been pushed. Please try again.
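To pick up the fixed image, it should be enough to pull the tag again and re-create the container. A sketch of the commands (printed here rather than executed, so it stays side-effect free; use the reference-device tag instead of latest if that is the image you run):

```shell
#!/bin/sh
# Refresh the image and remove the old container, then repeat the
# original 'docker run' command from the question.
CMDS="docker pull openthread/otbr:latest
docker rm -f ot-br"
printf '%s\n' "$CMDS"
echo "# then re-run the original 'docker run' command from the question"
```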