pktgen cannot send packets in an OVS-DPDK scenario

The test setup: pktgen sends packets into the vhost-user1 port, OVS forwards them to vhost-user2, and testpmd receives them from vhost-user2.

The problem: pktgen fails to send any packets and testpmd receives nothing, and I cannot tell why. I need some help; thanks in advance!

OVS: 2.9.0
DPDK: 17.11.6 
pktgen: 3.4.4

OVS setup:

export DB_SOCK=/usr/local/var/run/openvswitch/db.sock
export PATH=$PATH:/usr/local/share/openvswitch/scripts
rm /usr/local/etc/openvswitch/conf.db

ovsdb-tool create /usr/local/etc/openvswitch/conf.db /usr/local/share/openvswitch/vswitch.ovsschema
ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
ovs-vsctl --no-wait init
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true other_config:dpdk-lcore-mask=0x2 other_config:dpdk-socket-mem="1024,0"
ovs-vswitchd unix:/usr/local/var/run/openvswitch/db.sock --pidfile --detach
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x8 

ovs-vsctl add-br ovs-br0 -- set bridge ovs-br0 datapath_type=netdev
ovs-vsctl add-port ovs-br0 vhost-user0 -- set Interface vhost-user0 type=dpdkvhostuser
ovs-vsctl add-port ovs-br0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser
ovs-vsctl add-port ovs-br0 vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser
ovs-vsctl add-port ovs-br0 vhost-user3 -- set Interface vhost-user3 type=dpdkvhostuser

sudo ovs-ofctl del-flows ovs-br0
sudo ovs-ofctl add-flow ovs-br0 in_port=2,dl_type=0x800,idle_timeout=0,action=output:3
sudo ovs-ofctl add-flow ovs-br0 in_port=3,dl_type=0x800,idle_timeout=0,action=output:2
sudo ovs-ofctl add-flow ovs-br0 in_port=1,dl_type=0x800,idle_timeout=0,action=output:4
sudo ovs-ofctl add-flow ovs-br0 in_port=4,dl_type=0x800,idle_timeout=0,action=output:1
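
If it helps to confirm that the numeric in_port values in these flows point at the intended sockets, the OpenFlow port numbers OVS actually assigned can be read back (a small sanity check, assuming standard ovs-vsctl):

ovs-vsctl get Interface vhost-user0 ofport    # expected: 1
ovs-vsctl get Interface vhost-user1 ofport    # expected: 2
ovs-vsctl get Interface vhost-user2 ofport    # expected: 3
ovs-vsctl get Interface vhost-user3 ofport    # expected: 4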

Run pktgen:

root@k8s:/home/haosp/OVS_DPDK/pktgen-3.4.4# pktgen -c 0xf --master-lcore 0 -n 1 --socket-mem 512,0 --file-prefix pktgen --no-pci \
> --vdev 'net_virtio_user0,mac=00:00:00:00:00:05,path=/usr/local/var/run/openvswitch/vhost-user0' \
> --vdev 'net_virtio_user1,mac=00:00:00:00:00:01,path=/usr/local/var/run/openvswitch/vhost-user1' \
> -- -P -m "1.[0-1]"

Copyright (c) <2010-2017>, Intel Corporation. All rights reserved. Powered by DPDK
EAL: Detected 4 lcore(s)
EAL: Probing VFIO support...
EAL: VFIO support initialized
Lua 5.3.4  Copyright (C) 1994-2017 Lua.org, PUC-Rio
   Copyright (c) <2010-2017>, Intel Corporation. All rights reserved.
   Pktgen created by: Keith Wiles -- >>> Powered by DPDK <<<

>>> Packet Burst 64, RX Desc 1024, TX Desc 2048, mbufs/port 16384, mbuf cache 2048

=== port to lcore mapping table (# lcores 4) ===
   lcore:    0       1       2       3      Total
port   0: ( D: T) ( 1: 1) ( 0: 0) ( 0: 0) = ( 1: 1)
port   1: ( D: T) ( 1: 1) ( 0: 0) ( 0: 0) = ( 1: 1)
Total   : ( 0: 0) ( 2: 2) ( 0: 0) ( 0: 0)
  Display and Timer on lcore 0, rx:tx counts per port/lcore

Configuring 2 ports, MBUF Size 2176, MBUF Cache Size 2048
Lcore:
    1, RX-TX  
                RX_cnt( 2): (pid= 0:qid= 0) (pid= 1:qid= 0) 
                TX_cnt( 2): (pid= 0:qid= 0) (pid= 1:qid= 0) 

Port :
    0, nb_lcores  1, private 0x5635a661d3a0, lcores:  1 
    1, nb_lcores  1, private 0x5635a661ff70, lcores:  1 



** Default Info (net_virtio_user0, if_index:0) **
   max_rx_queues  :   1, max_tx_queues     :   1
   max_mac_addrs  :  64, max_hash_mac_addrs:   0, max_vmdq_pools:     0
   rx_offload_capa:  28, tx_offload_capa   :   0, reta_size     :     0, flow_type_rss_offloads:0000000000000000
   vmdq_queue_base:   0, vmdq_queue_num    :   0, vmdq_pool_base:     0
** RX Conf **
   pthresh        :   0, hthresh          :   0, wthresh        :     0
   Free Thresh    :   0, Drop Enable      :   0, Deferred Start :     0
** TX Conf **
   pthresh        :   0, hthresh          :   0, wthresh        :     0
   Free Thresh    :   0, RS Thresh        :   0, Deferred Start :     0, TXQ Flags:00000f00

    Create: Default RX  0:0  - Memory used (MBUFs 16384 x (size 2176 + Hdr 128)) + 192 =  36865 KB headroom 128 2176
      Set RX queue stats mapping pid 0, q 0, lcore 1


    Create: Default TX  0:0  - Memory used (MBUFs 16384 x (size 2176 + Hdr 128)) + 192 =  36865 KB headroom 128 2176
    Create: Range TX    0:0  - Memory used (MBUFs 16384 x (size 2176 + Hdr 128)) + 192 =  36865 KB headroom 128 2176
    Create: Sequence TX 0:0  - Memory used (MBUFs 16384 x (size 2176 + Hdr 128)) + 192 =  36865 KB headroom 128 2176
    Create: Special TX  0:0  - Memory used (MBUFs   64 x (size 2176 + Hdr 128)) + 192 =    145 KB headroom 128 2176

                                                                       Port memory used = 147601 KB
Initialize Port 0 -- TxQ 1, RxQ 1,  Src MAC 00:00:00:00:00:05

** Default Info (net_virtio_user1, if_index:0) **
   max_rx_queues  :   1, max_tx_queues     :   1
   max_mac_addrs  :  64, max_hash_mac_addrs:   0, max_vmdq_pools:     0
   rx_offload_capa:  28, tx_offload_capa   :   0, reta_size     :     0, flow_type_rss_offloads:0000000000000000
   vmdq_queue_base:   0, vmdq_queue_num    :   0, vmdq_pool_base:     0
** RX Conf **
   pthresh        :   0, hthresh          :   0, wthresh        :     0
   Free Thresh    :   0, Drop Enable      :   0, Deferred Start :     0
** TX Conf **
   pthresh        :   0, hthresh          :   0, wthresh        :     0
   Free Thresh    :   0, RS Thresh        :   0, Deferred Start :     0, TXQ Flags:00000f00

    Create: Default RX  1:0  - Memory used (MBUFs 16384 x (size 2176 + Hdr 128)) + 192 =  36865 KB headroom 128 2176
      Set RX queue stats mapping pid 1, q 0, lcore 1


    Create: Default TX  1:0  - Memory used (MBUFs 16384 x (size 2176 + Hdr 128)) + 192 =  36865 KB headroom 128 2176
    Create: Range TX    1:0  - Memory used (MBUFs 16384 x (size 2176 + Hdr 128)) + 192 =  36865 KB headroom 128 2176
    Create: Sequence TX 1:0  - Memory used (MBUFs 16384 x (size 2176 + Hdr 128)) + 192 =  36865 KB headroom 128 2176
    Create: Special TX  1:0  - Memory used (MBUFs   64 x (size 2176 + Hdr 128)) + 192 =    145 KB headroom 128 2176

                                                                       Port memory used = 147601 KB
Initialize Port 1 -- TxQ 1, RxQ 1,  Src MAC 00:00:00:00:00:01
                                                                      Total memory used = 295202 KB
Port  0: Link Up - speed 10000 Mbps - full-duplex <Enable promiscuous mode>
!ERROR!: Could not read enough random data for PRNG seed
Port  1: Link Up - speed 10000 Mbps - full-duplex <Enable promiscuous mode>
!ERROR!: Could not read enough random data for PRNG seed


=== Display processing on lcore 0
WARNING: Nothing to do on lcore 2: exiting
WARNING: Nothing to do on lcore 3: exiting
  RX/TX processing lcore:   1 rx:  2 tx:  2
For RX found 2 port(s) for lcore 1
For TX found 2 port(s) for lcore 1

Pktgen:/>set 0 dst mac 00:00:00:00:00:03
Pktgen:/>set all rate 10
Pktgen:/>set 0 count 10000
Pktgen:/>set 1 count 20000
Pktgen:/>str



  Flags:Port      :   P--------------:0   P--------------:1
Link State        :       <UP-10000-FD>       <UP-10000-FD>     ----TotalRate----
Pkts/s Max/Rx     :                 0/0                 0/0                   0/0
       Max/Tx     :                 0/0                 0/0                   0/0
MBits/s Rx/Tx     :               256/0               256/0                 512/0
Broadcast         :                 0/0                 0/0                   0/0
Multicast         :                   0                   0
  64 Bytes        :                   0                   0
  65-127          :                   0                   0
  128-255         :                   0                   0
  256-511         :                   0                   0
  512-1023        :                   0                   0
  1024-1518       :                   0                   0
Runts/Jumbos      :                   0                   0
Errors Rx/Tx      :                 0/0                 0/0
Total Rx Pkts     :                 0/0                 0/0
      Tx Pkts     :                   0                   0
      Rx MBs      :                 256                 256
      Tx MBs      :                   0                   0
ARP/ICMP Pkts     :                   0                   0
Tx Count/% Rate   :                 0/0                 0/0
Pattern Type      :             abcd...             abcd...
Tx Count/% Rate   :          10000 /10%          20000 /10%
PktSize/Tx Burst  :           64 /   64           64 /   64
Src/Dest Port     :         1234 / 5678         1234 / 5678
Pkt Type:VLAN ID  :     IPv4 / TCP:0001     IPv4 / TCP:0001
802.1p CoS        :                   0                   0
ToS Value:        :                   0                   0
  - DSCP value    :                   0                   0
  - IPP  value    :                   0                   0
Dst  IP Address   :         192.168.1.1         192.168.0.1
Src  IP Address   :      192.168.0.1/24      192.168.1.1/24
Dst MAC Address   :   00:00:00:00:00:03   00:00:00:00:00:05
Src MAC Address   :   00:00:00:00:00:05   00:00:00:00:00:01
VendID/PCI Addr   :   0000:0000/00:00.0   0000:0000/00:00.0
Pktgen:/> str
-- Pktgen Ver: 3.4.4 (DPDK 17.11.6)  Powered by DPDK --------------------------
Pktgen:/>       
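
For context on the console session above: in pktgen 3.4.x, str is shorthand for starting transmission on all ports and stp stops them, so the equivalent long-form commands would be roughly:

Pktgen:/> start all
Pktgen:/> stop all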

Run testpmd:

./testpmd -c 0xf -n 1 --socket-mem 512,0 --file-prefix testpmd --no-pci \
--vdev 'net_virtio_user2,mac=00:00:00:00:00:02,path=/usr/local/var/run/openvswitch/vhost-user2' \
--vdev 'net_virtio_user3,mac=00:00:00:00:00:03,path=/usr/local/var/run/openvswitch/vhost-user3' \
-- -i -a --burst=64 --txd=2048 --rxd=2048 --coremask=0x4
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: 1 hugepages of size 1073741824 reserved, but no mounted hugetlbfs found for that size
EAL: Probing VFIO support...
EAL: VFIO support initialized
update_memory_region(): Too many memory regions
update_memory_region(): Too many memory regions
Interactive-mode selected
Auto-start selected
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
update_memory_region(): Too many memory regions
update_memory_region(): Too many memory regions
update_memory_region(): Too many memory regions
update_memory_region(): Too many memory regions
Configuring Port 0 (socket 0)
Port 0: 00:00:00:00:00:02
Configuring Port 1 (socket 0)
Port 1: 00:00:00:00:00:03
Checking link statuses...
Done
Start automatic packet forwarding
io packet forwarding - ports=2 - cores=1 - streams=2 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 2 streams:
  RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

  io packet forwarding packets/burst=64
  nb forwarding cores=1 - nb forwarding ports=2
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=2048 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=2048 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
  port 1: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=2048 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=2048 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
testpmd> show port info
Bad arguments
testpmd> show port stats all

  ######################## NIC statistics for port 0  ########################
  RX-packets: 0          RX-missed: 0          RX-bytes:  0
  RX-errors: 0
  RX-nombuf:  0         
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:            0
  Tx-pps:            0
  ############################################################################

  ######################## NIC statistics for port 1  ########################
  RX-packets: 0          RX-missed: 0          RX-bytes:  0
  RX-errors: 0
  RX-nombuf:  0         
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:            0
  Tx-pps:            0
  ############################################################################
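
As a side note, the "Bad arguments" reply above is only a syntax issue: testpmd expects a port list after show port info, for example:

testpmd> show port info all
testpmd> show port info 0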

The OVS flow dump shows:

root@k8s:/home/haosp# ovs-ofctl dump-flows ovs-br0
 cookie=0x0, duration=77519.972s, table=0, n_packets=0, n_bytes=0, ip,in_port="vhost-user1" actions=output:"vhost-user2"
 cookie=0x0, duration=77519.965s, table=0, n_packets=0, n_bytes=0, ip,in_port="vhost-user2" actions=output:"vhost-user1"
 cookie=0x0, duration=77519.959s, table=0, n_packets=0, n_bytes=0, ip,in_port="vhost-user0" actions=output:"vhost-user3"
 cookie=0x0, duration=77518.955s, table=0, n_packets=0, n_bytes=0, ip,in_port="vhost-user3" actions=output:"vhost-user0"

ovs-ofctl dump-ports ovs-br0 shows:

root@k8s:/home/haosp# ovs-ofctl dump-ports ovs-br0
OFPST_PORT reply (xid=0x2): 5 ports
  port  "vhost-user3": rx pkts=0, bytes=0, drop=0, errs=0, frame=?, over=?, crc=?
           tx pkts=0, bytes=0, drop=6, errs=?, coll=?
  port  "vhost-user1": rx pkts=0, bytes=0, drop=0, errs=0, frame=?, over=?, crc=?
           tx pkts=0, bytes=0, drop=8, errs=?, coll=?
  port  "vhost-user0": rx pkts=0, bytes=0, drop=0, errs=0, frame=?, over=?, crc=?
           tx pkts=0, bytes=0, drop=8, errs=?, coll=?
  port  "vhost-user2": rx pkts=0, bytes=0, drop=0, errs=0, frame=?, over=?, crc=?
           tx pkts=0, bytes=0, drop=8, errs=?, coll=?
  port LOCAL: rx pkts=50, bytes=3732, drop=0, errs=0, frame=0, over=0, crc=0
           tx pkts=0, bytes=0, drop=0, errs=0, coll=0

ovs-ofctl show ovs-br0 shows:

root@k8s:/home/haosp# ovs-ofctl show ovs-br0
OFPT_FEATURES_REPLY (xid=0x2): dpid:0000ca4f2b8e6b4b
n_tables:254, n_buffers:0
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
 1(vhost-user0): addr:00:00:00:00:00:00
     config:     0
     state:      LINK_DOWN
     speed: 0 Mbps now, 0 Mbps max
 2(vhost-user1): addr:00:00:00:00:00:00
     config:     0
     state:      LINK_DOWN
     speed: 0 Mbps now, 0 Mbps max
 3(vhost-user2): addr:00:00:00:00:00:00
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 4(vhost-user3): addr:00:00:00:00:00:00
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 LOCAL(ovs-br0): addr:ca:4f:2b:8e:6b:4b
     config:     0
     state:      0
     current:    10MB-FD COPPER
     speed: 10 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

ovs-vsctl show shows:

root@k8s:/home/haosp# ovs-vsctl show
635ba448-91a0-4c8c-b6ca-4b9513064d7f
    Bridge "ovs-br0"
        Port "vhost-user2"
            Interface "vhost-user2"
                type: dpdkvhostuser
        Port "ovs-br0"
            Interface "ovs-br0"
                type: internal
        Port "vhost-user0"
            Interface "vhost-user0"
                type: dpdkvhostuser
        Port "vhost-user3"
            Interface "vhost-user3"
                type: dpdkvhostuser
        Port "vhost-user1"
            Interface "vhost-user1"
                type: dpdkvhostuser

It seems pktgen cannot send packets at all, and the OVS statistics likewise show nothing received. I still don't know what's wrong; it has me confused.

If the goal is to pass packets between Pktgen and testpmd connected through OVS-DPDK, you must use net_vhost and virtio_user as a pair.

DPDK Pktgen (net_vhost) <==> OVS-DPDK port-1 (virtio_user) {rule to forward} OVS-DPDK port-2 (virtio_user) <==> DPDK testpmd (net_vhost)

In the current setup, you must make the following changes (a hedged command sketch follows this list):

  1. Start DPDK pktgen with its --vdev changed from net_virtio_user0,mac=00:00:00:00:00:05,path=/usr/local/var/run/openvswitch/vhost-user0 to net_vhost0,iface=/usr/local/var/run/openvswitch/vhost-user0
  2. Start DPDK testpmd with its --vdev changed from 'net_virtio_user2,mac=00:00:00:00:00:02,path=/usr/local/var/run/openvswitch/vhost-user2' to 'net_vhost0,iface=/usr/local/var/run/openvswitch/vhost-user2'
  3. Start DPDK-OVS with --vdev=virtio_user0,path=/usr/local/var/run/openvswitch/vhost-user0 and --vdev=virtio_user1,path=/usr/local/var/run/openvswitch/vhost-user2
  4. Add rules to allow port-to-port forwarding between pktgen and testpmd
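
A minimal sketch of that inverted layout, assuming one port on each side and reusing the socket paths from the question. The port names virtio0/virtio1, the dpdk-devargs hotplug style, and the name-based flow rules are illustrative assumptions, not commands from the original answer:

# pktgen creates and serves the vhost-user socket (net_vhost is the server side)
pktgen -c 0xf --master-lcore 0 -n 1 --socket-mem 512,0 --file-prefix pktgen --no-pci \
    --vdev 'net_vhost0,iface=/usr/local/var/run/openvswitch/vhost-user0' \
    -- -P -m "1.0"

# testpmd does the same on its own socket
./testpmd -c 0xf -n 1 --socket-mem 512,0 --file-prefix testpmd --no-pci \
    --vdev 'net_vhost0,iface=/usr/local/var/run/openvswitch/vhost-user2' \
    -- -i -a --coremask=0x4

# OVS then connects to both sockets as a virtio-user client; after removing the
# old dpdkvhostuser ports, attach the vdevs as type=dpdk ports
ovs-vsctl add-port ovs-br0 virtio0 -- set Interface virtio0 type=dpdk \
    options:dpdk-devargs=virtio_user0,path=/usr/local/var/run/openvswitch/vhost-user0
ovs-vsctl add-port ovs-br0 virtio1 -- set Interface virtio1 type=dpdk \
    options:dpdk-devargs=virtio_user1,path=/usr/local/var/run/openvswitch/vhost-user2

# port-to-port forwarding between the two new ports (port names in flows work on OVS >= 2.8)
ovs-ofctl del-flows ovs-br0
ovs-ofctl add-flow ovs-br0 in_port=virtio0,actions=output:virtio1
ovs-ofctl add-flow ovs-br0 in_port=virtio1,actions=output:virtio0

Since net_vhost creates the sockets itself, pktgen and testpmd should be running before OVS tries to connect to them.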

Note:

  1. Please update the command lines accordingly for multiple ports.
  2. A screenshot of a pktgen and l2fwd setup is shared below.