DragonFly On-Line Manual Pages


BNX(4)		      DragonFly Kernel Interfaces Manual		BNX(4)

NAME

bnx -- Broadcom BCM57785/BCM5718 families 10/100/Gigabit Ethernet device

SYNOPSIS

device miibus
device bnx

Alternatively, to load the driver as a module at boot time, place the
following line in /boot/loader.conf:

	if_bnx_load="YES"

DESCRIPTION

The bnx driver supports PCIe Ethernet adapters based on the Broadcom
BCM57785/BCM5718 families of chips.

The following features are supported in the bnx driver:

	IP/TCP/UDP checksum offloading
	TCP segmentation offloading (TSO)
	VLAN tag stripping and inserting
	Interrupt coalescing
	Receive Side Scaling (RSS), up to 4 reception queues
	Multiple vector MSI-X
	Multiple transmission queues (BCM5717C, BCM5719 and BCM5720
	only), up to 4 transmission queues

By default, the bnx driver will try to enable as many reception queues
as are allowed by the number of CPUs in the system.  For the BCM5717C,
BCM5719 and BCM5720, in addition to the reception queues, the bnx driver
will by default try to enable as many transmission queues as are allowed
by the number of CPUs in the system and the number of enabled reception
queues.  If multiple transmission queues are enabled, round-robin
arbitration is performed among the transmission queues.  Note that if
both TSO and multiple transmission queues are enabled, the round-robin
arbitration between transmission queues is done at the TSO packet
boundary.

The bnx driver supports the following media types:

	autoselect	Enable autoselection of the media type and
			options.

	10baseT/UTP	Set 10Mbps operation.  The mediaopt option can
			also be used to select either full-duplex or
			half-duplex modes.

	100baseTX	Set 100Mbps (Fast Ethernet) operation.  The
			mediaopt option can also be used to select
			either full-duplex or half-duplex modes.

	1000baseT	Set 1000Mbps (Gigabit Ethernet) operation.  The
			mediaopt option can only be set to full-duplex
			mode.

The bnx driver supports the following media options:

	full-duplex	Force full duplex operation.

	half-duplex	Force half duplex operation.

Note that the 1000baseT media type is only available if it is supported
by the adapter.  For more information on configuring this device, see
ifconfig(8).

The bnx driver supports polling(4).
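As a concrete illustration of the media settings above, the following
ifconfig(8) commands select a fixed media type and then return to
autoselection; the interface name bnx0 is assumed:

```shell
# Force 100Mbps full-duplex on bnx0 (interface name assumed)
ifconfig bnx0 media 100baseTX mediaopt full-duplex
# Revert to autoselection of the media type and options
ifconfig bnx0 media autoselect
```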

HARDWARE

The bnx driver supports Gigabit Ethernet adapters and Fast Ethernet
adapters based on the Broadcom BCM57785/BCM5718 families of chips:

	*   Broadcom BCM5717 Gigabit Ethernet
	*   Broadcom BCM5717C Gigabit Ethernet
	*   Broadcom BCM5718 Gigabit Ethernet
	*   Broadcom BCM5719 Gigabit Ethernet
	*   Broadcom BCM5720 Gigabit Ethernet
	*   Broadcom BCM5725 Gigabit Ethernet
	*   Broadcom BCM5727 Gigabit Ethernet
	*   Broadcom BCM5762 Gigabit Ethernet
	*   Broadcom BCM57761 Gigabit Ethernet
	*   Broadcom BCM57762 Gigabit Ethernet
	*   Broadcom BCM57765 Gigabit Ethernet
	*   Broadcom BCM57766 Gigabit Ethernet
	*   Broadcom BCM57781 Gigabit Ethernet
	*   Broadcom BCM57782 Gigabit Ethernet
	*   Broadcom BCM57785 Gigabit Ethernet
	*   Broadcom BCM57786 Gigabit Ethernet
	*   Broadcom BCM57791 Fast Ethernet
	*   Broadcom BCM57795 Fast Ethernet

TUNABLES

X is the device unit number.

hw.bnx.rx_rings
hw.bnxX.rx_rings
	If MSI-X is used, these tunables specify the number of
	reception queues to be enabled.  The maximum allowed value is 4
	and the value must be a power of 2.  Setting these tunables to
	0 allows the driver to enable as many reception queues as
	allowed by the number of CPUs.

hw.bnx.tx_rings
hw.bnxX.tx_rings
	For BCM5717C, BCM5719 and BCM5720, if MSI-X is used, these
	tunables specify the number of transmission queues to be
	enabled.  The maximum allowed value is 4; the value must be a
	power of 2 and must be less than or equal to the number of
	reception queues enabled.  Setting these tunables to 0 allows
	the driver to enable as many transmission queues as allowed by
	the number of CPUs and the number of reception queues enabled.

hw.bnx.msix.enable
hw.bnxX.msix.enable
	By default, the driver will use MSI-X if it is supported.  This
	behaviour can be turned off by setting this tunable to 0.

hw.bnxX.msix.offset
	For BCM5717C, BCM5719 and BCM5720, if more than 1 reception
	queue and more than 1 transmission queue are enabled, this
	tunable specifies the leading target CPU for transmission and
	reception queue processing.  The value specified must be
	aligned to the number of reception queues enabled and must be
	less than the number of CPUs rounded down to a power of 2.

hw.bnxX.msix.txoff
	If more than 1 reception queue is enabled and only 1
	transmission queue is enabled, this tunable specifies the
	target CPU for transmission queue processing.  The value
	specified must be less than the number of CPUs rounded down to
	a power of 2.

hw.bnxX.msix.rxoff
	If more than 1 reception queue is enabled and only 1
	transmission queue is enabled, this tunable specifies the
	leading target CPU for reception queue processing.  The value
	specified must be aligned to the number of reception queues
	enabled and must be less than the number of CPUs rounded down
	to a power of 2.
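To illustrate the queue tunables above, a /boot/loader.conf fragment for
a hypothetical 4-CPU system with a BCM5719 adapter might look as
follows; the values are examples only, not recommendations:

```shell
# /boot/loader.conf -- illustrative values for a hypothetical 4-CPU system
hw.bnx.rx_rings=4        # enable 4 reception queues (power of 2, max 4)
hw.bnx.tx_rings=2        # power of 2, at most the number of reception queues
hw.bnx0.msix.enable=1    # use MSI-X on bnx0 (the default when supported)
```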
hw.bnx.msi.enable
hw.bnxX.msi.enable
	If MSI-X is disabled and MSI is supported, the driver will use
	MSI.  This behavior can be turned off by setting this tunable
	to 0.

hw.bnxX.msi.cpu
	If MSI is used, this tunable specifies the MSI's target CPU.

hw.bnxX.npoll.offset
	If only 1 reception queue and only 1 transmission queue are
	enabled, or more than 1 reception queue and more than 1
	transmission queue are enabled, this tunable specifies the
	leading target CPU for transmission and reception queue
	polling(4) processing.  The value specified must be aligned to
	the number of reception queues enabled and must be less than
	the number of CPUs rounded down to a power of 2.

hw.bnxX.npoll.txoff
	If more than 1 reception queue is enabled and only 1
	transmission queue is enabled, this tunable specifies the
	target CPU for transmission queue polling(4) processing.  The
	value specified must be less than the number of CPUs rounded
	down to a power of 2.

hw.bnxX.npoll.rxoff
	If more than 1 reception queue is enabled and only 1
	transmission queue is enabled, this tunable specifies the
	leading target CPU for reception queue polling(4) processing.
	The value specified must be aligned to the number of reception
	queues enabled and must be less than the number of CPUs
	rounded down to a power of 2.

MIB Variables
A number of per-interface variables are implemented in the hw.bnxX
branch of the sysctl(3) MIB.

rx_rings
	Number of reception queues enabled (read-only).  Use the
	tunable hw.bnx.rx_rings or hw.bnxX.rx_rings to configure it.

tx_rings
	Number of transmission queues enabled (read-only).  Use the
	tunable hw.bnx.tx_rings or hw.bnxX.tx_rings to configure it.

rx_coal_ticks
	How often the status block should be updated and an interrupt
	generated by the device due to receiving packets.  It is used
	together with rx_coal_bds to achieve RX interrupt moderation.
	Default value is 150 (microseconds).

tx_coal_ticks
	How often the status block should be updated and an interrupt
	generated by the device due to sending packets.
	It is used together with tx_coal_bds to achieve TX interrupt
	moderation.  Default value is 1023 (microseconds).

rx_coal_bds
	Maximum number of BDs which must be received by the device
	before the device updates the status block and generates an
	interrupt.  It is used together with rx_coal_ticks to achieve
	RX interrupt moderation.  Default value is 0 (disabled).

rx_coal_bds_poll
	Maximum number of BDs which must be received by the device
	before the device updates the status block during polling(4).
	It is used together with rx_coal_ticks to reduce the frequency
	of status block updates due to RX.  Default value is 32.

tx_coal_bds
	Maximum number of sending BDs which must be processed by the
	device before the device updates the status block and generates
	an interrupt.  It is used together with tx_coal_ticks to
	achieve TX interrupt moderation.  Default value is 128.

tx_coal_bds_poll
	Maximum number of sending BDs which must be processed by the
	device before the device updates the status block during
	polling(4).  It is used together with tx_coal_ticks to reduce
	the frequency of status block updates due to TX.  Default value
	is 64.

force_defrag
	Force defragmentation of sending mbuf chains, if the mbuf chain
	is not a TSO segment and contains more than 1 mbuf.  This
	improves transmission performance on certain low end chips;
	however, it also increases CPU load.  Default value is 0
	(disabled).

tx_wreg
	The number of transmission descriptors that should be set up
	before the hardware register is written.  Setting this value
	too high will have a negative effect on transmission
	timeliness.  Setting this value too low will hurt overall
	transmission performance due to frequent hardware register
	writes.  Default value is 8.

std_refill
	Number of packets that should be received before the standard
	reception producer ring is refilled.  Setting this value too
	low will cause extra thread scheduling cost.  Setting this
	value too high will make the chip drop incoming packets.
	Default value is 128 / the number of reception queues.

rx_coal_bds_int
	Maximum number of BDs which must be received by the device
	before the device updates the status block during host
	interrupt processing.  Default value is 80.

tx_coal_bds_int
	Maximum number of sending BDs which must be processed by the
	device before the device updates the status block during host
	interrupt processing.  Default value is 64.

npoll_offset
	See the tunable hw.bnxX.npoll.offset.  The set value will take
	effect the next time polling(4) is enabled on the device.

npoll_txoff
	See the tunable hw.bnxX.npoll.txoff.  The set value will take
	effect the next time polling(4) is enabled on the device.

npoll_rxoff
	See the tunable hw.bnxX.npoll.rxoff.  The set value will take
	effect the next time polling(4) is enabled on the device.

norxbds
	Number of times the standard reception producer ring was short
	of reception BDs.  If this value grows quickly, it is usually
	an indication that std_refill is set too high.

errors
	Number of errors, both critical and non-critical, that have
	occurred.
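The MIB variables above can be inspected and, where writable, adjusted
at run time with sysctl(8); the interface name bnx0 and the value shown
are illustrative:

```shell
# Inspect the current RX interrupt moderation settings on bnx0 (name assumed)
sysctl hw.bnx0.rx_coal_ticks hw.bnx0.rx_coal_bds
# Trade a little receive latency for fewer interrupts (example value)
sysctl hw.bnx0.rx_coal_ticks=300
```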

SEE ALSO

arp(4), bge(4), ifmedia(4), miibus(4), netintro(4), ng_ether(4), polling(4), vlan(4), ifconfig(8)

HISTORY

The bnx device driver first appeared in DragonFly 3.1.

AUTHORS

The bnx driver was based on bge(4) written by Bill Paul
<wpaul@windriver.com>.  Sepherosa Ziehau added receive side scaling,
multiple transmission queues and multiple MSI-X support to DragonFly.

DragonFly 3.7			June 16, 2013			DragonFly 3.7