Posted on November 23, 2024 by the PCI-DB Team
Device Name | Mellanox MCX512A-ACAT PCI Card Firmware 16.22.1002
---|---
Category | Network Card
Manufacturer | Mellanox
File Size | 2 MB
Supported OS | OS Independent
- Software Reset Flow: Software Reset Flow enables the device to recover from fatal errors. The flow includes software detection of a fatal error, automatic creation of an mstdump file for later debugging, and a reset of the device. The feature is enabled using an mlxconfig command. Note: The flow is currently not supported on multi-host devices, Socket Direct devices, and devices running management traffic (NCSI, MCTP).
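The page does not name the exact mlxconfig parameter, so the sketch below uses a hypothetical placeholder (`SW_RESET_FLOW_EN`) and an example MST device path; verify the real TLV name with `mlxconfig -d <device> query` on your system before relying on it:

```python
import subprocess

DEVICE = "/dev/mst/mt4121_pciconf0"  # example MST device path for a ConnectX-5
PARAM = "SW_RESET_FLOW_EN"           # hypothetical parameter name, for illustration only

def mlxconfig_set(device: str, param: str, value: int) -> None:
    """Apply one mlxconfig setting; it takes effect after the next device reset."""
    subprocess.run(
        ["mlxconfig", "-y", "-d", device, "set", f"{param}={value}"],
        check=True,
    )

if __name__ == "__main__":
    mlxconfig_set(DEVICE, PARAM, 1)
```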
- Steering Discard Packet Counters: Every received packet dropped by the device is now accounted for. The following counters were added to count the discarded packets per vport (see the sketch after this list).
- a) nic_receive_steering_discard: Number of packets that completed the NIC Receive Flow Table steering, and were discarded because they did not match any flow in the final Flow Table.
- b) receive_discard_vport_down: Number of packets that were steered to a VPort, and discarded because the VPort was not in a state to receive packets.
- c) transmit_discard_vport_down: Number of packets that were transmitted by a vNIC, and discarded because the VPort was not in a state to transmit packets.
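A minimal way to watch these counters from a Linux host is to poll the NIC statistics. Whether your driver exposes them under exactly these PRM names, and whether it does so through `ethtool -S` at all, is an assumption of this sketch:

```python
import subprocess

# Per-vport discard counters named above; their exposure path and exact
# ethtool names are assumptions and may differ by driver version.
DISCARD_COUNTERS = {
    "nic_receive_steering_discard",
    "receive_discard_vport_down",
    "transmit_discard_vport_down",
}

def read_discard_counters(iface: str) -> dict:
    """Parse `ethtool -S` output and return the discard counters, if present."""
    out = subprocess.run(
        ["ethtool", "-S", iface], capture_output=True, text=True, check=True
    ).stdout
    counters = {}
    for line in out.splitlines():
        name, _, value = line.strip().partition(":")
        if name in DISCARD_COUNTERS:
            counters[name] = int(value)
    return counters

if __name__ == "__main__":
    print(read_discard_counters("eth0"))  # substitute your ConnectX-5 interface
```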
- Pause Frame Duration and XOFF Resend Time: Increased the Pause Frame Duration and the XOFF Resend Time to the maximum value defined by the specification.
- PCI Relaxed Ordering: mlxconfig can now enable or disable forced PCI relaxed ordering in mkey_context. When this feature is enabled, the software per-mkey configuration is ignored.
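The current setting can be checked before forcing a change. `PCI_WR_ORDERING` is the parameter name as commonly documented for the MFT tools, but it is not confirmed by this page, so treat it as an assumption and cross-check with a full `mlxconfig query`:

```python
import subprocess

def mlxconfig_query(device: str, param: str) -> str:
    """Return the current value of a single mlxconfig parameter."""
    out = subprocess.run(
        ["mlxconfig", "-d", device, "query", param],
        capture_output=True, text=True, check=True,
    ).stdout
    # A matching line looks like: "    PCI_WR_ORDERING    per_mkey(0)"
    for line in out.splitlines():
        if param in line:
            return line.split()[-1]
    raise LookupError(f"{param} not reported by {device}")

if __name__ == "__main__":
    # Assumed parameter name; verify against your firmware's parameter list.
    print(mlxconfig_query("/dev/mst/mt4121_pciconf0", "PCI_WR_ORDERING"))
```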
- Push/Pop VLAN: Added support for Push/Pop VLAN, two new FLOW TABLE ENTRY actions. These actions are used by the driver to implement Q-in-Q functionality. For further information, refer to the Flow Table section of the PRM.
- Packet Pacing: Added support for Packet Pacing in ConnectX-5 adapter cards. Packet Pacing (traffic shaping) creates a rate-limited flow per Send QP. A rate-limited flow is allowed to transmit a few packets before its transmission rate is evaluated, and the next packet is scheduled for transmission accordingly; a conceptual sketch follows below. Setting and changing the rate is done by modifying the QP. Note: Packet Pacing is not functional in ConnectX-5 multi-host adapter cards.
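The evaluate-then-schedule behavior described above can be modeled in a few lines. This is only a software illustration of the scheduling idea; the device performs it in hardware per Send QP, and the function and parameter names here are invented for the example:

```python
import time

def paced_send(packets: list, rate_bps: float, burst: int = 4) -> None:
    """Conceptual model of packet pacing: transmit a small burst, evaluate the
    achieved rate, and delay the next transmission to match the target rate.
    (Illustration only; not the device implementation.)"""
    sent_bytes = 0
    start = time.monotonic()
    for i, pkt in enumerate(packets):
        # actual send(pkt) would go here
        sent_bytes += len(pkt)
        if (i + 1) % burst == 0:
            # Time that sent_bytes *should* have taken at the target rate.
            expected = sent_bytes * 8 / rate_bps
            elapsed = time.monotonic() - start
            if elapsed < expected:
                time.sleep(expected - elapsed)
```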
- vport Mirroring: Packets are mirrored based on a configured mirroring policy. The policy is set using the “set FTE command”, which supports a forward action in the ACL tables (ingress/egress). The firmware supports the following destination-list format: a new destination vport (the analyzer) and another Flow Table. This way, the driver can forward the SX/RX packet related to the vport once it reaches the ACL table (forwarding it to the analyzer vport).
- Resiliency Special Error Event: Firmware uses error events to monitor the health of core transport engines, both Rx and Tx, and to detect if a system hang occurred and was not cured by other error mechanisms. Upon such detection, events are sent to the driver to perform any required action (e.g., software reset).
- QP Creation Time: Accelerated QP creation time.
- SR-IOV LID-based Routing Mode: The SR-IOV default routing mode is now LID based. The configuration change is available via the mlxconfig tool. Note that in this mode the VF gets its own LID, hence the GRH is not required. Note: LID-based routing for vports is supported using SM v4.8.1.
- Expansion ROM: Added PXE and UEFI to additional ConnectX-5 adapter cards. ConnectX-5 now holds PXE and x86-UEFI expansion ROM images.
- Host Chaining: Host Chaining allows the user to connect ("chain") one server to another without going through a switch, thus saving switch ports. The Host Chaining algorithm is as follows (see the sketch below):
- a) Received packets from the wire with a DMAC equal to the host MAC are forwarded to the local host.
- b) Received traffic from the physical port with a DMAC different from the current MAC is forwarded to the other port.
- c) The device allows hosts to transmit traffic only with their permanent MAC.
- d) To prevent loops, received traffic from the wire with an SMAC equal to the port's permanent MAC is dropped (the packet cannot start a new loop).
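The receive-side decision reduces to pure logic. This toy function simply restates the four rules above (the names and the function itself are illustrative, not the device implementation):

```python
from enum import Enum

class Action(Enum):
    TO_LOCAL_HOST = "forward to local host"
    TO_OTHER_PORT = "forward to other (chained) port"
    DROP = "drop"

def host_chaining_rx(dmac: str, smac: str, host_mac: str, port_mac: str) -> Action:
    """Decide what to do with a packet received from the wire, per the rules above."""
    if smac == port_mac:
        # Loop prevention: our own permanent MAC came back from the wire.
        return Action.DROP
    if dmac == host_mac:
        return Action.TO_LOCAL_HOST
    # DMAC differs from the current host MAC: pass it down the chain.
    return Action.TO_OTHER_PORT

if __name__ == "__main__":
    print(host_chaining_rx("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:99",
                           host_mac="aa:bb:cc:00:00:01",
                           port_mac="aa:bb:cc:00:00:02"))
```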
- Fast Path VLs: Enabled fast path VLs, which have lower latency (less than 2.55 µs) than slow path VLs. Fast path mapping can be configured using the OpenSM configuration file.
- Hairpin: Hairpin enables ingress traffic on a network port to egress on the same port or on the second port of the adapter. Hairpin enables hardware forwarding of packets from the receive queue to the transmit queue, thus fully offloading software gateways to the hardware. The queues can be allocated on different PCI functions, thus enabling packet forwarding between different NIC ports.
- Coherent Accelerator Processor Interface (CAPI v2): The Coherent Accelerator Processor Interface (CAPI) enables the user to attach a coherent accelerator to Power- and OpenPOWER-based platforms. This solution delivers performance that exceeds today's I/O-attached acceleration engines. Note: This feature is available only with IBM POWER9 CPUs.
- NVMe-oF Target Offload over DC Transport: The NVMe-oF target offload provides the IO data-path functionality of an NVMe over Fabrics front-end subsystem, transferring the IO operations to NVMe PCIe subsystems.
- Increased the Full Wire Speed (FWS) threshold value to improve EDR link results.
- Added the option to avoid reconfiguration of QoS tables upon link toggling to reduce packet loss and improve performance.
- Fixed an issue that caused traffic to hang when Responder Not Ready (RNR) flow was used.
- Tag Matching supports up to 16K connections.
- Target NVMe-oF offload for 4 SSDs reaches 950K IOPS on ConnectX-5 Ex.
- The HCA does not always correctly identify the presets at the 8G EQ TS2 during a speed change to Gen4. As a result, the initial Gen4 Tx configuration might be wrong, which might cause a speed degradation to Gen1.
- Fixed an issue that resulted in a “Destroy LAG” command failure if a VF received an FLR while its affinity QPs were open.
- When RoCE Dual Port mode is enabled, tcpdump is not functional on the second port.
- Fixed an issue that occasionally caused the keepalive packet to fail and the FIO connection to disconnect (error=5).
- The health counter increases every 50 ms instead of every 10 ms.
- In very rare cases, triggering a function-level reset while running NVMf offload traffic might cause a response capsule carrying a bad command identifier of 0 to be sent.
- When a packet is sent on a non-native port (a LAG or a RoCE dual port) and reaches the ingress mirroring entry, the RX receives a metadata loopback syndrome on the non-native port, resulting in the packet reaching the wrong meta_data table.
- Signature-accessing WQEs sent locally to the NVMe-oF target QPs that encounter signature errors will not produce a SIGERR CQE.
- Packet Pacing is not functional in ConnectX-5 multi-host adapter cards.
- ParaVport is not supported in ConnectX-5.
- Host Chaining Limitations:
- a) The MAC address must not be changed.
- b) Both ports must be configured to Ethernet when Host Chaining is enabled.
- c) The following capabilities cannot function when Host Chaining is enabled: SR-IOV, DSCP, NODNIC, load balancing, LAG, and Dual Port RoCE (multi-port vHCA).
- The mlxconfig tool presents all possible expansion ROM images instead of only the existing ones.
- An Ethernet multicast loopback packet is not counted (even if it is not a local loopback packet) by the nic_receive_steering_discard counter.
- When a dual-port VHCA sends a RoCE packet on its non-native port and the packet arrives at its affiliated vport FDB, a mismatch might occur on the rules that match the packet's source vport.
When the adapter is connected, the operating system usually installs a generic driver that lets the computer recognize the newly attached device.
However, the proper software must be installed if you want to make use of all the features the network adapter offers. It also allows the system to correctly report device characteristics such as manufacturer, chipset, and technology.
Updating the adapter's drivers and utilities might improve overall performance and stability, increase transfer speeds, and fix compatibility problems and various network-related errors.
To install this release, simply get the package, extract it if necessary, run the setup, and follow the instructions displayed on-screen. When done, don't forget to perform a system restart and reconnect the network adapter to make sure that all changes take effect properly.
If you intend to apply this version, click the download button and install the package. Check back with our website regularly so that you don't miss a new release.
It is highly recommended to always use the most recent driver version available.
Set a system restore point before installing a device driver; this helps if you install an incorrect or mismatched driver. Problems can also arise when the hardware device is too old or no longer supported.