@@ -0,0 +1,193 @@
+Hyper-V network driver
+======================
+
+Compatibility
+=============
+
+This driver is compatible with Windows Server 2012 R2, 2016 and
+Windows 10.
+
+Features
+========
+
+ Checksum offload
+ ----------------
+ The netvsc driver supports checksum offload as long as the
+ Hyper-V host version does. Windows Server 2016 and Azure
+ support checksum offload for TCP and UDP for both IPv4 and
+ IPv6. Windows Server 2012 only supports checksum offload for TCP.
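+
+ Whether offload is in effect can be checked from inside the guest
+ with "ethtool -k". As a sketch only, the program below queries the
+ TX checksum setting through the SIOCETHTOOL ioctl that ethtool
+ itself uses; the interface name eth0 is an assumption and nothing
+ here is specific to netvsc:
+
+    #include <stdio.h>
+    #include <string.h>
+    #include <sys/ioctl.h>
+    #include <sys/socket.h>
+    #include <net/if.h>
+    #include <linux/ethtool.h>
+    #include <linux/sockios.h>
+
+    int main(void)
+    {
+        /* ETHTOOL_GTXCSUM reads the TX checksum offload state */
+        struct ethtool_value eval = { .cmd = ETHTOOL_GTXCSUM };
+        struct ifreq ifr;
+        int fd = socket(AF_INET, SOCK_DGRAM, 0);
+
+        memset(&ifr, 0, sizeof(ifr));
+        strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);
+        ifr.ifr_data = (void *)&eval;
+        if (ioctl(fd, SIOCETHTOOL, &ifr) == 0)
+            printf("tx-checksumming: %s\n",
+                   eval.data ? "on" : "off");
+        return 0;
+    }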
+
+ Receive Side Scaling
+ --------------------
+ Hyper-V supports receive side scaling. For TCP, packets are
+ distributed among available queues based on IP address and port
+ number. Current versions of the Hyper-V host only distribute UDP
+ packets based on the source and destination IP addresses; the port
+ numbers are not used as part of the hash value for UDP.
+ Fragmented IP packets are not distributed between queues;
+ all fragmented packets arrive on the first channel.
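+
+ The hash used for this distribution is typically the Toeplitz hash
+ that NDIS specifies for RSS. A minimal sketch follows; the key and
+ field layout are assumptions for illustration. The important point
+ is the input: for TCP both IP addresses and both port numbers are
+ hashed, while for UDP on current hosts only the two IP addresses
+ are, so all UDP flows between one pair of hosts share a queue.
+
+    #include <stdint.h>
+    #include <stddef.h>
+
+    /* Toeplitz hash; key must be at least len + 4 bytes (NDIS RSS
+     * keys are 40 bytes). "in" is the concatenated header fields:
+     * e.g. 12 bytes for an IPv4 TCP 4-tuple, 8 bytes for UDP. */
+    static uint32_t toeplitz_hash(const uint8_t *key,
+                                  const uint8_t *in, size_t len)
+    {
+        uint32_t window = (uint32_t)key[0] << 24 | key[1] << 16 |
+                          key[2] << 8 | key[3];
+        uint32_t hash = 0;
+
+        for (size_t i = 0; i < len; i++) {
+            for (int b = 7; b >= 0; b--) {
+                if (in[i] & (1u << b))
+                    hash ^= window;
+                /* slide the 32-bit key window one bit */
+                window = window << 1 |
+                         ((key[i + 4] >> b) & 1);
+            }
+        }
+        /* low bits index the queue indirection table */
+        return hash;
+    }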
+
+ Generic Receive Offload, aka GRO
+ --------------------------------
+ The driver supports GRO and it is enabled by default. GRO coalesces
+ packets of the same stream and significantly reduces CPU usage under
+ heavy Rx load.
+
+ SR-IOV support
+ --------------
+ Hyper-V supports SR-IOV as a hardware acceleration option. If SR-IOV
+ is enabled in both the vSwitch and the guest configuration, then the
+ Virtual Function (VF) device is passed to the guest as a PCI
+ device. In this case, both a synthetic (netvsc) and VF device are
+ visible in the guest OS and both NICs have the same MAC address.
+
+ The VF is enslaved by the netvsc device. The netvsc driver will
+ transparently switch the data path to the VF when it is available and
+ up. Network state (addresses, firewall, etc.) should be applied only
+ to the netvsc device; the slave device should not be accessed directly
+ in most cases. The exception is when a special queue discipline or
+ flow direction is desired; these should be applied directly to the
+ VF slave device.
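+
+ Because the two NICs share a MAC address, the pairing can be
+ discovered from user space by comparing hardware addresses. A
+ sketch using only generic interfaces, nothing netvsc-specific:
+
+    #include <stdio.h>
+    #include <string.h>
+    #include <sys/socket.h>
+    #include <ifaddrs.h>
+    #include <linux/if_packet.h>
+
+    int main(void)
+    {
+        struct ifaddrs *ifs, *a, *b;
+
+        if (getifaddrs(&ifs))
+            return 1;
+        for (a = ifs; a; a = a->ifa_next) {
+            if (!a->ifa_addr || a->ifa_addr->sa_family != AF_PACKET)
+                continue;
+            for (b = a->ifa_next; b; b = b->ifa_next) {
+                struct sockaddr_ll *la, *lb;
+
+                if (!b->ifa_addr ||
+                    b->ifa_addr->sa_family != AF_PACKET)
+                    continue;
+                la = (struct sockaddr_ll *)a->ifa_addr;
+                lb = (struct sockaddr_ll *)b->ifa_addr;
+                if (la->sll_halen && la->sll_halen == lb->sll_halen &&
+                    !memcmp(la->sll_addr, lb->sll_addr,
+                            la->sll_halen))
+                    printf("%s and %s share a MAC\n",
+                           a->ifa_name, b->ifa_name);
+            }
+        }
+        freeifaddrs(ifs);
+        return 0;
+    }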
+
+ Receive Buffer
+ --------------
+ Packets are received into a receive area which is created when the
+ device is probed. The receive area is broken into MTU-sized chunks,
+ each of which may contain one or more packets. The number of
+ receive sections may be changed
+ via ethtool Rx ring parameters.
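+
+ For example, "ethtool -g eth0" reports the current and maximum slot
+ counts and "ethtool -G eth0 rx N" changes them. The same values can
+ be read programmatically; a sketch, again assuming eth0:
+
+    #include <stdio.h>
+    #include <string.h>
+    #include <sys/ioctl.h>
+    #include <sys/socket.h>
+    #include <net/if.h>
+    #include <linux/ethtool.h>
+    #include <linux/sockios.h>
+
+    int main(void)
+    {
+        struct ethtool_ringparam ring = { .cmd = ETHTOOL_GRINGPARAM };
+        struct ifreq ifr;
+        int fd = socket(AF_INET, SOCK_DGRAM, 0);
+
+        memset(&ifr, 0, sizeof(ifr));
+        strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);
+        ifr.ifr_data = (void *)&ring;
+        /* ETHTOOL_SRINGPARAM with modified values would set them */
+        if (ioctl(fd, SIOCETHTOOL, &ifr) == 0)
+            printf("rx %u (max %u)\n",
+                   ring.rx_pending, ring.rx_max_pending);
+        return 0;
+    }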
+
+ There is a similar send buffer which is used to aggregate packets
+ for sending. The send area is broken into chunks of 6144 bytes; each
+ section may contain one or more packets. The send buffer is an
+ optimization; the driver will fall back to a slower method to handle
+ very large packets or when the send buffer area is exhausted.