Opened 5 months ago

Last modified 41 hours ago

#18730 new bug

Performance: Investigate improvements to UDP performance

Reported by: kallisti5
Owned by: axeld
Priority: low
Milestone: Unscheduled
Component: Network & Internet/UDP
Version: R1/beta4
Keywords: (none)
Cc: (none)
Blocked By: (none)
Blocking: (none)
Platform: All

Description

Our network stack seems to have some inefficiencies in it. Comparing iperf3 runs on localhost for UDP traffic:

Server: iperf3 -s
Client: iperf3 -c 127.0.0.1 -u -b 0

Haiku hrev57492 x86_64 AMD Ryzen 9 5950X 16-core / 32-thread:

run 1:
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-10.00  sec  4.71 GBytes  4.05 Gbits/sec  0.000 ms  0/77280 (0%)  sender
[  5]   0.00-10.00  sec  3.94 GBytes  3.38 Gbits/sec  0.108 ms  12684/77278 (16%)  receiver

run 2:
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-10.00  sec  19.1 GBytes  16.4 Gbits/sec  0.000 ms  0/313450 (0%)  sender
[  5]   0.00-10.00  sec  19.1 GBytes  16.4 Gbits/sec  0.006 ms  320/313449 (0.1%)  receiver

Linux x86_64 AMD Ryzen 9 5950X 16-core / 32-thread:

[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-10.00  sec  38.0 GBytes  32.6 Gbits/sec  0.000 ms  0/2451450 (0%)  sender
[  5]   0.00-10.00  sec  38.0 GBytes  32.6 Gbits/sec  0.002 ms  555/2451450 (0.023%)  receiver

One big note: HTTP/3 uses UDP, so we want any improvements here that we can get.
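
For anyone wanting to reproduce or instrument this outside of iperf3, below is a minimal C sketch of the kind of datagram blast the client command above performs. The port (5201), the 1460-byte payload, and the 10-second window are illustrative assumptions on my part, not values taken from the runs above.

/* udpblast.c -- rough sketch of the datagram path "iperf3 -u -b 0"
 * exercises: one process blasts fixed-size UDP packets at a local
 * receiver with no pacing; the other counts what arrives.
 * Port 5201, the 1460-byte payload, and the 10 s window are assumptions.
 * Build: cc -O2 udpblast.c -o udpblast */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	int is_server = (argc > 1 && strcmp(argv[1], "-s") == 0);
	int fd = socket(AF_INET, SOCK_DGRAM, 0);
	if (fd < 0) {
		perror("socket");
		return 1;
	}

	struct sockaddr_in addr = { 0 };
	addr.sin_family = AF_INET;
	addr.sin_port = htons(5201);
	addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

	char buf[1460];
	memset(buf, 'x', sizeof(buf));

	time_t end = time(NULL) + 10;
	long count = 0;

	if (is_server) {
		/* Receiver: bind, then count datagrams for ~10 seconds.
		 * A 1 s receive timeout keeps the loop from hanging. */
		struct timeval tv = { 1, 0 };
		setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));
		if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
			perror("bind");
			return 1;
		}
		while (time(NULL) < end) {
			if (recv(fd, buf, sizeof(buf), 0) > 0)
				count++;
		}
		printf("received %ld datagrams\n", count);
	} else {
		/* Sender: fire datagrams back-to-back, no pacing (like -b 0). */
		while (time(NULL) < end) {
			if (sendto(fd, buf, sizeof(buf), 0,
					(struct sockaddr *)&addr, sizeof(addr)) > 0)
				count++;
		}
		printf("sent %ld datagrams\n", count);
	}

	close(fd);
	return 0;
}

Run "./udpblast -s" in one Terminal and "./udpblast" in another; the gap between the sent and received counts approximates the loss column iperf3 reports.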

Attachments (1)

iperf3 udp loss-hrev57730.png (13.3 KB) - added by Coldfirex 41 hours ago.


Change History (5)

comment:1 by waddlesplash, 5 months ago

The radical difference between the two runs is very strange. Any idea there? Was the system under load at the time?

comment:2 by kallisti5, 5 months ago

Zero load on the Haiku system: boot -> launch Terminal -> run the test by hand.

comment:3 by kallisti5, 5 months ago

One reason I raised this is that UDP does away with all the traffic-control machinery, so it should better represent the raw "overall speed" of our network stack, independent of any lingering performance issues in our TCP protocol support.
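
Along those lines, one way to put a number on the raw per-datagram cost of the stack, independent of any receiver, is to time a batch of bare sendto() calls on loopback. A minimal sketch follows; the iteration count, port, and payload size are arbitrary assumptions.

/* sendcost.c -- time N sendto() calls on loopback to estimate the
 * per-datagram cost of the send half of the UDP stack. No receiver
 * needs to be running; the datagrams are simply dropped.
 * N, port 5201, and the 1460-byte payload are arbitrary choices.
 * Build: cc -O2 sendcost.c -o sendcost */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	int fd = socket(AF_INET, SOCK_DGRAM, 0);
	struct sockaddr_in addr = { 0 };
	addr.sin_family = AF_INET;
	addr.sin_port = htons(5201);
	addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

	char buf[1460];
	memset(buf, 'x', sizeof(buf));

	const long n = 1000000;
	struct timespec t0, t1;
	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (long i = 0; i < n; i++)
		sendto(fd, buf, sizeof(buf), 0,
				(struct sockaddr *)&addr, sizeof(addr));
	clock_gettime(CLOCK_MONOTONIC, &t1);

	double secs = (t1.tv_sec - t0.tv_sec)
			+ (t1.tv_nsec - t0.tv_nsec) / 1e9;
	printf("%.0f ns per sendto() (%.2f Gbits/sec payload)\n",
			secs / n * 1e9, n * sizeof(buf) * 8 / secs / 1e9);
	close(fd);
	return 0;
}

Comparing that number between Haiku and Linux on the same hardware would help separate send-path overhead from receive-path drops.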

comment:4 by Coldfirex, 41 hours ago

I noticed something similar that might be related to this ticket.

Running iperf3 locally with the same settings as above consistently showed 98-99% loss on the receiver (hrev57730).

This is running in Proxmox with a virtio NIC. My Linux and Windows VMs on the same system do not exhibit this behavior.

Unrelated to this ticket, but when not using UDP, iperf3 is almost 10x slower compared to Linux with the same test (5.46 Gbits/sec vs. 47.4 Gbits/sec).

Screenshot attached of UDP results.

Last edited 41 hours ago by Coldfirex.

by Coldfirex, 41 hours ago: added attachment iperf3 udp loss-hrev57730.png.
