Opened 2 years ago

Last modified 2 weeks ago

#18067 new bug

Rapid memory increase while downloading with any command

Reported by: khallebal Owned by: nobody
Priority: normal Milestone: Unscheduled
Component: Network & Internet Version: R1/Development
Keywords: Cc:
Blocked By: Blocking:
Platform: All

Description

Easily reproducible with wget, aria2, curl, or pkgman; the rate is about 1k?/s. Just download a large ISO file (1 GB+) to see the increase in ProcessController.
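
For reference, a minimal reproduction along the lines described above might look like the following shell sketch (the URL and file name are placeholders, not from this ticket; the memory readings come from ProcessController or ActivityMonitor, not from the commands themselves):

  # Download a large file and let it run for a while.
  wget https://example.org/large-image.iso -O /tmp/large-image.iso

  # While the download runs, watch the kernel row in ProcessController (or the
  # memory graph in ActivityMonitor); the reported behavior is a steady rise
  # in used memory for as long as the download is active.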

Attachments (3)

log.txt (200.0 KB ) - added by khallebal 2 years ago.
network card disabled (87.9 KB ) - added by khallebal 4 months ago.
network card enabled (98.9 KB ) - added by khallebal 4 months ago.

Change History (20)

comment:1 by bipolar, 2 years ago

Isn't that just a normal cache thing? Do you notice any adverse effects besides the usage number going up? (As in: do those numbers not go down after the download ends / gets synced to disk, or does it cause some other issues with the system?)

Maybe watching ActivityMonitor's Memory graphs can help to better see what's going on?

I, too, have noticed frequent memory usage spikes (when removing big git repos, for example; in that case the spikes are periodic, up/down 200 MB+ and growing the longer the operation takes), but everything goes back to normal after the operations finish.

comment:2 by khallebal, 2 years ago

Please, next time, try to reproduce the behavior before making assumptions, or at least read the reporter's description of the problem carefully. I never said anything about spikes or caching (you almost rendered this ticket invalid). It is a memory increase at a steady rate, which has nothing to do with the spikes you get when you compile or delete files. All I could see using ProcessController is that it comes from the kernel (which is not very helpful, I know).

Last edited 2 years ago by khallebal

by khallebal, 2 years ago

Attachment: log.txt added

comment:3 by khallebal, 2 years ago

Just adding a strace -t run of wget in case it gives us some useful info; see the attached log.txt.
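
For anyone trying to reproduce this capture, something along these lines should produce a comparable log (the URL and output file name are placeholders; redirecting both stdout and stderr just makes sure the trace ends up in the file regardless of which stream it is written to):

  strace -t wget https://example.org/large-image.iso > log.txt 2>&1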

comment:4 by waddlesplash, 11 months ago

Please retest after hrev57528.

comment:5 by khallebal, 11 months ago

Tested with hrev57536 and still the same increase with the same rate.

comment:6 by waddlesplash, 11 months ago

If you can, please start a download, drop to KDL, run the slabs command, exit KDL, wait a few seconds for memory to increase, then drop to KDL and run the slabs command again.

comment:7 by waddlesplash, 11 months ago

(Then exit KDL and upload your syslog here.)
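
For reference, the requested sequence might look roughly like this (the KDL entry shortcut and exit command are the usual defaults and may vary; the syslog path is the standard location and can be adjusted as needed):

  # 1. Start a download (e.g. with wget) and let it run.
  # 2. Drop into KDL (typically Alt+SysReq+D), then run:
  #        slabs
  # 3. Leave KDL (usually the "continue" / "co" command), wait a few seconds
  #    while memory keeps increasing, then drop into KDL and run "slabs" again.
  # 4. Back in userland, copy the syslog, which should contain both dumps:
  cp /var/log/syslog /boot/home/Desktop/syslog-slabs.txt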

by khallebal, 4 months ago

Attachment: network card disabled added

by khallebal, 4 months ago

Attachment: network card enabled added

comment:8 by khallebal, 4 months ago

Sorry I didn't reply before; for some reason I didn't get a notification email. Anyway, here are logs with the network card enabled and disabled. Memory actually does increase while the card is disabled, but the rate is only about 0.1 MB/s, whereas with the card enabled the rate is much higher, and that is while the PC is just idling. This ticket is probably the same as #17463, or even more likely #17208; I think they all have the same root cause, or at least they're closely related. I guess I just didn't notice the memory increase while the PC was idling before.

Version 1, edited 4 months ago by khallebal

comment:9 by waddlesplash, 4 months ago

Network disabled:

@@ -1,12 +1,12 @@
 KERN: 0xffffffff82006000    block allocator: 16       16       16    65536      0     3805     4032 80000000
 KERN: 0xffffffff820061d0    block allocator: 24       24        8   131072      0     5354     5376 80000000
 KERN: 0xffffffff820063a0    block allocator: 32       32       32   471040      0    14434    14490 80000000
-KERN: 0xffffffff82006570    block allocator: 48       48        8  3121152      0    63939    64008 80000000
+KERN: 0xffffffff82006570    block allocator: 48       48        8  3121152      0    63948    64008 80000000
 KERN: 0xffffffff82006720    block allocator: 64       64       64  1822720      0    28030    28035 80000000
-KERN: 0xffffffff820068d0    block allocator: 80       80        8   311296      0     3784     3800 80000000
+KERN: 0xffffffff820068d0    block allocator: 80       80        8   315392      0     3813     3850 80000000
 KERN: 0xffffffff82006a80    block allocator: 96       96        8    77824      0      773      798 80000000
 KERN: 0xffffffff82006c30   block allocator: 112      112        8   450560      0     3933     3960 80000000
-KERN: 0xffffffff82006de0   block allocator: 128      128      128  2576384      0    19487    19499 80000000
+KERN: 0xffffffff82006de0   block allocator: 128      128      128  2576384      0    19494    19499 80000000
 KERN: 0xffffffff82008000   block allocator: 160      160        8 16760832      0   102294   102300 80000000
 KERN: 0xffffffff820081b0   block allocator: 192      192        8   552960      0     2828     2835 80000000
 KERN: 0xffffffff82008360   block allocator: 224      224        8  1761280      0     7723     7740 80000000
@@ -50,9 +50,9 @@
 KERN: 0xffffffff82028c48             vfs vnodes      128        8   438272      0     3304     3317        0
 KERN: 0xffffffff82028a88                vfs fds       48        8     8192      0      164      168        0
 KERN: 0xffffffff820288c8          cached blocks      104        8   524288      0     1566     5040 20000000
-KERN: 0xffffffff82028708    cache notifications       72        8     4096      0        6       56        0
+KERN: 0xffffffff82028708    cache notifications       72        8     4096      0        7       56        0
 KERN: 0xffffffff82028548              swapblock      160        8   671744    164        0     4100        0
-KERN: 0xffffffff823d8400    block cache buffers     2048        8  6815744      0     3116     3328 20000000
+KERN: 0xffffffff823d8400    block cache buffers     2048        8  6815744      0     3118     3328 20000000
 KERN: 0xffffffff823d8200 packagefs heap buffers    65536        8  1048576      0        6       16        0
 KERN: 0xffffffff8517fa90 packagefs TwoKeyAVLTreeNodes       40        8  4284416      0   105548   105646        0
 KERN: 0xffffffff8517f8d0 packagefs TwoKeyAVLTreeNodes       40        8  4284416      0   105548   105646        0

Network enabled:

@@ -1,25 +1,25 @@
 KERN: 0xffffffff82006000    block allocator: 16       16       16    65536      0     3896     4032 80000000
 KERN: 0xffffffff820061d0    block allocator: 24       24        8   131072      0     5306     5376 80000000
 KERN: 0xffffffff820063a0    block allocator: 32       32       32   475136      0    14525    14616 80000000
-KERN: 0xffffffff82006570    block allocator: 48       48        8  3092480      0    63412    63420 80000000
+KERN: 0xffffffff82006570    block allocator: 48       48        8  3092480      0    63413    63420 80000000
 KERN: 0xffffffff82006720    block allocator: 64       64       64  1900544      0    29184    29232 80000000
-KERN: 0xffffffff820068d0    block allocator: 80       80        8   311296      0     3769     3800 80000000
+KERN: 0xffffffff820068d0    block allocator: 80       80        8   315392      0     3824     3850 80000000
 KERN: 0xffffffff82006a80    block allocator: 96       96        8    73728      0      729      756 80000000
 KERN: 0xffffffff82006c30   block allocator: 112      112        8   446464      0     3897     3924 80000000
 KERN: 0xffffffff82006de0   block allocator: 128      128      128  2584576      0    19526    19561 80000000
-KERN: 0xffffffff82008000   block allocator: 160      160        8 16936960      0   103375   103375 80000000
-KERN: 0xffffffff820081b0   block allocator: 192      192        8   585728      0     2992     3003 80000000
+KERN: 0xffffffff82008000   block allocator: 160      160        8 16941056      0   103376   103400 80000000
+KERN: 0xffffffff820081b0   block allocator: 192      192        8   585728      0     3002     3003 80000000
 KERN: 0xffffffff82008360   block allocator: 224      224        8  1781760      0     7830     7830 80000000
-KERN: 0xffffffff82008510   block allocator: 256      256      256   438272      0     1593     1605 80000000
+KERN: 0xffffffff82008510   block allocator: 256      256      256   720896      0     2640     2640 80000000
 KERN: 0xffffffff820086c0   block allocator: 320      320        8   192512      0      557      564 80000000
 KERN: 0xffffffff820088a0   block allocator: 384      384        8    65536      0      157      160 80000000
 KERN: 0xffffffff82008a80   block allocator: 448      448        8   528384      0     1161     1161 80000000
 KERN: 0xffffffff82008c60   block allocator: 512      512      512   303104      0      591      592 80000000
-KERN: 0xffffffff8200ae00   block allocator: 640      640        8   327680      0      434      510 80000000
+KERN: 0xffffffff8200ae00   block allocator: 640      640        8   327680      0      443      510 80000000
 KERN: 0xffffffff8200ac00   block allocator: 768      768        8    65536      0       41       85 80000000
 KERN: 0xffffffff8200aa00   block allocator: 896      896        8    65536      0       33       73 80000000
 KERN: 0xffffffff8200a800  block allocator: 1024     1024     1024   262144      0      230      256 80000000
-KERN: 0xffffffff8200a600  block allocator: 1280     1280        8   786432      0      574      612 80000000
+KERN: 0xffffffff8200a600  block allocator: 1280     1280        8   786432      0      576      612 80000000
 KERN: 0xffffffff8200a400  block allocator: 1536     1536        8   131072      0       55       84 80000000
 KERN: 0xffffffff8200a200  block allocator: 1792     1792        8    65536      0       30       36 80000000
 KERN: 0xffffffff8200a000  block allocator: 2048     2048     2048   262144      0      126      128 80000000
@@ -52,7 +52,7 @@
 KERN: 0xffffffff820288c8          cached blocks      104        8   524288      0     1442     5040 20000000
 KERN: 0xffffffff82028708    cache notifications       72        8     4096      0        5       56        0
 KERN: 0xffffffff82028548              swapblock      160        8   671744    164        0     4100        0
-KERN: 0xffffffff823d8400    block cache buffers     2048        8  6291456      0     2868     3072 20000000
+KERN: 0xffffffff823d8400    block cache buffers     2048        8  6291456      0     2870     3072 20000000
 KERN: 0xffffffff823d8200 packagefs heap buffers    65536        8  1048576      0        6       16        0
 KERN: 0xffffffff8517fa90 packagefs TwoKeyAVLTreeNodes       40        8  4284416      0   105549   105646        0
 KERN: 0xffffffff8517f8d0 packagefs TwoKeyAVLTreeNodes       40        8  4284416      0   105549   105646        0
@@ -64,8 +64,8 @@
 KERN: 0xffffffff894e3c68 packagefs TwoKeyAVLTreeNodes       40        8        0      0        0        0        0
 KERN: 0xffffffff894e3aa8 packagefs TwoKeyAVLTreeNodes       40        8        0      0        0        0        0
 KERN: 0xffffffff8945f000    block cache buffers     2048        8   524288      0       17      256 20000000
-KERN: 0xffffffff895e6c00       net buffer cache      360        8     8192      0       14       22        0
-KERN: 0xffffffff89877e00        data node cache     2048        8    65536      0       14       32        0
+KERN: 0xffffffff895e6c00       net buffer cache      360        8     8192      0       16       22        0
+KERN: 0xffffffff89877e00        data node cache     2048        8    65536      0       16       32        0
 KERN: 0xffffffff89833398                  mbufs      256        8   151552      0      546      555        0
 KERN: 0xffffffff8220de00            mbuf chunks     2048        8  1114112      0      533      544        0
 KERN: 0xffffffff8987e200     mbuf jumbo9 chunks     9216        8        0      0        0        0        0

comment:10 by waddlesplash, 4 months ago

This looks like two (or more) leaks. The first, present while the network is both disabled and enabled, is in "block allocator: 80". The second, seen while the network is enabled, is in "block allocator: 160" and "block allocator: 256". (These aren't related to the "block cache", but rather are the standard malloc() implementation for the kernel.)

I guess the thing to do now is to create a test build with prints enabled when these allocators are used to see what is allocating memory from them on your system.

comment:11 by waddlesplash, 4 months ago

(Meanwhile, the network buffers, at the bottom of the list, barely moved at all.)

comment:12 by khallebal, 4 months ago

Yes, definitely two or more leaks; that's what I thought as well. Do we have a lot of (m)allocations in the networking code in general? That would explain the difference in the increase rate between the network card being enabled vs. disabled.

in reply to:  10 comment:13 by khallebal, 4 months ago

Replying to waddlesplash:

I guess the thing to do now is to create a test build with prints enabled when these allocators are used to see what is allocating memory from them on your system.

Unfortunately I have a dying HD at the moment, so I can't build from source. If you can get me an image, I would gladly do the testing.

Last edited 4 months ago by khallebal

comment:14 by waddlesplash, 4 months ago

Looking at this again, I notice there aren't any "slab memory manager: created area ..." messages in the syslog between the two runs of the "slabs" command, which is what I would expect to see if this were a kernel memory leak. Does that message show up if you let things run for a while?

Otherwise, could this be a userland memory leak rather than a kernel one? Do you see the memory of any particular process increase when downloading files?
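
One way to check both of those points from a running system, assuming the default syslog location:

  # Look for slab area growth between the two "slabs" dumps; if the kernel heap
  # really grew, lines like this should appear:
  grep "slab memory manager: created area" /var/log/syslog

  # To rule out a userland leak, compare the memory column of the downloader
  # (wget, curl, etc.) in ProcessController before and after a long download.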

comment:15 by khallebal, 4 months ago

Oh, sorry, I guess I forgot to download something between the two slabs dumps; I only disabled and then re-enabled the NIC here, which maybe explains why the network buffers didn't move. All I could see from ProcessController is that the increase was coming from the kernel row; is there another way to see it more clearly?

comment:16 by waddlesplash, 4 months ago

Capturing the output of "listarea" twice would reveal in more detail where more memory is getting used.
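
A sketch of that procedure, assuming listarea is run from a Terminal and the temporary file names are arbitrary:

  listarea > /tmp/areas-before.txt
  # ... start a download and wait for memory usage to climb ...
  listarea > /tmp/areas-after.txt
  diff /tmp/areas-before.txt /tmp/areas-after.txt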

comment:17 by waddlesplash, 2 weeks ago

Please retest under a recent nightly.
