Opened 5 years ago

Closed 5 years ago

Last modified 5 years ago

#15804 closed bug (fixed)

WebKit: crash in incrementPageOccupancy

Reported by: X512
Owned by: pulkomandy
Priority: normal
Milestone: R1/beta2
Component: Applications/WebPositive
Version: R1/Development
Keywords:
Cc: ttcoder
Blocked By:
Blocking:
Platform: All

Description

This is WebKit commit https://github.com/haiku/webkit/commit/80fb39b3eb4c7e372556f6e0bcc704b69ad48398.

Built and run on Haiku hrev53967 x86_64.

When browsing complex web sites like https://discuss.haiku-os.org/ in HaikuLauncher, WTF::MetaAllocator::incrementPageOccupancy calls abort() and crashes.

Attachments (4)

HaikuLauncher-5737-debug-14-03-2020-17-06-27.report (203.9 KB ) - added by X512 5 years ago.
crash report
HaikuLauncher-5935-debug-21-03-2020-13-00-31.report (78.2 KB ) - added by KapiX 5 years ago.
mem.cpp (3.0 KB ) - added by KapiX 5 years ago.
Reproducer
vm-traces.patch (3.5 KB ) - added by KapiX 5 years ago.

Change History (42)

comment:1 by KapiX, 5 years ago

Attached report from debug build.

Params for commit: bytes=12288, address=0x7036b000 (always different), effective=-1, errno=Invalid argument

https://github.com/haiku/webkit/blob/80fb39b3eb4c7e372556f6e0bcc704b69ad48398/Source/WTF/wtf/haiku/OSAllocatorHaiku.cpp#L91

commit() basically calls mmap(). @waddlesplash, since you were working on allocators, maybe you can shed some light on what is not to mmap's liking?
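
For reference, what the linked code boils down to is roughly the following (a paraphrase, not a verbatim copy of the file; names are illustrative):

#include <sys/mman.h>

// Rough shape of OSAllocatorHaiku's commit path: map the previously
// reserved range in place with MAP_FIXED. This mmap is the call that
// fails with EINVAL above, which makes WebKit CRASH().
static void* commitPages(void* address, size_t bytes, int protection)
{
	void* result = mmap(address, bytes, protection,
		MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
	if (result == MAP_FAILED)
		return NULL;
	return result;
}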

comment:2 by pulkomandy, 5 years ago

Possibly because the address is already used?

We attempt to reserve it with _kern_reserve_address_range (in OSAllocator::ReserveUncommitted), but I'm not sure how that works: mmap can still allocate inside the range without special handling, so it's unclear how the "reserved address range" protects us from other mmap() calls elsewhere ending up in the same space.

It would probably also be a good idea to call debugger("some useful message") instead of just CRASH().
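
For instance (the message text is just an example; debugger() is the standard Haiku call from OS.h):

// Instead of CRASH(), drop into the debugger with some context:
if (result == MAP_FAILED)
	debugger("OSAllocatorHaiku: commit mmap failed");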

comment:3 by KapiX, 5 years ago

I figured it out but I'm not sure how to fix it.

By dumping all commits I found that the crash occurs when WebKit tries to expand an existing commit. For example, the first call wants 4096 bytes, and the next one wants 32768 at the same address. It doesn't decommit first.

Windows doesn't care; the VirtualAlloc(MEM_COMMIT) docs say:

An attempt to commit a page that is already committed does not cause the function to fail. This means that you can commit pages without first determining the current commitment state of each page.

The POSIX implementation uses mprotect.

Our implementation uses mmap, which is too heavy-handed. We can't munmap ad hoc, because we don't have the length of the existing mapping.
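
For comparison, the POSIX-style pair is roughly this (a sketch of the approach, not the exact upstream file):

#include <sys/mman.h>

// mprotect only toggles access on an existing mapping, so recommitting
// an already-committed page is harmless -- the same forgiving semantics
// as VirtualAlloc(MEM_COMMIT) quoted above.
void commit(void* address, size_t bytes, bool writable, bool executable)
{
	int protection = PROT_READ;
	if (writable)
		protection |= PROT_WRITE;
	if (executable)
		protection |= PROT_EXEC;
	if (mprotect(address, bytes, protection))
		CRASH();
}

void decommit(void* address, size_t bytes)
{
	if (mprotect(address, bytes, PROT_NONE))
		CRASH();
}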

comment:4 by waddlesplash, 5 years ago

We could store the length of the existing mapping, yes?

Or we could just use create_area and resize_area, probably.

comment:5 by pulkomandy, 5 years ago

It should be possible to munmap() with the new size, that should remove all mappings in the range.

However, this means the content of the existing mapping will be lost. IS that ok for webkit, or is it trying to resize and existing mapping?

We can probably implement this with lower level APIs (area_for / create_area / resize_area) but we may need some tracking structures to remember what we're doing.

comment:6 by KapiX, 5 years ago

I tried both approaches.

  1. create_area/delete_area is tricky because WebKit's MetaAllocator can commit 3 pages from address X, then decommit 2 pages from address X, which leaves one leftover page. A decommit implemented with delete_area would deallocate all 3. Without additional bookkeeping that won't work.
  2. mmap/munmap/mmap: no luck. Even though munmap apparently succeeds, mmap with the same parameters still fails in the same way.

BTW, I don't like the additional bookkeeping idea; that's what MetaAllocator already does.

in reply to:  6 comment:7 by X512, 5 years ago

Replying to KapiX:

  1. create_area/delete_area is tricky because WebKit's MetaAllocator can commit 3 pages from address X, then decommit 2 pages from address X, which leaves one leftover page. A decommit implemented with delete_area would deallocate all 3. Without additional bookkeeping that won't work.

Can resize_area help?

comment:8 by KapiX, 5 years ago

Not in this case, because resize_area cannot change the start address.

comment:9 by waddlesplash, 5 years ago

Then you need to use cut_area().

comment:10 by waddlesplash, 5 years ago

Er, that is, you need to write code that triggers vm cut_area, which is probably munmap. So create_area+resize_area+munmap, I guess?

comment:11 by KapiX, 5 years ago

That gets us back to mmap/munmap, which doesn't work.

comment:12 by waddlesplash, 5 years ago

If the problem is:

mmap/munmap/mmap: no luck. Even though munmap apparently succeeds, mmap with the same parameters still fails in the same way.

Then this sounds like a kernel bug. Can you write a test app that exhibits the problem? Then we can try to trace inside the kernel as to what is going wrong.

comment:13 by pulkomandy, 5 years ago

It seems you can call _kern_unmap_memory(address, size) directly, and that will eventually do the cut_area. It appears to handle all cases (cutting multiple areas, handling existing holes between them, etc.).
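
That is, something along these lines (the prototype of the private syscall is an assumption here; it is not part of the public API):

// Declared by hand for the experiment; private Haiku syscall.
extern "C" status_t _kern_unmap_memory(void* address, size_t size);

// Unmap the old range before re-mmapping the enlarged one:
status_t status = _kern_unmap_memory(address, bytes);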

by KapiX, 5 years ago

Attachment: mem.cpp added

Reproducer

comment:14 by KapiX, 5 years ago

_kern_unmap_memory doesn't make a difference.

comment:15 by KapiX, 5 years ago

I started debugging this with the above reproducer, but I can't proceed further, because even printf-debugging breaks down after adding some traces (it won't show me the Desktop [1], so I can't run the test program). Here's what I got so far:

_vm_map_file(fd = -1, offset = 0, size = 8192, mapping 1)
create_anonymous_area [480] mem mmap area: size 0x2000
map_backing_store: aspace 0xffffffff88ab71f0, cache 0xffffffff88c301d8, virtual 0x0000000000100000, offset 0x0, size 8192, addressSpec 1, wiring 0, protection 17, area 0xffffffff85307bd8, areaName 'mem mmap area'
err2 CREATE_AREA_DONT_COMMIT_MEMORY: 0
unmap_address_range: aspace 0xffffffff88ab71f0, address 1048576, size 8192, kernel 0
VMUserAddressSpace::_InsertAreaSlot: address space 0xffffffff88ab71f0, start 0x100000, size 8192, end 0x101fff, addressSpec 1, area 0xffffffff88d72cc8
VMUserAddressSpace::_InsertAreaIntoReservedRegion: address space 0xffffffff88ab71f0, start 0x100000, size 8192, area 0xffffffff88d72cc8
err2 InsertArea: 0
vm_create_anonymous_area: done
_vm_map_file(fd = -1, offset = 0, size = 12288, mapping 1)
create_anonymous_area [480] mem mmap area: size 0x3000
map_backing_store: aspace 0xffffffff88ab71f0, cache 0xffffffff88c0f590, virtual 0x0000000000100000, offset 0x0, size 12288, addressSpec 1, wiring 0, protection 17, area 0xffffffff85307bd8, areaName 'mem mmap area'
err2 CREATE_AREA_DONT_COMMIT_MEMORY: 0
unmap_address_range: aspace 0xffffffff88ab71f0, address 1048576, size 12288, kernel 0
cut_area: aspace 0xffffffff88ab71f0, address 1048576, lastAddress 1060863, kernel 0
VMUserAddressSpace::_InsertAreaSlot: address space 0xffffffff88ab71f0, start 0x100000, size 12288, end 0x102fff, addressSpec 1, area 0xffffffff88d03e40
VMUserAddressSpace::_InsertAreaIntoReservedRegion: address space 0xffffffff88ab71f0, start 0x100000, size 12288, area 0xffffffff88d03e40
121
err2 InsertArea: -2147483643
map_backing_store: err2
map_backing_store: err1
unmap_address_range: aspace 0xffffffff88ab71f0, address 1048576, size 12288, kernel 0
VMUserAddressSpace::_InsertAreaSlot: address space 0xffffffff88ab71f0, start 0x100000, size 12288, end 0x102fff, addressSpec 1, area 0xffffffff88d03e40
121
_vm_map_file(fd = -1, offset = 0, size = 12288, mapping 1)
create_anonymous_area [480] mem mmap area: size 0x3000
map_backing_store: aspace 0xffffffff88ab71f0, cache 0xffffffff88c0f590, virtual 0x0000000000100000, offset 0x0, size 12288, addressSpec 1, wiring 0, protection 17, area 0xffffffff85307bd8, areaName 'mem mmap area'
err2 CREATE_AREA_DONT_COMMIT_MEMORY: 0
unmap_address_range: aspace 0xffffffff88ab71f0, address 1048576, size 12288, kernel 0
VMUserAddressSpace::_InsertAreaSlot: address space 0xffffffff88ab71f0, start 0x100000, size 12288, end 0x102fff, addressSpec 1, area 0xffffffff88d03e40
VMUserAddressSpace::_InsertAreaIntoReservedRegion: address space 0xffffffff88ab71f0, start 0x100000, size 12288, area 0xffffffff88d03e40
121
err2 InsertArea: -2147483643
map_backing_store: err2
map_backing_store: err1
480: DEBUGGER: crash 

What I think is happening is that unmapping doesn't return the area to the reserved pool, but this is just a guess. The range also can't be reserved again after unmapping. However, it's not _InsertAreaIntoReservedRegion that returns B_BAD_VALUE (I believe it returns B_ENTRY_NOT_FOUND).

I will attach a patch with these traces.

[1] It shows FirstBootPrompt, but after that it goes into some kind of loop (?) printing get_memory_map_etc. It doesn't hang, it just keeps going. This seems to be triggered by adding one TRACE too many, and even changing a trace's length can fix it (though not always).

by KapiX, 5 years ago

Attachment: vm-traces.patch added

comment:16 by waddlesplash, 5 years ago

I believe there is a debug_printf or some similar function which will print to serial only and bypass the syslog sender; this may help with debugging.

comment:17 by mmlr, 5 years ago

I've investigated this and the issue is with how the reservations work. They essentially create an area in the address space of the team with a special id. When the mmap happens, it cuts away from that reserved area, unreserving the range to be mapped. In the case of the reproducer, it moves the start of the reserved area up by 2 pages.

When the next mmap of 3 pages happens at the same address, it will properly unmap any mapping in its way, so the previous 2 page mmap area will be gone. It will however not touch the reservation. When it then tries to insert the new 3 page area, it will now collide with the reservation area that starts 2 pages in and therefore fail.

So basically:

  1. _kern_reserve_address_range(4 pages):
area 0xffffffff: base_addr = 0x100000 size = 0x4000 name = '' protection = 0x1
  2. mmap(2 pages):
area 0x3e41: base_addr = 0x100000 size = 0x2000 name = 'mem mmap area' protection = 0x11
area 0xffffffff: base_addr = 0x102000 size = 0x2000 name = '' protection = 0x1
  3. mmap(3 pages) first unmaps anything in range, but doesn't touch the reservation:
area 0xffffffff: base_addr = 0x102000 size = 0x2000 name = '' protection = 0x1
  4. mmap(3 pages) fails because it finds only a slot of 2 pages until the next area and doesn't consider cutting off a piece of the reservation.
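
Reduced to code, that failing sequence is essentially the following (a sketch along the lines of the attached mem.cpp; the prototype of the private _kern_reserve_address_range syscall is an assumption):

#include <stdio.h>
#include <sys/mman.h>
#include <OS.h>

extern "C" status_t _kern_reserve_address_range(addr_t* address,
	uint32 addressSpec, addr_t size);

int main()
{
	addr_t base = 0x100000;
	_kern_reserve_address_range(&base, B_EXACT_ADDRESS, 4 * B_PAGE_SIZE);

	// First commit: 2 pages, cut out of the front of the reservation.
	void* first = mmap((void*)base, 2 * B_PAGE_SIZE,
		PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);

	// Second commit: 3 pages at the same base. The implied munmap removes
	// the first area but not the leftover reservation, so inserting the
	// new area collides with it and mmap fails.
	void* second = mmap((void*)base, 3 * B_PAGE_SIZE,
		PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);

	printf("first: %p, second: %p\n", first, second);
	return 0;
}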

So there are a couple of things at play:

  • Reservations are only ever displaced or replaced by new areas; deleting an area never returns its space to a reservation
  • It follows that anything that gets munmapped becomes free, unreserved space that may be reused by anything (heap, libraries, etc.), so later using MAP_FIXED in that space is dangerous
  • Reservations are not considered "available space" when the new area only overlaps the reservation, which makes sense considering that this is the mechanism by which the reservation actually prevents other areas from being placed there

What could be changed is to allow the same displacement/replacement of the reserved area when MAP_FIXED is used, so that a reserved area becomes available space for overlapping MAP_FIXED areas.

The second point is troubling in general though: Once you start to munmap sections of your mappings, there is no atomic way to ensure that the space won't be reused by something you have no control over. You could immediately re-reserve the range you unmap, but this still implies race conditions. Since munmap has no flags, we can't add one that says "unmap and then create a reservation there instead". We could possibly supply such a flag to the original mmap so that whenever something is cut out of such a mapping it "returns" to being reserved space by means of re-creating a reserved area in the same space. A mechanism to combine reserved ranges would then be needed so that it won't result in lots of small individual reserved areas.

Regarding the protection that reservations offer: They only prevent areas from being placed there that do not specify MAP_FIXED/B_EXACT_ADDRESS. No library or system API should ever be doing that (unless it has reserved such a range itself), so using them seems useful. But for it to really work, there needs to be a way to atomically re-reserve ranges that get unmapped.

comment:18 by pulkomandy, 5 years ago

Thanks, that clarifies things a lot.

Yes, the idea of reserving some address space and then mapping things there makes sense. At least I don't see another reason you'd want to reserve address space?

What I expected was that the reserved space would still be "reserved" after you map things in it. In other words, the reservation would just be checked at mmap/create_area time, and no "unreserving" would happen, so the reservation stays as it was first defined even though it now has allocations inside its range. Could that work? It seems simpler than deleting and recreating chunks of the reservation as we go, but it may require more changes to the handling of the whole thing, because reserved areas would no longer really be part of the VM address space, as they seem to be currently.

comment:19 by mmlr, 5 years ago

Actually, there is no need to use reservations at all for this use case. As I understand it, this is purely about unmapping unused pages to reduce the number of physical memory pages in use, so it is only a question of locking. An mmap area is always created with B_NO_LOCK, so it will not map any pages upfront, but will fault them in whenever they are accessed. Once they are mapped, they will stay mapped, with no way to unmap them except by deleting or resizing the area.

However, a new mmap will first munmap anything in its way and then create a new mmap area with the specified size (and in this case base, as MAP_FIXED is used). Note that the above holds true here: The implied munmap will unmap any pages that were previously mapped to that range and the newly created mmap area will start out with nothing mapped upfront (due to B_NO_LOCK).

So you can get rid of the private reservation API and purely use mmap to manage everything. The reservation happens with an initial large mmap that will reserve address space, but not use actual memory:

void* reserveUncommitted(size_t bytes, bool writable, bool executable, bool includesGuardPages)
{
	UNUSED_PARAM(writable);
	UNUSED_PARAM(executable);

	void* result = 0;
	result = mmap(result, bytes, PROT_NONE, MAP_PRIVATE | MAP_ANON, -1, 0);
	if (result == MAP_FAILED)
		return NULL;

	return result;
}

The commit functions stay the way they are, but decommit becomes:

void decommit(void* address, size_t bytes)
{
	int flags = MAP_PRIVATE | MAP_ANON | MAP_FIXED;
	void *result = mmap(address, bytes, PROT_NONE, flags, -1, 0);
	fprintf(stderr, "decommit: %zu bytes %p %p: %s\n", bytes, address, result, strerror(errno));
	if (result == MAP_FAILED || result != address)
		CRASH();
}

And releaseDecommitted becomes an actual munmap:

void releaseDecommitted(void* address, size_t bytes)
{
	int result = munmap(address, bytes);
	fprintf(stderr, "releaseDecommitted: %zu bytes %p %d: %s\n", bytes, address, result, strerror(errno));
	if (result == -1)
		CRASH();
}

So decommit does an implicit unmap to get rid of mapped pages (and thus actual memory use). And since it also puts a new mapping in place atomically, you never run into the situation where something external gets allocated into your virtual address range. Hence the use of MAP_FIXED stays safe.

Now there's unfortunately a pretty big catch: with current Haiku it won't help you at all (neither this nor the original version). The problem is that there are unimplemented TODOs in cut_area that are hit by exactly this use case. While pages will properly get unmapped and areas will properly be resized and/or cut, the corresponding VMCache is only resized in the simple case where the area is shrunk from the end, not when shrinking the front or splitting off a new area. This means the pages that were allocated to the cache will never be returned to the system for reuse.

https://git.haiku-os.org/haiku/tree/src/system/kernel/vm/vm.cpp?id=a688d43f356e0ec264528191c4abe0a3cbcb5d18#n660

https://git.haiku-os.org/haiku/tree/src/system/kernel/vm/vm.cpp?id=a688d43f356e0ec264528191c4abe0a3cbcb5d18#n689

https://git.haiku-os.org/haiku/tree/src/system/kernel/vm/vm.cpp?id=a688d43f356e0ec264528191c4abe0a3cbcb5d18#n714

You can easily test this with the above code. When you reserve and commit (writable) a large amount of memory (say 1GiB), the memory use of the application will stay near zero. This is due to B_NO_LOCK and no pages actually getting mapped (contrary to what commit is probably supposed to do). Running a loop to write into every page of the memory range will then actually fault in pages, and memory use will go to 1GiB. If you now decommit a large range at the beginning or somewhere in the middle, the pages will indeed get unmapped (as seen in KDL's page_stats), but the memory commitment will not change. If you decommit a large range at the end, things will work as expected.
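
A sketch of that experiment, reusing reserveUncommitted/decommit from above and assuming a commit(void*, size_t, bool writable, bool executable) along the lines discussed in this ticket:

#include <OS.h>
#include <cstddef>

// Commit 1 GiB, fault every page in, then decommit. Before the cut_area
// TODOs are fixed, only the cut-from-the-end case lowers the commitment.
static void testCutBehaviour()
{
	size_t total = 1UL << 30;  // 1 GiB
	char* base = (char*)reserveUncommitted(total, true, false, false);
	commit(base, total, true, false);
	for (size_t i = 0; i < total; i += B_PAGE_SIZE)
		base[i] = 1;                                // fault every page in
	decommit(base, total / 4);                      // front cut: commitment stays
	decommit(base + total - total / 4, total / 4);  // end cut: commitment drops
}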

So the version above is an improvement, as it gets rid of the reserved area and its peculiar workings (and the need for calling a private syscall). But in practice it will not change much until the cache resizing is implemented for all cut cases.

comment:20 by X512, 5 years ago

Why is the current WebPositive working fine? Did something change in newer WebKit?

comment:21 by pulkomandy, 5 years ago

Current WebPositive just reserves a 1GB range at start on 64-bit systems and keeps it forever. On 32-bit it reserves a smaller area and can't grow it (there is a performance hit, but that's how it's done on all 32-bit ports; there is just no space for a larger area in the address space).

The goal here is to reduce the address space usage and map memory on demand. The mmap-only way may not be perfect, but it would still help somewhat.

comment:22 by ttcoder, 5 years ago

Cc: ttcoder added

/me now understands why Qupzilla and W+ launched simultaneously crash haiku super hard.. Gotta purchase more RAM modules

comment:23 by mmlr, 5 years ago

waddlesplash pointed me to #8466, which I was not aware of before. It basically solves the TODOs about cache resizing for the head and middle cases as well. The last comment by hamish suggests there is a problem with the middle case when swap is involved, which I have not yet spent time to understand. The change otherwise looks fine to me; I'd just like to reduce the code duplication a bit.

comment:24 by mmlr, 5 years ago

The commit chain from https://review.haiku-os.org/c/haiku/+/2590 downwards implements cache resizing for the remaining cut_area cases. It is based on the original patch in #8466 and additionally implements the swap handling for the middle cut / cache split case. Combined with the overcommit-for-PROT_NONE change in hrev54120, this should make this work perfectly.

The strategy to implement should then indeed be the one I outlined in comment:19 above. The initial reservation happens with a large PROT_NONE mmap, resulting in an unreserved, overcommitting address range blockage. Individual commits would then mmap usable, non-overcommitting memory that is reserved but not yet page-mapped, only producing as much page pressure as needed. And finally, decommits would cut PROT_NONE holes (so again unreserved and overcommitting) into the committed areas, which would now actually release the pages in between.

The distinctions here are:

  • Overcommitting: Memory (of any kind, including swap) is not reserved up front. This means you can always reserve the address space, even if you reserve 100GB but only have a combined RAM + swap of < 100GB.
  • Non-overcommitting: All allocations need to be backed by some kind of memory, so you can only ever allocate as much as your total RAM + swap, minus anything already reserved, allows. This seems inconvenient, but it guarantees that you can later actually allocate all of the reservation that you made and won't run into B_NO_MEMORY.
  • Mapped / page backed: The address range is actually backed by RAM pages and not swapped out. You usually start out with memory reserved but not actually backed by RAM until you access a page, when the resulting page fault maps a page of RAM, possibly pushing something else out of the way to swap.

So what you want is for the pure address space reservation (and later the holes punched by decommit) to be unmapped and overcommitting, which is now done via PROT_NONE. And anything else you want to be lazily backed by pages, so that you only push stuff out of RAM when really needed.

Note that for decommit to actually be able to free pages and reduce memory reservations with the holes it punches into existing allocations, it needs the cut_area changes (or it must always fully overlap previously committed ranges, so that it deletes the underlying areas and caches). The commit side of the strategy is sketched below.
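
A rough sketch of that commit, under the same assumptions as comment:19:

#include <sys/mman.h>

// Replace part of the PROT_NONE reservation with a real, non-overcommitting,
// lazily page-backed mapping. decommit() from comment:19 then punches
// PROT_NONE holes back in, which (with the cut_area changes) releases pages.
void commit(void* address, size_t bytes, bool writable, bool executable)
{
	int protection = PROT_READ;
	if (writable)
		protection |= PROT_WRITE;
	if (executable)
		protection |= PROT_EXEC;
	void* result = mmap(address, bytes, protection,
		MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0);
	if (result == MAP_FAILED || result != address)
		CRASH();
}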

comment:25 by waddlesplash, 5 years ago

The cut_area changes were merged in hrev54148. Also note that the implicit overcommit for PROT_NONE will be changed by https://review.haiku-os.org/c/haiku/+/2611; MAP_NORESERVE will be required instead.

comment:26 by X512, 5 years ago

Can't reproduce in hrev54185.

comment:27 by waddlesplash, 5 years ago

Yes, the patches fixing this were merged. mmlr's suggestions above should still be applied, however (or we should revert to WebKit's own allocator, which may already behave like this?).

comment:28 by waddlesplash, 5 years ago

It appears that WebKit's OSAllocatorPOSIX does exactly what mmlr suggests above ... under OS(LINUX). So we can remove OSAllocatorHaiku and just add || OS(HAIKU) to the appropriate parts of OSAllocatorPOSIX.
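
That is, a change on the order of the following (illustrative only; the exact guarded blocks are whatever OSAllocatorPOSIX currently has under OS(LINUX)):

// In OSAllocatorPOSIX.cpp, extend the existing Linux-only guards,
// e.g. in decommit():
#if OS(LINUX) || OS(HAIKU)
	madvise(address, bytes, MADV_DONTNEED);
	if (mprotect(address, bytes, PROT_NONE))
		CRASH();
#endif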

comment:29 by pulkomandy, 5 years ago

Yes, OSAllocatorHaiku was only introduced because the standard OSAllocatorPOSIX couldn't be made to work at the time (this goes back a very long time, possibly to Maxime Simon's 2009 GSoC project). Now that our mmap implementation matches Linux and BSD in terms of features, we can use the POSIX allocator, and it's one piece of code less to maintain.

comment:30 by KapiX, 5 years ago

I looked at OSAllocatorPOSIX and it already is an #ifdef mess. Adding more ifdefs will make it even less readable, and if it changes upstream, merging will be harder (merge conflicts). I would be for keeping OSAllocatorHaiku at least until we can get our port upstream.

Our implementation is not identical to Linux's (it uses mprotect in places where we need mmap, so we can't reuse those paths).

comment:31 by waddlesplash, 5 years ago

Adding more ifdefs will make it even less readable

Why do we need more ifdefs? It looks to me like we can use the Linux ifdefs and also activate them under Haiku.

Our implementation is not identical to Linux's (it uses mprotect in places where we need mmap, so we can't reuse those paths).

No, it uses mprotect correctly; in fact, it does exactly what mmlr suggests above. It also calls madvise, but that's fine.

comment:32 by X512, 5 years ago

Still happens on hrev54212 and this WebKit commit when running in test_app_server.

in reply to:  32 comment:33 by mmlr, 5 years ago

Replying to X512:

Still happens on hrev54212 and this WebKit commit when running in test_app_server.

OSAllocatorHaiku does not seem to have the required changes, so unless the allocator was switched to the POSIX one, the failure is not surprising. As analysed above, the actual issue is the address space reservation, which essentially disallows allocating and then cutting more than once.

comment:34 by KapiX, 5 years ago

I have changes done locally. I will send a PR soon.

comment:35 by X512, 5 years ago

Should I switch to OSAllocatorPOSIX? What configuration is required (ifdef etc.)?

comment:37 by waddlesplash, 5 years ago

Resolution: fixed
Status: new → closed

comment:38 by nielx, 5 years ago

Milestone: Unscheduled → R1/beta2

Assign tickets with status=closed and resolution=fixed within the R1/beta2 development window to the R1/beta2 Milestone
