Opened 17 years ago

Closed 17 years ago

#1366 closed bug (fixed)

/dev/urandom doesn't work?

Reported by: kaliber
Owned by: axeld
Priority: normal
Milestone: R1
Component: - General
Version: R1/pre-alpha1
Keywords:
Cc:
Blocked By:
Blocking:
Platform: All

Description

I think that cat /dev/urandom should produce some output, but it doesn't.

Debug output:

random: open("urandom")

and after Ctrl-C:

random: close()
random: free()

Change History (11)

comment:1 by korli, 17 years ago

Did you test on real hardware? How long did you wait before Ctrl+C?

comment:2 by kaliber, 17 years ago

At least 10 seconds, on both QEMU and VMware.

comment:3 by korli, 17 years ago

Could you wait considerably longer on the emulated systems? The urandom init is very CPU-intensive. I have seen this kind of behavior before.

comment:4 by kaliber, 17 years ago

After 4 minutes the first data appeared. That's too slow... Maybe there is a problem with the scheduler, because the CPU was mostly idle.

comment:5 by axeld, 17 years ago

That actually happens because I changed the thread_yield() function: it now does a snooze(10000), which is obviously much too long for this kind of use. We could either wait for the new scheduler, or just use snooze() with a lower value in there again (like its BeOS version does).
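For illustration, here is a minimal sketch of the situation described above (an assumption about the shape of the code, not the actual Haiku source):

    // Hypothetical sketch: a yield that is really just a fixed snooze.
    static void
    thread_yield(void)
    {
        snooze(10000);  // 10,000 microseconds = 10 ms per call
    }

At 10 ms per yield, roughly 24,000 yielding iterations in the driver would already account for the 4-minute delay reported above.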

comment:6 by jackburton, 17 years ago

This doesn't happen anymore, since Axel changed urandom to do a snooze(100). urandom is still slow, but at least it no longer waits 4 minutes before showing some output. Shall we close this? Or maybe let's keep it open but change the description to "urandom is slow" (see hrev23904).

comment:7 by jackburton, 17 years ago

BTW, for the yield issue: can't we modify thread_yield() to just remove the thread from the run queue and reinsert it? I tried doing exactly that, changed urandom to use yield() again, and now it's blazing fast.
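Roughly, the proposal amounts to something like this sketch (the locking and helper names are assumptions for illustration, not the actual scheduler API of the time):

    // Hypothetical sketch: take the current thread out of the run
    // queue, put it back at the end of its priority level, and
    // reschedule, so other ready threads get a turn immediately
    // instead of the caller sleeping for a fixed time.
    void
    thread_yield(void)
    {
        InterruptsSpinLocker locker(gThreadSpinlock);  // assumed lock
        struct thread* thread = thread_get_current_thread();
        scheduler_remove_from_run_queue(thread);   // hypothetical helper
        scheduler_enqueue_in_run_queue(thread);    // reinsert at the tail
        scheduler_reschedule();                    // pick the next thread
    }

Unlike a snooze()-based yield, this returns as soon as the scheduler comes back around to the thread, which would explain the "blazing fast" result.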

comment:8 by axeld, 17 years ago

That was similar to our original version of thread_yield() (I think it put the thread in the run queue as if it had priority 1). However, our allocator uses thread_yield() when it cannot get an internal lock (it would not be that easy to change it to use semaphores).

While that usually works fine, when there is a high priority thread waiting, no one else would ever get the chance to acquire that lock, because the high priority thread is constantly retrying.

Maybe we should revert to that priority 1 version again - at least it should work better now, since the scheduler no longer ignores lower priority threads as often.
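The priority 1 variant would look roughly like this (a sketch under assumptions; the original code may well have differed):

    // Hypothetical sketch of the old priority-1 yield: re-enqueue the
    // current thread as if it had the lowest priority, so every other
    // ready thread runs first, then restore the real priority.
    void
    thread_yield(void)
    {
        struct thread* thread = thread_get_current_thread();
        int32 oldPriority = thread->priority;
        thread->priority = 1;                   // pose as lowest priority
        scheduler_enqueue_in_run_queue(thread);
        scheduler_reschedule();
        thread->priority = oldPriority;         // restore once running again
    }

Presumably the temporary demotion is what lets a lower priority lock holder run and release the lock even while a high priority waiter spins in the allocator.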

in reply to:  8 comment:9 by bonefish, 17 years ago

Replying to axeld:

That was similar to our original version of thread_yield() (I think it put the thread in the run queue as if it had priority 1). However, our allocator uses thread_yield() when it cannot get an internal lock (it would not be that easy to change it to use semaphores).

While that usually works fine, when there is a high priority thread waiting, no one else would ever get the chance to acquire that lock, because the high priority thread is constantly retrying.

You disabled thread_yield() in hrev21572 due to "ongoing problems". In hrev22515 you fixed a problem in the scheduler with skipping high priority threads. So the main reason might be fixed already, right?

At any rate, thread_yield() should be used sparingly in high priority threads anyway.

Maybe we should revert to that priority 1 version again - at least it should work better now, since the scheduler no longer ignores lower priority threads as often.

Since I'll need /dev/urandom to work fast now so that building Perl doesn't take ages, I'll introduce a "force" boolean parameter to thread_yield(): when true, it will use the current implementation; otherwise it will just call scheduler_reschedule() without priority adjustment or setting was_yielded, so that the thread will continue to run if no other thread is ready. This should be perfect for the random driver's purpose.
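In sketch form, the described change would look something like this (simplified; hrev23907 has what actually landed):

    // Sketch of the described change (not the committed code): a
    // "force" parameter selects between the full yield and a plain
    // reschedule that lets the caller keep running if nothing else
    // is ready.
    void
    thread_yield(bool force)
    {
        struct thread* thread = thread_get_current_thread();
        if (force) {
            // existing behavior: mark the thread as having yielded so
            // the scheduler passes it over for a while
            thread->was_yielded = true;
            scheduler_enqueue_in_run_queue(thread);
        }
        // without the flag or a priority adjustment, the thread simply
        // continues to run if no other thread is ready
        scheduler_reschedule();
    }

The random driver would then presumably call thread_yield(false) in its entropy loop, giving up the CPU only when some other thread actually wants it.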

comment:10 by jackburton, 17 years ago

bonefish, did you forget to close this one by chance?

comment:11 by bonefish, 17 years ago

Resolution: fixed
Status: new → closed

Works well enough since hrev23907.
