Opened 15 years ago
Last modified 7 years ago
#5076 new bug
cat /dev/urandom hangs the terminal after a while
Reported by: | jackburton | Owned by: | bonefish |
---|---|---|---|
Priority: | normal | Milestone: | R1 |
Component: | Drivers/TTY | Version: | R1/Development |
Keywords: | gcc4 hybrid | Cc: | |
Blocked By: | | Blocking: | #6948, #13822 |
Platform: | All | | |
Description
Calling cat /dev/urandom
in Terminal hangs it after some time.
It happens sooner on real hardware (Intel Core 2 Duo), but it's also reproducible on VMware (just wait a bit longer).
Only Terminal hangs; the rest of the system is fine.
Piping the command through "less" doesn't make it hang.
GCC4 hybrid.
Attachments (3)
Change History (18)
by , 15 years ago
Attachment: | bt-cat.png added |
follow-up: 2 comment:1 by , 15 years ago
http://dev.haiku-os.org/ticket/2851#comment:10 could be related?
follow-up: 3 comment:2 by , 15 years ago
Replying to diver:
http://dev.haiku-os.org/ticket/2851#comment:10 could be related?
Absolutely. Should we decouple writing from that thread, Ingo? (By passing the data to another thread which does the real writing?)
follow-up: 8 comment:3 by , 15 years ago
Replying to jackburton:
Replying to diver:
http://dev.haiku-os.org/ticket/2851#comment:10 could be related?
Absolutely. Should we decouple writing from that thread, Ingo? (By passing the data to another thread which does the real writing?)
I don't think this would help at all. I believe the main problem is that no one reads from the slave end of the TTY, so that its buffer runs full eventually. Moving the writing to another thread would at best delay the hang as long as it takes for the Terminal's internal write buffer to run full as well. The only option I see is to drop the writes, i.e. use non-blocking I/O. This would also solve the second issue, that in echo mode writes from the Terminal are echoed back to the TTY master, whose buffer could already be full.
While using non-blocking I/O would prevent Terminal threads from hanging, the problem remains that Ctrl-C and friends still wouldn't work, since they are also writes to the TTY and would be dropped as well. I wonder how that works in Linux.
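The blocking behavior described in this comment can be reproduced outside Haiku with a pseudo terminal. The sketch below (Python, POSIX-only; an illustration, not Haiku's TTY code) never reads from the slave end, mimicking a `cat` that ignores its input, and shows that once the buffers run full a non-blocking write fails with EAGAIN where a blocking one would hang:

```python
import fcntl
import os
import pty

# Open a pty pair; nothing will ever read from the slave end,
# so its buffers (plus the echo path back to the master) fill up.
master, slave = pty.openpty()

# Put the master in non-blocking mode: a write into a full buffer
# then returns EAGAIN (BlockingIOError) instead of blocking.
flags = fcntl.fcntl(master, fcntl.F_GETFL)
fcntl.fcntl(master, fcntl.F_SETFL, flags | os.O_NONBLOCK)

written = 0
try:
    while True:
        # With a blocking master this loop would hang once the
        # buffers are full; non-blocking, it raises instead.
        written += os.write(master, b"x" * 1024)
except BlockingIOError:
    pass

print("buffers filled after", written, "bytes; a blocking write would hang here")
os.close(master)
os.close(slave)
```

Dropping on EAGAIN keeps the writer alive, but, as noted above, indiscriminate dropping would also discard Ctrl-C and friends.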
follow-up: 5 comment:4 by , 15 years ago
The TTY layer itself could do the dropping, but could still filter out control codes (and use some spare bytes in the master for that).
follow-up: 6 comment:5 by , 15 years ago
Replying to axeld:
The TTY layer itself could do the dropping, but could still filter out control codes (and use some spare bytes in the master for that).
These semantics sound a bit odd. I guess someone has to do some reading up on how it is supposed to work.
follow-up: 7 comment:6 by , 15 years ago
Replying to bonefish:
Replying to axeld:
The TTY layer itself could do the dropping, but could still filter out control codes (and use some spare bytes in the master for that).
These semantics sound a bit odd. I guess someone has to do some reading up on how it is supposed to work.
I read a bit of "The tty layer" by Greg Kroah-Hartman, where he talks about the Linux tty layer, and I think this could be related:
"The throttle and unthrottle functions are used to help control overruns of the tty layer's input buffers. The throttle function is called when the tty layer's input buffers are getting full. The tty driver should try to signal the device that no more characters are to be sent to it. The unthrottle function is called when the tty layer's input buffers have been emptied out, and it now can accept more data. The tty driver should then signal to the device that data can be received."
We don't have this yet, right?
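For illustration, the throttle/unthrottle scheme quoted above could look roughly like the watermark-based sketch below (Python, purely hypothetical; neither Linux's nor Haiku's actual interface). The buffer asks the sender to stop when it is getting full and to resume once it has drained:

```python
class ThrottledBuffer:
    """Watermark-based input buffer: calls on_throttle() when it is
    getting full and on_unthrottle() once it has drained again, so
    the sender can stop and resume instead of overrunning it."""

    def __init__(self, capacity, on_throttle, on_unthrottle):
        self.capacity = capacity
        self.buffer = bytearray()
        self.throttled = False
        self.on_throttle = on_throttle
        self.on_unthrottle = on_unthrottle

    def receive(self, chunk):
        self.buffer += chunk
        # High watermark: three quarters full -> ask the sender to stop.
        if not self.throttled and len(self.buffer) >= self.capacity * 3 // 4:
            self.throttled = True
            self.on_throttle()

    def read(self, n):
        out = bytes(self.buffer[:n])
        del self.buffer[:n]
        # Low watermark: one quarter full -> the sender may resume.
        if self.throttled and len(self.buffer) <= self.capacity // 4:
            self.throttled = False
            self.on_unthrottle()
        return out


events = []
buf = ThrottledBuffer(1024,
                      on_throttle=lambda: events.append("throttle"),
                      on_unthrottle=lambda: events.append("unthrottle"))
buf.receive(b"x" * 800)   # crosses the high watermark (768)
buf.read(600)             # drains below the low watermark (256)
print(events)             # -> ['throttle', 'unthrottle']
```

As the next comment points out, this only helps when the sender is a device that can actually be signaled; a pseudo tty sender can simply be blocked instead.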
comment:7 by , 15 years ago
Replying to jackburton:
We don't have this yet, right?
No, but we don't have any devices using the tty layer either. I believe this is really only hardware related: If the internal buffer is full, the sender has to be notified, so that it doesn't keep sending more data (which would have to be dropped). In the pseudo tty case the sender can simply be blocked (or in non-blocking mode the write fails).
follow-up: 10 comment:8 by , 14 years ago
Replying to bonefish:
Replying to jackburton:
Replying to diver:
http://dev.haiku-os.org/ticket/2851#comment:10 could be related?
Absolutely. Should we decouple writing from that thread, Ingo? (By passing the data to another thread which does the real writing?)
I don't think this would help at all. I believe the main problem is that no one reads from the slave end of the TTY, so that its buffer runs full eventually. Moving the writing to another thread would at best delay the hang as long as it takes for the Terminal's internal write buffer to run full as well. The only option I see is to drop the writes, i.e. use non-blocking I/O. This would also solve the second issue, that in echo mode writes from the Terminal are echoed back to the TTY master, whose buffer could already be full.
Looking at the Linux tty layer, I found that they expose a reduced available buffer to the tty drivers, so that there is always enough room to handle cases like this.
comment:9 by , 14 years ago
Component: | Applications/Terminal → Drivers/TTY |
---|---|
Owner: | changed from | to
follow-up: 11 comment:10 by , 14 years ago
Replying to jackburton:
Looking at the Linux tty layer, I found that they expose a reduced available buffer to the tty drivers, so that there is always enough room to handle cases like this.
Can you explain how that is supposed to work?
follow-up: 12 comment:11 by , 14 years ago
Replying to bonefish:
Replying to jackburton:
Looking at the Linux tty layer, I found that they expose a reduced available buffer to the tty drivers, so that there is always enough room to handle cases like this.
Can you explain how that is supposed to work?
Some comments ago you wrote: "This would also solve the second issue, that in echo mode writes from the Terminal are echoed back to the TTY master, whose buffer could already be full." If the TTY master buffer reserved some space to handle echo and control characters (by exposing a smaller buffer to drivers), its buffer would never fill up completely, so it wouldn't deadlock.
comment:12 by , 14 years ago
Replying to jackburton:
Replying to bonefish:
Replying to jackburton:
Looking at the Linux tty layer, I found that they expose a reduced available buffer to the tty drivers, so that there is always enough room to handle cases like this.
Can you explain how that is supposed to work?
Some comments ago you wrote: "This would also solve the second issue, that in echo mode writes from the Terminal are echoed back to the TTY master, whose buffer could already be full." If the TTY master buffer reserved some space to handle echo and control characters (by exposing a smaller buffer to drivers), its buffer would never fill up completely, so it wouldn't deadlock.
The driver would be the pseudo terminal (i.e. the master side), and exposing smaller buffers to it cannot possibly help it write more. So assuming the client side is supposed to see a smaller buffer, this might help with the echo case. Also assuming that the control codes are processed out of order -- which I believe they shouldn't be -- the main problem, issue one, remains: cat never reads from the terminal, so the buffer is bound to run full eventually, regardless of how big it is. It would be interesting to see how Linux, or rather the terminal programs, solve that.
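A minimal sketch of the "reserved headroom" idea being debated here (hypothetical Python, not the Haiku TTY code): ordinary writes see a reduced capacity, while echo and control bytes may dip into the reserve, so a full client buffer can never block them. As the comment above notes, this at best addresses the echo case, not the primary hang:

```python
class ReservedBuffer:
    """Buffer that keeps some slack for echo/control bytes: normal
    writes may only fill up to capacity - reserve, while privileged
    writes (echo, Ctrl-C and friends) can use the full capacity."""

    def __init__(self, capacity=4096, reserve=64):
        self.capacity = capacity
        self.reserve = reserve
        self.buffer = bytearray()

    def write(self, chunk, privileged=False):
        limit = self.capacity if privileged else self.capacity - self.reserve
        accepted = max(0, min(len(chunk), limit - len(self.buffer)))
        self.buffer += chunk[:accepted]
        return accepted  # bytes actually taken; the rest is dropped


buf = ReservedBuffer(capacity=1024, reserve=16)
buf.write(b"x" * 2048)                          # normal data fills only 1008 bytes
accepted = buf.write(b"\x03", privileged=True)  # Ctrl-C still fits in the reserve
print(accepted)                                 # -> 1
```

Even with the reserve, a client like cat that never reads still pins the ordinary portion of the buffer full forever, which is exactly the open question above.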
comment:14 by , 10 years ago
cat /dev/random will freeze the Terminal after 2 or 3 screens of output now. This is rather annoying, as it can be triggered by accident rather easily (a cat of a binary file, for example).
comment:15 by , 7 years ago
Blocking: | 13822 added |
Backtrace of "cat" thread