Opened 16 years ago

Closed 15 years ago

Last modified 15 years ago

#2789 closed bug (fixed)

writing to USB disks fails after 10-100 MB of files written

Reported by: rudolfc
Owned by: mmlr
Priority: normal
Milestone: R1
Component: Drivers
Version: R1/Development
Keywords:
Cc:
Blocked By:
Blocking:
Platform: x86

Description

If I boot Haiku, no matter from where (USB or hard disk), and I mount a USB disk read/write, I can read from the disk perfectly. Writing a single larger file or a multitude of smaller files (a folder, for example) works perfectly.

But if you drag another folder to the USB stick after the first one completes, and maybe another file or folder after the second one completes, writing comes to a standstill.

It is now impossible to unmount the stick. It is also impossible to reboot or shutdown.

The syslog (if tailing) reports:

KERN: usb_disk: acquire_sem failed while waiting for data transfer
KERN: usb_disk: write fails with 0x80000009

These two lines repeat forever.

Note: it's perfectly possible to boot Haiku from the same stick (if an image was written to it from outside Haiku). Haiku is stable until the writing error occurs.

I tested different brands and sizes of sticks: the behaviour is always the same.

I tested different revisions of Haiku. The syslog message above came from R27770. It doesn't matter which revision I use; the fault has been there for at least a few months.

Good luck (and thanks in advance for looking at it!)

Rudolf.

Attachments (10)

first_plugin (5.2 KB ) - added by rudolfc 16 years ago.
first_removal (5.4 KB ) - added by rudolfc 16 years ago.
second_plugin (8.6 KB ) - added by rudolfc 16 years ago.
mount_stick (2.5 KB ) - added by rudolfc 16 years ago.
writing_to_stick (65.7 KB ) - added by rudolfc 16 years ago.
reading_back_from_stick (114.8 KB ) - added by rudolfc 16 years ago.
usb_err_another_test (10.2 KB ) - added by rudolfc 16 years ago.
r28980-succesfull-rw-mount (7.1 KB ) - added by rudolfc 16 years ago.
r28980-various-rw-files (216.2 KB ) - added by rudolfc 16 years ago.
r28980-unmount-after-5min-delay (18.3 KB ) - added by rudolfc 16 years ago.


Change History (40)

comment:1 by rudolfc, 16 years ago

Hi again,

I have new info that might be related: on another system I booted from USB and then mounted a second stick. I formatted that stick using DriveSetup, and copied all files from the currently booted stick to the new stick. This way I create a Haiku USB stick without the space restriction 'imposed' by using a prepared image from the Haiku nightly builds.

Trying to get this done fails, since writing fails after some time (see above), but sometimes reading fails as well (could not read block).

Here's the 'funny' and interesting part:

The Haiku builds open up a Terminal window at boot time. If I quit this Terminal window, I can successfully copy the stick and boot from the new one. If I keep the window open, things go wrong. If I open up a Terminal window during copying, I promptly get a read or write error, AFAICT. After such an error Haiku is unstable and unable to shut down.

So I get the feeling that everything works OK (slowly, but OK) as long as only one thread or program is trying to access USB volumes. As soon as a second program/thread tries to access them, the problem appears.

When I say 'slow' above, I mean: writing seems normal in speed (the Tracker progress bar advances as expected). But after Tracker says it's done, activity continues on the stick being written to. This can go on for a minute or so.

Before issuing a second folder copy action, I'd better wait until the stick becomes idle, or I risk the error again.

I tested several recent Haiku images. The problem is persistent.

Hope this info helps..

Rudolf.

comment:2 by oco, 16 years ago

I think I have the same problem. Here is my scenario:

I boot Haiku from a USB hard disk. Then I start compiling a working directory of the Haiku source tree (already on this partition).

When jam starts writing "...", I go to the mount menu in Tracker and I enter KDL:

PANIC: could not read block 57233188: bytesRead: -1, error: operation timed out

Here is a partial backtrace:

kernel_x86 : panic
kernel_x86 : get_cached_block
kernel_x86 : block_cache_get_etc
kernel_x86 : block_cache_get
bfs : CachedNode::InternalSetTo
bfs : CachedNode::SetToHeader
bfs : BPlusTree::SetTo
bfs : 9BPlusTreep5Inode
bfs : 5Inodep6Volumex
bfs : bfs_get_vnode
kernel_x86 : get_vnode
kernel_x86 : fix_dirent
kernel_x86 : dir_read
kernel_x86 : dir_read
kernel_x86 : _user_read_dir
kernel_x86 : handle_syscall
commpage : commpage_syscall
jam : file_dirscan
jam : timestamp
jam : search
jam : make
...
jam : main
jam : _start

If I don't enter the mount menu, I can compile the whole source tree, even with "jam -j2 haiku-image" on my dual core.

In the syslog, the last lines are:

usb_disk : acquire_sem failed while waiting for data transfer
usb_disk : acquire_sem failed while waiting for data transfer
usb_disk : receiving the command status wrapper failed
usb_disk : read fails with 0x80000009

comment:3 by oco, 16 years ago

It is better with line breaks... I am using hrev28643.

Here is a partial backtrace:

kernel_x86 : panic
kernel_x86 : get_cached_block
kernel_x86 : block_cache_get_etc
kernel_x86 : block_cache_get
bfs : CachedNode::InternalSetTo
bfs : CachedNode::SetToHeader
bfs : BPlusTree::SetTo
bfs : 9BPlusTreep5Inode
bfs : 5Inodep6Volumex
bfs : bfs_get_vnode
kernel_x86 : get_vnode
kernel_x86 : fix_dirent
kernel_x86 : dir_read
kernel_x86 : dir_read
kernel_x86 : _user_read_dir
kernel_x86 : handle_syscall
commpage : commpage_syscall
jam : file_dirscan
jam : timestamp
jam : search
jam : make
...
jam : main
jam : _start

syslog:

usb_disk : acquire_sem failed while waiting for data transfer
usb_disk : acquire_sem failed while waiting for data transfer
usb_disk : receiving the command status wrapper failed
usb_disk : read fails with 0x80000009

comment:4 by mmlr, 16 years ago

Status: new → assigned

Can you please retry with a current revision >= hrev28934? Since the error recovery couldn't work before, many devices would have blocked completely once at least one error occurred.

by rudolfc, 16 years ago

Attachment: first_plugin added

by rudolfc, 16 years ago

Attachment: first_removal added

by rudolfc, 16 years ago

Attachment: second_plugin added

by rudolfc, 16 years ago

Attachment: mount_stick added

by rudolfc, 16 years ago

Attachment: writing_to_stick added

by rudolfc, 16 years ago

Attachment: reading_back_from_stick added

by rudolfc, 16 years ago

Attachment: usb_err_another_test added

comment:5 by rudolfc, 16 years ago

Hi there,

I just spent some time testing an image from 15 or 16 January 2009 on my Asus P5E3 mainboard. The trouble seems more or less the same.

Description: the first plug-in of the stick, when booted from HD, fails (new; this was OK some time ago). On the second plug-in the stick is recognized and pops up in the mount submenu. Writing a complete folder (apps in this case) seems to work if you watch the progress bar, but at about 70% it comes to a temporary standstill. It resumes a bit later, but at visibly lower speed.

Reading back the folder (I am constantly misled by the move-instead-of-copy default action in Tracker) apparently succeeds, but is very slow: comparable to the last writing actions.

Once Tracker says it's finished (the progress bar disappears from screen) I cannot unmount the stick; apparently it's still in use. If I wait 5 minutes (or was it more?) I can unmount the stick, judging by Tracker's behaviour, though it does not really succeed. Tracker becomes less responsive, and the mount submenu is dead, if I remember correctly. Shutdown works today, so I was able to add some parts of the syslog to this bug report.

In the time between the readback 'completing' (the progress bar disappeared) and my 'successful' unmount action, I was able to mount another HD partition, write some files, and unmount it again. After the stick unmount this is no longer possible.

I have added one file from another 'test' today in which more or less the same behaviour applied. There was one extra strange thing (which I had not seen before, BTW): the stick disappeared from the screen (unmounted) completely on its own (while a folder on the stick was open). I did not issue an unmount command.

The influence of having a Terminal window open on the stick's behaviour I could not reproduce; that seems OK these days.

---

A note about my mainboard: in the BIOS there's an option for ownership handoff, I believe, as well as an entry for legacy support. If I enable legacy support (needed to boot from a stick), Haiku is not successful in acquiring ownership of the USB hardware, even though I specified in the BIOS that it should hand it off.

If I disable legacy support in the BIOS, Haiku successfully claims ownership. The boot is faster at the 4th icon in this case (a timeout while trying to claim ownership, I take it).

I just mention it for completeness' sake; one never knows whether this could be a hint of some sort.

Bye!

Rudolf.

comment:6 by rudolfc, 16 years ago

Oh, BTW, the tests I did were done with legacy support disabled in the BIOS. I could not see different behaviour in Haiku using sticks with or without legacy support enabled; apparently just the boot speed is influenced, from a user's perspective.

Kind regards,

Rudolf.

comment:7 by rudolfc, 16 years ago

Hmm, I tested just now with rev 28911. :-/ So I don't know if it's of interest ...

I'll retest once I can download a new image (the site seems down atm).

Bye!

Rudolf.

by rudolfc, 16 years ago

Attachment: r28980-succesfull-rw-mount added

by rudolfc, 16 years ago

Attachment: r28980-various-rw-files added

comment:8 by rudolfc, 16 years ago

Just compiled and tested R28980. BTW, the AboutSystem app doesn't work for some reason.

I am typing while still running with a stick inserted, though (all) the trouble isn't solved yet. I attached two more files, covering a successful mount and a complete write-to-stick of all the files which make up the Haiku system I am now running. A few folders I had to copy back from stick to HD, since I (again) mixed up copy and move actions in Tracker.

I am still unable to unmount the stick; I'll re-attempt in a few minutes.

OK, things I see:

- First insert of the stick: no go.
- Second insert of the stick: it appears in Tracker's mount menu.
- First and third attempt to mount the stick: I get KDL'd with "Could not read block 1: bytesRead: -1, error: operation timed out". This message appears three times, after which the mount attempt is stopped (the stick icon disappears from the desktop and the system no longer KDLs). Now I can use Haiku normally (it does not become unstable). The timeout must be about 1 second or below, since I am not aware of any waiting time passing between me selecting mount and the KDL happening.

- Second and fourth attempt to mount the stick: no KDLs, just the icon on screen (as it should be).
- During the fourth attempt I wrote to and read from the stick as described above. As usual, speed seems normal for some time, then the copy comes to a standstill, but goes on after some time, slower and with stopping intervals.
- Apart from the slow copy running, Tracker remains responsive. I can still mount/unmount other HD partitions.
- After Tracker says it's completed, I cannot unmount the stick (I get the force warning). So it looks like background copying is still running.

I'll be back shortly with an unmount attempt description.

Rudolf.

by rudolfc, 16 years ago

Attachment: r28980-unmount-after-5min-delay added

in reply to:  8 comment:9 by anevilyak, 16 years ago

Replying to rudolfc:

Just compiled and tested R28980. BTW, the AboutSystem app doesn't work for some reason.

FYI, that's tracked in ticket #3337 if you want to keep up on it.

comment:10 by rudolfc, 16 years ago

(Back again: thanks for the pointer, anevilyak! BTW, did your entry make me hit a Trac bug? I could not post the entry I was typing while you responded..)

OK, it seems that after 5 minutes or so (or is it 10? it's hard to tell since I'm typing and not concentrating on the time :) the unmount succeeds. The warning doesn't come up, and some 10, maybe 20 seconds later the stick icon disappears from the desktop. The stick is also gone from the mount menu.

If you look in the syslog, some error messages are (still) coming. Finally I removed the stick, which produced some more USB messages, and I stopped the syslog capture.

I can still mount other HD partitions; Haiku remains stable.

OK, just now I reinserted the stick:

- First plug-in: not visible in the mount menu.
- Second plug-in: visible. Attempted read-only mount: KDL 4 or 5 times, 3 or 4 with the aforementioned read error, once with something like "write cache attempted on read-only volume".
- Third plug-in: read-only mount successful. All copied files seem to be neatly on the stick.

I'd say we have an improvement over before. I hope the remaining issues can be solved soon as well.. :-)

Thanks! Good luck, and let me know if you need some test done or so..

Rudolf.

comment:11 by rudolfc, 16 years ago

One more comment: I removed the stick *NOT* at the time mentioned in the syslog, but slightly before *this* line in the syslog:

KERN: usb_ehci: qtd (0x03792f00) error: 0x001f8049

Bye!

Rudolf.

comment:12 by oco, 16 years ago

I just realized that my scenario is close to #2662.

Nevertheless, after stressing my USB drive since last week, I was only able to get a "could not write back block 325 (operation timed out)" error twice, after updating and then compiling a Haiku source tree on two different partitions, in a non-reproducible way. I think the two partitions had survived a crash before.

I have just finished a full checkout and a jam haiku-image on a clean new partition on this USB hard drive. It just works. I will create a new bug if I find a simpler test case for the timeout error.

comment:13 by oco, 16 years ago

Forgot the revision: hrev28998

comment:14 by rudolfc, 15 years ago

Hi again,

An update: the sticks all work OK these days over here, but an important part of 'my issue' remains (pasting from above, with slight modifications):

==

Writing a complete folder (apps in this case) seems to work if you watch the progress bar, but at about 70% it comes to a temporary standstill. It resumes a bit later, but at visibly lower speed.

Reading back the folder (I am constantly misled by the move-instead-of-copy default action in Tracker) apparently succeeds, but is very slow: comparable to the last writing actions.

Once Tracker says it's finished (the progress bar disappears from screen) I cannot unmount the stick; apparently it's still in use. If I wait 5 minutes (or was it more?) I can then unmount the stick successfully. I have to wait half an hour (or is it an hour?) if I copy the complete content of the Haiku alpha image. After that I can unmount the stick, and booting from that stick works correctly.

=====

So, in conclusion, for the remainder of this bug: USB sticks work OK, but they are dead slow, both on reading and on writing. Reading speed is maybe 500 kB/sec max, on different USB 2.0 ports on different machines; these same ports are very fast under Windows.

Two of my sticks write at approx. 10 MB/sec in Windows if I write an image to them..

Bye!

Rudolf.

comment:15 by mmlr, 15 years ago

Resolution: fixed
Status: assigned → closed

What you are seeing is caches at work. The initial write will be cached until there is no more room for it in the cache. It will then pause until a certain amount has been written back and room is available to cache more. The rest of the process will then continue at the real writing speed. When reading there is also caching, but on the first read you're limited to the device's read speed again, so this is expected.

That the devices are slow in general comes from the known limitation that the whole stack, and therefore the driver, operates at the virtual memory level, while the IO scheduler schedules everything as physical buffers. The corresponding splitting and mapping overhead, combined with the shortening of the transfer size to one page per transfer, is what leads to this performance issue. That's not a bug though, but a missing feature, which is being worked on and tracked in other tickets. So I'm closing this one.

comment:16 by rudolfc, 15 years ago

Hi there mmlr,

Thanks for your clarification! I already suspected this would be caches at work (easily ascertainable by, for instance, pausing stuttering movies during playback, stepping through to the end, then restarting playback to see all the stuttering gone).

Anyhow: USB access is not a little slow, but *BIGtime* slow. It's the difference between being able to work on Haiku from a stick, or not being able to do that. On top of that, the hard drives apparently are a *lot* faster, since I can work very well from them. My guess would be that there is an extra problem with USB sticks apart from the missing feature, since that missing feature would also be missing for HD access? (Or am I babbling here?)

Please point me at the tickets that follow the development of the missing feature you are talking about; I want to keep up with this one, since it annoys me a lot.. Who knows, maybe I can be of (a little) assistance somehow.

Thanks in advance!

Rudolf.

in reply to:  16 comment:17 by mmlr, 15 years ago

Replying to rudolfc:

Anyhow: USB access is not a little slow, but *BIGtime* slow. It's the difference between being able to work on Haiku from a stick, or not being able to do that.

Yup. That's because reading, and especially writing, to a USB stick with small buffers is orders of magnitude slower than burst reading/writing. You can easily see that when you dd to a USB stick from another OS and compare writing with 4K vs. 4M blocks. Right now (since the introduction of the IOScheduler, that is) we do everything at 4096 bytes per transfer. This also explains why the same USB stack and drivers actually perform reasonably on R5.
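
To make the comparison concrete, here is a minimal user-land sketch of such a block-size test (the path is a placeholder, and the fsync() is included in the timing on purpose so the OS write cache doesn't hide the device speed; this is an illustration, not a tool from this ticket):

    #include <fcntl.h>
    #include <unistd.h>
    #include <chrono>
    #include <cstdio>
    #include <vector>

    // Write totalBytes to path in blockSize chunks and report MB/s.
    static double
    WriteSpeed(const char* path, size_t blockSize, size_t totalBytes)
    {
        std::vector<char> block(blockSize, 0);
        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            return -1.0;

        auto start = std::chrono::steady_clock::now();
        for (size_t done = 0; done < totalBytes; done += blockSize) {
            if (write(fd, block.data(), blockSize) != (ssize_t)blockSize)
                break;
        }
        fsync(fd);  // time the flush too, or the cache skews the numbers
        std::chrono::duration<double> elapsed
            = std::chrono::steady_clock::now() - start;
        close(fd);

        return totalBytes / elapsed.count() / (1024.0 * 1024.0);
    }

    int
    main()
    {
        const char* kPath = "/myusbstick/bench.tmp";  // placeholder path
        const size_t kTotal = 64 * 1024 * 1024;       // 64 MiB per run
        printf("4K blocks: %6.2f MB/s\n", WriteSpeed(kPath, 4096, kTotal));
        printf("4M blocks: %6.2f MB/s\n", WriteSpeed(kPath, 4 << 20, kTotal));
        return 0;
    }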

My guess would be that there is an extra problem with USB sticks apart from the missing feature, since that missing feature would also be missing for HD access? (Or am I babbling here?)

It's not missing on the disk side of things. The whole SCSI and ATA/AHCI stack works directly with the IOScheduler and uses the physical buffers handed to it. So there is no physical-to-virtual memory mapping going on, resulting in neither the overhead of doing so nor in the splitting up of transfers (which happens because of the page-wise mapping of the physical buffers). So you won't see it on normal disks.

Since the whole USB stack interface operates on virtual memory, this is a bit more involved. It means adding a physical memory API that can then be used. On the other hand it would of course also be possible to instruct the IOScheduler to use virtual memory instead, but since the IOScheduler doesn't actually know beforehand which device a request will end up at, this isn't really straightforward.

comment:18 by rudolfc, 15 years ago

Hi there mmlr,

Thanks for the pointers you gave me. I've been testing a lot since that previous message of yours (I'll post benchmark details later) and I've got a few remarks/questions which I hope you will answer. I'll just post them here; if you think we should continue this topic elsewhere, please let me know.

So, I have been tweaking the usb_disk driver. From the looks of it, you are right: reading mostly takes place in 4 kB chunks per read (although on a few occasions 8 kB happens as well).

I am guessing 4 kB is used because that's equivalent to Be's page size. Correct?

So, question: why does writing take place in only 2 kB chunks instead of 4 kB chunks? This unnecessarily takes down speed.. Is this a fault? If so, I'd love to see it fixed.

Question 2: while reading, I see that a lot of sequential reading takes place. That is, sequential chunks are read from the disk, although they might be placed at non-sequential addresses in (virtual) memory.

Why does writing take place in reverse sequential order? I am guessing this happens when writing to, for instance, the syslog, while writing a new file (so copying a file) does take place in normal sequential order. Is this correct? Couldn't the log files be written in sequential order as well? (I'd love that :)

So, why am I asking? Well, I just implemented a simple piece of caching code inside the usb_disk driver that dramatically speeds up I/O.

I tested on two systems, using some 5 different sticks, with cache sizes of none, 16 kB, 32 kB, 64 kB and 128 kB.

The sticks varied from 1 GB to 16 GB, from slow types (write in Windows @ 2.5 MB/sec, read @ 6 MB/sec) to fast types (write @ 22 MB/sec, read @ 32 MB/sec).

I found that a 16 kB cache is a nice compromise between loading small scattered files and large sequential files.

Using a 16 kB cache made the reading speed on sequential files go from 0.93 MB/sec to 3.3 MB/sec. I have to note here that the kernel's caching system considerably slows down disk access (I think that's the interfering component; I will check later).

For instance, a 128 kB cache gives me a read speed (sequential files) of around 20 MB/sec at the start of a file copy for a second or so (for instance during 35 MB of a 160 MB file), after which the speed drops to around 5 MB/sec. (The 3.3 MB/sec mentioned above is with the slowdown; before it, it is above 6 MB/sec or so.)

Hmm, come to think of it, maybe the system cache does speed things up, but after the 35 MB it's full and can't push data to the HD quickly enough. I saw that HD copying is slow as well on Haiku: around 6-7 MB/sec (half for reading, half for writing if the copy takes place within one HD).

Oh, I have boot speed figures as well (booting from stick, all types): with a 16 Mb cache I found the optimum boot speed, some 15-20 seconds shorter on all systems. Using 128 Mb slows it down by, say, 15 secs.

I have to note that my cache only caches sequential reads. I don't have writing speed info yet; I'm implementing the same cache technique for writes in the driver atm to see what that does. I can tell you I can already work very nicely with Haiku booted from a stick, even playing back full-HD movies (that was unthinkable before..). The only stuttering I see is when a write access to the stick takes place. This can be improved by caching, though very small non-sequential chunks will always remain slow as long as the internal technique of USB sticks doesn't improve.
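
For illustration, a minimal sketch of the sequential read-ahead idea described above; ReadAheadCache and DeviceRead are hypothetical stand-ins, not the actual usb_disk patch:

    #include <sys/types.h>
    #include <algorithm>
    #include <cstdint>
    #include <cstring>
    #include <vector>

    static const size_t kCacheSize = 16 * 1024;  // the 16 kB sweet spot found above

    // Stand-in for the driver's real bulk-transfer routine: one USB mass
    // storage READ of `length` bytes at byte offset `offset`.
    ssize_t DeviceRead(uint64_t offset, void* buffer, size_t length);

    class ReadAheadCache {
    public:
        ReadAheadCache() : fFrom(0), fLength(0), fBuffer(kCacheSize) {}

        // Serve a small read, refilling the cache with one large device
        // transfer whenever the request falls outside the cached window.
        ssize_t Read(uint64_t offset, void* out, size_t length)
        {
            if (offset < fFrom || offset + length > fFrom + fLength) {
                ssize_t got = DeviceRead(offset, fBuffer.data(), kCacheSize);
                if (got < 0)
                    return got;
                fFrom = offset;
                fLength = (size_t)got;
            }
            size_t copy = std::min(length, (size_t)(fFrom + fLength - offset));
            memcpy(out, fBuffer.data() + (offset - fFrom), copy);
            return (ssize_t)copy;
        }

    private:
        uint64_t fFrom;    // device offset the buffer starts at
        size_t fLength;    // valid bytes in the buffer
        std::vector<uint8_t> fBuffer;
    };

A real version would also have to invalidate the cached window on every write to the device, or stale data could be served.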

Oh, I also plan to do some testing/benchmarking on a real USB HD to see if that behaves likewise.


mmlr, you have not yet told me what the plans are to improve things, or which ticket describes those plans. All info you have is interesting to me.. :-)

Bye!

Rudolf.

comment:19 by rudolfc, 15 years ago

Hmm, small correction:

Optimum boot speed is with a 16 kB cache, not 16 Mb :-)

About the kernel cache: it must be the latter I described; I will double-check later. I am now wondering if such a simple cache inside the normal ATA/SATA drivers would give a similar speedup to what I now have with usb_disk.

BTW: I saw a discussion between you and the person who wrote the USB replacement drivers for BeOS (Siarzhuk or something?) about connecting the SCSI stack to the USB stack's way of accessing sticks. What's the status of this, and what will it do for the speed issues we see now? (I guess nothing; it's just a more 'universal' (much nicer) way of doing USB disk transfers, correct?)

thanks!

Rudolf.

comment:20 by mmlr, 15 years ago

YOU ARE WASTING YOUR TIME.

Ok. Please try to understand what I said above. The IO subsystem in the kernel, basically the IO scheduler with its IO requests, abstracts the transfers. These currently all operate on physical memory, because that's what the SCSI stack (and therefore the ATA stack attached to it) works with most efficiently. As soon as such a request is handed to a driver that does not operate on physical memory, the request is mapped, page-wise, to virtual memory and then fed to the driver's read/write hook. The resulting transfer lengths can only ever reach 4K for that reason.
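
A simplified model of that path (stand-in types and stub hooks, not the actual kernel sources) shows how even one contiguous 64 KiB request reaches the driver's hook as sixteen 4 KiB transfers:

    #include <cstdint>
    #include <cstdio>

    static const size_t kPageSize = 4096;  // B_PAGE_SIZE

    struct physical_entry { uint64_t address; size_t size; };

    // Trivial stubs so the sketch runs; the real mapping goes through the VM.
    static char sWindow[kPageSize];
    static void* map_physical_page(uint64_t) { return sWindow; }
    static void unmap_page(void*) {}
    static int driver_read(uint64_t offset, void*, size_t length)
    {
        printf("driver read: offset %llu, length %zu\n",
            (unsigned long long)offset, length);  // never exceeds kPageSize
        return 0;
    }

    // Each physical page is mapped and handed to the hook on its own, so
    // no single transfer can exceed one page, however large the request.
    static int
    feed_request_page_wise(uint64_t deviceOffset, const physical_entry* vecs,
        size_t vecCount)
    {
        for (size_t i = 0; i < vecCount; i++) {
            uint64_t physical = vecs[i].address;
            size_t remaining = vecs[i].size;
            while (remaining > 0) {
                size_t chunk = remaining < kPageSize ? remaining : kPageSize;
                void* mapped = map_physical_page(physical);
                int status = driver_read(deviceOffset, mapped, chunk);
                unmap_page(mapped);
                if (status != 0)
                    return status;
                physical += chunk;
                deviceOffset += chunk;
                remaining -= chunk;
            }
        }
        return 0;
    }

    int
    main()
    {
        // One contiguous 64 KiB request still reaches the driver as 16 reads.
        physical_entry vec = { 0x100000, 64 * 1024 };
        return feed_request_page_wise(0, &vec, 1);
    }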

The way to fix this is either to make the IO scheduler aware of the restrictions down the path, so it can make a better decision on which memory space to operate in, or to implement a physical IO path through USB.

While caching in the usb_disk driver certainly works, it is not the way to go. Since there already are the IO scheduler and the block/file caches, it completely duplicates existing work.

As long as these fundamental things, which I am working on, aren't resolved, benchmarks or hacks in usb_disk are a waste of time.

comment:21 by mmlr, 15 years ago

I don't want to piss you off, BTW. I just want to make it very clear that the problem is perfectly understood. The proper fix is not exactly trivial though, which is why it is taking a bit of time. While I appreciate efforts to find problems, this effort just isn't well spent, as the problem has already been found. I'd like to avoid anyone wasting their resources on re-finding or working around known problems.

comment:22 by rudolfc, 15 years ago

Well, mmlr, using capitals to kindly inform me of something does leave me feeling a bit pissed off after all. Wouldn't it do the same for you?

Anyhow. I'm a practical kind of guy. Until you prove that what you say works (because it's implemented), I am going forward with this workaround, as you call it, if only for my personal version of Haiku.

Tell me, why did you create the usb_disk driver when that's a workaround? The thing to do would be to complete the SCSI-to-USB connection instead. Still, you did this workaround.

Personally I can understand that: it's better to have something that works than something that isn't completed and therefore doesn't work.

I asked you some questions; will you answer them for me? I'd like to get educated here. Oh, and please don't dismiss my hardware knowledge too soon. You just _might_ be overlooking something...

Bye!

Rudolf.

in reply to:  22 comment:23 by anevilyak, 15 years ago

Replying to rudolfc:

Tell me, why did you create the usb_disk driver when that's a workaround? The thing to do would be to complete the SCSI-to-USB connection instead. Still, you did this workaround.

The usb_disk driver was written for the R5-style driver interface because it was first used as a replacement for Be's anemic USB support in R5. This was *long* before Haiku was even able to boot to a graphical desktop. Also, the I/O scheduler and the corresponding driver interface changes were only introduced relatively recently in Haiku (I don't recall the exact date, sorry), so it's not really something the disk driver could have been written against initially.

comment:24 by rudolfc, 15 years ago

Hi anevilyak,

Thanks for the clarification. BTW, is there documentation on this I/O scheduler and driver interface (preferably with an example of a transfer)? I'd like to know how it works, and what the plans are. I am interested in this, even if it isn't directly beneficial to Haiku's development.

Thanks in advance for any pointers you might have.

Rudolf.

in reply to:  22 comment:25 by mmlr, 15 years ago

Replying to rudolfc:

Well, mmlr, using capitals to kindly inform me of something does leave me feeling a bit pissed off after all. Wouldn't it do the same for you?

It was intended to make more obvious what was more subtle in the other two comments I added: that I know where the problem is and am actively working on resolving it, so adding workarounds is not well-spent effort. Since you didn't seem to acknowledge that, I tried to be more blunt about it. Of course I can't restrict what you invest your time in, but I find it sad to see someone going in a direction that won't pay off in the long run. I usually like to be told before spending effort, and since you wrote that you were about to implement the write cache as well and wanted to invest time in further benchmarking, I saw the chance of still saving you a bit more time that wouldn't go toward the final solution.

Anyhow. I'm a practical kind of guy. Until you prove that what you say works (because it's implemented), I am going forward with this workaround, as you call it, if only for my personal version of Haiku.

I'm sorry, but I do also have a bit of knowledge in this field, and I guarantee you that what I am saying is actually the case. Why do I know this for sure? Because before the introduction of the IO scheduler and related functionality this issue was not present, because you can easily test whether there is a general problem with USB performance or not (there isn't), and because I have already analyzed and tracked down the exact places the problem is: DoIO::IO() in vfs_request_io.cpp for reading, and how writing is done in vm_page.cpp for writing.

Tell me, why did you create the usb_disk driver when that's a workaround? The thing to do would be to complete the SCSI-to-USB connection instead. Still, you did this workaround.

Because the two things are on a completely different scale. The usb_disk driver is a workaround, yes, but the usb_scsi part needs quite a lot of rework to be integrated properly. Just compare the LOC of both and you will see that it wasn't exactly a huge task to implement usb_disk. Also, the intention of usb_disk was to provide only a very limited subset of SCSI, since many devices aren't strictly SCSI and actually have issues when attached to a real SCSI stack, because they don't handle a lot of commands. Taking a look at the quirks lists in other OSes, you can easily see that it might be simpler to attach a simplified driver with a limited, known-good SCSI command set. That's what usb_disk is.
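
For illustration only, such a limited, known-good command set might boil down to something like this (the opcodes are standard SCSI; that usb_disk uses exactly this set is an assumption):

    // Hypothetical sketch of a minimal mass-storage SCSI command set.
    enum minimal_scsi_opcode {
        SCSI_TEST_UNIT_READY   = 0x00,
        SCSI_REQUEST_SENSE     = 0x03,
        SCSI_INQUIRY           = 0x12,
        SCSI_MODE_SENSE_6      = 0x1a,
        SCSI_START_STOP_UNIT   = 0x1b,
        SCSI_READ_CAPACITY_10  = 0x25,
        SCSI_READ_10           = 0x28,
        SCSI_WRITE_10          = 0x2a,
        SCSI_SYNCHRONIZE_CACHE = 0x35
    };

Everything else a full SCSI stack might send (log pages, mode selects, vendor-specific commands) is simply never issued, which is what keeps quirky devices out of trouble.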

And indeed, usb_scsi would run into exactly the same problem in this case, simply because there is no physical memory API in the USB stack yet.

I asked you some questions; will you answer them for me? I'd like to get educated here. Oh, and please don't dismiss my hardware knowledge too soon. You just _might_ be overlooking something...

I certainly don't dismiss your hardware knowledge, but please also acknowledge that I have a very deep understanding of these parts of Haiku, since I've worked on most levels already, and I am kind of familiar with the USB implementation since I plainly and simply wrote most of it. If I weren't sure where the reasons lie, then I'd certainly be open to these efforts, but as I said, the issue is understood.

To your questions:

Why 4K? -> That's B_PAGE_SIZE, yes.

Why 2K block reads/writes? -> Because that depends on a few factors; mostly the FS block size and the device block size influence it. When BFS inodes are read, for example, they will be 2K with default settings. If actual file content is read/written, the blocks will be 4K.

Plans: Short term: change the mapping strategy, which brings read speeds back to acceptable values (already done), and investigate the best solution for implementing burst write-backs of pages to get normal write speeds. Right now all writing (except for directly accessing a device, through dd for example) is chopped up into B_PAGE_SIZE transfers. This is simply because of how page write-back works right now, and it is mostly hidden by the caches and by the low latency of hard disks in normal use cases. On a USB stick you will clearly notice it though, especially when you're waiting for the sync or unmount to complete so you can finally unplug the device.
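
A rough sketch of what such a burst write-back could look like (illustrative stand-in types and a hypothetical scatter/gather hook, not the actual kernel code):

    #include <sys/uio.h>
    #include <cstdint>

    static const size_t kPageSize = 4096;  // B_PAGE_SIZE

    struct dirty_page {
        uint64_t diskOffset;  // where this page belongs on disk
        void* data;           // the page's contents in RAM (not contiguous!)
    };

    // Stand-in for a driver hook that accepts scatter/gather vectors.
    void device_writev(uint64_t offset, const iovec* vecs, size_t count);

    // Walk dirty pages sorted by disk offset and issue one scatter/gather
    // write per contiguous run on disk, instead of one write per page.
    void
    write_back_coalesced(dirty_page* pages, size_t count, iovec* vecs)
    {
        size_t runStart = 0;
        for (size_t i = 0; i < count; i++) {
            vecs[i].iov_base = pages[i].data;
            vecs[i].iov_len = kPageSize;

            bool endOfRun = i + 1 == count
                || pages[i + 1].diskOffset != pages[i].diskOffset + kPageSize;
            if (endOfRun) {
                device_writev(pages[runStart].diskOffset, vecs + runStart,
                    i + 1 - runStart);
                runStart = i + 1;
            }
        }
    }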

Long term: implement a physical memory API for the USB stack, possibly migrating it to use IORequests right away. This can only work for EHCI, however, and means a lot of work in general, so it won't be ready too soon. Then usb_scsi can be revisited and properly integrated. As mentioned above though, it has a lot of features and therefore a higher chance of provoking device issues compared to the limited SCSI implemented in usb_disk, so this has to be very carefully planned and tested to ensure compatibility. Note also that the protocols besides transparent SCSI aren't really relevant anymore, as current devices almost never use anything else (this is thanks to Microsoft, which discourages anything other than transparent SCSI; I agree with that, because it makes things a lot simpler and more standardized).

Let's move to #4690 for further discussion, as the original issue here has been fixed and #4690 is specifically about the performance (there was another ticket, but I can't seem to find it right now).

comment:26 by rudolfc, 15 years ago

Hi there mmlr,

Thanks for your detailed, in-depth response; I appreciate it a lot. And thanks for trying to warn me; that's appreciated as well. It was the way you wrote it, and the only partial response I got, that triggered me this way.

Since I've also looked at USB's inner workings in the past (though I have not actually built something with that yet), I know it's a hell of a job getting it implemented, so I appreciate a great deal that you did that.

Before I switch to the other bug report I just want to wrap things up here and say what I had to say, more or less.

I am a person who will double-check everything I do. This goes for Haiku as well, since I kind of love this thing, and the speed performance in this area is really bad and hasn't changed, as far as I could see (measuring, black-box style), for at least a year.

I was very happy with your pointers, since they gave me the opportunity to start doing just that: verify some stuff, and learn on the way.

Well, I still have my original feeling: there is not one problem, but two. (Of course, that may just be my own stupidity in not recognizing something, but I want to understand the reason anyhow.) Because, if I run dd in Windows with 4 kB blocks, I get 8 MB/sec reading speed. If things performed reasonably well on Haiku with 4 kB blocks (because of B_PAGE_SIZE), I would expect not 8 MB/sec read speed, but also not 1 MB/sec; say, 3-5 MB/sec would be reasonable performance in my eyes.

I think I neglected to mention this speed fact to you, while it might be the most important thing I could say about this.

BTW, with dd in Haiku, using 4 kB blocks, I get the same read and write speed as when just plainly copying files using Tracker.

Oh, you also have to understand that I never planned to get the cache stuff into Haiku; it was just for myself over here, as a proof of concept, trying to nail down why I think there are two problems. I let you know because I think the results might be interesting/important in making you see that there just might be a second issue.

About caches in general: I don't agree per se that there should only be one cache. Looking at PC hardware, there is more than one as well. I am convinced they exist only because it makes sense to have them (level 1, 2, and 3 caches between CPU and memory; another is MTRR-WC, I'd say).

So I am not yet convinced that you will not, in the end, end up using some sort of second cache at the lower levels to deal with the quirks of hardware (to minimize bottlenecks specific to certain kinds of hardware).

Well, I suggest we forget about my tests now, since it seems from your words that you have it all fully under control.

It would be cool if you could point me (us) at the changes you are making in this area in that other bug report (#4690), so things can be tested/tried when they arrive.

I would appreciate being able to follow this, since the low speed bothers me that much.

For instance, you already mentioned 'change the mapping strategy'. Which revision should I look at to play with that? I'll look in #4690 from now on; it would be nice if you would answer this question there.

Bye! Keep up the good work :-)

Rudolf.

in reply to:  26 comment:27 by mmlr, 15 years ago

Hi Rudolf

It was the way you wrote it, and the only partial response I got, that triggered me this way.

Understandable, sorry for that again.

Well, I still have my original feeling: there is not one problem, but two. (Of course, that may just be my own stupidity in not recognizing something, but I want to understand the reason anyhow.) Because, if I run dd in Windows with 4 kB blocks, I get 8 MB/sec reading speed. If things performed reasonably well on Haiku with 4 kB blocks (because of B_PAGE_SIZE), I would expect not 8 MB/sec read speed, but also not 1 MB/sec; say, 3-5 MB/sec would be reasonable performance in my eyes.

I'm not sure the dd on Windows will actually access the device directly with that block size. If it writes into a cache, that would skew the numbers.

BTW, with dd in Haiku, using 4 kB blocks, I get the same read and write speed as when just plainly copying files using Tracker.

Yes, that verifies that either transfer ends up using 4K blocks, even though Tracker uses a far larger block size.

About caches in general: I don't agree per se that there should only be one cache. Looking at PC hardware, there is more than one as well. I am convinced they exist only because it makes sense to have them (level 1, 2, and 3 caches between CPU and memory; another is MTRR-WC, I'd say).

Well, those caches all have different purposes: level one is near and fast, level two is shared across cores but still nearer than RAM, and so forth. The cache we are talking about is only the file and block caches, and it always resides in RAM. There should only be one single representation of a file in memory. All reads and writes to or from it should then be mere "pointers" into that memory, with a corresponding base address and length to describe the section. This is exactly what iovecs do and how IORequests work. In the end there is a single cache holding the data, and populating it and writing it back is done on this set of physical pages through optimized IORequests (holding optimized iovecs) that describe the wanted sections with the fewest possible descriptors. Then all layers operate on these requests, adjusting offsets and lengths only, instead of copying anything.

That's pretty much how the IO path already works for disks right now. The requests aren't yet fully optimized, i.e. they aren't combined in some places where they could be. But file reads, for example, are described using IORequests; the file system provides vectors to map the scattered file to partition offsets, and devfs then adjusts the offsets to compensate for the partition-to-disk offset. All this happens without any data copying, so in the end the disk driver DMA-reads directly into the physical pages provided for the cache. That's as good as it can get, and it is the long-term goal for USB as well.
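
A toy illustration of that zero-copy layering (stand-in types, not the real IORequest API):

    #include <sys/uio.h>
    #include <cstdint>
    #include <cstdio>

    // An IO request describes sections of the one file cache as
    // (address, length) vectors; nothing below this layer copies data.
    struct io_request {
        uint64_t offset;     // where the data lives on the partition/disk
        iovec* vecs;         // scattered cache pages holding the data
        size_t count;
    };

    // devfs-style layer: compensate for the partition's start offset by
    // adjusting the request only; the data vectors are left untouched.
    static void
    translate_partition_to_disk(io_request& request, uint64_t partitionStart)
    {
        request.offset += partitionStart;
    }

    int
    main()
    {
        char pageA[4096], pageB[4096];  // pretend these are cache pages
        iovec vecs[] = {
            { pageA, sizeof(pageA) },
            { pageB, sizeof(pageB) },
        };
        io_request request = { 8 * 4096, vecs, 2 };

        translate_partition_to_disk(request, 1024 * 1024);
        printf("disk offset %llu, %zu vecs, zero copies\n",
            (unsigned long long)request.offset, request.count);
        return 0;
    }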

So I am not yet convinced that you will not, in the end, end up using some sort of second cache at the lower levels to deal with the quirks of hardware (to minimize bottlenecks specific to certain kinds of hardware).

The upper layers should always provide the longest contiguous buffers/vectors. The drivers down the IO path can then chop them up as needed to accommodate the different hardware. While flash media is certainly an extreme example, the general rule that long burst reads/writes are faster than small transfers holds true for pretty much all other storage as well. Due to latency differences you just notice it more on some hardware.

Well, I suggest we forget about my tests now, since it seems from your words that you have it all fully under control.

Feel free to continue testing, just be sure to wait for the imminent changes to happen first. Before these are in, pretty much all benchmark results will simply show that it sucks, and all numbers are invalidated as soon as a change happens.

For instance, you already mentioned 'change the mapping strategy'. Which revision should I look at to play with that?

That would be hrev33503, from just a few minutes ago. It should bring read speeds, while not yet fully optimized, to acceptable levels. As mentioned, the write side is a bit more involved, but I'm working on integrating those changes as well.

comment:28 by rudolfc, 15 years ago

Hi again Michael,

I have more or less found the bug I was encountering at BeGeistert (black screen after boot using the nvidia driver).

It turns out it's the fact that AGP gets enabled. From the looks of it this wasn't a problem before the USB changes, while afterwards it is.

Could it be that the AGP aperture is somehow accidentally used for mapping files? That would explain the 'random' behaviour I see here: the system can work, but if I add one file to the running image on the stick, the screen is black on the next boot, and if I delete it again, things work again..

I tested an image from 12 September: all OK. An image from, for instance, 10 November: trouble. If I copy the nvidia driver and AGP bus manager from the 10 November image onto the 12 September image: all OK. The other way around: trouble.

Could you (or Axel) double-check for AGP/memory mapping trouble?

Thanks!!

Rudolf.

comment:29 by axeld, 15 years ago

Version: R1/pre-alpha1 → R1/Development

Unless you wrote an AGP driver, no aperture should be used.

comment:30 by rudolfc, 15 years ago

Hi Axel,

Please note: I did not change a single thing! The nvidia driver supports AGP (but does not use the aperture). AGP is enabled by the driver, and the acceleration engine is set up to use it (though app_server doesn't use it atm).

As soon as AGP mode is enabled, the driver (system?) stops working atm (the last entry in its log file is the AGP-enabling entry). Some other system component must be responsible for that. Again, could it be that when files are mapped to memory, a part is written somewhere AGP-related where it should not be? So, could some mapping code that was modified for the USB speedup be related to this? Some boundary not checked, maybe, since blocks larger than 4 kB are now used?

Bye!

Rudolf.
