API for Obtaining Cursor Shape Bitmap and Change Notification
|Reported by:||AGMS||Owned by:||nobody|
|Has a Patch:||no||Platform:||All|
While updating the BeOS VNC server for Haiku, it occurs to me that it would be nice to have the Haiku-specific cursor (hand shape, pointer arrow, insertion marker, etc.) displayed on the VNC client side. VNC has an API for transmitting the cursor, and the client software does change the system cursor there to match the new bitmap. All we need from Haiku is the cursor bitmap and some sort of notification of when it has changed.
Since we're using a BDirectWindow to do the screen scraping, perhaps the API could be added there (that also avoids the problem of deciding which workspace the window is on, etc.), though someone with more knowledge should decide where it belongs. The API could be something like this:
- GetCursorBitmap - reads the current cursor bitmap, including alpha transparency. The pixel format needs to be reported too, if it isn't a simple 32-bit RGBA pixel, and endianness may need consideration as well.
- GetCursorSerialNumber - returns a count of how many times the cursor has been changed. The idea is that the VNC server periodically calls this and, if the number has changed, fetches the new cursor bitmap and transmits it over the network to the client.
- IsCursorDrawn - returns TRUE if the cursor is being drawn into the screen's video memory, FALSE if it is a separate video hardware cursor.
If the hardware cursor is off or not implemented, the current scheme already works, since Haiku writes the cursor shape into the screen's bitmap and VNC picks it up. If the hardware cursor is operational, the remote user doesn't see anything, and the VNC client currently substitutes a black dot or some other fixed graphic to show where the mouse is.
By the way, one other obvious but actually not useful VNC optimisation would be to have the app_server report all the dirty areas of the screen, so VNC would know which parts need updating over the network. It currently scans the whole screen to find the changed parts, which can be slow. On the other hand, the current method is more responsive: it scans in slices, so each update stays small enough to keep feedback (mouse movement) fast. Using the dirty areas would actually make things feel slower for the user!