Opened 16 years ago

Closed 5 years ago

Last modified 4 years ago

#2817 closed enhancement (fixed)

setpriority and getpriority are missing, preventing OCaml from compiling out of the box under Haiku

Reported by: oco
Owned by: nobody
Priority: normal
Milestone: R1/beta2
Component: System/POSIX
Version: R1/Development
Keywords:
Cc: umccullough@…
Blocked By:
Blocking:
Platform: All

Description

setpriority and getpriority are not yet implemented. There is a TODO in posix/sys/resource.h.

Could the algorithm used in the renice command line tool (in src/bin) be used in libroot?

Basically, renice iterates over the threads of a team and sets the thread priority on each one. A simple conversion function adapts the Unix priority to a BeOS thread priority (a rough sketch of this approach follows at the end of this description).

If so, maybe I can try to implement this.

This is the only thing missing to compile OCaml out of the box under Haiku (after updating the usual config.sub and config.guess).
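For illustration, here is a minimal sketch of that approach, assuming a userland-only implementation. The function names are made up, the nice-to-priority mapping is an illustrative linear one (not the exact formula renice uses), and only PRIO_PROCESS is handled:

{{{
/* Sketch only: a userland setpriority() modeled on what src/bin/renice
 * does. my_setpriority() and nice_to_be_priority() are made-up names,
 * and the mapping is illustrative, not the one renice actually uses. */
#include <errno.h>
#include <sys/resource.h>
#include <unistd.h>

#include <OS.h>

static int32
nice_to_be_priority(int niceValue)
{
	// Map nice 19..-20 onto roughly 1..B_URGENT_DISPLAY_PRIORITY,
	// centered on B_NORMAL_PRIORITY. Purely illustrative.
	if (niceValue > 0)
		return B_NORMAL_PRIORITY - niceValue * (B_NORMAL_PRIORITY - 1) / 19;
	return B_NORMAL_PRIORITY
		- niceValue * (B_URGENT_DISPLAY_PRIORITY - B_NORMAL_PRIORITY) / 20;
}

int
my_setpriority(int which, id_t who, int niceValue)
{
	if (which != PRIO_PROCESS) {
		// PRIO_PGRP and PRIO_USER are left out of this sketch.
		errno = EINVAL;
		return -1;
	}

	// POSIX: who == 0 means the calling process, i.e. a team on Haiku.
	team_id team = who == 0 ? getpid() : (team_id)who;

	// Walk every thread of the team and apply the converted priority.
	int32 cookie = 0;
	thread_info info;
	while (get_next_thread_info(team, &cookie, &info) == B_OK)
		set_thread_priority(info.thread, nice_to_be_priority(niceValue));

	return 0;
}
}}}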

Attachments (1)

0001-libroot-add-gs-etpriority-implementation.patch (8.6 KB ) - added by Timothy_Gu 9 years ago.
Patch from #11755.


Change History (26)

comment:1 by umccullough, 15 years ago

Cc: umccullough@… added
Type: bug → enhancement

I ran into this when porting a small app last night - instead of using setpriority() from sys/resource.h, I ended up just using set_thread_priority() from OS.h which seems to be pretty close to the same functionality.

Is there a difference in concept between the two? If not, I think it wouldn't be too hard to implement something like this...
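For reference, the substitution I ended up with looks roughly like this (minimal sketch; the setpriority() call in the comment is just what the app originally did):

{{{
#include <OS.h>

/* Minimal sketch of the substitution: where the port called
 * setpriority(PRIO_PROCESS, 0, 10) to make itself "nicer", just
 * lower the current thread's priority through the kernel kit. */
static void
be_nicer(void)
{
	set_thread_priority(find_thread(NULL), B_LOW_PRIORITY);
}
}}}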

comment:2 by kaliber, 14 years ago

Component: System/libroot.so → System/POSIX
Owner: changed from axeld to nobody

comment:3 by leavengood, 13 years ago

FWIW these could also be useful (though probably not required) for the Rubinius port I am working on. If adding them is not too difficult, it could improve our POSIX compatibility. Though, after reading about them and the code of our renice, maybe they just don't make as much sense on a BeOS-like system. I certainly don't know how setting the priority for a user would work, but I guess setting the priority for a process could map to a thread in Haiku, and a process group could be a team.

comment:4 by axeld, 13 years ago

Haiku supports processes (=teams) and process groups as well, so that alone shouldn't be a problem.

comment:5 by bonefish, 13 years ago

While nice values seem somewhat similar to priorities, it's not that simple to map one to the other, since priorities are per-thread while nice values are per-process. There are also pthread_setschedprio() and pthread_setschedparam(), which set a per-thread priority that should probably work like an offset to the nice value (we currently map it directly).
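For reference, this is the kind of per-thread call meant here; a plain POSIX usage sketch, with nothing Haiku-specific assumed:

{{{
#include <pthread.h>
#include <sched.h>

/* Standard POSIX usage of the per-thread interface mentioned above.
 * How the resulting priority should combine with a per-process nice
 * value is exactly the open question. */
static int
set_thread_sched_priority(pthread_t thread, int priority)
{
	struct sched_param param;
	param.sched_priority = priority;
	return pthread_setschedparam(thread, SCHED_RR, &param);
}
}}}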

comment:6 by leavengood, 13 years ago

Can we really implement these properly if we don't have nice values for processes which the scheduler uses? Do we even want to add these Unixy nice values? Seems like it might be an archaic concept in the modern world of threads.

So either we just don't implement these, or we just do a fairly dumb mapping like the renice command does and call that good enough.

in reply to:  6 ; comment:7 by bonefish, 13 years ago

Version: R1/pre-alpha1 → R1/Development

Replying to leavengood:

Can we really implement these properly if we don't have nice values for processes which the scheduler uses?

No, but we can easily introduce per-process nice values. It's not even something the scheduler would need to know about. They could simply be applied to all threads of the process, e.g. by computing the Haiku thread priority from nice value and pthread scheduling priority.

Do we even want to add these Unixy nice values? Seems like it might be an archaic concept in the modern world of threads.

At least they shouldn't be complicated to implement and wouldn't get in the way either.

So either we just don't implement these, or we just do a fairly dumb mapping like the renice command does and call that good enough.

... or approximate them more correctly, as suggested above. At least that's what I would recommend to whoever wants to implement the functionality.
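Concretely, the "more correct" approximation could look something like the following sketch. The function name and the scaling are hypothetical; it only shows where the team nice value and the pthread priority would both enter:

{{{
#include <OS.h>

/* Hypothetical sketch: compute a thread's effective Haiku priority
 * from the team-wide nice value plus the thread's own pthread
 * scheduling priority, instead of mapping the latter directly.
 * The scaling is an arbitrary choice for illustration. */
static int32
effective_priority(int teamNice, int pthreadPriority)
{
	int32 priority = B_NORMAL_PRIORITY - teamNice / 2 + pthreadPriority;
	if (priority < 1)
		priority = 1;
	if (priority > B_URGENT_DISPLAY_PRIORITY)
		priority = B_URGENT_DISPLAY_PRIORITY;
	return priority;
}
}}}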

in reply to:  7 ; comment:8 by leavengood, 13 years ago

Replying to bonefish:

No, but we can easily introduce per-process nice values. It's not even something the scheduler would need to know about. They could simply be applied to all threads of the process, e.g. by computing the Haiku thread priority from nice value and pthread scheduling priority.

Alright, so the only difference between what you suggest and what is done in renice is that the pthread scheduling priority is taken into account.

... or approximate them more correctly, as suggested above. At least that's what I would recommend to whoever wants to implement the functionality.

OK I'll probably take a crack at it over the weekend and either I could attach a patch here or just commit it and it can be reviewed on the commit list. It is not like it could break anything.

in reply to:  8 ; comment:9 by bonefish, 13 years ago

Replying to leavengood:

Alright, so the only difference between what you suggest and what is done in renice is that the pthread scheduling priority is taken into account.

The main difference is that the nice value would be stored in the kernel team structure.

OK I'll probably take a crack at it over the weekend and either I could attach a patch here or just commit it and it can be reviewed on the commit list. It is not like it could break anything.

Well, it's a kernel change and, though not really complicated, it certainly isn't a no-brainer either. So I don't quite agree that there isn't any potential for breakage. Anyway, it's not like others don't break stuff on a regular basis, and reverting the change is always an option. :-)

in reply to:  9 ; comment:10 by leavengood, 13 years ago

Replying to bonefish:

The main difference is that the nice value would be stored in the kernel team structure.

Ah, OK.

Well, it's a kernel change and, though not really complicated, it certainly isn't a no-brainer either. So I don't quite agree that there isn't any potential for breakage. Anyway, it's not like others don't break stuff on a regular basis, and reverting the change is always an option. :-)

Hmmm, I was thinking it would just be a userland libroot change which called set_thread_priority and similar. I still think I could do it, but having it as a kernel change is a bit more complicated indeed. So I may just do it as a patch then, which I could later commit once it was reviewed.

How do you do kernel work while actually running Haiku? I imagine it isn't too safe to mess with the kernel on the running system. Could I test with Qemu or similar? Or do you build onto another partition and test on that?

Pardon my ignorance on doing kernel-level changes; while I did do some kernel stuff in school, it has been a while. I'd like to get into some Haiku kernel work though, so this might be a nice start.

in reply to:  10 comment:11 by bonefish, 13 years ago

Replying to leavengood:

Hmmm, I was thinking it would just be a userland libroot change which called set_thread_priority and similar.

Since one cannot access the pthread scheduling priority of other teams' threads, that would basically limit the implementation to what mmu_man's renice does. getpriority() would only be able to guess from the thread priorities and I don't think there is an interface to iterate through the processes of a process group, either.

I still think I could do it, but having it as a kernel change is a bit more complicated indeed. So I may just do it as a patch then, which I could later commit once it was reviewed.

Feel free.

How do you do kernel work while actually running Haiku? I imagine it isn't too safe to mess with the kernel on the running system. Could I test with Qemu or similar? Or do you build onto another partition and test on that?

I usually work under Linux with qemu (or VMware). If qemu under Haiku works, that would be an option, too, but the turn-around times under Linux are shorter. Net-booting a second machine is slower than using emulators. Working with two installations on one machine -- developing in one, testing with the other -- is prohibitively time-consuming.

comment:12 by luroh, 9 years ago

Milestone: R1 → Unscheduled

Move POSIX compatibility related tickets out of R1 milestone (FutureHaiku/Features).

comment:13 by Timothy_Gu, 9 years ago

I have implemented the two functions as a part of a GCI 2014 task. See #11755 for more information.

comment:14 by pulkomandy, 9 years ago

Blocking: 11755 added

(In #11755) You should attach your patches to the existing ticket #2817 directly instead of opening a new one.

comment:15 by pulkomandy, 9 years ago

Blocking: 11755 removed

There is a limitation with the approach you implemented in these patches: the priority only applies to existing threads. As discussed in this ticket, a better solution would be to have a process-global "nice" value and also use it when creating new threads to adjust their priorities.

in reply to:  15 comment:16 by Timothy_Gu, 9 years ago

Replying to pulkomandy:

There is a limitation with the approach you implemented in these patches: the priority only applies to existing threads.

I assume you only mean setpriority() here, as getpriority() is kind of irrelevant to the next comment.

As discussed in this ticket, a better solution would be to have a process-global "nice" value and also use it when creating new threads to adjust their priorities.

(Assuming by "process" you mean "team")

This badly conflicts with the current kernel architecture:

  1. What if you spawn a new thread and change the niceness of the new thread to another level? Should the process niceness be changed? If so, to what value?
  2. Say you have a process with two threads, thread 0 with niceness 1 and thread 1 with niceness 2. Also assume we have chosen the process niceness to be the maximum numerical value of the niceness of all the threads, so 2 in this case. What if I spawn a new thread from thread 0? Should the new thread have niceness 2 or 1?
  3. I assume the team_info ABI is locked.

A more viable approach is to fix spawn_thread() and fork() to use the niceness of the spawning thread, which is out-of-scope for this patch alone.

by Timothy_Gu, 9 years ago

Patch from #11755.

comment:17 by Timothy_Gu, 9 years ago

patch: 0 → 1

comment:18 by pulkomandy, 9 years ago

What is called "team" in the BeAPI is what is called "process" in POSIX.

The setpriority documentation (http://pubs.opengroup.org/onlinepubs/9699919799/functions/setpriority.html) says that the priority is set for the whole process.

 If the process is multi-threaded, the nice value shall affect all system scope threads in the process.

So, the "niceness" is process-global. It needs to be stored somewhere, and getpriority can read it there and return it directly. This avoids the rounding errors of your current implementation.

When setpriority is called, the "nice" value for the team should be updated, and the priority for each running thread adjusted to match. When a new thread is spawned, its priority should be adjusted according to the current nice value.

You are right that team_info shouldn't be modified; however, it is only the user-visible information for the team (and since getpriority already allows getting the nice value, it doesn't need to be available in the team_info). In headers/private/kernel/thread_types.h is the definition of struct Team, which is the place where the process nice value can be stored.

Of course we need a way to get it on the kernel side, and for this getpriority() and setpriority() will probably need a new system call.
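To make the intended flow concrete, here is a self-contained sketch of that scheme. All the types and names below (TeamSketch, team_set_nice, ...) are made up for illustration; the real change would add a nice value to the kernel's struct Team plus a syscall pair, and locking and permission checks are omitted:

{{{
/* Self-contained sketch of the scheme described above; not actual
 * kernel code. The kernel's thread list stands in as a std::vector. */
#include <vector>

#include <OS.h>

struct TeamSketch {
	int						nice_value;	// proposed per-team nice value
	std::vector<thread_id>	threads;	// stand-in for the team's thread list
};

static int32
priority_from_nice(int niceValue)
{
	// Illustrative mapping only; the real conversion is to be decided.
	int32 priority = B_NORMAL_PRIORITY - niceValue / 2;
	return priority < 1 ? 1 : priority;
}

// setpriority(): store the value and re-apply it to every existing thread.
static void
team_set_nice(TeamSketch& team, int niceValue)
{
	team.nice_value = niceValue;
	for (thread_id thread : team.threads)
		set_thread_priority(thread, priority_from_nice(niceValue));
}

// getpriority(): return the stored value directly, with no rounding
// back from thread priorities.
static int
team_get_nice(const TeamSketch& team)
{
	return team.nice_value;
}

// A newly spawned thread picks up the current nice value as well.
static thread_id
team_spawn_thread(TeamSketch& team, thread_func entry, const char* name,
	void* data)
{
	thread_id thread = spawn_thread(entry, name,
		priority_from_nice(team.nice_value), data);
	if (thread >= 0)
		team.threads.push_back(thread);
	return thread;
}
}}}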

in reply to:  18 ; comment:19 by Timothy_Gu, 9 years ago

Replying to pulkomandy:

When setpriority is called, the "nice" value for the team should be updated, and the priority for each running thread adjusted to match. When a new thread is spawned, its priority should be adjusted according to the current nice value.

This still does not answer the question of what the process niceness should be when the user changes the priority of a thread.

In headers/private/kernel/thread_types.h is the definition of struct Team which is the place where the process nice value can be stored.

OK

Of course we need a way to get it on the kernel side, and for this getthreadpriority and setthreadpriority must probably use a new system call.

How do syscalls work? Is there a simple example somewhere?

in reply to:  19 ; comment:20 by anevilyak, 9 years ago

Replying to Timothy_Gu:

This still does not answer the question of what the process niceness should be when the user changes the priority of a thread.

The POSIX specs are somewhat vague on this subject and leave quite a few things regarding thread scheduling policy as "implementation-defined", but generally the nice value for a process/team is completely independent of the priority of any individual thread within it, ergo the answer to your question is: changing the priority of a given thread has no impact on the nice value.

in reply to:  20 comment:21 by Timothy_Gu, 9 years ago

Replying to anevilyak:

Replying to Timothy_Gu:

This still does not answer the question of what the process niceness should be when the user changes the priority of a thread.

The POSIX specs are somewhat vague on this subject and leave quite a few things regarding thread scheduling policy as "implementation-defined", but generally the nice value for a process/team is completely independent of the priority of any individual thread within it, ergo the answer to your question is: changing the priority of a given thread has no impact on the nice value.

OK, cool

comment:22 by pulkomandy, 6 years ago

patch: 1 → 0

comment:23 by pulkomandy, 6 years ago

Patch migrated to Gerrit: https://review.haiku-os.org/#/c/78/

comment:24 by waddlesplash, 5 years ago

Resolution: fixed
Status: new → closed

Implemented in hrev52776.

comment:25 by nielx, 4 years ago

Milestone: Unscheduled → R1/beta2

Assign tickets with status=closed and resolution=fixed within the R1/beta2 development window to the R1/beta2 Milestone
