
Re: Call for Developers! Userland threading


From: Craig Dooley <cd5697@xxxxxxxxxx>
Date: Fri, 25 Jul 2003 15:52:42 -0400

Here's a proposal I thought of at work today.

- At program init, a UTS (userland thread scheduler) is registered as a 
callback for message receipt, and the number of physical CPUs is requested as 
the maximum number of virtual CPUs to allocate.
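
Roughly what I'm picturing for that init step, in C.  uts_register(), 
uts_msg, and uts_msg_received() are made-up names for illustration, not 
anything that exists today:

    /* Hypothetical sketch: register the UTS upcall at program startup and
     * ask for at most one virtual CPU per physical CPU. */
    struct uts_msg;                                 /* completion message from the kernel */
    typedef void (*uts_upcall_t)(struct uts_msg *);

    /* Assumed syscall: returns the number of virtual CPUs actually granted. */
    int uts_register(uts_upcall_t upcall, int max_vcpus);

    static int nvcpus;

    static void
    uts_msg_received(struct uts_msg *msg)
    {
        (void)msg;          /* hand the completed syscall back to its thread */
    }

    void
    uts_init(int ncpus_physical)
    {
        nvcpus = uts_register(uts_msg_received, ncpus_physical);
    }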

- Even unthreaded programs have two contexts: thread 1 and the UTS.  All 
syscalls are async and notify the UTS to send the message.  The UTS puts that 
thread on its per-virtual-CPU wait queue, where it stays locked until the 
corresponding response comes back from the kernel.  The kernel will not have 
to send global messages to the whole group, just to the issuing "CPU".  If 
there is no other context, issue a wait_msg and have the kernel remove us from 
its wait queues.  Otherwise, search the global run queue for a free thread to 
run.
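
A rough sketch of the issuing side.  vcpu_waitq_insert(), kern_sendmsg(), 
kern_waitmsg(), runq_pop(), and uts_switch() are invented helpers, just to 
show the shape of it:

    /* Hypothetical per-virtual-CPU syscall path: fire off the async message,
     * park the caller on this vcpu's wait queue, then run something else
     * or block in the kernel if nothing is runnable. */
    struct uts_thread;
    struct uts_vcpu;
    struct uts_msg;

    void vcpu_waitq_insert(struct uts_vcpu *vc, struct uts_thread *td);
    void kern_sendmsg(struct uts_msg *msg);          /* async submit to the kernel  */
    void kern_waitmsg(struct uts_vcpu *vc);          /* sleep until a reply arrives */
    struct uts_thread *runq_pop(void);               /* global run queue            */
    void uts_switch(struct uts_thread *td);          /* user-level context switch   */

    void
    uts_syscall(struct uts_vcpu *vc, struct uts_thread *cur, struct uts_msg *msg)
    {
        struct uts_thread *next;

        kern_sendmsg(msg);                 /* the syscall goes out asynchronously */
        vcpu_waitq_insert(vc, cur);        /* caller stays locked to this vcpu    */

        next = runq_pop();
        if (next == NULL)
            kern_waitmsg(vc);              /* no other context: let the kernel    */
                                           /* take us off its run queues          */
        else
            uts_switch(next);
    }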

- As mentioned earlier, there is a per-virtual-CPU wait queue and a global 
wait queue.  Timed stalls or thread_yields can be put on the global wait 
queue.  The UTS will check these before scheduling and put them at the back of 
the global run queue once they have waited long enough.
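
A toy version of that "check the timed waiters first" step; the queue layout 
here is made up, only the idea is from the point above:

    #include <time.h>

    struct uts_thread {
        struct uts_thread *next;
        time_t             wakeup;          /* when a timed stall expires */
    };

    static struct uts_thread *global_waitq; /* timed stalls and thread_yields */

    void runq_insert_tail(struct uts_thread *td);   /* assumed: back of run queue */

    /* Called by the UTS before it picks the next thread to run. */
    void
    uts_check_timed_waiters(void)
    {
        time_t now = time(NULL);
        struct uts_thread **prev = &global_waitq;

        while (*prev != NULL) {
            struct uts_thread *td = *prev;
            if (td->wakeup <= now) {
                *prev = td->next;           /* unlink: it has waited long enough */
                runq_insert_tail(td);       /* back of the global run queue      */
            } else {
                prev = &td->next;
            }
        }
    }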

- Each thread has an owner token.  On the run queue or the global wait queue 
it is unowned, but while it is running or blocked on a CPU it is owned by that 
CPU.  While owned, it cannot be asked to switch to a different CPU.
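
Something like this for the owner token; the field and function names are 
invented:

    struct uts_vcpu;

    struct uts_thread {
        struct uts_vcpu *owner;    /* NULL while on the run queue or global wait queue */
        /* ... saved context, queue links, priority ... */
    };

    /* A vcpu may only pull a thread that nobody owns; on MP this claim would
     * have to be done under the run queue lock or with an atomic op. */
    int
    uts_thread_claim(struct uts_thread *td, struct uts_vcpu *vc)
    {
        if (td->owner != NULL)
            return 0;              /* owned by some cpu: don't move it */
        td->owner = vc;
        return 1;
    }

    void
    uts_thread_release(struct uts_thread *td)
    {
        td->owner = NULL;          /* going back onto a global queue */
    }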

- Messages specific to the UTS should be sent by the kernel for things like 
getting the number of CPUs and getting/setting the time quantum.  The kernel 
can preempt the process at the end of a settable quantum to change the current 
process.
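
The control traffic could be as simple as a little message struct; all of 
these names are made up:

    /* Hypothetical kernel<->UTS control messages. */
    enum uts_ctl {
        UTS_CTL_NCPUS,             /* reply carries the number of cpus       */
        UTS_CTL_GET_QUANTUM,       /* reply carries the current quantum      */
        UTS_CTL_SET_QUANTUM,       /* arg is the new quantum, in usec        */
        UTS_CTL_PREEMPT            /* quantum expired: reschedule now        */
    };

    struct uts_ctlmsg {
        enum uts_ctl cmd;
        long         arg;          /* cpu count or quantum, depending on cmd */
    };

    void uts_reschedule(void);     /* assumed: pick another thread to run */

    void
    uts_handle_ctlmsg(const struct uts_ctlmsg *m)
    {
        if (m->cmd == UTS_CTL_PREEMPT)
            uts_reschedule();      /* kernel preempted us at end of quantum */
        /* replies to the query messages would be handled here as well */
    }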

- Processes can have a simple priority algorithm based on whether they gave 
up the CPU or used their whole quantum.
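
One possible version of that rule, nothing more than a sketch; the range and 
step are arbitrary:

    #define UTS_PRIO_MIN   0
    #define UTS_PRIO_MAX  31

    /* Boost a thread that gave up the cpu early, demote one that burned
     * its whole quantum. */
    static int
    uts_adjust_prio(int prio, int used_whole_quantum)
    {
        if (used_whole_quantum)
            return (prio > UTS_PRIO_MIN) ? prio - 1 : prio;
        return (prio < UTS_PRIO_MAX) ? prio + 1 : prio;
    }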

- Whenever a thread is created, if there are more virtual CPUs than threads 
running, rfork another process.  Don't make more processes than virtual CPUs.
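
Reading that as "add another backing process while there are fewer of them 
than runnable threads, capped at the physical CPU count", the creation hook 
might look roughly like this.  The bookkeeping is invented; rfork_thread(3) is 
the existing libc wrapper that hands the child its own stack:

    #include <stdlib.h>
    #include <unistd.h>

    #define VCPU_STACK_SIZE  (64 * 1024)

    static int nthreads, nvcpus, nvcpus_max;    /* nvcpus_max = physical cpu count */

    int uts_vcpu_main(void *arg);               /* assumed: per-vcpu UTS loop */

    void
    uts_thread_created(void)
    {
        char *stack;

        nthreads++;
        if (nvcpus >= nthreads || nvcpus >= nvcpus_max)
            return;                             /* enough backing processes already */

        stack = malloc(VCPU_STACK_SIZE);
        if (stack == NULL)
            return;
        /* New process sharing the address space; it runs the UTS loop. */
        if (rfork_thread(RFPROC | RFMEM, stack + VCPU_STACK_SIZE,
                         uts_vcpu_main, NULL) != -1)
            nvcpus++;
    }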

- If there are more CPUs than threads, a UTS that finds no work for its CPU 
will put itself on an idle-CPU queue.  The next time another UTS runs and sees 
more than one thread waiting, it sends a message to that idle virtual CPU to 
wake it up and schedule again.
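
Sketch of the idle-vcpu handshake; all the names are invented, and on MP the 
idle list itself would need a lock or atomic ops:

    struct uts_vcpu {
        struct uts_vcpu *idle_next;
        int              port;              /* where to send the wakeup message */
    };

    static struct uts_vcpu *idle_vcpus;     /* vcpus that found nothing to run */

    void send_wakeup(int port);             /* assumed: message to another vcpu */
    int  runq_length(void);                 /* assumed: global run queue depth  */

    /* A vcpu that finds no work parks itself, then blocks in the kernel. */
    void
    uts_vcpu_idle(struct uts_vcpu *vc)
    {
        vc->idle_next = idle_vcpus;
        idle_vcpus = vc;
    }

    /* Any other vcpu, on its next pass through the scheduler. */
    void
    uts_kick_idle(void)
    {
        if (runq_length() > 1 && idle_vcpus != NULL) {
            struct uts_vcpu *vc = idle_vcpus;

            idle_vcpus = vc->idle_next;
            send_wakeup(vc->port);          /* wake it so it schedules again */
        }
    }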

I think most of this fits in with the LWKT framework.  On UP machines there 
should be very little overhead, since you can have pure user threading without 
having to worry about blocking, because all syscalls become async.  On MP it 
allows multiple processes to be in kernel space at once and requires very 
little locking overhead.  Affinity could easily be done by keeping track, in 
the user-side thread queues, of which CPU a thread last ran on and giving it a 
priority boost for having been on the running CPU.
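
The affinity part could be as small as a bonus when a thread is picked by the 
vcpu it last ran on; the names and the bonus value here are arbitrary:

    /* Hypothetical: prefer the vcpu this thread last ran on. */
    static int
    uts_effective_prio(int base_prio, int last_vcpu, int this_vcpu)
    {
        return base_prio + (last_vcpu == this_vcpu ? 4 : 0);
    }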

-- 
Craig Dooley						 cd5697@xxxxxxxxxx



