DragonFly BSD
DragonFly kernel List (threaded) for 2004-01

Re: Caps status

From: Dave Leimbach <leimySPAM2k@xxxxxxx>
Date: 20 Jan 2004 05:04:01 -0600

Matthew Dillon <dillon@xxxxxxxxxxxxxxxxxxxx> writes:

> :I see that there are some caps tests now in the CVS...
> :
> :That seems to imply to me that the API is solidified somewhat?
> :
> :Just curious what it's status is because I'd like to see if I can write
> :up a little useful demo or something with this API and get it on the docs
> :page.
> :
> :Dave
>     The system call API has solidified pretty well, but there's only a
>     synchronous implementation at the moment (which is just fine if
>     you want to start playing around).
>     I need to write up some documentation.  Basically the way it works
>     is that the server registers a rendezvous point (name,uid,gid)
>     and the client connects to it (name,uid,gid).
>     The client sends the server a message.  The client's message is
>     opaque data and will not be copied from the client's address space
>     to the server until the server reads the message.

Cool... sounds like out-of-band Mach messaging in a way, except that
I think Mach does full COW for certain message sizes.  It's more like
a shadow copy until some modification is done, which of course means
the pages are shared and the message reception "appears" very fast.

On some diagrams I have seen from Apple's ADC TV on IPC, the Mach
messages seem to have a flat-line graph for time to send different
sized messages.  They also have an AltiVec-enhanced memcpy that makes
the data transfer pretty freakin' fast when it does happen.

Such an enhancement in DragonFly message copying couldn't be bad either,
except that there is so much PC stuff out there (SSE1, SSE2, and maybe
some MMX) that can be used to optimize memcpy [bcopy?], and then
of course there is the headache of alignment, padding, and CPU
detection, oh my.  I've seen the AltiVec memcpy and it's non-trivial :).

Still sounds like a neat project, doesn't it? :)  I actually used to
write a bit of assembly in my day... perhaps I should take a crack at
some of it....
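
Anyway, just to check my understanding of the rendezvous flow you
described, in rough pseudocode (the argument lists here are my guesses
from your description, not the real prototypes):

```c
/* Pseudocode only -- argument lists guessed from the description,
 * not the actual DragonFly syscall prototypes. */

/* Server: register the rendezvous point (name,uid,gid). */
int port = caps_sys_server("myservice", getuid(), getgid());

/* Client: connect to the same (name,uid,gid) tuple. */
int conn = caps_sys_client("myservice", getuid(), getgid());

/* Client sends opaque data; nothing is actually copied out of the
 * client's address space until the server reads the message. */
struct mymsg req;
caps_sys_put(conn, &req, sizeof(req));
```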

>     The server reads the message (the state will be CAPMS_REQUEST in the
>     caps_msgid that the syscall fills in), processes it, and sends a 
>     reply.  The server may send arbitrary opaque data in the reply.  The
>     data will not be copied until the client reads the reply.

I am not sure how much work it was to implement this... or how much MIG
was involved in the creation of CFMessage on OS X, but somehow they made
it so the return value of the server "callback" function of the runloop
the message server runs in *is* the reply message.  I thought that was
awfully clever but not necessarily optimal. :) </anecdote>
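
Back to CAPS: if I follow, the server loop would look roughly like this
(pseudocode again; the call and field names are my guesses, except for
CAPMS_REQUEST and caps_msgid, which you mentioned):

```c
/* Pseudocode sketch of the request/reply loop described above. */
struct caps_msgid msgid;
char buf[BUFSIZE];

for (;;) {
    /* Block until a client message arrives; the opaque data is
     * copied into the server's address space only at this point. */
    int n = caps_sys_wait(port, buf, sizeof(buf), &msgid);
    if (msgid.c_state == CAPMS_REQUEST) {
        /* ... process buf[0..n) and build a reply ... */
        caps_sys_reply(port, reply, replylen, msgid.c_id);
        /* The reply data must stay intact until the message comes
         * back in state CAPMS_DISPOSE. */
    }
}
```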

>     When the client reads the reply (the state in the message id returned
>     by the system call will be CAPMS_REPLY), the message is returned to the
>     server yet again in state CAPMS_DISPOSE, which tells the server that
>     it can dispose of the data it sent in the reply.

The opaque reply message comes back to the server once the client has
received it?

Seems reasonable :).

>     That's the basics in a nutshell.  Eventually we will support an
>     async notification interface through the 'upcid' argument
>     in caps_sys_client() and caps_sys_server(), but it's ignored for
>     the moment (and we will eventually support both kqueue notification
>     and upcall notification).

I was just going to ask about kqueue :)

>     The connection is many-to-1.  The server need only register the
>     service once to receive and reply to messages from multiple clients.
>     The server is given no indication (yet) of when a client connects
>     or disconnects from the service.

Is the only thing needed for a client to connect to the server the
server's "string"?  Sort of like Mach message services?

>     The various routines can also return a caps_cred associated with
>     the message.  I haven't tested this part yet but it should work.

Maybe I will get a chance... not tonight, though.  I am out of town for
2 days... perhaps late Wednesday night, if I am not too tired, or Thursday.

Thanks for the great explanation Matt! [as usual]

