DragonFly kernel List (threaded) for 2011-04
Re: GSOC: Device mapper mirror target
On Mon, 25 Apr 2011 07:25:31 -0400
Venkatesh Srinivas <me@endeavour.zapto.org> wrote:
> On Mon, Apr 25, 2011 at 6:36 AM, Adam Hoka <adam.hoka@gmail.com> wrote:
> > On Mon, 25 Apr 2011 10:30:58 +0100
> > Alex Hornung <ahornung@gmail.com> wrote:
> >
> >> On 25/04/11 10:15, Adam Hoka wrote:
> >> >> I also don't see the need of delegating every write to secondary mirror
> >> >> legs to a thread. You definitely need a synchronization thread, but I
> >> >> think you should propagate writes to each of the mirror disks from the
> >> >> same context you are in when you get the requests.
> >> >
> >> > That "thread" could be a workq (which will run in a different thread,
> >> > obviously).
> >>
> >> I know what you mean, but I don't see why you would do it like that. You
> >> should be dispatching writes to all the mirror disks immediately, not
> >> queueing them for later dispatch by a different thread. I see no
> >> advantage to that approach; it makes things more complicated and, last
> >> but not least, it is also less robust.
> >
> > How do I parallelize the writes then? I don't want to queue and forget them;
> > I just want to do something like this:
> >
> > - add the jobs to a list
> > - run them in parallel, each in its own thread
> > - wait for completion and collect return values
> >
> > Maybe it could be implemented with something like this?
> > I'm just not sure whether I can run one task multiple times in parallel.
> >
> > sc->sc_tq = taskqueue_create("dm-mirror", M_WAITOK, taskqueue_thread_enqueue, &sc->sc_tq);
> > taskqueue_start_threads(&sc->sc_tq, numberofmirrorlegs, TDPRI_KERN_DAEMON, -1, "dm-mirror taskq");
> > ...
> > TASK_INIT(&dmmirror_write_task, 0, dmmirror_write_fn, sc);
> > taskqueue_enqueue(sc->sc_tq, &dmmirror_write_task);
> >
> > instead of this:
> >
> > vn_strategy(dev1, io); // block for a while...
> > vn_strategy(dev2, io);
> >
>
> vn_strategy doesn't necessarily block till an I/O completes; it merely
> calls the vnode's strategy vfs op. biodone() is called from inside the
> strategy path when an I/O operation is complete. The path calling
> vn_strategy may wait for completion with biowait().
>
> In the case of nata, for example, ad_strategy() kicks off an ATA
> request; ad_done() is called from the ATA device interrupt and calls
> biodone() to complete the request.
>
> -- vs
Well, it may not block, but I can't rely on it...
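
If dispatching from the caller's context is the way to go, I guess the write
path would look roughly like the sketch below: clone the buf once per leg,
fire vn_strategy() for each, then biowait() on all of them so the waits
overlap. Untested, and the softc layout, the leg array, and the dmm_* names
are all made up for illustration:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/buf.h>
#include <sys/buf2.h>
#include <sys/vnode.h>

#define DMM_MAX_LEGS	4	/* made-up limit, just for the sketch */

/* hypothetical per-device state, not the real dm structures */
struct dmm_softc {
	int		sc_nlegs;
	struct vnode	*sc_legvp[DMM_MAX_LEGS];
};

static int
dmmirror_write(struct dmm_softc *sc, struct buf *obp)
{
	struct buf *bps[DMM_MAX_LEGS];
	int i, error = 0;

	/* Issue a clone of the write to every leg; nothing blocks here. */
	for (i = 0; i < sc->sc_nlegs; i++) {
		struct buf *bp = getpbuf(NULL);

		bp->b_cmd = BUF_CMD_WRITE;
		bp->b_data = obp->b_data;		/* share the payload */
		bp->b_bcount = obp->b_bcount;
		bp->b_bio1.bio_offset = obp->b_bio1.bio_offset;
		bp->b_bio1.bio_done = biodone_sync;
		bp->b_bio1.bio_flags |= BIO_SYNC;
		vn_strategy(sc->sc_legvp[i], &bp->b_bio1);
		bps[i] = bp;
	}

	/* Collect completions; the waits overlap, so the legs run in parallel. */
	for (i = 0; i < sc->sc_nlegs; i++) {
		if (biowait(&bps[i]->b_bio1, "dmmwr") != 0)
			error = EIO;
		relpbuf(bps[i], NULL);
	}
	return (error);
}

(Whether sharing b_data between in-flight bufs like this is actually safe is
exactly the kind of thing I'd still have to check.)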
--
NetBSD - Simplicity is prerequisite for reliability