DragonFly BSD
DragonFly kernel List (threaded) for 2004-12

I/O consolidation and direct-to-DMA plans


From: Matthew Dillon <dillon@xxxxxxxxxxxxxxxxxxxx>
Date: Mon, 20 Dec 2004 12:10:54 -0800 (PST)

    Hiten and I have come up with a roadmap for the I/O path cleanup and
    direct-to-DMA plans (avoiding having to KVA map buffers).

    We've looked at all existing structures... msf_buf's, sfbufs, struct buf,
    XIO's, UIO's, and BUSDMA (bounce_page and bus_dmamap).  Each of these
    structures represents a piece of the larger 'integrated I/O' puzzle.

    The primary issue is that we need a structure which covers both ends of
    the equation... we need the capability in higher layers to specify either
    a KVA mapped buffer or a page list, and we need the capability in lower
    layers to require either a KVA mapped buffer or a page list, depending on
    the requirements of the layer.  For example, in some instances CAM
    may supply a plain buffer pointer for simple SCSI request structures,
    while in others CAM might want to pass data from a struct buf or UIO
    for which it has a page list.  In some cases a driver will have access
    to a DMA engine and require a page list, while in others a driver might
    need a mapped buffer, or might need to create bounce pages.

    What we are going to do is extend the msf_buf abstraction to cover these
    needs and provide a set of API calls that allows upper layers to supply
    data in any form and lower level layers to request data in any form,
    including with address restrictions.  msf_buf's already have a page-list
    (XIO) and KVA mapping abstraction.  We are going to add a bounce-buffer
    abstraction and then work on a bunch of new API calls for msf_bufs to
    cover the needs of various subsystems.
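
    To visualize the shape of the abstraction, an extended msf_buf might
    look something like the sketch below.  The field and flag names are
    purely illustrative (this is not the real header's layout); the point
    is that a single structure can describe the data as a page list, as a
    KVA mapping, or via bounce pages, and track which representations are
    currently valid.

	struct msf_buf {
		struct xio	m_xio;		/* page-list representation */
		vm_offset_t	m_kva;		/* KVA mapping, if one exists */
		void		*m_bounce;	/* hypothetical bounce-page hook */
		int		m_flags;	/* which representations are valid */
	};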

    As a starter we'll have these functions:


    msf_init(struct msf_buf *msf)

	Initialize an msf_buf for use (i.e. zero its fields).

    struct msf_buf *
    msf_create_from_buf(struct msf_buf *opt_msf, void *buf, size_t bytes)

	Populate an existing msf_buf or allocate a new one and install the
	supplied KVA buffer pointer and size.

    struct msf_buf *
    msf_create_from_xio(struct msf_buf *opt_msf, struct xio *xio)

	Populate an existing msf_buf or allocate a new one and install the
	supplied XIO (page list).

    int
    msf_require_buf(struct msf_buf *msf)

	Require that an msf_buf have a mapped KVA buffer.  If the msf_buf
	already has a mapped KVA buffer this is a NOP.  If the msf_buf
	contains a page list a KVA buffer will be allocated and mapped
	based on the page list.

    int
    msf_require_xio(struct msf_buf *msf)

	Require that an msf_buf have a page list.  If the msf_buf already
	has a page list this is a NOP.  Otherwise a page list is constructed
	from the msf_buf's KVA buffer.

    void
    msf_release(struct msf_buf *msf)

	Release an msf_buf, freeing any resources that were created as side
	effects of the above API calls and zeroing out any references to
	resources that were originally supplied.
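
    As a rough usage sketch of the above calls (error handling is omitted
    and the driver-side helpers dma_load_pages() and pio_copyout() are
    hypothetical), an upper layer wraps whatever data it has on hand and a
    lower layer asks for whichever representation it needs:

	static void
	example_start_io(void *kvabuf, size_t bytes, int have_dma_engine)
	{
		struct msf_buf msf;

		msf_init(&msf);
		msf_create_from_buf(&msf, kvabuf, bytes); /* caller has a KVA buffer */

		if (have_dma_engine) {
			msf_require_xio(&msf);	/* DMA engine wants a page list */
			dma_load_pages(&msf);
		} else {
			msf_require_buf(&msf);	/* PIO path wants a mapped buffer */
			pio_copyout(&msf);
		}
		msf_release(&msf);	/* frees whatever the require calls created */
	}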


    The plan is to start embedding msf_buf's in various system layers
    (struct buf, busdma, etc...) as independent entities to begin with.  As
    more of this work is accomplished the various layers using msf_buf's
    will start to become adjacent to one another, and we will then be able
    to have one layer pass its msf_buf directly to another without having
    to re-create it.

    Along the way all the various disparate I/O related structures will
    be consolidated.

    Eventually this will allow e.g. the buffer cache to pass an msf_buf
    all the way down to BUSDMA without having to KVA map the buffer, thus
    achieving our goal.
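
    To make that end state concrete (the b_msf field and the busdma entry
    point below are assumed for illustration, not existing interfaces), a
    strategy routine would simply hand the buffer cache's msf_buf straight
    through:

	static int
	example_strategy(struct buf *bp)
	{
		struct msf_buf *msf = &bp->b_msf;	/* assumed embedded msf_buf */

		/*
		 * No KVA mapping is ever created; busdma pulls the page
		 * list out of the same msf_buf the buffer cache filled in.
		 */
		msf_require_xio(msf);
		return (example_busdma_load(msf));	/* hypothetical busdma loader */
	}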

					-Matt
					Matthew Dillon 
					<dillon@xxxxxxxxxxxxx>


