DragonFly kernel List (threaded) for 2007-10

Re: pmap of amd64


From: Matthew Dillon <dillon@xxxxxxxxxxxxxxxxxxxx>
Date: Fri, 12 Oct 2007 19:46:36 -0700 (PDT)

:Hi Matt,
:
:I am trying to come up with a way to organize the kernel memory maps,
:having spent some time figuring out the FreeBSD pmap structure, which
:looks weird to me (because of not understanding it :).
:
:below is the amd64 long-mode 4K-page VA structure.
:
:some issues for discussion:
:
:1. 4K pages or 2M pages for the kernel?

    4K for sure.
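
    For reference, here is a rough sketch of how a 64-bit virtual address
    breaks down with 4K pages and four-level paging.  The macro names are
    illustrative only, not taken from any existing pmap code:

        #include <stdint.h>

        /*
         * 4K pages: 12 offset bits, then four 9-bit table indices
         * (PT, PD, PDP, PML4).  Bits 48-63 must be a sign extension of
         * bit 47 (canonical addresses).
         */
        #define PTE_INDEX(va)   (((uint64_t)(va) >> 12) & 0x1ff) /* page table      */
        #define PDE_INDEX(va)   (((uint64_t)(va) >> 21) & 0x1ff) /* page directory  */
        #define PDPE_INDEX(va)  (((uint64_t)(va) >> 30) & 0x1ff) /* page dir ptr    */
        #define PML4E_INDEX(va) (((uint64_t)(va) >> 39) & 0x1ff) /* PML4            */

    Note that a single PML4 entry covers 2^39 bytes (512GB), which is why
    one user PML4 slot yields a 39-bit user address space.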

:2. how many PML4/PDP entries for kernel/user?

    The kernel will be running in long mode with 4K pages, which gives
    us a 9-bit (512-entry) PML4 table.  I think you want to try to
    implement things as simply as possible, so give each cpu its own
    PML4 table.  I strongly recommend:

    * That the user map use only ONE PML4 entry (i.e. a 39-bit address
      space) to begin with.  This way the process switch code simply loads
      a single pointer to switch user address spaces, and we do not have to
      allocate a PML4 table for each user process.

    * The kernel map only needs one PML4 entry but giving it more later on
      is really easy to do.

    * One or more PML4 entries would be used to direct-map all of physical
      memory into kernel space.  This would make the pmap operations a
      lot easier, as they would not have to map page table pages into
      memory to work on them; they could just figure out the physical
      address and access the pages directly through the direct physical
      map (a rough slot-layout sketch follows this list).

    * One PML4 entry for globaldata access magic (this would work if the
      PML4 table is per-cpu rather than per-process).
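
    To make the slot assignments above concrete, here is a rough,
    hypothetical layout for a per-cpu PML4 table.  The slot numbers,
    macro names, and the direct-map conversion below are invented for
    illustration and do not come from any existing implementation:

        #include <stdint.h>

        /*
         * Each PML4 slot covers 512GB (2^39 bytes).  The slot numbers
         * here are arbitrary picks for illustration only.
         */
        #define UPML4_SLOT      0    /* single user slot: 39-bit user VA space */
        #define DMAP_SLOT       500  /* start of the physical direct map       */
        #define GD_SLOT         510  /* per-cpu globaldata window              */
        #define KPML4_SLOT      511  /* kernel text/data/KVA                   */

        /* base VA of the direct map (canonical high address for slot 500) */
        #define DMAP_BASE       ((0xffffULL << 48) | ((uint64_t)DMAP_SLOT << 39))

        /*
         * With all of physical memory direct-mapped, pmap code can touch
         * a page table page without creating a temporary mapping first.
         */
        #define PHYS_TO_DMAP(pa)  ((void *)(DMAP_BASE + (uint64_t)(pa)))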

:3. what to do with the per-cpu data; should there be a PML4/PDP entry for each CPU?

    I don't think we can use segment descriptor table magic in long mode
    like we do in i386.  What I suggest is that one PML4 entry be used
    to allow us to movq the contents of a specific address into %fs 
    in order to get the per-cpu structure pointer, or something like that.
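
    One way this could look in C, assuming each cpu's private PML4 entry
    maps the same fixed virtual address onto that cpu's own globaldata
    page (the address and names below are placeholders, not the real
    DragonFly definitions):

        struct globaldata;              /* per-cpu data, defined elsewhere */

        /*
         * Fixed virtual address inside the per-cpu PML4 slot.  Every
         * cpu's PML4 maps this same VA to its own private globaldata
         * page, so the function below returns a different structure on
         * each cpu without any segment register tricks.
         */
        #define GD_FIXED_VA     0xffffff0000000000ULL   /* placeholder */

        static __inline struct globaldata *
        mycpu_globaldata(void)
        {
                return ((struct globaldata *)GD_FIXED_VA);
        }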

    Ultimately we will want to be able to support a larger user address
    space, either by storing multiple entries in the PML4 table during the
    switch code or with other magic, but I really think that can be left
    for later.

    One thing that I do not want to do is give each user process its own
    PML4 table.  I think that's a mistake.  Having the system manage
    just one PML4 table per cpu greatly simplifies the rest of the pmap
    code.
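
    As a rough illustration of why a per-cpu PML4 table keeps the switch
    path simple, the user portion of the address space could be switched
    by rewriting a single entry.  The structure, flag, and helper names
    below are placeholders for whatever the real pmap code ends up using:

        #include <stdint.h>

        typedef uint64_t pml4_entry_t;

        #define PG_V    0x001           /* valid/present   */
        #define PG_RW   0x002           /* writable        */
        #define PG_U    0x004           /* user accessible */
        #define UPML4_SLOT 0            /* the single user slot */

        struct pmap {
                uint64_t pm_user_pdp_phys;  /* phys addr of the user PDP page */
        };

        void cpu_invltlb(void);             /* assumed TLB-flush helper */

        /*
         * Point the per-cpu PML4's single user slot at the new process's
         * PDP page and flush the stale user translations.
         */
        static __inline void
        pmap_switch_user(pml4_entry_t *cpu_pml4, struct pmap *npmap)
        {
                cpu_pml4[UPML4_SLOT] = npmap->pm_user_pdp_phys | PG_V | PG_RW | PG_U;
                cpu_invltlb();
        }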

						-Matt



