DragonFly BSD
DragonFly kernel List (threaded) for 2011-01

Re: pmap_enter vs pmap_qenter


From: Venkatesh Srinivas <me@xxxxxxxxxxxxxxxxxxx>
Date: Wed, 12 Jan 2011 00:16:00 -0500

On Tue, Jan 11, 2011 at 8:02 PM, Matthew Dillon
<dillon@apollo.backplane.com> wrote:
> :Hi,
> :
> :If I have a vm_page_t array, 'mp' for this exercise, should:
> :
> :    pmap_qenter(addr, mp, npages);
> :
> :be equivalent to
> :
> :    for (i = 0; i < npages; i += PAGE_SIZE)
> :        pmap_enter(&kernel_pmap, addr + i, mp[i / PAGE_SIZE], VM_PROT_ALL, 1);
> :
> :if all of the pages in the array have had m->valid set to
> :VM_PAGE_BITS_ALL and vm_page_wire() called on them?
> :
> :Thanks,
> :-- vs
>
>    Well, pmap_kenter() and pmap_qenter() are designed for the kernel_map
>    only, whereas pmap_enter() is designed for user pmaps but happens to
>    also work on the kernel_map.
>
>    p.s. on that example my assumption is 'npages' is meant to be in bytes
>    and not pages.
>
>                                        -Matt
>                                        Matthew Dillon
>                                        <dillon@backplane.com>

Okay. When dealing with the kernel pmap, are the two calls above exactly
equivalent? (Assuming npages is handled correctly, as in the sketch below. :D)
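To pin down what I mean by "handled correctly": here is the comparison
I have in mind, with npages as a page count rather than a byte count
(just a sketch; 'i' is a plain loop index):

    pmap_qenter(addr, mp, npages);

versus

    for (i = 0; i < npages; i++)
        pmap_enter(&kernel_pmap, addr + i * PAGE_SIZE, mp[i],
                   VM_PROT_ALL, 1);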

I ask because I've been working on converting kmem_slab_alloc() to use
pmap_qenter() where possible; here is my patch so far:

diff --git a/sys/kern/kern_slaballoc.c b/sys/kern/kern_slaballoc.c
index 68beda5..e6c7bf8 100644
--- a/sys/kern/kern_slaballoc.c
+++ b/sys/kern/kern_slaballoc.c
@@ -1311,6 +1311,8 @@ kmem_slab_alloc(vm_size_t size, vm_offset_t align, int flags)
     vm_offset_t addr;
     int count, vmflags, base_vmflags;
     vm_page_t mp[ZALLOC_MAX_ZONE_SIZE / PAGE_SIZE];
+    vm_size_t premap_size;
+    vm_page_t m;
     thread_t td;

     size = round_page(size);
@@ -1368,8 +1370,6 @@ kmem_slab_alloc(vm_size_t size, vm_offset_t align, int flags)
      * Allocate the pages.  Do not mess with the PG_ZERO flag yet.
      */
     for (i = 0; i < size; i += PAGE_SIZE) {
-	vm_page_t m;
-
 	/*
 	 * VM_ALLOC_NORMAL can only be set if we are not preempting.
 	 *
@@ -1447,14 +1447,42 @@ kmem_slab_alloc(vm_size_t size, vm_offset_t align, int flags)

     /*
      * Enter the pages into the pmap and deal with PG_ZERO and M_ZERO.
+     *
+     * The first few vm_page_t are cached in mp[]; deal with them first,
+     * if there are leftovers, deal with them later.
      */
-    for (i = 0; i < size; i += PAGE_SIZE) {
-	vm_page_t m;
+    premap_size = min(size, NELEM(mp) * PAGE_SIZE);
+    for (i = 0; i < premap_size; i += PAGE_SIZE) {
+	m = mp[i / PAGE_SIZE];
+	m->valid = VM_PAGE_BITS_ALL;
+	vm_page_wire(m);
+    }

-	if ((i / PAGE_SIZE) < (sizeof(mp) / sizeof(mp[0])))
-	   m = mp[i / PAGE_SIZE];
-	else
-	   m = vm_page_lookup(&kernel_object, OFF_TO_IDX(addr + i));
+    /* Insert cached pages into pmap; use pmap_qenter to batch smp_invltlb */
+#if 0 /* XXX: Does not work */
+    pmap_qenter(addr, mp, premap_size / PAGE_SIZE);
+#else
+
+    for (i = 0; i < premap_size; i += PAGE_SIZE)
+	pmap_enter(&kernel_pmap, addr + i, mp[i / PAGE_SIZE], VM_PROT_ALL, 1);
+#endif
+
+    /* Zero and wake cached vm_pages */
+    for (i = 0; i < premap_size; i += PAGE_SIZE) {
+	m = mp[i / PAGE_SIZE];
+	if ((m->flags & PG_ZERO) == 0 && (flags & M_ZERO))
+	    bzero((char *)addr + i, PAGE_SIZE);
+	vm_page_flag_clear(m, PG_ZERO);
+	KKASSERT(m->flags & (PG_WRITEABLE | PG_MAPPED));
+	vm_page_flag_set(m, PG_REFERENCED);
+	vm_page_wakeup(m);
+    }
+
+    /*
+     * Handle uncached part of allocation; bypassed if mp[] was large enough
+     */
+    for (i = NELEM(mp) * PAGE_SIZE; i < size; i += PAGE_SIZE) {
+	m = vm_page_lookup(&kernel_object, OFF_TO_IDX(addr + i));
 	m->valid = VM_PAGE_BITS_ALL;
 	/* page should already be busy */
 	vm_page_wire(m);


----
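A note on why I want the #if 0 path at all: my reading of the pmap code
(paraphrased from memory, not a verbatim copy) is that pmap_qenter()
amounts to something like the sketch below -- per-page local
invalidations, but only one cross-CPU smp_invltlb() at the very end --
whereas N separate pmap_enter() calls each pay for their own SMP
invalidation:

    /* Paraphrased sketch of pmap_qenter(); details from memory. */
    void
    pmap_qenter(vm_offset_t va, vm_page_t *m, int count)
    {
        vm_offset_t end_va = va + count * PAGE_SIZE;

        while (va < end_va) {
            pt_entry_t *pte = vtopte(va);

            *pte = VM_PAGE_TO_PHYS(*m) | PG_RW | PG_V | pgeflag;
            cpu_invlpg((void *)va);    /* local TLB only */
            va += PAGE_SIZE;
            ++m;
        }
        smp_invltlb();                 /* one IPI round for all CPUs */
    }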

I thought that the pmap_qenter() and pmap_enter() usages there would be
equivalent, but enabling the pmap_qenter() path (and disabling the
pmap_enter() one) for the pages whose descriptors were cached in mp[]
leads to a panic on boot when kmem_slab_alloc() is called to set up the
zero page; it hits the

panic: assertion: m->flags & (PG_WRITEABLE | PG_MAPPED) in kmem_slab_alloc

assertion on the first pass.
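
My current guess as to the difference (unverified; this is my reading,
not something I've confirmed against the pmap code): pmap_enter() on a
managed page marks the vm_page_t as mapped, while pmap_qenter() only
stores the kernel PTEs and never touches m->flags, so the KKASSERT can
only be satisfied by the pmap_enter() path. In effect:

    /* What I believe pmap_enter() does for a managed page (sketch): */
    vm_page_flag_set(m, PG_MAPPED);
    if (prot & VM_PROT_WRITE)
        vm_page_flag_set(m, PG_WRITEABLE);

    /* pmap_qenter() has no equivalent, so this later fails: */
    KKASSERT(m->flags & (PG_WRITEABLE | PG_MAPPED));

If that's right, the two calls aren't interchangeable for pages that
are expected to assert PG_MAPPED/PG_WRITEABLE afterwards.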

Thanks,
--vs



