
TUNING(7)	  DragonFly Miscellaneous Information Manual	     TUNING(7)


NAME
tuning -- performance tuning under DragonFly

SYSTEM SETUP - DISKLABEL, NEWFS, TUNEFS, SWAP
Modern DragonFly systems typically have just three partitions on the main drive. In order, a UFS /boot, swap, and a HAMMER /. The installer usually creates a multitude of PFSs (pseudo filesystems) on the HAMMER partition for /var, /tmp, and numerous other sub-trees. These PFSs exist to ease the management of snapshots and backups.

Generally speaking the /boot partition should be around 768MB in size. The minimum recommended is around 350MB, giving you room for backup kernels and alternative boot schemes.

In the old days we recommended that swap be sized to at least 2x main memory. These days swap is often used for other activities, including tmpfs(5). We recommend that swap be sized to the larger of 2x main memory or 1GB if you have a fairly small disk, and up to 16GB if you have a moderately endowed system and a large drive. If you are on a minimally configured machine you may, of course, configure far less swap or no swap at all, but we recommend at least some swap. The kernel's VM paging algorithms are tuned to perform best when there is at least 2x swap versus main memory. Configuring too little swap can lead to inefficiencies in the VM page scanning code as well as create issues later on if you add more memory to your machine. Swap is a good idea even if you don't think you will ever need it, as it allows the machine to page out completely unused data from idle programs (like getty), maximizing the RAM available for your activities.

If you intend to use the swapcache(8) facility with an SSD we recommend the SSD be configured with at least a 32G swap partition (the maximum default for i386). If you are on a moderately well configured 64-bit system you can size swap even larger.

Finally, on larger systems with multiple drives, if the use of SSD swap is not in the cards, we recommend that you configure swap on each drive (up to four drives). The swap partitions on the drives should be approximately the same size. The kernel can handle arbitrary sizes but internal data structures scale to 4 times the largest swap partition. Keeping the swap partitions near the same size will allow the kernel to optimally stripe swap space across the N disks. Do not worry about overdoing it a little; swap space is the saving grace of UNIX, and even if you do not normally use much swap it can give you more time to recover from a runaway program before being forced to reboot.

Most DragonFly systems have a single HAMMER root and use PFSs to break it up into various administrative domains. All the PFSs share the same allocation layer so there is no longer a need to size each individual mount. Instead you should review the hammer(8) manual page and use the 'hammer viconfig' facility to adjust snapshot retention and other parameters (a sample configuration is sketched below). By default HAMMER keeps 60 days worth of snapshots. Usually snapshots are not desired on PFSs such as /usr/obj or /tmp since data on these partitions cycles a lot. If a very large work area is desired it is often beneficial to configure it as a separate HAMMER mount. If it is integrated into the root mount it should at least be its own HAMMER PFS. We recommend naming the large work area /build. Similarly if a machine is going to have a large number of users you might want to separate your /home out as well.

A number of run-time mount(8) options exist that can help you tune the system. The most obvious and most dangerous one is async. Do not ever use it; it is far too dangerous.
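The sample 'hammer viconfig' configuration mentioned above: the editor presents a small per-PFS keyword file. The keywords and the 60-day snapshot retention below reflect our reading of hammer(8); verify the exact syntax against that page before relying on it.

    # hammer viconfig /var    (opens the PFS configuration in an editor)
    snapshots 1d 60d          # one snapshot per day, retained for 60 days
    prune     1d 5m
    rebalance 1d 5m
    reblock   1d 5m
    recopy    30d 10m

Shortening the retention (for example 'snapshots 1d 30d') is the usual way to trade history for disk space on a busy PFS.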
A less dangerous and more useful mount(8) option is called noatime. UNIX filesystems normally update the last-accessed time of a file or directory whenever it is accessed. This operation is handled in DragonFly with a delayed write and normally does not create a burden on the system. However, if your system is accessing a huge number of files on a continuing basis the buffer cache can wind up getting polluted with atime updates, creating a burden on the system. For example, if you are running a heavily loaded web site, or a news server with lots of readers, you might want to consider turning off atime updates on your larger partitions with this mount(8) option. You should not gratuitously turn off atime updates everywhere. For example, the /var filesystem customarily holds mailboxes, and atime (in combination with mtime) is used to determine whether a mailbox has new mail.
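For example, a large work area such as the /build mount suggested above could have atime updates disabled on a live system with mount(8)'s update flag, or persistently via fstab(5). The device path below is purely illustrative:

    # mount -u -o noatime /build

    # /etc/fstab entry (illustrative device path)
    /dev/ad0s1e   /build   hammer   rw,noatime   2 2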


STRIPING DISKS
In larger systems you can stripe partitions from several drives together to create a much larger overall partition. Striping can also improve the performance of a filesystem by splitting I/O operations across two or more disks. The vinum(8), lvm(8), and dm(8) subsystems may be used to create simple striped filesystems. We have deprecated ccd(4). Generally speaking, striping smaller partitions such as the root and /var/tmp, or essentially read-only partitions such as /usr, is a complete waste of time. You should only stripe partitions that require serious I/O performance. We recommend that such partitions be completely separate mounts and not use the same storage media as your root mount.

I should reiterate that last comment. Do not stripe your boot+root. Just about everything in there will be cached in system memory anyway. Neither would I recommend RAIDing your root. If robustness is needed, placing your boot, swap, and root on an SSD (which has about the same MTBF as your motherboard) and then RAIDing everything which requires significant amounts of storage should be sufficient. There isn't much point making the boot/swap/root storage even more redundant when the motherboard itself has no redundancy. When a high level of total system redundancy is required you need to be thinking more about having multiple physical machines that back each other up.

When striping multiple disks always partition in multiples of at least 8 megabytes and use at least a 128KB stripe. A 256KB stripe is probably even better. This will avoid mis-aligning HAMMER big-blocks (which are 8MB) and prevent a single I/O cluster from crossing a stripe boundary. DragonFly will issue a significant amount of read-ahead, upwards of a megabyte or more if it determines accesses are linear enough, which is sufficient to issue concurrent I/O across multiple stripes.
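As a sketch only: DragonFly's lvm(8) derives from the Linux LVM2 tools, so a two-disk striped volume honoring the 256KB stripe advice above might be created along these lines. The device names, volume group name, and size are illustrative assumptions, not a tested recipe:

    # pvcreate /dev/da1s1a /dev/da2s1a
    # vgcreate vg0 /dev/da1s1a /dev/da2s1a
    # lvcreate -i 2 -I 256 -L 200G -n build vg0   # 2 stripes, 256KB stripe size
    # newfs_hammer -L BUILD /dev/vg0/build
    # mount_hammer /dev/vg0/build /build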


SYSCTL TUNING
sysctl(8) variables permit system behavior to be monitored and controlled at run-time. Some sysctls simply report on the behavior of the system; others allow the system behavior to be modified; some may be set at boot time using rc.conf(5), but most will be set via sysctl.conf(5). There are several hundred sysctls in the system, including many that appear to be candidates for tuning but actually are not. In this document we will only cover the ones that have the greatest effect on the system.

The kern.ipc.shm_use_phys sysctl defaults to 1 (on) and may be set to 0 (off) or 1 (on). Setting this parameter to 1 will cause all System V shared memory segments to be mapped to unpageable physical RAM. This feature only has an effect if you are either (A) mapping small amounts of shared memory across many (hundreds) of processes, or (B) mapping large amounts of shared memory across any number of processes. This feature allows the kernel to remove a great deal of internal memory management page-tracking overhead at the cost of wiring the shared memory into core, making it unswappable.

The vfs.write_behind sysctl defaults to 1 (on). This tells the filesystem to issue media writes as full clusters are collected, which typically occurs when writing large sequential files. The idea is to avoid saturating the buffer cache with dirty buffers when it would not benefit I/O performance. However, this may stall processes and under certain circumstances you may wish to turn it off.

The vfs.hirunningspace sysctl determines how much outstanding write I/O may be queued to disk controllers system wide at any given instance. The default is usually sufficient but on machines with lots of disks you may want to bump it up to four or five megabytes. Note that setting too high a value (exceeding the buffer cache's write threshold) can lead to extremely bad clustering performance. Do not set this value arbitrarily high! Also, higher write queueing values may add latency to reads occurring at the same time.

There are various other buffer-cache and VM page cache related sysctls. We do not recommend modifying these values. As of FreeBSD 4.3, the VM system does an extremely good job tuning itself.

The net.inet.tcp.sendspace and net.inet.tcp.recvspace sysctls are of particular interest if you are running network intensive applications. They control the amount of send and receive buffer space allowed for any given TCP connection. The default sending buffer is 32K; the default receiving buffer is 64K. You can often improve bandwidth utilization by increasing the default at the cost of eating up more kernel memory for each connection. We do not recommend increasing the defaults if you are serving hundreds or thousands of simultaneous connections because it is possible to quickly run the system out of memory due to stalled connections building up. But if you need high bandwidth over a smaller number of connections, especially if you have gigabit Ethernet, increasing these defaults can make a huge difference. You can adjust the buffer size for incoming and outgoing data separately. For example, if your machine is primarily doing web serving you may want to decrease the recvspace in order to be able to increase the sendspace without eating too much kernel memory. Note that the routing table (see route(8)) can be used to introduce route-specific send and receive buffer size defaults.
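An illustrative sysctl.conf(5) fragment for the web-serving skew just described; the specific values are only examples of the trade-off, not recommendations:

    # /etc/sysctl.conf
    net.inet.tcp.sendspace=65536   # up from the 32K default; responses are large
    net.inet.tcp.recvspace=32768   # down from the 64K default; requests are small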
As an additional management tool you can use pipes in your firewall rules (see ipfw(8)) to limit the bandwidth going to or from particular IP blocks or ports; a sketch of such a pipe appears at the end of this discussion. For example, if you have a T1 you might want to limit your web traffic to 70% of the T1's bandwidth in order to leave the remainder available for mail and interactive use. Normally a heavily loaded web server will not introduce significant latencies into other services even if the network link is maxed out, but enforcing a limit can smooth things out and lead to longer term stability. Many people also enforce artificial bandwidth limitations in order to ensure that they are not charged for using too much bandwidth.

Setting the send or receive TCP buffer to values larger than 65535 will result in only a marginal performance improvement unless both hosts support the window scaling extension of the TCP protocol, which is controlled by the net.inet.tcp.rfc1323 sysctl. These extensions should be enabled and the TCP buffer size should be set to a value larger than 65536 in order to obtain good performance from certain types of network links; specifically, gigabit WAN links and high-latency satellite links. RFC 1323 support is enabled by default.

The net.inet.tcp.always_keepalive sysctl determines whether or not the TCP implementation should attempt to detect dead TCP connections by intermittently delivering ``keepalives'' on the connection. By default, this is enabled for all applications; by setting this sysctl to 0, only applications that specifically request keepalives will use them. In most environments, TCP keepalives will improve the management of system state by expiring dead TCP connections, particularly for systems serving dialup users who may not always terminate individual TCP connections before disconnecting from the network. However, in some environments, temporary network outages may be incorrectly identified as dead sessions, resulting in unexpectedly terminated TCP connections. In such environments, setting the sysctl to 0 may reduce the occurrence of TCP session disconnections.

The net.inet.tcp.delayed_ack TCP feature is largely misunderstood. Historically speaking this feature was designed to allow the acknowledgement to transmitted data to be returned along with the response. For example, when you type over a remote shell the acknowledgement to the character you send can be returned along with the data representing the echo of the character. With delayed acks turned off the acknowledgement may be sent in its own packet before the remote service has a chance to echo the data it just received. This same concept also applies to any interactive protocol (e.g. SMTP, WWW, POP3) and can cut the number of tiny packets flowing across the network in half. The DragonFly delayed-ack implementation also follows the TCP protocol rule that at least every other packet be acknowledged even if the standard 100ms timeout has not yet passed. Normally the worst a delayed ack can do is slightly delay the teardown of a connection, or slightly delay the ramp-up of a slow-start TCP connection. While we aren't sure, we believe that the several FAQs related to packages such as SAMBA and SQUID which advise turning off delayed acks may be referring to the slow-start issue.
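The pipe sketch promised above, with syntax per ipfw(8). The bandwidth figure assumes a 1.544Mbit/s T1 capped at roughly 70%, and the rule number is arbitrary:

    # ipfw pipe 1 config bw 1080Kbit/s
    # ipfw add 100 pipe 1 tcp from any 80 to any out

This shapes outbound traffic sourced from port 80 (web responses); mail and interactive traffic bypass the pipe and keep the remaining bandwidth.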
The net.inet.tcp.inflight_enable sysctl turns on bandwidth delay product limiting for all TCP connections. The system will attempt to calculate the bandwidth delay product for each connection and limit the amount of data queued to the network to just the amount required to maintain optimum throughput. This feature is useful if you are serving data over modems, GigE, or high speed WAN links (or any other link with a high bandwidth*delay product), especially if you are also using window scaling or have configured a large send window. If you enable this option you should also be sure to set net.inet.tcp.inflight_debug to 0 (disable debugging), and for production use setting net.inet.tcp.inflight_min to at least 6144 may be beneficial. Note, however, that setting high minimums may effectively disable bandwidth limiting depending on the link. The limiting feature reduces the amount of data built up in intermediate router and switch packet queues as well as reduces the amount of data built up in the local host's interface queue. With fewer packets queued up, interactive connections, especially over slow modems, will also be able to operate with lower round trip times. However, note that this feature only affects data transmission (uploading / server-side). It does not affect data reception (downloading).

Adjusting net.inet.tcp.inflight_stab is not recommended. This parameter defaults to 50, representing +5% fudge when calculating the bwnd from the bw. This fudge is on top of an additional fixed +2*maxseg added to bwnd. The fudge factor is required to stabilize the algorithm at very high speeds while the fixed 2*maxseg stabilizes the algorithm at low speeds. If you increase this value excessive packet buffering may occur.

The net.inet.ip.portrange.* sysctls control the port number ranges automatically bound to TCP and UDP sockets. There are three ranges: a low range, a default range, and a high range, selectable via an IP_PORTRANGE setsockopt() call. Most network programs use the default range which is controlled by net.inet.ip.portrange.first and net.inet.ip.portrange.last, which default to 1024 and 5000 respectively. Bound port ranges are used for outgoing connections and it is possible to run the system out of ports under certain circumstances. This most commonly occurs when you are running a heavily loaded web proxy. The port range is not an issue when running servers which handle mainly incoming connections, such as a normal web server, or which have a limited number of outgoing connections, such as a mail relay. For situations where you may run yourself out of ports we recommend increasing net.inet.ip.portrange.last modestly. A value of 10000 or 20000 or 30000 may be reasonable. You should also consider firewall effects when changing the port range. Some firewalls may block large ranges of ports (usually low-numbered ports) and expect systems to use higher ranges of ports for outgoing connections. For this reason we do not recommend that net.inet.ip.portrange.first be lowered.

The kern.ipc.somaxconn sysctl limits the size of the listen queue for accepting new TCP connections. The default value of 128 is typically too low for robust handling of new connections in a heavily loaded web server environment. For such environments, we recommend increasing this value to 1024 or higher. The service daemon may itself limit the listen queue size (e.g. sendmail(8), apache) but will often have a directive in its configuration file to adjust the queue size up. Larger listen queues also do a better job of fending off denial of service attacks.
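A sysctl.conf(5) fragment collecting the suggestions from this discussion; all values are the illustrative ones given above:

    # /etc/sysctl.conf
    net.inet.tcp.inflight_enable=1     # bandwidth delay product limiting
    net.inet.tcp.inflight_debug=0      # disable inflight debugging
    net.inet.tcp.inflight_min=6144     # production floor suggested above
    net.inet.ip.portrange.last=20000   # more outgoing ports for a busy proxy
    kern.ipc.somaxconn=1024            # deeper listen queue for a loaded server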
The kern.maxfiles sysctl determines how many open files the system supports. The default is typically a few thousand but you may need to bump this up to ten or twenty thousand if you are running databases or large descriptor-heavy daemons. The read-only kern.openfiles sysctl may be interrogated to determine the current number of open files on the system.

The vm.swap_idle_enabled sysctl is useful in large multi-user systems where you have lots of users entering and leaving the system and lots of idle processes. Such systems tend to generate a great deal of continuous pressure on free memory reserves. Turning this feature on and adjusting the swapout hysteresis (in idle seconds) via vm.swap_idle_threshold1 and vm.swap_idle_threshold2 allows you to depress the priority of pages associated with idle processes more quickly than the normal pageout algorithm. This gives a helping hand to the pageout daemon. Do not turn this option on unless you need it, because the tradeoff you are making is to essentially pre-page memory sooner rather than later, eating more swap and disk bandwidth. In a small system this option will have a detrimental effect but in a large system that is already doing moderate paging this option allows the VM system to stage whole processes into and out of memory more easily.
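Hands-on forms of the checks and knobs just described; the kern.maxfiles value is illustrative:

    $ sysctl kern.openfiles kern.maxfiles   # compare current usage to the limit
    # sysctl kern.maxfiles=20000            # raise at run-time, or persist in sysctl.conf(5)
    # sysctl vm.swap_idle_enabled=1         # large multi-user systems only (see above)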


LOADER TUNABLES
Some aspects of the system behavior may not be tunable at runtime because memory allocations they perform must occur early in the boot process. To change loader tunables, you must set their values in loader.conf(5) and reboot the system.

kern.maxusers controls the scaling of a number of static system tables, including defaults for the maximum number of open files, sizing of network memory resources, etc. On DragonFly, kern.maxusers is automatically sized at boot based on the amount of memory available in the system, and may be determined at run-time by inspecting the value of the read-only kern.maxusers sysctl. Some sites will require larger or smaller values of kern.maxusers and may set it as a loader tunable; values of 64, 128, and 256 are not uncommon. We do not recommend going above 256 unless you need a huge number of file descriptors; many of the tunable values set to their defaults by kern.maxusers may be individually overridden at boot-time or run-time as described elsewhere in this document.

The kern.dfldsiz and kern.dflssiz tunables set the default soft limits for process data and stack size respectively. Processes may increase these up to the hard limits by calling setrlimit(2). The kern.maxdsiz, kern.maxssiz, and kern.maxtsiz tunables set the hard limits for process data, stack, and text size respectively; processes may not exceed these limits. The kern.sgrowsiz tunable controls how much the stack segment will grow when a process needs to allocate more stack.

kern.ipc.nmbclusters may be adjusted to increase the number of network mbufs the system is willing to allocate. Each cluster represents approximately 2K of memory, so a value of 1024 represents 2M of kernel memory reserved for network buffers. You can do a simple calculation to figure out how many you need. If you have a web server which maxes out at 1000 simultaneous connections, and each connection eats a 16K receive and 16K send buffer, you need approximately 32MB worth of network buffers to deal with it. A good rule of thumb is to multiply by 2, so 32MBx2 = 64MB/2K = 32768. So for this case you would want to set kern.ipc.nmbclusters to 32768. We recommend values between 1024 and 4096 for machines with moderate amounts of memory, and between 4096 and 32768 for machines with greater amounts of memory. Under no circumstances should you specify an arbitrarily high value for this parameter; it could lead to a boot-time crash. The -m option to netstat(1) may be used to observe network cluster use.
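A loader.conf(5) fragment matching the worked example above; loader tunables take quoted values:

    # /boot/loader.conf
    kern.maxusers="256"
    kern.ipc.nmbclusters="32768"   # the 1000-connection web server case above

After boot, 'netstat -m' shows how many clusters are actually in use.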


KERNEL CONFIG TUNING
There are a number of kernel options that you may have to fiddle with in a large-scale system. In order to change these options you need to be able to compile a new kernel from source. The config(8) manual page and the handbook are good starting points for learning how to do this. Generally the first thing you do when creating your own custom kernel is to strip out all the drivers and services you do not use. Removing things like INET6 and drivers you do not have will reduce the size of your kernel, sometimes by a megabyte or more, leaving more memory available for applications.

If your motherboard is AHCI-capable then we strongly recommend turning on AHCI mode.
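As a sketch of the procedure, under the assumption that your source tree lives in /usr/src and starts from the stock X86_64_GENERIC configuration (verify the paths and build targets against config(8) and the handbook):

    # cd /usr/src
    # cp sys/config/X86_64_GENERIC sys/config/MYKERNEL
    # vi sys/config/MYKERNEL      # comment out e.g. 'options INET6' and unused drivers
    # make buildkernel installkernel KERNCONF=MYKERNEL
    # reboot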


CPU, MEMORY, DISK, NETWORK
The type of tuning you do depends heavily on where your system begins to bottleneck as load increases. If your system runs out of CPU (idle times are perpetually 0%) then you need to consider upgrading the CPU or moving to an SMP motherboard (multiple CPUs), or perhaps you need to revisit the programs that are causing the load and try to optimize them. If your system is paging to swap a lot you need to consider adding more memory.

If your system is saturating the disk you typically see high CPU idle times and total disk saturation. systat(1) can be used to monitor this (an invocation example appears at the end of this section). There are many solutions to saturated disks: increasing memory for caching, mirroring disks, distributing operations across several machines, and so forth. If disk performance is an issue and you are using IDE drives, switching to SCSI can help a great deal. While modern IDE drives compare with SCSI in raw sequential bandwidth, the moment you start seeking around the disk SCSI drives usually win.

Finally, you might run out of network suds. The first line of defense for improving network performance is to make sure you are using switches instead of hubs, especially these days where switches are almost as cheap. Hubs have severe problems under heavy loads due to collision backoff, and one bad host can severely degrade the entire LAN. Second, optimize the network path as much as possible. For example, in firewall(7) we describe a firewall protecting internal hosts with a topology where the externally visible hosts are not routed through it. Use 100BaseT rather than 10BaseT, or use 1000BaseT rather than 100BaseT, depending on your needs. Most bottlenecks occur at the WAN link (e.g. modem, T1, DSL, whatever). If expanding the link is not an option it may be possible to use the dummynet(4) feature to implement peak shaving or other forms of traffic shaping to prevent the overloaded service (such as web services) from affecting other services (such as email), or vice versa. In home installations this could be used to give interactive traffic (your browser, ssh(1) logins) priority over services you export from your box (web services, email).
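The systat(1) invocation referred to above; the vmstat and iostat displays show CPU idle time alongside per-disk activity, here at a one-second refresh:

    $ systat -vmstat 1
    $ systat -iostat 1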


SEE ALSO
netstat(1), systat(1), dummynet(4), nata(4), login.conf(5), rc.conf(5), sysctl.conf(5), firewall(7), hier(7), boot(8), ccdconfig(8), config(8), disklabel(8), fsck(8), ifconfig(8), ipfw(8), loader(8), mount(8), newfs(8), route(8), sysctl(8), tunefs(8), vinum(8)


HISTORY
The tuning manual page was originally written by Matthew Dillon and first appeared in FreeBSD 4.3, May 2001.

DragonFly 4.3			October 24, 2010		 DragonFly 4.3